https://en.wikipedia.org/wiki/Tea%20%28programming%20language%29
Tea is a high-level scripting language for the Java environment. It combines features of Scheme, Tcl, and Java. Features Integrated support for all major programming paradigms. Functional programming language. Functions are first-class objects. Scheme-like closures are intrinsic to the language. Support for object-oriented programming. Modular libraries with autoloading on-demand facilities. Large base of core functions and classes. String and list processing. Regular expressions. File and network I/O. Database access. XML processing. 100% pure Java. The Tea interpreter is implemented in Java. Tea runs anywhere with a Java 1.6 JVM or higher. Java reflection features allow the use of Java libraries directly from Tea code. Intended to be easily extended in Java. For example, Tea supports relational database access through JDBC, regular expressions through GNU Regexp, and XML parsing through a SAX parser (XML4J, for example). Interpreter alternatives Tea is a proprietary language. Its interpreter is subject to a non-free license. A project called "destea", released as Language::Tea on CPAN, provides an alternative by generating Java code based on the Tea code. TeaClipse is an open-source compiler that uses a JavaCC-generated parser to parse and then compile Tea source to the proprietary Tea bytecode.
https://en.wikipedia.org/wiki/Wine%20fraud
Wine fraud relates to the commercial aspects of wine. The most prevalent type of fraud is one where wines are adulterated, usually with the addition of cheaper products (e.g. juices) and sometimes with harmful chemicals and sweeteners (compensating for color or flavor). Counterfeiting and the relabelling of inferior and cheaper wines to more expensive brands is another common type of wine fraud. A third category of wine fraud relates to the investment wine industry. An example of this is when wines are offered to investors at excessively high prices by a company, who then go into planned liquidation. In some cases the wine is never bought for the investor. Losses in the UK have been high, prompting the Department of Trade and Industry and Police to act. In the US, investors have been duped by fraudulent investment wine firms. Independent guidelines to potential wine investors are now available. In wine production, as wine is technically defined as fermented grape juice, the term "wine fraud" can be used to describe the adulteration of wine by substances that are not related to grapes. In the retailing of wine, as wine is comparable with any other commodity, the term "wine fraud" can be used to describe the mis-selling of wine (either as an investment or in its deceitful misrepresentation) in general. Fraud in wine production refers to the use of additives in order to deceive. This may include coloring agents such as elderberry juice, and flavorings such as cinnamon at best, or less desirable additives at worst. Some varieties of wine have sought after characteristics. For example, some wines have a deep, dark color and flavor notes of spices due to the presence of various phenolic compounds found in the skin of the grapes. Fraudsters will use additives to artificially create these characteristics when they are lacking. Fraud in the selling of wine has seen much attention focused on label fraud and the investment wine market. Counterfeit labelling of rare, expen
https://en.wikipedia.org/wiki/Lumen%20%28anatomy%29
In biology, a lumen (plural: lumina) is the inside space of a tubular structure, such as an artery or intestine. The term comes from the Latin lumen, meaning 'an opening'. It can refer to: the interior of a vessel, such as the central space in an artery, vein or capillary through which blood flows; the interior of the gastrointestinal tract; the pathways of the bronchi in the lungs; the interior of renal tubules and urinary collecting ducts; the pathways of the female genital tract, starting with a single pathway of the vagina, splitting up into two lumina in the uterus, both of which continue through the fallopian tubes. In cell biology, a lumen is a membrane-defined space that is found inside several organelles, cellular components, or structures, including thylakoid, endoplasmic reticulum, Golgi apparatus, lysosome, mitochondrion, and microtubule. Transluminal procedures Transluminal procedures are procedures occurring through lumina, including: natural orifice transluminal endoscopic surgery in the lumina of, for example, the stomach, vagina, bladder, or colon; procedures through the lumina of blood vessels, such as various interventional radiology procedures: percutaneous transluminal angioplasty; percutaneous transluminal commissurotomy. See also Foramen, any anatomical opening
https://en.wikipedia.org/wiki/Structured%20text
Structured text, abbreviated as ST or STX, is one of the five languages supported by the IEC 61131-3 standard, designed for programmable logic controllers (PLCs). It is a high-level language that is block structured and syntactically resembles Pascal, on which it is based. All of the languages share IEC 61131 Common Elements. The variables and function calls are defined by the common elements, so different languages within the IEC 61131-3 standard can be used in the same program. Complex statements and nested instructions are supported: iteration loops (REPEAT-UNTIL; WHILE-DO), conditional execution (IF-THEN-ELSE; CASE), and functions (SQRT(), SIN()).

Sample program

    (* simple state machine *)
    TxtState := STATES[StateMachine];

    CASE StateMachine OF
       1: ClosingValve();
          StateMachine := 2;
       2: OpeningValve();
    ELSE
       BadCase();
    END_CASE;

Unlike in some other programming languages, there is no fallthrough for the CASE statement: the first matching condition is entered, and after running its statements, the CASE block is left without checking other conditions.

Additional ST programming examples

    // PLC configuration
    CONFIGURATION DefaultCfg
      VAR_GLOBAL
        b_Start_Stop : BOOL;          // Global variable to represent a boolean.
        b_ON_OFF     : BOOL;          // Global variable to represent a boolean.
        Start_Stop AT %IX0.0 : BOOL;  // Digital input of the PLC (Address 0.0)
        ON_OFF     AT %QX0.0 : BOOL;  // Digital output of the PLC (Address 0.0). (Coil)
      END_VAR

      // Schedule the main program to be executed every 20 ms
      TASK Tick(INTERVAL := t#20ms);
      PROGRAM Main WITH Tick : Monitor_Start_Stop;
    END_CONFIGURATION

    PROGRAM Monitor_Start_Stop      // Actual Program
      VAR_EXTERNAL
        Start_Stop : BOOL;
        ON_OFF     : BOOL;
      END_VAR
      VAR                           // Temporary variables for logic handling
        ONS_Trig   : BOOL;
        Rising_ONS : BOOL;
      END_VAR

      // Start of Logic
https://en.wikipedia.org/wiki/Inferior%20epigastric%20artery
In human anatomy, the inferior epigastric artery is an artery that arises from the external iliac artery. It is accompanied by the inferior epigastric vein; inferiorly, these two inferior epigastric vessels together travel within the lateral umbilical fold (which represents the lateral border of Hesselbach's triangle, the area through which direct inguinal hernias protrude.) The inferior epigastric artery then traverses the arcuate line of rectus sheath to enter the rectus sheath, then anastomoses with the superior epigastric artery within the rectus sheath. Structure Origin The inferior epigastric artery arises from the external iliac artery, immediately superior to the inguinal ligament. Course and relations It curves forward in the subperitoneal tissue, and then ascends obliquely along the medial margin of the abdominal inguinal ring; continuing its course upward, it pierces the transversalis fascia, and, passing in front of the linea semicircularis, ascends between the rectus abdominis muscle and the posterior lamella of its sheath. It finally divides into numerous branches, which anastomose, above the umbilicus, with the superior epigastric branch of the internal thoracic artery and with the lower intercostal arteries. As the inferior epigastric artery passes obliquely upward from its origin it lies along the lower and medial margins of the abdominal inguinal ring, and behind the commencement of the spermatic cord. The vas deferens, as it leaves the spermatic cord in the male, and the round ligament of the uterus in the female, winds around the lateral and posterior aspects of the artery. Anastomoses It anastomoses with the superior epigastric artery. Clinical significance Hernia The inferior epigastric artery may lie close to an inguinal hernia, so acts as a useful landmark. Surgery The inferior epigastric artery may be damaged during laparoscopic surgery. It may also be damaged when manually finding the peritoneum beneath the rectus abdominis m
https://en.wikipedia.org/wiki/IEC%2061131
IEC 61131 is an IEC standard for programmable controllers. It was first published in 1993; the current (third) edition dates from 2013. It was known as IEC 1131 before the change in numbering system by IEC. The parts of the IEC 61131 standard are prepared and maintained by working group 7, programmable control systems, of subcommittee SC 65B of Technical Committee TC65 of the IEC. Sections of IEC 61131 Standard IEC 61131 is divided into several parts: Part 1: General information. It is the introductory chapter; it contains definitions of terms that are used in the subsequent parts of the standard and outlines the main functional properties and characteristics of PLCs. Part 2: Equipment requirements and tests - establishes the requirements and associated tests for programmable controllers and their peripherals. This standard prescribes: the normal service conditions and requirements (for example, requirements related with climatic conditions, transport and storage, electrical service, etc.); functional requirements (power supply & memory, digital and analog I/Os); functional type tests and verification (requirements and tests on environmental, vibration, drop, free fall, I/O, power ports, etc.) and electromagnetic compatibility (EMC) requirements and tests that programmable controllers must implement. This standard can serve as a basis in the evaluation of safety programmable controllers to IEC 61508. Part 3: Programming languages Part 4: User guidelines Part 5: Communications Part 6: Functional safety Part 7: Fuzzy control programming Part 8: Guidelines for the application and implementation of programming languages Part 9: Single-drop digital communication interface for small sensors and actuators (SDCI, marketed as IO-Link) Part 10: PLC open XML exchange format for the export and import of IEC 61131-3 projects Related standards IEC 61499 Function Block PLCopen has developed several standards and working groups. TC1 - Standards TC2 - Functions TC3
https://en.wikipedia.org/wiki/Dynamic%20Data%20Driven%20Applications%20Systems
Dynamic Data Driven Applications Systems (DDDAS) is a paradigm whereby the computation and instrumentation aspects of an application system are dynamically integrated in a feedback control loop, in the sense that instrumentation data can be dynamically incorporated into the executing model of the application, and in reverse the executing model can control the instrumentation. Such approaches have been shown to enable more accurate and faster modeling and analysis of the characteristics and behaviors of a system, and can exploit data in intelligent ways to convert it into new capabilities, including decision support systems with the accuracy of full-scale modeling, efficient data collection, management, and data mining. The DDDAS concept - and the term - was proposed by Frederica Darema for the National Science Foundation (NSF) workshop in March 2000. There are several affiliated annual meetings and conferences, including: the DDDAS workshop at ICCS (since 2003); the DyDESS conference and workshop at MIT, organized by Sai Ravela and Adrian Sandu; the DDDAS special session at the ACC, organized by Puneet Singla, Dennis Bernstein and Sai Ravela; the DDDAS special session at Information Fusion; DDDAS 2016 at Hartford, the first full-fledged conference, hosted and sponsored by MIT with some support from UTRC; DDDAS 2017 at MIT, the second conference hosted and managed by MIT; DDDAS 2020 Online, the third conference hosted by MIT; and DDDAS 2022 at MIT, the fourth conference hosted by MIT together with CLEPS22. As time progressed, it was suggested by Dr. Ravela that DDDAS grow into its own conference, adding workshops on special subjects. The first full-fledged but environmentally-focussed DDDAS conference was DyDESS, held at MIT, and the community has not looked back since. MIT sponsored and set up the DyDESS conference, and continues to be the host and event organizer through its Earth, Atmospheric and Planetary Sciences department.
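As a rough illustration of the feedback loop described above (a toy Python sketch, not drawn from any DDDAS reference implementation; the sensor, model, and steering functions are hypothetical):

    import random

    def read_sensor(location):
        """Hypothetical instrument: returns a noisy measurement near the given location."""
        return 20.0 + 5.0 * location + random.gauss(0.0, 0.5)

    class SimpleModel:
        """Toy executing model: tracks an estimate and the location it is least certain about."""
        def __init__(self):
            self.estimate = 0.0
            self.uncertain_location = 0.5

        def assimilate(self, location, measurement):
            # Instrumentation data is dynamically incorporated into the executing model.
            self.estimate = 0.8 * self.estimate + 0.2 * measurement
            self.uncertain_location = (location + 0.3) % 1.0  # toy update

        def steer_instrument(self):
            # In reverse, the executing model controls where the instrument measures next.
            return self.uncertain_location

    model = SimpleModel()
    location = 0.0
    for step in range(10):
        data = read_sensor(location)          # measure
        model.assimilate(location, data)      # data -> model
        location = model.steer_instrument()   # model -> instrumentation
        print(f"step {step}: estimate={model.estimate:.2f}, next location={location:.2f}")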
https://en.wikipedia.org/wiki/Actor%20model%20and%20process%20calculi%20history
The actor model and process calculi share an interesting history and co-evolution. Early work The Actor model, first published in 1973, is a mathematical model of concurrent computation. The Actor model treats "Actors" as the universal primitives of concurrent digital computation: in response to a message that it receives, an Actor can make local decisions, create more Actors, send more messages, and determine how to respond to the next message received. As opposed to the previous approach based on composing sequential processes, the Actor model was developed as an inherently concurrent model. In the Actor model, sequentiality was a special case that derived from concurrent computation, as explained in Actor model theory. Robin Milner's initial published work on concurrency from the same year was also notable in that it positioned the mathematical semantics of communicating processes as a framework to understand a variety of interacting agents, including the computer's interaction with memory. The framework of modelling was based on Scott's model of domains and as such was not based on sequential processes. His work differed from the Actor model in the following ways: there is a fixed number of processes, as opposed to the Actor model, which allows the number of Actors to vary dynamically; the only quantities that can be passed in messages are integers and strings, as opposed to the Actor model, which allows the addresses of Actors to be passed in messages; the processes have a fixed topology, as opposed to the Actor model, which allows varying topology; communication is synchronous, as opposed to the Actor model, in which an unbounded time can elapse between sending and receiving a message; and the semantics provided bounded nondeterminism, unlike the Actor model with unbounded nondeterminism. However, with bounded nondeterminism it is impossible for a server to guarantee service to its clients, i.e., a client might starve. Milner later removed some of these restrictions in his
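A minimal sketch of the Actor primitives described above (an illustrative Python toy, not taken from any particular Actor implementation):

    import queue
    import threading

    class Actor:
        """Toy actor: processes one message at a time from its mailbox."""
        def __init__(self):
            self.mailbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, message):
            self.mailbox.put(message)        # asynchronous message send

        def _run(self):
            while True:
                message = self.mailbox.get()
                self.receive(message)        # local decision in response to a message

        def receive(self, message):
            raise NotImplementedError

    class Counter(Actor):
        """In response to a message, make a local decision and possibly send a reply."""
        def __init__(self):
            self.count = 0                   # private state, never shared directly
            super().__init__()

        def receive(self, message):
            kind, reply_to = message
            if kind == "increment":
                self.count += 1
            elif kind == "read":
                reply_to.put(self.count)

    counter = Counter()
    counter.send(("increment", None))
    counter.send(("increment", None))
    reply = queue.Queue()
    counter.send(("read", reply))
    print(reply.get())   # prints 2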
https://en.wikipedia.org/wiki/Coordinate%20time
In the theory of relativity, it is convenient to express results in terms of a spacetime coordinate system relative to an implied observer. In many (but not all) coordinate systems, an event is specified by one time coordinate and three spatial coordinates. The time specified by the time coordinate is referred to as coordinate time to distinguish it from proper time. In the special case of an inertial observer in special relativity, by convention the coordinate time at an event is the same as the proper time measured by a clock that is at the same location as the event, that is stationary relative to the observer and that has been synchronised to the observer's clock using the Einstein synchronisation convention. Coordinate time, proper time, and clock synchronization A fuller explanation of the concept of coordinate time arises from its relations with proper time and with clock synchronization. Synchronization, along with the related concept of simultaneity, has to receive careful definition in the framework of general relativity theory, because many of the assumptions inherent in classical mechanics and classical accounts of space and time had to be removed. Specific clock synchronization procedures were defined by Einstein and give rise to a limited concept of simultaneity. Two events are called simultaneous in a chosen reference frame if and only if the chosen coordinate time has the same value for both of them; and this condition allows for the physical possibility and likelihood that they will not be simultaneous from the standpoint of another reference frame. But outside special relativity, the coordinate time is not a time that could be measured by a clock located at the place that nominally defines the reference frame, e.g. a clock located at the solar system barycenter would not measure the coordinate time of the barycentric reference frame, and a clock located at the geocenter would not measure the coordinate time of a geocentric reference fra
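As a worked illustration of the distinction drawn above (the standard special-relativity relation, not specific to this article's sources): for an inertial observer using coordinate time t, a clock moving with speed v records proper time τ according to

    d\tau = dt\,\sqrt{1 - v^2/c^2},

so a clock at rest relative to the observer (v = 0) has dτ = dt, which matches the convention stated above that coordinate time agrees with the proper time of suitably synchronised stationary clocks.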
https://en.wikipedia.org/wiki/Surface-enhanced%20Raman%20spectroscopy
Surface-enhanced Raman spectroscopy or surface-enhanced Raman scattering (SERS) is a surface-sensitive technique that enhances Raman scattering by molecules adsorbed on rough metal surfaces or by nanostructures such as plasmonic-magnetic silica nanotubes. The enhancement factor can be as much as 10^10 to 10^11, which means the technique may detect single molecules. History SERS from pyridine adsorbed on electrochemically roughened silver was first observed by Martin Fleischmann, Patrick J. Hendra and A. James McQuillan at the Department of Chemistry at the University of Southampton, UK in 1973. This initial publication has been cited over 6000 times. The 40th Anniversary of the first observation of the SERS effect has been marked by the Royal Society of Chemistry by the award of a National Chemical Landmark plaque to the University of Southampton. In 1977, two groups independently noted that the concentration of scattering species could not account for the enhanced signal and each proposed a mechanism for the observed enhancement. Their theories are still accepted as explaining the SERS effect. Jeanmaire and Richard Van Duyne proposed an electromagnetic effect, while Albrecht and Creighton proposed a charge-transfer effect. Rufus Ritchie, of Oak Ridge National Laboratory's Health Sciences Research Division, predicted the existence of the surface plasmon. Mechanisms The exact mechanism of the enhancement effect of SERS is still a matter of debate in the literature. There are two primary theories and while their mechanisms differ substantially, distinguishing them experimentally has not been straightforward. The electromagnetic theory proposes the excitation of localized surface plasmons, while the chemical theory proposes the formation of charge-transfer complexes. The chemical theory is based on resonance Raman spectroscopy, in which the frequency coincidence (or resonance) of the incident photon energy and electron transition greatly enhances Raman scattering inten
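For the electromagnetic mechanism discussed above, the enhancement is commonly approximated by the fourth power of the local field enhancement (a standard textbook approximation, not a result specific to the works cited here):

    EF(\omega) \;\approx\; \frac{|E_{loc}(\omega_{inc})|^{2}\,|E_{loc}(\omega_{scat})|^{2}}{|E_{0}|^{4}} \;\approx\; \left|\frac{E_{loc}}{E_{0}}\right|^{4},

where E_loc is the local field at the molecule, E_0 is the incident field, and the last step assumes the incident and scattered frequencies are close (small Raman shift).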
https://en.wikipedia.org/wiki/Microbial%20corrosion
Microbial corrosion, also called microbiologically influenced corrosion (MIC), microbially induced corrosion (MIC) or biocorrosion, is "corrosion affected by the presence or activity (or both) of microorganisms in biofilms on the surface of the corroding material." This corroding material can be either a metal (such as steel or aluminum alloys) or a nonmetal (such as concrete or glass). Bacteria Some sulfate-reducing bacteria produce hydrogen sulfide, which can cause sulfide stress cracking. Acidithiobacillus bacteria produce sulfuric acid; Acidothiobacillus thiooxidans frequently damages sewer pipes. Ferrobacillus ferrooxidans directly oxidizes iron to iron oxides and iron hydroxides; the rusticles forming on the RMS Titanic wreck are caused by bacterial activity. Other bacteria produce various acids, both organic and mineral, or ammonia. In presence of oxygen, aerobic bacteria like Acidithiobacillus thiooxidans, Thiobacillus thioparus, and Thiobacillus concretivorus, all three widely present in the environment, are the common corrosion-causing factors resulting in biogenic sulfide corrosion. Without presence of oxygen, anaerobic bacteria, especially Desulfovibrio and Desulfotomaculum, are common. Desulfovibrio salixigens requires at least 2.5% concentration of sodium chloride, but D. vulgaris and D. desulfuricans can grow in both fresh and salt water. D. africanus is another common corrosion-causing microorganism. The genus Desulfotomaculum comprises sulfate-reducing spore-forming bacteria; Dtm. orientis and Dtm. nigrificans are involved in corrosion processes. Sulfate-reducers require a reducing environment; an electrode potential lower than -100 mV is required for them to thrive. However, even a small amount of produced hydrogen sulfide can achieve this shift, so the growth, once started, tends to accelerate. Layers of anaerobic bacteria can exist in the inner parts of the corrosion deposits, while the outer parts are inhabited by aerobic bacteria. Some ba
https://en.wikipedia.org/wiki/Animal%20tooth%20development
Tooth development or odontogenesis is the process in which teeth develop and grow into the mouth. Tooth development varies among species. Tooth development in vertebrates Fish In fish, Hox gene expression regulates mechanisms for tooth initiation. However, sharks continuously produce new teeth throughout their lives via a drastically different mechanism. Shark teeth form from modified scales near the tongue and move outward on the jaw in rows until they are eventually dislodged. Their scales, called dermal denticles, and teeth are homologous organs. Mammals Generally, tooth development in non-human mammals is similar to human tooth development. The variations usually lie in the morphology, number, development timeline, and types of teeth. However, some mammals' teeth do develop differently from humans'. In mice, WNT signals are required for the initiation of tooth development. Rodents' teeth continually grow, forcing them to wear down their teeth by gnawing on various materials. If rodents are prevented from gnawing, their teeth eventually puncture the roofs of their mouths. In addition, rodent incisors consist of two halves, known as the crown and root analogues. The labial half is made of enamel and resembles a crown, while the lingual half is made of dentin and resembles a root. The mineral distribution in rodent enamel is different from that of monkeys, dogs, pigs, and humans. In horse teeth, enamel and dentin layers are intertwined, which increases the strength and decreases the wear rate of the teeth. Contrary to popular belief, horse teeth do not "grow" indefinitely. Rather, existing teeth erupt from below the gumline. Horses start to "run out" of erupting tooth in their early 30s, and in the rare case that they live long enough, the roots of their teeth will fall out completely in the middle to latter part of their fourth decade. In manatees, mandibular molars develop separately from the jaw and are encased in a bony shell separated by soft tissue. This a
https://en.wikipedia.org/wiki/Equiconsistency
In mathematical logic, two theories are equiconsistent if the consistency of one theory implies the consistency of the other theory, and vice versa. In this case, they are, roughly speaking, "as consistent as each other". In general, it is not possible to prove the absolute consistency of a theory T. Instead we usually take a theory S, believed to be consistent, and try to prove the weaker statement that if S is consistent then T must also be consistent—if we can do this we say that T is consistent relative to S. If S is also consistent relative to T then we say that S and T are equiconsistent. Consistency In mathematical logic, formal theories are studied as mathematical objects. Since some theories are powerful enough to model different mathematical objects, it is natural to wonder about their own consistency. Hilbert proposed a program at the beginning of the 20th century whose ultimate goal was to show, using mathematical methods, the consistency of mathematics. Since most mathematical disciplines can be reduced to arithmetic, the program quickly became the establishment of the consistency of arithmetic by methods formalizable within arithmetic itself. Gödel's incompleteness theorems show that Hilbert's program cannot be realized: if a consistent recursively enumerable theory is strong enough to formalize its own metamathematics (whether something is a proof or not), i.e. strong enough to model a weak fragment of arithmetic (Robinson arithmetic suffices), then the theory cannot prove its own consistency. There are some technical caveats as to what requirements the formal statement representing the metamathematical statement "The theory is consistent" needs to satisfy, but the outcome is that if a (sufficiently strong) theory can prove its own consistency then either there is no computable way of identifying whether a statement is even an axiom of the theory or not, or else the theory itself is inconsistent (in which case it can prove anything, includin
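One common way to make the informal definition above precise (a sketch; conventions vary on the choice of base theory) is to fix a weak base theory B, such as primitive recursive arithmetic, and require that both relative consistency statements be provable in B:

    S \text{ and } T \text{ are equiconsistent} \iff B \vdash \mathrm{Con}(S) \rightarrow \mathrm{Con}(T) \ \text{ and } \ B \vdash \mathrm{Con}(T) \rightarrow \mathrm{Con}(S),

where Con(S) is the arithmetized statement that no contradiction is derivable from S.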
https://en.wikipedia.org/wiki/Atomic%20packing%20factor
In crystallography, atomic packing factor (APF), packing efficiency, or packing fraction is the fraction of volume in a crystal structure that is occupied by constituent particles. It is a dimensionless quantity and is always less than unity. In atomic systems, by convention, the APF is determined by assuming that atoms are rigid spheres. The radius of the spheres is taken to be the maximum value such that the atoms do not overlap. For one-component crystals (those that contain only one type of particle), the packing fraction is represented mathematically by APF = (Nparticle × Vparticle) / Vunit cell, where Nparticle is the number of particles in the unit cell, Vparticle is the volume of each particle, and Vunit cell is the volume occupied by the unit cell. It can be proven mathematically that for one-component structures, the most dense arrangement of atoms has an APF of about 0.74 (see Kepler conjecture), obtained by the close-packed structures. For multiple-component structures (such as with interstitial alloys), the APF can exceed 0.74. The atomic packing factor of a unit cell is relevant to the study of materials science, where it explains many properties of materials. For example, metals with a high atomic packing factor will have a higher "workability" (malleability or ductility), similar to how a road is smoother when the stones are closer together, allowing metal atoms to slide past one another more easily. Single component crystal structures Common sphere packings taken on by atomic systems are listed below with their corresponding packing fraction. Hexagonal close-packed (HCP): 0.74 Face-centered cubic (FCC): 0.74 (also called cubic close-packed, CCP) Body-centered cubic (BCC): 0.68 Simple cubic: 0.52 Diamond cubic: 0.34 The majority of metals take on either the HCP, FCC, or BCC structure. Simple cubic For a simple cubic packing, the number of atoms per unit cell is one. The side of the unit cell is of length 2r, where r is the radius of the atom. Face-centered cubic For a face-centered
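A worked example for the simple cubic case described above (standard geometry, with one atom of radius r per cubic unit cell of side a = 2r):

    \mathrm{APF}_{sc} = \frac{N\,V_{atom}}{V_{cell}} = \frac{1 \cdot \tfrac{4}{3}\pi r^{3}}{(2r)^{3}} = \frac{\pi}{6} \approx 0.52,

which matches the value listed for the simple cubic structure. The same computation with 4 atoms per cell and a = 2\sqrt{2}\,r gives the FCC value \pi/(3\sqrt{2}) \approx 0.74.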
https://en.wikipedia.org/wiki/Aerotolerant%20anaerobe
Aerotolerant anaerobes use fermentation to produce ATP. They do not use oxygen, but they can protect themselves from reactive oxygen molecules. In contrast, obligate anaerobes can be harmed by reactive oxygen molecules. There are three categories of anaerobes. Where obligate aerobes require oxygen to grow, obligate anaerobes are damaged by oxygen, aerotolerant organisms cannot use oxygen but tolerate its presence, and facultative anaerobes use oxygen if it is present but can grow without it. Most aerotolerant anaerobes have superoxide dismutase and (non-catalase) peroxidase but don't have catalase. More specifically, they may use a NADH oxidase/NADH peroxidase (NOX/NPR) system or a glutathione peroxidase system. An example of an aerotolerant anaerobe is Cutibacterium acnes.
https://en.wikipedia.org/wiki/Malthusian%20growth%20model
A Malthusian growth model, sometimes called a simple exponential growth model, is essentially exponential growth based on the idea that the rate at which the function grows is proportional to the function's current value. The model is named after Thomas Robert Malthus, who wrote An Essay on the Principle of Population (1798), one of the earliest and most influential books on population. Malthusian models have the following form: P(t) = P0·e^(rt), where P0 = P(0) is the initial population size, r = the population growth rate, which Ronald Fisher called the Malthusian parameter of population growth in The Genetical Theory of Natural Selection, and Alfred J. Lotka called the intrinsic rate of increase, and t = time. The model can also be written in the form of a differential equation, dP/dt = rP, with initial condition P(0) = P0. This model is often referred to as the exponential law. It is widely regarded in the field of population ecology as the first principle of population dynamics, with Malthus as the founder. The exponential law is therefore also sometimes referred to as the Malthusian Law. It is now a widely accepted view to analogize Malthusian growth in ecology to Newton's first law of uniform motion in physics. Malthus wrote that all life forms, including humans, have a propensity to exponential population growth when resources are abundant but that actual growth is limited by available resources: A model of population growth bounded by resource limitations was developed by Pierre François Verhulst in 1838, after he had read Malthus' essay. Verhulst named the model a logistic function. See also Albert Allen Bartlett – a leading proponent of the Malthusian Growth Model Exogenous growth model – related growth model from economics Growth theory – related ideas from economics Human overpopulation Irruptive growth – an extension of the Malthusian model accounting for population explosions and crashes Malthusian catastrophe Neo-malthusianism The Genetical Theory of Natural Selection
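The two forms given above are connected by solving the differential equation by separation of variables (a standard one-line derivation):

    \frac{dP}{dt} = rP \;\Rightarrow\; \int \frac{dP}{P} = \int r\,dt \;\Rightarrow\; \ln P = rt + C \;\Rightarrow\; P(t) = P_0 e^{rt},

using the initial condition P(0) = P_0 to fix the constant of integration.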
https://en.wikipedia.org/wiki/Science%2C%20technology%2C%20engineering%2C%20and%20mathematics
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. Terminology History Previously referred to as SMET by the NSF, the acronym STEM was used in the early 1990s by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
https://en.wikipedia.org/wiki/Forbidden%20mechanism
In spectroscopy, a forbidden mechanism (forbidden transition or forbidden line) is a spectral line associated with absorption or emission of photons by atomic nuclei, atoms, or molecules which undergo a transition that is not allowed by a particular selection rule but is allowed if the approximation associated with that rule is not made. For example, the process may be impossible according to the usual approximations (such as the electric dipole approximation for the interaction with light), but allowed, at a low rate, at a higher level of approximation (e.g. magnetic dipole, or electric quadrupole). An example is phosphorescent glow-in-the-dark materials, which absorb light and form an excited state whose decay involves a spin flip and is therefore forbidden as an electric dipole transition. The result is emission of light slowly over minutes or hours. Should an atomic nucleus, atom or molecule be raised to an excited state and should the transitions be nominally forbidden, then there is still a small probability of their spontaneous occurrence. More precisely, there is a certain probability that such an excited entity will make a forbidden transition to a lower energy state per unit time; by definition, this probability is much lower than that for any transition permitted or allowed by the selection rules. Therefore, if a state can de-excite via a permitted transition (or otherwise, e.g. via collisions) it will almost certainly do so before any transition occurs via a forbidden route. Nevertheless, most forbidden transitions are only relatively unlikely: states that can only decay in this way (so-called meta-stable states) usually have lifetimes on the order of milliseconds to seconds, compared to less than a microsecond for decay via permitted transitions. In some radioactive decay systems, multiple levels of forbiddenness can stretch lifetimes by many orders of magnitude for each additional unit by which the system changes beyond
https://en.wikipedia.org/wiki/Armodafinil
Armodafinil (trade name Nuvigil) is the enantiopure compound of the eugeroic modafinil (Provigil). It consists of only the (R)-(−)-enantiomer of the racemic modafinil. Armodafinil is produced by the pharmaceutical company Cephalon Inc. and was approved by the U.S. Food and Drug Administration (FDA) in June 2007. In 2016, the FDA granted Mylan rights for the first generic version of Cephalon's Nuvigil to be marketed in the U.S. Because armodafinil has a longer half-life than modafinil does, it may be more effective at improving wakefulness in patients with excessive daytime sleepiness. Medical uses Armodafinil is currently FDA-approved to treat excessive daytime sleepiness associated with obstructive sleep apnea, narcolepsy, and shift work disorder. It is commonly used off-label to treat attention deficit hyperactivity disorder, chronic fatigue syndrome, and major depressive disorder, and has been repurposed as an adjunctive treatment for bipolar disorder. It has been shown to improve vigilance in air traffic controllers; however, in the United States, sleep prevention medications such as Provigil (modafinil) and Nuvigil (armodafinil) are not approved by the FAA for civilian controllers or pilots. Psychiatry Bipolar disorder Armodafinil, along with racemic modafinil, has been repurposed as an adjunctive treatment for acute depression in people with bipolar disorder. Meta-analytic evidence showed that add-on modafinil and armodafinil were more effective than placebo on response to treatment, clinical remission, and reduction in depressive symptoms, with only minor side effects, but the effect sizes are small and the quality of evidence has to be considered low, limiting the clinical relevance of current evidence. However, the dosage currently used for bipolar disorder is 150 mg once daily. Paradoxical tiredness and sleepiness have also been observed in some cases. Schizophrenia In June 2010, it was revealed that a phase II study of armodafinil as an adjunctive therapy in adults with schizo
https://en.wikipedia.org/wiki/In-phase%20and%20quadrature%20components
A sinusoid with modulation can be decomposed into, or synthesized from, two amplitude-modulated sinusoids that are in quadrature phase, i.e., with a phase offset of one-quarter cycle (90 degrees or π/2 radians). All three sinusoids have the same center frequency. The two amplitude-modulated sinusoids are known as the in-phase (I) and quadrature (Q) components, which describe their relationship with the amplitude- and phase-modulated carrier. Or in other words, it is possible to create an arbitrarily phase-shifted sine wave by mixing together two sine waves that are 90° out of phase in different proportions. The implication is that the modulations in some signal can be treated separately from the carrier wave of the signal. This has extensive use in many radio and signal processing applications. I/Q data is used to represent the modulations of some carrier, independent of that carrier's frequency. Orthogonality In vector analysis, a vector with polar coordinates and Cartesian coordinates can be represented as the sum of orthogonal components: Similarly in trigonometry, the angle sum identity expresses: And in functional analysis, when is a linear function of some variable, such as time, these components are sinusoids, and they are orthogonal functions. A phase-shift of changes the identity to: , in which case is the in-phase component. In both conventions is the in-phase amplitude modulation, which explains why some authors refer to it as the actual in-phase component. Narrowband signal model In an angle modulation application, with carrier frequency φ is also a time-variant function, giving: When all three terms above are multiplied by an optional amplitude function, the left-hand side of the equality is known as the amplitude/phase form, and the right-hand side is the quadrature-carrier or IQ form. Because of the modulation, the components are no longer completely orthogonal functions. But when and are slowly varying functions compared
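The decomposition referred to above can be written explicitly (standard notation; the symbols A, φ, and f are generic and not necessarily those of the original article):

    A\cos(2\pi f t + \varphi) = \underbrace{A\cos\varphi}_{I}\,\cos(2\pi f t) \;-\; \underbrace{A\sin\varphi}_{Q}\,\sin(2\pi f t),

so the in-phase component I = A cos φ multiplies the carrier cos(2πft) and the quadrature component Q = A sin φ multiplies the quarter-cycle-shifted carrier −sin(2πft). This follows directly from the angle-sum identity cos(x + φ) = cos φ cos x − sin φ sin x.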
https://en.wikipedia.org/wiki/Treatment-resistant%20depression
Treatment-resistant depression is a term used in psychiatry to describe people with major depressive disorder (MDD) who do not respond adequately to a course of appropriate antidepressant medication within a certain time. Definitions of treatment-resistant depression vary, and they do not include a resistance to psychological therapies. Inadequate response has most commonly been defined as less than 50% reduction in depressive symptoms following treatment with at least one antidepressant medication, although definitions vary widely. Some factors that contribute to inadequate treatment are: a history of repeated or severe adverse childhood experiences, early discontinuation of treatment, insufficient dosage of medication, patient noncompliance, misdiagnosis, cognitive impairment, low income and other socio-economic variables, and concurrent medical conditions, including comorbid psychiatric disorders. Cases of treatment-resistant depression may also be referred to by which medications people with treatment-resistant depression are resistant to (e.g.: SSRI-resistant). In treatment-resistant depression adding further treatments such as psychotherapy, lithium, or aripiprazole is weakly supported as of 2019. Risk factors Comorbid psychiatric disorders Comorbid psychiatric disorders commonly go undetected in the treatment of depression. If left untreated, the symptoms of these disorders can interfere with both evaluation and treatment. Anxiety disorders are one of the most common disorder types associated with treatment-resistant depression. The two disorders commonly co-exist, and have some similar symptoms. Some studies have shown that patients with both MDD and panic disorder are the most likely to be nonresponsive to treatment. Substance abuse may also be a predictor of treatment-resistant depression. It may cause depressed patients to be noncompliant in their treatment, and the effects of certain substances can worsen the effects of depression. Other psychiatri
https://en.wikipedia.org/wiki/Common%20Base%20Event
Common Base Event (CBE) is an IBM implementation of the Web Services Distributed Management (WSDM) Event Format standard. IBM also implemented the Common Event Infrastructure, a unified set of APIs and infrastructure for the creation, transmission, persistence and distribution of a wide range of business, system and network Common Base Event formatted events. External links Understanding Common Base Events Specification V1.0.1, IBM. Common Base Events Best Practices, IBM.
https://en.wikipedia.org/wiki/D-module
In mathematics, a D-module is a module over a ring D of differential operators. The major interest of such D-modules is as an approach to the theory of linear partial differential equations. Since around 1970, D-module theory has been built up, mainly as a response to the ideas of Mikio Sato on algebraic analysis, and expanding on the work of Sato and Joseph Bernstein on the Bernstein–Sato polynomial. Early major results were the Kashiwara constructibility theorem and Kashiwara index theorem of Masaki Kashiwara. The methods of D-module theory have always been drawn from sheaf theory and other techniques with inspiration from the work of Alexander Grothendieck in algebraic geometry. The approach is global in character, and differs from the functional analysis techniques traditionally used to study differential operators. The strongest results are obtained for over-determined systems (holonomic systems), and on the characteristic variety cut out by the symbols, which in the good case is a Lagrangian submanifold of the cotangent bundle of maximal dimension (involutive systems). The techniques were taken up from the side of the Grothendieck school by Zoghman Mebkhout, who obtained a general, derived category version of the Riemann–Hilbert correspondence in all dimensions. Introduction: modules over the Weyl algebra The first case of algebraic D-modules are modules over the Weyl algebra An(K) over a field K of characteristic zero. It is the algebra consisting of polynomials in the variables x1, ..., xn, ∂1, ..., ∂n, where the variables xi and ∂j separately commute with each other, xi and ∂j commute for i ≠ j, but the commutator satisfies the relation [∂i, xi] = ∂ixi − xi∂i = 1. For any polynomial f(x1, ..., xn), this implies the relation [∂i, f] = ∂f / ∂xi, thereby relating the Weyl algebra to differential equations. An (algebraic) D-module is, by definition, a left module over the ring An(K). Examples for D-modules include the Weyl algebra itself (acti
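A standard first example of the relation above in action (a textbook illustration, not specific to this excerpt): the polynomial ring K[x_1, ..., x_n] becomes a left module over A_n(K) by letting x_i act as multiplication and ∂_i act as the partial derivative,

    x_i \cdot f = x_i f, \qquad \partial_i \cdot f = \frac{\partial f}{\partial x_i},

and the commutation relation [\partial_i, x_i] = 1 is then just the product rule: \partial_i(x_i f) - x_i\,\partial_i f = f.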
https://en.wikipedia.org/wiki/Residue-class-wise%20affine%20group
In mathematics, specifically in group theory, residue-class-wise affine groups are certain permutation groups acting on ℤ (the integers), whose elements are bijective residue-class-wise affine mappings. A mapping f: ℤ → ℤ is called residue-class-wise affine if there is a nonzero integer m such that the restrictions of f to the residue classes (mod m) are all affine. This means that for any residue class r(m) there are coefficients a_r(m), b_r(m), c_r(m) such that the restriction of the mapping f to the set r(m) = {r + km : k ∈ ℤ} is given by n ↦ (a_r(m) · n + b_r(m)) / c_r(m). Residue-class-wise affine groups are countable, and they are accessible to computational investigations. Many of them act multiply transitively on ℤ or on subsets thereof. A particularly basic type of residue-class-wise affine permutations are the class transpositions: given disjoint residue classes r1(m1) and r2(m2), the corresponding class transposition is the permutation of ℤ which interchanges r1 + km1 and r2 + km2 for every integer k and which fixes everything else. Here it is assumed that 0 ≤ r1 < m1 and that 0 ≤ r2 < m2. The set of all class transpositions of ℤ generates a countable simple group which has the following properties: It is not finitely generated. Every finite group, every free product of finite groups and every free group of finite rank embeds into it. The class of its subgroups is closed under taking direct products, under taking wreath products with finite groups, and under taking restricted wreath products with the infinite cyclic group. It has finitely generated subgroups which do not have finite presentations. It has finitely generated subgroups with algorithmically unsolvable membership problem. It has an uncountable series of simple subgroups which is parametrized by the sets of odd primes. It is straightforward to generalize the notion of a residue-class-wise affine group to groups acting on suitable rings other than ℤ, though only little work in this direction has been done so far. See also the Collatz conjecture, which is an assertion about a surjective, but not injective residue-class-wise affine mapping
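As a concrete illustration of a residue-class-wise affine mapping (reconstructed under the standard conventions, since the article's own formulas are not preserved in this excerpt), the Collatz mapping mentioned at the end is affine on each residue class (mod 2):

    T(n) = \begin{cases} n/2, & n \equiv 0 \pmod 2,\\ (3n+1)/2, & n \equiv 1 \pmod 2, \end{cases}

i.e. on each residue class r(2) the map has the affine form n \mapsto (a\,n + b)/c with integer coefficients (a, b, c) = (1, 0, 2) and (3, 1, 2), respectively.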
https://en.wikipedia.org/wiki/Rate%20of%20return
In finance, return is a profit on an investment. It comprises any change in value of the investment, and/or cash flows (or securities, or other investments) which the investor receives from that investment over a specified time period, such as interest payments, coupons, cash dividends and stock dividends. It may be measured either in absolute terms (e.g., dollars) or as a percentage of the amount invested. The latter is also called the holding period return. A loss instead of a profit is described as a negative return, assuming the amount invested is greater than zero. To compare returns over time periods of different lengths on an equal basis, it is useful to convert each return into a return over a period of time of a standard length. The result of the conversion is called the rate of return. Typically, the period of time is a year, in which case the rate of return is also called the annualized return, and the conversion process, described below, is called annualization. The return on investment (ROI) is return per dollar invested. It is a measure of investment performance, as opposed to size (cf. return on equity, return on assets, return on capital employed). Calculation The return, or the holding period return, can be calculated over a single period. The single period may last any length of time. The overall period may, however, instead be divided into contiguous subperiods. This means that there is more than one time period, each sub-period beginning at the point in time where the previous one ended. In such a case, where there are multiple contiguous subperiods, the return or the holding period return over the overall period can be calculated by combining the returns within each of the subperiods. Single-period Return The direct method to calculate the return or the holding period return over a single period of any length of time is: R = (Vf − Vi) / Vi, where Vf = final value, including dividends and interest, and Vi = initial value. For example, if someone purchases 100 s
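A short worked example of the single-period formula above (hypothetical numbers, chosen only for illustration): if the initial value is V_i = 100 and the final value including income is V_f = 112, then

    R = \frac{V_f - V_i}{V_i} = \frac{112 - 100}{100} = 0.12 = 12\%,

and annualizing a return of this kind over a holding period of t years uses (1 + R)^{1/t} - 1.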
https://en.wikipedia.org/wiki/Tolman%20length
The Tolman length (also known as Tolman's delta) measures the extent by which the surface tension of a small liquid drop deviates from its planar value. It is conveniently defined in terms of an expansion in , with the equimolar radius (defined below) of the liquid drop, of the pressure difference across the droplet's surface: (1) In this expression, is the pressure difference between the (bulk) pressure of the liquid inside and the pressure of the vapour outside, and is the surface tension of the planar interface, i.e. the interface with zero curvature . The Tolman length is thus defined as the leading order correction in an expansion in . The equimolar radius is defined so that the superficial density is zero, i.e., it is defined by imagining a sharp mathematical dividing surface with a uniform internal and external density, but where the total mass of the pure fluid is exactly equal to the real situation. At the atomic scale in a real drop, the surface is not sharp, rather the density gradually drops to zero, and the Tolman length captures the fact that the idealized equimolar surface does not necessarily coincide with the idealized tension surface. Another way to define the Tolman length is to consider the radius dependence of the surface tension, . To leading order in one has: (2) Here denotes the surface tension (or (excess) surface free energy) of a liquid drop with radius , whereas denotes its value in the planar limit. In both definitions (1) and (2) the Tolman length is defined as a coefficient in an expansion in and therefore does not depend on . Furthermore, the Tolman length can be related to the radius of spontaneous curvature when one compares the free energy method of Helfrich with the method of Tolman: Any result for the Tolman length therefore gives information about the radius of spontaneous curvature, . If the Tolman length is known to be positive (with ) the interface tends to curve towards the liquid ph
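The expansions referred to as (1) and (2) above are not reproduced in this excerpt; in the usual formulation they read (a reconstruction under standard conventions, with R_e the equimolar radius, \sigma_\infty the planar surface tension, and \delta the Tolman length):

    (1)\quad \Delta p = \frac{2\sigma_\infty}{R_e}\left(1 - \frac{\delta}{R_e} + \mathcal{O}(R_e^{-2})\right),
    (2)\quad \sigma(R_e) = \sigma_\infty\left(1 - \frac{2\delta}{R_e} + \mathcal{O}(R_e^{-2})\right),

so in both cases the Tolman length appears as the coefficient of the leading-order correction in 1/R_e.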
https://en.wikipedia.org/wiki/Coalescent%20theory
Coalescent theory is a model of how alleles sampled from a population may have originated from a common ancestor. In the simplest case, coalescent theory assumes no recombination, no natural selection, and no gene flow or population structure, meaning that each variant is equally likely to have been passed from one generation to the next. The model looks backward in time, merging alleles into a single ancestral copy according to a random process in coalescence events. Under this model, the expected time between successive coalescence events increases almost exponentially back in time (with wide variance). Variance in the model comes from both the random passing of alleles from one generation to the next, and the random occurrence of mutations in these alleles. The mathematical theory of the coalescent was developed independently by several groups in the early 1980s as a natural extension of classical population genetics theory and models, but can be primarily attributed to John Kingman. Advances in coalescent theory include recombination, selection, overlapping generations and virtually any arbitrarily complex evolutionary or demographic model in population genetic analysis. The model can be used to produce many theoretical genealogies, and then compare observed data to these simulations to test assumptions about the demographic history of a population. Coalescent theory can be used to make inferences about population genetic parameters, such as migration, population size and recombination. Theory Time to coalescence Consider a single gene locus sampled from two haploid individuals in a population. The ancestry of this sample is traced backwards in time to the point where these two lineages coalesce in their most recent common ancestor (MRCA). Coalescent theory seeks to estimate the expectation of this time period and its variance. The probability that two lineages coalesce in the immediately preceding generation is the probability that they share a parental D
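The calculation sketched above can be completed as follows (standard Wright–Fisher/coalescent results, using the common convention of 2N_e gene copies in the population): the probability that two sampled lineages coalesce in the immediately preceding generation is 1/(2N_e), so the time T to the MRCA is geometrically distributed,

    P(T = t) = \left(1 - \frac{1}{2N_e}\right)^{t-1}\frac{1}{2N_e}, \qquad E[T] = 2N_e, \qquad \mathrm{Var}(T) = 2N_e(2N_e - 1),

which is the expectation and (large) variance referred to in the text; for large N_e the geometric distribution is well approximated by an exponential with rate 1/(2N_e).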
https://en.wikipedia.org/wiki/International%20Association%20of%20Privacy%20Professionals
The International Association of Privacy Professionals (IAPP) is a nonprofit, non-advocacy membership association founded in 2000. It provides a forum for privacy professionals to share best practices, track trends, advance privacy management issues, standardize the designations for privacy professionals, and provide education and guidance on career opportunities in the field of information privacy. The IAPP offers a full suite of educational and professional development services, including privacy training, certification programs, publications and annual conferences. It is headquartered in Portsmouth, New Hampshire. History Founded in 2000, IAPP was originally constituted as the Privacy Officers Association (POA). In 2002, it became the International Association of Privacy Officers (IAPO) when the POA merged with a competing group, the Association of Corporate Privacy Officers (ACPO). The group was renamed to the International Association of Privacy Professionals in 2003 to reflect a broadened mission that includes the ranks of corporate personnel, beyond the position of Chief Privacy Officer, engaged in privacy-related tasks. Membership reached 10,000 in 2012 and in 2019, the organization reported it had surpassed the 50,000 member mark. The rapid growth was the result of increased demand for privacy expertise in the face of emerging laws such as the EU's General Data Protection Regulation (GDPR). Half of the association's members are women. Professional certifications The IAPP is responsible for developing and launching global credentialing programs in information privacy. The CIPM, CIPP/E, CIPP/US and CIPT credentials are accredited by the American National Standards Institute (ANSI) under the International Organization for Standardization (ISO) standard for Personnel Certification Bodies 17024:2012. These certifications have been described as "the gold standard" for validating privacy expertise. Certified Information Privacy Professional (CIPP) The CIPP cur
https://en.wikipedia.org/wiki/Spike%20sorting
Spike sorting is a class of techniques used in the analysis of electrophysiological data. Spike sorting algorithms use the shape(s) of waveforms collected with one or more electrodes in the brain to distinguish the activity of one or more neurons from background electrical noise. Neurons produce action potentials that are referred to as 'spikes' in laboratory jargon. Frequently this term is used for electrical signals recorded in the vicinity of individual neurons with a microelectrode (exception: 'spikes' in EEG recordings). In these recordings action potentials appear as sharp spikes (deviations from the baseline). These extracellular electrodes pick up all the components constituting the field at the point of its contact. This includes the component due to the synaptic currents and the action potentials. The synaptic currents have slower time course and the spikes have faster time course. They are thus easily separated by filtering: highpass for spikes and low pass for the synaptic mechanisms. The component of the field due to the synaptic mechanism is referred to as the local field potential (LFP). Spike sorting refers to the process of assigning spikes to different neurons. The background to this is that the exact time course of a spike event as recorded by the electrode depends on the size and shape of the neuron, the position of the recording electrode relative to the neuron, etc. These electrodes, positioned outside of the cells in the tissue, however, often 'see' the spikes generated by several neurons in their vicinity. Since the spike shapes are unique and quite reproducible for each neuron they can be used to distinguish spikes produced by different neurons, i.e. to separate the activity produced by each. Technically this is often achieved based on different sizes of the spikes (simple but inaccurate version) or more sophisticated analyses which make use of the entire waveform of the spikes. The techniques often use tools such as principal components
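A minimal sketch of the processing chain described above (illustrative Python only; the filter cutoff, threshold, and cluster count are arbitrary choices, and real pipelines differ):

    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def spike_sort(raw, fs, n_units=2):
        """Toy spike sorter: filter, detect threshold crossings, extract waveforms, PCA, cluster."""
        # High-pass filter to separate spikes from the slower local field potential (LFP).
        b, a = butter(3, 300.0 / (fs / 2.0), btype="highpass")
        filtered = filtfilt(b, a, raw)

        # Detect spikes as negative threshold crossings (threshold from a robust noise estimate).
        noise = np.median(np.abs(filtered)) / 0.6745
        crossings = np.where(filtered < -4.0 * noise)[0]
        # Keep only the first sample of each crossing event.
        spike_times = crossings[np.insert(np.diff(crossings) > 1, 0, True)]

        # Extract a short waveform window around each spike.
        w = int(0.001 * fs)  # 1 ms on each side
        spike_times = spike_times[(spike_times > w) & (spike_times < len(filtered) - w)]
        waveforms = np.array([filtered[t - w:t + w] for t in spike_times])

        # Reduce each waveform to a few principal components, then cluster into putative units.
        features = PCA(n_components=3).fit_transform(waveforms)
        labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(features)
        return spike_times, labels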
https://en.wikipedia.org/wiki/FAAM%20Airborne%20Laboratory
The FAAM Airborne Laboratory is an atmospheric science research facility. It is based on the Cranfield University campus alongside Cranfield Airport in Bedfordshire, England. It was formed by a collaboration between the Met Office and the Natural Environment Research Council (NERC) in 2001. The facility FAAM was established jointly by the Natural Environment Research Council and the Met Office. Initial funding was provided to prepare an aircraft for instrumentation. The main aircraft used is a modified BAe 146-301 aircraft, registration G-LUXE, owned by NERC and operated by Airtask. Work carried out by FAAM includes Radiative transfer studies in clear and cloudy air; Tropospheric chemistry measurements; Cloud physics and dynamic studies; Dynamics of mesoscale weather systems; Boundary layer and turbulence studies; Remote sensing: verification of ground-based instruments; Satellite ground truth: radiometric measurements and winds; Satellite instrument test-bed; FAAM is staffed by a mixture of NERC, University of Leeds and Met Office personnel, and provides services to numerous UK and occasionally overseas science organisations; primarily the Met Office itself, or UK universities funded by NERC. It flies around 400 hours annually, most commonly on large campaigns where a team of typically 30 will spend around a month at a base location, potentially anywhere in the world, delivering a specific science campaign, although some flying from Cranfield also takes place. An emergency response role exists, which has been used three times - at the 2005 Buncefield fire, the 2010 Eyjafjallajökull volcanic eruption and the 2012 Total Elgin gas platform leak: after Eyjafjallajökull a new aircraft, MOCCA - the Met Office Civil Contingency Aircraft - was commissioned as the "first responder" to British volcanic ash emergencies. The facility was originally established in 2001, with an intended operating base of the BAe site at Woodford, in Cheshire. However, b
https://en.wikipedia.org/wiki/Polygraphic%20substitution
Polygraphic substitution is a cipher in which a uniform substitution is performed on blocks of letters. When the length of the block is specifically known, more precise terms are used: for instance, a cipher in which pairs of letters are substituted is bigraphic. As a concept, polygraphic substitution contrasts with monoalphabetic (or simple) substitutions in which individual letters are uniformly substituted, or polyalphabetic substitutions in which individual letters are substituted in different ways depending on their position in the text. In theory, there is some overlap in these definitions; one could conceivably consider a Vigenère cipher with an eight-letter key to be an octographic substitution. In practice, this is not a useful observation since it is far more fruitful to consider it to be a polyalphabetic substitution cipher. Specific ciphers In 1563, Giambattista della Porta devised the first bigraphic substitution. However, it was nothing more than a matrix of symbols. In practice, it would have been all but impossible to memorize, and carrying around the table would lead to risks of falling into enemy hands. In 1854, Charles Wheatstone came up with the Playfair cipher, a keyword-based system that could be performed on paper in the field. This was followed up over the next fifty years with the closely related four-square and two-square ciphers, which are slightly more cumbersome but offer slightly better security. In 1929, Lester S. Hill developed the Hill cipher, which uses matrix algebra to encrypt blocks of any desired length. However, encryption is very difficult to perform by hand for any sufficiently large block size, although it has been implemented by machine or computer. This is therefore on the frontier between classical and modern cryptography. Cryptanalysis of general polygraphic substitutions Polygraphic systems do provide a significant improvement in security over monoalphabetic substitutions. Given an individual letter 'E' in
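Because the Hill cipher reduces polygraphic substitution to linear algebra, a short sketch may make the idea concrete. The 2x2 key matrix below is an arbitrary textbook-style example (it only needs to be invertible modulo 26); NumPy is assumed for the matrix arithmetic.
```python
# Hill cipher sketch over a 26-letter alphabet with 2-letter blocks.
# The key matrix is an illustrative example; it must be invertible mod 26.
import numpy as np

KEY = np.array([[3, 3],
                [2, 5]])  # det = 9, gcd(9, 26) = 1, so an inverse exists mod 26

def hill_encrypt(plaintext, key=KEY):
    nums = [ord(c) - ord('A') for c in plaintext.upper() if c.isalpha()]
    if len(nums) % 2:                      # pad the final block
        nums.append(ord('X') - ord('A'))
    out = []
    for i in range(0, len(nums), 2):
        block = np.array(nums[i:i + 2])
        out.extend((key @ block) % 26)     # each block is one matrix multiplication
    return ''.join(chr(int(n) + ord('A')) for n in out)

print(hill_encrypt("HELP"))  # -> HIAT
```
Decryption works the same way with the key matrix's inverse modulo 26, which is why the key must have a determinant coprime to 26.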
https://en.wikipedia.org/wiki/Antiunitary%20operator
In mathematics, an antiunitary transformation is a bijective antilinear map U : H1 → H2 between two complex Hilbert spaces such that ⟨Ux, Uy⟩ = ⟨x, y⟩* for all x and y in H1, where the asterisk denotes the complex conjugate. If additionally one has H1 = H2, then U is called an antiunitary operator. Antiunitary operators are important in quantum theory because they are used to represent certain symmetries, such as time reversal. Their fundamental importance in quantum physics is further demonstrated by Wigner's theorem. Invariance transformations In quantum mechanics, the invariance transformations of complex Hilbert space leave the absolute value of the scalar product invariant: |⟨Tx, Ty⟩| = |⟨x, y⟩| for all x and y in H. Due to Wigner's theorem these transformations can either be unitary or antiunitary. Geometric Interpretation Congruences of the plane form two distinct classes. The first conserves the orientation and is generated by translations and rotations. The second does not conserve the orientation and is obtained from the first class by applying a reflection. On the complex plane these two classes correspond (up to translation) to unitaries and antiunitaries, respectively. Properties ⟨Ux, Uy⟩ = ⟨x, y⟩* = ⟨y, x⟩ holds for all elements x, y of the Hilbert space and an antiunitary U. When U is antiunitary then U² is unitary. This follows from ⟨U²x, U²y⟩ = ⟨Ux, Uy⟩* = ⟨x, y⟩. For a unitary operator V the operator VK, where K is the complex conjugate operator, is antiunitary. The reverse is also true: for antiunitary U the operator UK is unitary. For antiunitary U the definition of the adjoint operator U† is changed to compensate the complex conjugation, becoming ⟨Ux, y⟩ = ⟨x, U†y⟩*. The adjoint of an antiunitary U is also antiunitary and UU† = U†U = 1 (the identity operator). (This is not to be confused with the definition of unitary operators, as the antiunitary operator U is not complex linear.) Examples The complex conjugate operator K, Kz = z*, is an antiunitary operator on the complex plane. The operator U = σ₂K, where σ₂ is the second Pauli matrix and K is the complex conjugate operator, is antiunitary. It satisfies U² = −1. Decomposition of an antiunitary oper
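A small numerical check of the σ₂K example may make the definitions concrete. Representing the antiunitary operator as "conjugate the vector, then multiply by σ₂" is an encoding chosen here for illustration; NumPy is assumed.
```python
# Numerical check that U = sigma_2 K (conjugation followed by sigma_2)
# is antiunitary and satisfies U^2 = -1.
import numpy as np

sigma2 = np.array([[0, -1j],
                   [1j,  0]])

def U(v):
    # Apply complex conjugation first, then the second Pauli matrix.
    return sigma2 @ np.conj(v)

rng = np.random.default_rng(0)
x = rng.normal(size=2) + 1j * rng.normal(size=2)
y = rng.normal(size=2) + 1j * rng.normal(size=2)

inner = lambda a, b: np.vdot(a, b)   # <a, b>, conjugate-linear in the first slot

print(np.allclose(inner(U(x), U(y)), np.conj(inner(x, y))))  # True: antiunitarity
print(np.allclose(U(U(x)), -x))                              # True: U^2 = -1
```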
https://en.wikipedia.org/wiki/Pencil%20%28optics%29
In optics, a pencil or pencil of rays is a geometric construct used to describe a beam or portion of a beam of electromagnetic radiation or charged particles, typically in the form of a narrow beam (conical or cylindrical). Antennas which strongly bundle in azimuth and elevation are often described as "pencil-beam" antennas. For example, a phased array antenna can send out a beam that is extremely thin. Such antennas are used for tracking radar, and the process is known as beamforming. In optics, the focusing action of a lens is often described in terms of pencils of rays. In addition to conical and cylindrical pencils, optics deals with astigmatic pencils as well. In electron optics, scanning electron microscopes use narrow pencil beams to achieve a deep depth of field. Ionizing radiation used in radiation therapy, whether photons or charged particles, such as proton therapy and electron therapy machines, is sometimes delivered through the use of pencil beam scanning. In backscatter X-ray imaging a pencil beam of x-ray radiation is used to scan over an object to create an intensity image of the Compton-scattered radiation. History A 1675 work describes a pencil as "a double cone of rays, joined together at the base." In his 1829 A System of Optics, Henry Coddington defines a pencil as being "a parcel of light proceeding from some one point", whose form is "generally understood to be that of a right cone" and which "becomes cylindrical when the origin is very remote". See also Collimated beam Pencil (mathematics), a family of geometric objects having a common property such as passage through a given point. Fan beam Pencil beam scanning (Medical physics) Microwave transmission
https://en.wikipedia.org/wiki/Phytase
A phytase (myo-inositol hexakisphosphate phosphohydrolase) is any type of phosphatase enzyme that catalyzes the hydrolysis of phytic acid (myo-inositol hexakisphosphate) – an indigestible, organic form of phosphorus that is found in many plant tissues, especially in grains and oil seeds – and releases a usable form of inorganic phosphorus. While phytases have been found to occur in animals, plants, fungi and bacteria, phytases have been most commonly detected and characterized from fungi. History The first plant phytase was found in 1907 from rice bran and in 1908 from an animal (calf's liver and blood). In 1962 began the first attempt at commercializing phytases for animal feed nutrition enhancing purposes when International Minerals & Chemicals (IMC) studied over 2000 microorganisms to find the most suitable ones for phytase production. This project was launched in part due to concerns about mineable sources for inorganic phosphorus eventually running out (see peak phosphorus), which IMC was supplying for the feed industry at the time. Aspergillus (ficuum) niger fungal strain NRRL 3135 (ATCC 66876) was identified as a promising candidate as it was able to produce large amounts of extracellular phytases. However, the organism's efficiency was not enough for commercialization so the project ended in 1968 as a failure. Still, identifying A. niger led in 1984 to a new attempt with A. niger mutants made with the relatively recently invented recombinant DNA technology. This USDA funded project was initiated by Dr. Rudy Wodzinski who formerly participated in the IMC's project. This 1984 project led in 1991 to the first partially cloned phytase gene phyA (from A. niger NRRL 31235) and later on in 1993 to the cloning of the full gene and its overexpression in A. niger. In 1991 BASF began to sell the first commercial phytase produced in A. niger under the trademark Natuphos which was used to increase the nutrient content of animal feed. In 1999 Escherichia coli bacteri
https://en.wikipedia.org/wiki/Pfeiffer%20syndrome
Pfeiffer syndrome is a rare genetic disorder, characterized by the premature fusion of certain bones of the skull (craniosynostosis), which affects the shape of the head and face. The syndrome includes abnormalities of the hands and feet, such as wide and deviated thumbs and big toes. Pfeiffer syndrome is caused by mutations in the fibroblast growth factor receptors FGFR1 and FGFR2. The syndrome is grouped into three types: type 1 (classic Pfeiffer syndrome) is milder and caused by mutations in either gene; types 2 and 3 are more severe, often leading to death in infancy, caused by mutations in FGFR2. There is no cure for the syndrome. Treatment is supportive and often involves surgery in the earliest years of life to correct skull deformities and respiratory function. Most persons with Pfeiffer syndrome type 1 have a normal intelligence and life span; types 2 and 3 typically cause neurodevelopmental disorders and early death. Later in life, surgery can help in bone formation and facial construction. Pfeiffer syndrome affects about 1 in 100,000 persons. The syndrome is named after a German geneticist, Rudolf Arthur Pfeiffer (1931–2012), who described it in 1964. Signs and symptoms Many of the facial characteristics result from the premature fusion of the skull bones (craniosynostosis). The head is unable to grow normally, which leads to a high, prominent forehead (turribrachycephaly) and eyes that appear to bulge (proptosis) and are set wide (hypertelorism). In addition, there is an underdeveloped upper jaw (maxillary hypoplasia). More than half of children with Pfeiffer syndrome have hearing loss; dental problems are common. A baby with Pfeiffer syndrome may have a small, beak-shaped nose; crowded, crooked teeth; and sleep apnea, due to nasal blockage. There are three main types of Pfeiffer syndrome: type I is the mildest and most common; type II is the most severe, with neurological problems and a cloverleaf deformity; and type III is similar to type II, but
https://en.wikipedia.org/wiki/Push%E2%80%93relabel%20maximum%20flow%20algorithm
In mathematical optimization, the push–relabel algorithm (alternatively, preflow–push algorithm) is an algorithm for computing maximum flows in a flow network. The name "push–relabel" comes from the two basic operations used in the algorithm. Throughout its execution, the algorithm maintains a "preflow" and gradually converts it into a maximum flow by moving flow locally between neighboring nodes using push operations under the guidance of an admissible network maintained by relabel operations. In comparison, the Ford–Fulkerson algorithm performs global augmentations that send flow following paths from the source all the way to the sink. The push–relabel algorithm is considered one of the most efficient maximum flow algorithms. The generic algorithm has a strongly polynomial time complexity, which is asymptotically more efficient than the Edmonds–Karp algorithm. Specific variants of the algorithms achieve even lower time complexities. The variant based on the highest label node selection rule has time complexity and is generally regarded as the benchmark for maximum flow algorithms. Subcubic time complexity can be achieved using dynamic trees, although in practice it is less efficient. The push–relabel algorithm has been extended to compute minimum cost flows. The idea of distance labels has led to a more efficient augmenting path algorithm, which in turn can be incorporated back into the push–relabel algorithm to create a variant with even higher empirical performance. History The concept of a preflow was originally designed by Alexander V. Karzanov and was published in 1974 in Soviet Mathematical Dokladi 15. This pre-flow algorithm also used a push operation; however, it used distances in the auxiliary network to determine where to push the flow instead of a labeling system. The push-relabel algorithm was designed by Andrew V. Goldberg and Robert Tarjan. The algorithm was initially presented in November 1986 in STOC '86: Proceedings of the eighteenth annu
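The following is a compact sketch of the generic push–relabel algorithm on a dense capacity matrix. It uses naive active-vertex selection rather than the highest-label or FIFO rules, so it illustrates the push and relabel operations and the preflow idea, not the best time bounds; the example network at the end is invented for illustration.
```python
# Generic push-relabel maximum flow on a dense capacity matrix.
# Naive active-vertex selection; highest-label or FIFO selection is faster.
def max_flow(capacity, s, t):
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[s] = n                          # the source starts at height n

    # Initial preflow: saturate every edge leaving the source.
    for v in range(n):
        flow[s][v] = capacity[s][v]
        flow[v][s] = -capacity[s][v]
        excess[v] = capacity[s][v]
    excess[s] = -sum(capacity[s])

    def push(u, v):
        d = min(excess[u], capacity[u][v] - flow[u][v])
        flow[u][v] += d
        flow[v][u] -= d
        excess[u] -= d
        excess[v] += d

    def relabel(u):
        # Raise u just above its lowest neighbour reachable in the residual graph.
        heights = [height[v] for v in range(n) if capacity[u][v] - flow[u][v] > 0]
        height[u] = min(heights) + 1

    active = [u for u in range(n) if u not in (s, t) and excess[u] > 0]
    while active:
        u = active[0]
        pushed = False
        for v in range(n):
            if capacity[u][v] - flow[u][v] > 0 and height[u] == height[v] + 1:
                push(u, v)                 # push along an admissible edge
                if v not in (s, t) and excess[v] > 0 and v not in active:
                    active.append(v)
                pushed = True
                if excess[u] == 0:
                    break
        if not pushed:
            relabel(u)                     # no admissible edge: raise the label
        if excess[u] == 0:
            active.remove(u)
    return sum(flow[s][v] for v in range(n))

# Example: a small 4-node network whose maximum s-t flow is 5.
cap = [[0, 3, 2, 0],
       [0, 0, 5, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # -> 5
```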
https://en.wikipedia.org/wiki/Comparison%20of%20assemblers
This is an incomplete list of assemblers: computer programs that translate assembly language source code into binary programs. Some assemblers are components of a compiler system for a high level language and may have limited or no usable functionality outside of the compiler system. Some assemblers are hosted on the target processor and operating system, while other assemblers (cross-assemblers) may run under an unrelated operating system or processor. For example, assemblers for embedded systems are not usually hosted on the target system since it would not have the storage and terminal I/O to permit entry of a program from a keyboard. An assembler may have a single target processor or may have options to support multiple processor types. Very simple assemblers may lack features, such as macros, present in more powerful versions. As part of a compiler suite GNU Assembler (GAS): GPL: many target instruction sets, including ARM architecture, Atmel AVR, x86, x86-64, Freescale 68HC11, Freescale v4e, Motorola 680x0, MIPS, PowerPC, IBM System z, TI MSP430, Zilog Z80. SDAS (fork of ASxxxx Cross Assemblers and part of the Small Device C Compiler project): GPL: several target instruction sets including Intel 8051, Zilog Z80, Freescale 68HC08, PIC microcontroller. The Amsterdam Compiler Kit (ACK) targets many architectures of the 1980s, including 6502, 6800, 680x0, ARM, x86, Zilog Z80 and Z8000. LLVM targets many platforms, however its main focus is not machine-dependent code generation; instead a more high-level typed assembly-like intermediate representation is used. Nevertheless for the most common targets the LLVM MC (machine code) project provides an assembler both as an integrated component of the compilers and as an external tool. Some other self-hosted native-targeted language implementations (like Go, Free Pascal, SBCL) have their own assemblers with multiple targets. They may be used for inline assembly inside the language, or even included as a library, bu
https://en.wikipedia.org/wiki/St%C3%B8rmer%27s%20theorem
In number theory, Størmer's theorem, named after Carl Størmer, gives a finite bound on the number of consecutive pairs of smooth numbers that exist, for a given degree of smoothness, and provides a method for finding all such pairs using Pell equations. It follows from the Thue–Siegel–Roth theorem that there are only a finite number of pairs of this type, but Størmer gave a procedure for finding them all. Statement If one chooses a finite set of prime numbers P = {p1, ..., pk} then the P-smooth numbers are defined as the set of integers that can be generated by products of numbers in P. Then Størmer's theorem states that, for every choice of P, there are only finitely many pairs of consecutive P-smooth numbers. Further, it gives a method of finding them all using Pell equations. The procedure Størmer's original procedure involves solving a set of roughly 3^k Pell equations, in each one finding only the smallest solution. A simplified version of the procedure, due to D. H. Lehmer, is described below; it solves fewer equations but finds more solutions in each equation. Let P be the given set of primes, and define a number to be P-smooth if all its prime factors belong to P. Assume p1 = 2; otherwise there could be no consecutive P-smooth numbers, because all P-smooth numbers would be odd. Lehmer's method involves solving the Pell equation x^2 − 2qy^2 = 1 for each P-smooth square-free number q other than 2. Each such number q is generated as a product of a subset of P, so there are 2^k − 1 Pell equations to solve. For each such equation, let xi, yi be the generated solutions, for i in the range from 1 to max(3, (pk + 1)/2) (inclusive), where pk is the largest of the primes in P. Then, as Lehmer shows, all consecutive pairs of P-smooth numbers are of the form (xi − 1)/2, (xi + 1)/2. Thus one can find all such pairs by testing the numbers of this form for P-smoothness. Example To find the ten consecutive pairs of {2,3,5}-smooth numbers (in music theory, giving the superparticular ratios for just tunin
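As a quick illustration of what the theorem bounds, the brute-force scan below lists the consecutive {2, 3, 5}-smooth pairs directly. It is not Størmer's or Lehmer's Pell-equation procedure, and the search limit is an assumption that happens to cover all ten pairs of the example.
```python
# Direct check of the {2, 3, 5} example: list consecutive pairs (n, n + 1)
# that are both 5-smooth. A brute-force illustration only; not the
# Pell-equation procedure described above, and the bound is an assumption.
def is_smooth(n, primes=(2, 3, 5)):
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

pairs = [(n, n + 1) for n in range(1, 100_000)
         if is_smooth(n) and is_smooth(n + 1)]
print(pairs)   # ten pairs, the last of which is (80, 81)
```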
https://en.wikipedia.org/wiki/Tafazzin
Tafazzin is a protein that in humans is encoded by the TAFAZZIN gene. Tafazzin is highly expressed in cardiac and skeletal muscle, and functions as a phospholipid-lysophospholipid transacylase (it belongs to phospholipid:diacylglycerol acyltransferases). It catalyzes remodeling of immature cardiolipin to its mature composition containing a predominance of tetralinoleoyl moieties. Several different isoforms of the tafazzin protein are produced from the TAFAZZIN gene. A long form and a short form of each of these isoforms is produced; the short form lacks a hydrophobic leader sequence and may exist as a cytoplasmic protein rather than being membrane-bound. Other alternatively spliced transcripts have been described but the full-length nature of all these transcripts is not known. Most isoforms are found in all tissues, but some are found only in certain types of cells. Mutations in the TAFAZZIN gene have been associated with mitochondrial deficiency, Barth syndrome, dilated cardiomyopathy (DCM), hypertrophic DCM, endocardial fibroelastosis, left ventricular noncompaction (LVNC), breast cancer, papillary thyroid carcinoma, non-small cell lung cancer, glioma, gastric cancer, thyroid neoplasms, and rectal cancer. It is important to note that the TAZ gene was frequently confused with a protein called TAZ (transcriptional coactivator with PDZ-binding motif, a 50 kDA protein). which is a part of the Hippo pathway and entirely unrelated to the gene of interest. The Hippo pathway TAZ protein has an official gene symbol of WWTR1. Structure The TAFAZZIN gene is located on the q arm of chromosome X at position 28 and it spans 10,208 base pairs. The TAFAZZIN gene produces a 21.3 kDa protein composed of 184 amino acids. The structure of the encoded protein has been found to differ at their N terminus and the central region, which are two functionally notable regions. A 30 residue hydrophobic stretch at the N terminus may function as a membrane anchor, which does not exist in th
https://en.wikipedia.org/wiki/PCLSRing
PCLSRing (also known as Program Counter Lusering) is the term used in the ITS operating system for a consistency principle in the way one process accesses the state of another process. Problem scenario This scenario presents particular complications: Process A makes a time-consuming system call. By "time-consuming", it is meant that the system needs to put Process A into a wait queue and can schedule another process for execution if one is ready-to-run. A common example is an I/O operation. While Process A is in this wait state, Process B tries to interact with or access Process A, for example, send it a signal. What should be the visible state of the context of Process A at the time of the access by Process B? In fact, Process A is in the middle of a system call, but ITS enforces the appearance that system calls are not visible to other processes (or even to the same process). ITS-solution: transparent restart If the system call cannot complete before the access, then it must be restartable. This means that the context is backed up to the point of entry to the system call, while the call arguments are updated to reflect whatever portion of the operation has already been completed. For an I/O operation, this means that the buffer start address must be advanced over the data already transferred, while the length of data to be transferred must be decremented accordingly. After the Process B interaction is complete, Process A can resume execution, and the system call resumes from where it left off. This technique mirrors in software what the PDP-10 does in hardware. Some PDP-10 instructions like BLT may not run to completion, either due to an interrupt or a page fault. In the course of processing the instruction, the PDP-10 would modify the registers containing arguments to the instruction, so that later the instruction could be run again with new arguments that would complete any remaining work to be done. PCLSRing applies the same technique to system calls. This
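A toy sketch of the restartable-call idea may clarify the argument-update step: progress is folded back into the call's arguments (the offset advanced, the remaining length decremented), so an interrupted call can simply be re-entered with the updated values. All names and the simulated interruption are invented for illustration; this is Python, not ITS code.
```python
# Toy model of a PCLSR-style restartable "system call": partial progress is
# reflected in the returned (offset, length), so the call can be transparently
# restarted after an interruption. Illustrative only; not ITS code.
class Interrupted(Exception):
    """Stands in for another process needing to examine this one."""

def read_call(source, buf, offset, length, interrupt_after=None):
    """Copy `length` bytes from `source` into `buf` starting at `offset`.

    Returns updated (offset, length); on interruption the work already done
    is accounted for in the returned arguments.
    """
    copied = 0
    while length > 0:
        if interrupt_after is not None and copied == interrupt_after:
            return offset, length          # back out with updated arguments
        buf[offset] = source[offset]
        offset += 1
        length -= 1
        copied += 1
    return offset, length

source = b"hello, world"
buf = bytearray(len(source))

# First attempt is "interrupted" after 5 bytes; the visible state is consistent.
offset, length = read_call(source, buf, 0, len(source), interrupt_after=5)
# ... another process could now inspect this one; no half-finished call is visible.
# Restarting with the updated arguments finishes the remaining work.
offset, length = read_call(source, buf, offset, length)
print(buf.decode())        # -> hello, world
```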
https://en.wikipedia.org/wiki/Photoassimilate
In botany, a photoassimilate is one of a number of biological compounds formed by assimilation using light-dependent reactions. This term is most commonly used to refer to the energy-storing monosaccharides produced by photosynthesis in the leaves of plants. Only NADPH, ATP and water are made in the "light" reactions. Monosaccharides, though generally more complex sugars, are made in the "dark" reactions. The term "light" reaction can be confusing as some "dark" reactions require light to be active. Photoassimilate movement through plants from "source to sink" using xylem and phloem is of biological significance. This movement is mimicked by many infectious particles - namely viroids - to accomplish long ranged movement and consequently infection of an entire plant.
https://en.wikipedia.org/wiki/Palm%20OS%20viruses
While some viruses did exist for Palm OS based devices, very few were ever designed. Typically, mobile devices are difficult for virus writers to target, since their simplicity provides fewer security holes to target compared to a desktop. Viruses for Palm OS
https://en.wikipedia.org/wiki/Protection%20ring
In computer science, hierarchical protection domains, often called protection rings, are mechanisms to protect data and functionality from faults (by improving fault tolerance) and malicious behavior (by providing computer security). Computer operating systems provide different levels of access to resources. A protection ring is one of two or more hierarchical levels or layers of privilege within the architecture of a computer system. This is generally hardware-enforced by some CPU architectures that provide different CPU modes at the hardware or microcode level. Rings are arranged in a hierarchy from most privileged (most trusted, usually numbered zero) to least privileged (least trusted, usually with the highest ring number). On most operating systems, Ring 0 is the level with the most privileges and interacts most directly with the physical hardware such as certain CPU functionality (e.g. the control registers) and I/O controllers. Special call gates between rings are provided to allow an outer ring to access an inner ring's resources in a predefined manner, as opposed to allowing arbitrary usage. Correctly gating access between rings can improve security by preventing programs from one ring or privilege level from misusing resources intended for programs in another. For example, spyware running as a user program in Ring 3 should be prevented from turning on a web camera without informing the user, since hardware access should be a Ring 1 function reserved for device drivers. Programs such as web browsers running in higher numbered rings must request access to the network, a resource restricted to a lower numbered ring. Implementations Multiple rings of protection were among the most revolutionary concepts introduced by the Multics operating system, a highly secure predecessor of today's Unix family of operating systems. The GE 645 mainframe computer did have some hardware access control, but that was not sufficient to provide full support for rings in hardwar
https://en.wikipedia.org/wiki/Piezoelectric%20sensor
A piezoelectric sensor is a device that uses the piezoelectric effect to measure changes in pressure, acceleration, temperature, strain, or force by converting them to an electrical charge. The prefix piezo- is Greek for 'press' or 'squeeze'. Applications Piezoelectric sensors are versatile tools for the measurement of various processes. They are used for quality assurance, process control, and for research and development in many industries. Pierre Curie discovered the piezoelectric effect in 1880, but only in the 1950s did manufacturers begin to use the piezoelectric effect in industrial sensing applications. Since then, this measuring principle has been increasingly used, and has become a mature technology with excellent inherent reliability. They have been successfully used in various applications, such as in medical, aerospace, nuclear instrumentation, and as a tilt sensor in consumer electronics or a pressure sensor in the touch pads of mobile phones. In the automotive industry, piezoelectric elements are used to monitor combustion when developing internal combustion engines. The sensors are either directly mounted into additional holes into the cylinder head or the spark/glow plug is equipped with a built-in miniature piezoelectric sensor. The rise of piezoelectric technology is directly related to a set of inherent advantages. The high modulus of elasticity of many piezoelectric materials is comparable to that of many metals and goes up to 106 N/m². Even though piezoelectric sensors are electromechanical systems that react to compression, the sensing elements show almost zero deflection. This gives piezoelectric sensors ruggedness, an extremely high natural frequency and an excellent linearity over a wide amplitude range. Additionally, piezoelectric technology is insensitive to electromagnetic fields and radiation, enabling measurements under harsh conditions. Some materials used (especially gallium phosphate or tourmaline) are extremely stable at high t
https://en.wikipedia.org/wiki/Spot-On%20models
Spot-On models was a brand name for a line of diecast toy cars made by Tri-ang from 1959 through about 1967. They were manufactured in 1:42 scale in Belfast, Northern Ireland, of the United Kingdom. Competition for Spot-On in the British Isles were Corgi Toys and Dinky Toys. The line was particularly British and rarely produced marques from other countries. A Tri-Ang product Spot-On models was a range of diecast vehicles from Tri-ang, a division of Lines Brothers, which had been established as a toy maker in 1935. The Lines Brothers made just about everything toy related, from push-along and rocking horses in the first decades of the 1900s to their main staple of trains. After World War II, Lines Brothers claimed to be the largest toy maker in the world. In the 1950s, Dinky Toys from Liverpool, had developed a successful range of vehicles to be purchased apart from railroad sets and then in 1956 Corgi Toys, made by Mettoy, followed suit. Not wishing to miss out on a commercial opportunity, Lines Brothers started its own range in 1959 – made in its factory in Northern Ireland. Murray Lines himself chose a uniquely British model selection. The factory had been built in the Castlereagh area of Belfast just after World War II when the Lines Brothers expanded (another factory in southern Wales was opened at the same time). At various times, the Northern Ireland factory produced several ranges of toys for various names within the Lines Brothers' group, including Pedigree Soft Toys Ltd., Rovex Industries Ltd. and Lines Brothers (Ireland) Ltd. About 1960, a smaller factory was opened on the grounds of the Belfast site – specifically for the Spot-On range. There were three main product ranges: Spot-On cars, Spot-On doll house furniture, and Arkitex construction kits. A definitive history of all Spot-On products was published in October 2013. "The Ultimate Book of Spot-On Models Ltd" by Brian Salter (with Nigel Lee and Graham Thompson) is a 500-page large format book cont
https://en.wikipedia.org/wiki/Remote%20scripting
Remote scripting is a technology which allows scripts and programs that are running inside a browser to exchange information with a server. The local scripts can invoke scripts on the remote side and process the returned information. The earliest form of asynchronous remote scripting was developed before XMLHttpRequest existed, and made use of very simple process: a static web page opens a dynamic web page (e.g. at other target frame) that is reloaded with new JavaScript content, generated remotely on the server side. The XMLHttpRequest and similar "client-side script remote procedure call" functions, open the possibility of use and triggering web services from the web page interface. The web development community subsequently developed a range of techniques for remote scripting in order to enable consistent results across different browsers. Early examples include JSRS library from 2000, the introduction of the Image/Cookie technique in 2000. JavaScript Remote Scripting JavaScript Remote Scripting (JSRS) is a web development technique for creating interactive web applications using a combination of: HTML (or XHTML) The Document Object Model manipulated through JavaScript to dynamically display and interact with the information presented A transport layer. Different technologies may be used, though using a script tag or an iframe is used the most because it has better browser support than XMLHttpRequest A data format. XML with WDDX can be used as well as JSON or any other text format. Schematic A similar approach is Ajax, though it depends on the XmlHttpRequest in newer web browsers. Libraries Brent Ashley's original JSRS library released in 2000 BlueShoes JSRS with added encoding and OO RPC abstractions MSDN article See also Ajax Rich Internet application External links Web development
https://en.wikipedia.org/wiki/S100%20protein
The S100 proteins are a family of low molecular-weight proteins found in vertebrates characterized by two calcium-binding sites that have helix-loop-helix ("EF-hand-type") conformation. At least 21 different S100 proteins are known. They are encoded by a family of genes whose symbols use the S100 prefix, for example, S100A1, S100A2, S100A3. They are also considered as damage-associated molecular pattern molecules (DAMPs), and knockdown of aryl hydrocarbon receptor downregulates the expression of S100 proteins in THP-1 cells. Structure Most S100 proteins consist of two identical polypeptides (homodimeric), which are held together by noncovalent bonds. They are structurally similar to calmodulin. They differ from calmodulin, though, on the other features. For instance, their expression pattern is cell-specific, i.e. they are expressed in particular cell types. Their expression depends on environmental factors. In contrast, calmodulin is a ubiquitous and universal intracellular Ca2+ receptor widely expressed in many cells. Normal function S100 proteins are normally present in cells derived from the neural crest (Schwann cells, and melanocytes), chondrocytes, adipocytes, myoepithelial cells, macrophages, Langerhans cells, dendritic cells, and keratinocytes. They may be present in some breast epithelial cells. S100 proteins have been implicated in a variety of intracellular and extracellular functions, such as regulation of protein phosphorylation, transcription factors, Ca2+ homeostasis, the dynamics of cytoskeleton constituents, enzyme activities, cell growth and differentiation, and the inflammatory response. S100A7 (psoriasin) and S100A15 have been found to act as cytokines in inflammation, particularly in autoimmune skin conditions such as psoriasis. Pathology Several members of the S100 protein family are useful as markers for certain tumors and epidermal differentiation. They can be found in melanomas, 100% of schwannomas, 100% of neurofibromas (weaker t
https://en.wikipedia.org/wiki/Parallel%20computation%20thesis
In computational complexity theory, the parallel computation thesis is a hypothesis which states that the time used by a (reasonable) parallel machine is polynomially related to the space used by a sequential machine. The parallel computation thesis was set forth by Chandra and Stockmeyer in 1976. In other words, for a computational model which allows computations to branch and run in parallel without bound, a formal language which is decidable under the model using no more than T(n) steps for inputs of length n is decidable by a non-branching machine using no more than T(n)^k units of storage for some constant k. Similarly, if a machine in the unbranching model decides a language using no more than S(n) storage, a machine in the parallel model can decide the language in no more than S(n)^k steps for some constant k. The parallel computation thesis is not a rigorous formal statement, as it does not clearly define what constitutes an acceptable parallel model. A parallel machine must be sufficiently powerful to emulate the sequential machine in time polynomially related to the sequential space; compare Turing machine, non-deterministic Turing machine, and alternating Turing machine. N. Blum (1983) introduced a model for which the thesis does not hold. However, the model allows parallel threads of computation after steps. (See Big O notation.) Parberry (1986) suggested a more "reasonable" bound would be or , in defense of the thesis. Goldschlager (1982) proposed a model which is sufficiently universal to emulate all "reasonable" parallel models, which adheres to the thesis. Chandra and Stockmeyer originally formalized and proved results related to the thesis for deterministic and alternating Turing machines, which is where the thesis originated.
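In complexity-class notation the thesis is often summarized by the identity below, for time bounds T(n) at least logarithmic on a "reasonable" parallel model; this is a standard paraphrase rather than a formula quoted from the text above.
```latex
% Parallel time and sequential space are polynomially related:
\bigcup_{k \ge 1} \mathrm{TIME}_{\text{parallel}}\!\left(T(n)^{k}\right)
  \;=\;
\bigcup_{k \ge 1} \mathrm{SPACE}\!\left(T(n)^{k}\right)
```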
https://en.wikipedia.org/wiki/Carbon-based%20life
Carbon is a primary component of all known life on Earth, representing approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS). Because it is lightweight and relatively small in size, carbon molecules are easy for enzymes to manipulate. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics refer to this assumption as carbon chauvinism. Characteristics Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. The enormous diversity of carbon-containing compounds, known as organic compounds, has led to a distinction between them and compounds that do not contain carbon, known as inorganic compounds. The branch of chemistry that studies organic compounds is known as organic chemistry. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enables it to serve as a common element of all known living organisms. In a 2018 study, carbon was found to compose approximately 550 billion tons of all life on Earth. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen. The most important characteristics of carbon as a basis for the chemistry of life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously
https://en.wikipedia.org/wiki/SiteKey
SiteKey is a web-based security system that provides one type of mutual authentication between end-users and websites. Its primary purpose is to deter phishing. SiteKey was deployed by several large financial institutions in 2006, including Bank of America and The Vanguard Group. Both Bank of America and The Vanguard Group discontinued use in 2015. The product is owned by RSA Data Security which in 2006 acquired its original maker, Passmark Security. How it works SiteKey uses the following challenge–response technique: The user identifies (not authenticates) themself to the site by entering their username (but not their password). If the username is a valid one, the site proceeds. If the user's browser does not contain a client-side state token (such as a Web cookie or a Flash cookie) from a previous visit, the user is prompted for answers to one or more of the "security questions" the user-specified at site sign-up time, such as "Which school did you last attend?" The site authenticates itself to the user by displaying an image and/or accompanying phrase that they have earlier configured. If the user does not recognize these as their own, they are to assume the site is a phishing site and immediately abandon it. If the user does recognize them, they may consider the site authentic and proceed. The user authenticates themself to the site by entering their password. If the password is not valid for that username, the whole process begins again. If it is valid, the user is considered authenticated and logged in. If the user is at a phishing site with a different Web site domain than the legitimate domain, the user's browser will refuse to send the state token in step (2); the phishing site owner will either need to skip displaying the correct security image, or prompt the user for the security question(s) obtained from the legitimate domain and pass on the answers. In theory, this could cause the user to become suspicious, since the user might be surprised
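The decision flow of the challenge-response steps can be sketched from the server's point of view. The data structures, field names and return strings below are invented for illustration; a real deployment involves a browser, cookie or Flash state tokens, and TLS rather than an in-memory dictionary.
```python
# Sketch of the SiteKey login flow seen from the server side.
# All names and data are illustrative assumptions, not a real API.
USERS = {
    "alice": {
        "password": "correct horse",
        "site_image": "red_teapot.png",
        "site_phrase": "my morning tea",
        "answer": "Springfield High",        # answer to the security question
        "known_tokens": {"token-123"},       # state tokens from earlier visits
    }
}

def login(username, state_token, answer_if_asked, password, recognizes_image):
    user = USERS.get(username)
    if user is None:
        return "unknown user"
    # Step 2: no recognised state token -> fall back to a security question.
    if state_token not in user["known_tokens"]:
        if answer_if_asked != user["answer"]:
            return "security question failed"
    # Step 3: the site authenticates itself by showing the user's image and phrase.
    shown = (user["site_image"], user["site_phrase"])
    if not recognizes_image(shown):
        return "user should abandon the site (possible phishing)"
    # Step 4: only now does the user submit the password.
    if password != user["password"]:
        return "bad password"
    return "logged in"

print(login("alice", "token-123", None, "correct horse",
            recognizes_image=lambda shown: shown == ("red_teapot.png", "my morning tea")))
```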
https://en.wikipedia.org/wiki/Kusudama
The Japanese kusudama (薬玉; lit. medicine ball) is a paper model that is usually (although not always) created by sewing multiple identical pyramidal units together using underlying geometric principles of polyhedra to form a spherical shape. Alternately the individual components may be glued together. (e.g. the kusudama in the lower photo is not threaded together) Occasionally, a tassel is attached to the bottom for decoration. The term kusudama originates from ancient Japanese culture, where they were used for incense and potpourri; possibly originally being actual bunches of flowers or herbs. The word itself is a combination of two Japanese words kusuri ("medicine") and tama ("ball"). They are now typically used as decorations, or as gifts. The kusudama is important in origami particularly as a precursor to modular origami. It is often confused with modular origami, but is not such because the units are strung or pasted together, instead of folded together as most modular construction are made. It is, however, still origami, although origami purists frown upon threading or gluing the units together, while others recognize that early traditional Japanese origami often used both cutting (see thousand origami cranes or senbazuru) and pasting, and respect kusudama as an ingenious traditional paper folding craft in the origami world. Modern origami masters such as Tomoko Fuse have created new kusudama designs that are entirely assembled without cutting, glue, or thread except as a hanger. Waritama Kusudama can also be used to refer to a type of decoration that is displayed and split open for celebrations. This decoration is more specifically called waritama (割り玉; lit. split ball). Waritama are large, spherical decorations that split in half to release confetti, streamers, balloons, etc. They can be used for a variety of events, including school events, graduation ceremonies, enterprise founding anniversaries, and sports competitions. An emoji depicting a waritam
https://en.wikipedia.org/wiki/HMB-45
HMB-45 is a monoclonal antibody that reacts against an antigen present in melanocytic tumors such as melanomas, and stands for Human Melanoma Black. It is used in anatomic pathology as a marker for such tumors. The specific antigen recognized by HMB-45 is now known as Pmel 17. History HMB-45 was discovered by Drs. Allen M. Gown and Arthur M. Vogel in 1986. The antibody was generated to an extract of melanoma. Cancer diagnostics In a study to determine the diagnostic usefulness of specific antibodies used to identify melanoma, HMB-45 had a 92% sensitivity when used to identify melanoma. The antibody also reacts positively against junctional nevus cells and fetal melanocytes. Despite this relatively high sensitivity, HMB-45 does have its drawbacks. HMB-45 can be detected in only 50-70% of melanomas. HMB-45 does not react well against intradermal nevi, normal adult melanocytes, spindle cell melanomas and desmoplastic melanomas. HMB-45 is nonreactive with almost all non-melanoma human malignancies, with the exception of rare tumors showing evidence of melanogenesis (e.g., pigmented schwannoma, clear cell sarcoma) or tumors associated with tuberous sclerosis complex (angiomyolipoma and lymphangiomyoma). Storage HMB-45 should be stored at 4 degrees Celsius. At 4 degrees Celsius the antibody will be stable for up to 2 months without any loss of quality. For best results it should be used before the expiration date. Alternatives When conducting immunocytochemical studies to identify melanoma for scientific or clinical purposes, scientists and medical professionals can also use S-100, Melan-A, Tyrosinase, and Mitf to identify tumors. See also List of histologic stains that aid in diagnosis of cutaneous conditions
https://en.wikipedia.org/wiki/Atom%20laser
An atom laser is a coherent state of propagating atoms. They are created out of a Bose–Einstein condensate of atoms that are output coupled using various techniques. Much like an optical laser, an atom laser is a coherent beam that behaves like a wave. There has been some argument that the term "atom laser" is misleading. Indeed, "laser" stands for light amplification by stimulated emission of radiation which is not particularly related to the physical object called an atom laser, and perhaps describes more accurately the Bose–Einstein condensate (BEC). The terminology most widely used in the community today is to distinguish between the BEC, typically obtained by evaporation in a conservative trap, from the atom laser itself, which is a propagating atomic wave obtained by extraction from a previously realized BEC. Some ongoing experimental research tries to obtain directly an atom laser from a "hot" beam of atoms without making a trapped BEC first. Introduction The first pulsed atom laser was demonstrated at MIT by Professor Wolfgang Ketterle et al. in November 1996. Ketterle used an isotope of sodium and used an oscillating magnetic field as their output coupling technique, letting gravity pull off partial pieces looking much like a dripping tap (See movie in External Links). From the creation of the first atom laser there has been a surge in the recreation of atom lasers along with different techniques for output coupling and in general research. The current developmental stage of the atom laser is analogous to that of the optical laser during its discovery in the 1960s. To that effect the equipment and techniques are in their earliest developmental phases and still strictly in the domain of research laboratories. The brightest atom laser so far has been demonstrated at IESL-FORTH, Crete, Greece. Physics The physics of an atom laser is similar to that of an optical laser. The main differences between an optical and an atom laser are that atoms interact with t
https://en.wikipedia.org/wiki/Quantum%20eraser%20experiment
In quantum mechanics, a quantum eraser experiment is an interferometer experiment that demonstrates several fundamental aspects of quantum mechanics, including quantum entanglement and complementarity. The quantum eraser experiment is a variation of Thomas Young's classic double-slit experiment. It establishes that when action is taken to determine which of 2 slits a photon has passed through, the photon cannot interfere with itself. When a stream of photons is marked in this way, then the interference fringes characteristic of the Young experiment will not be seen. The experiment also creates situations in which a photon that has been "marked" to reveal through which slit it has passed can later be "unmarked." A photon that has been "unmarked" will interfere with itself and once again produce the fringes characteristic of Young's experiment. The experiment Concept This experiment involves an apparatus with two main sections. After two entangled photons are created, each is directed into its own section of the apparatus. Anything done to learn the path of the entangled partner of the photon being examined in the double-slit part of the apparatus will influence the second photon, and vice versa. The advantage of manipulating the entangled partners of the photons in the double-slit part of the experimental apparatus is that experimenters can destroy or restore the interference pattern in the latter without changing anything in that part of the apparatus. Experimenters do so by manipulating the entangled photon, and they can do so before or after its partner has passed through the slits and other elements of experimental apparatus between the photon emitter and the detection screen. Under conditions where the double-slit part of the experiment has been set up to prevent the appearance of interference phenomena (because there is definitive "which path" information present), the quantum eraser can be used to effectively erase that information. In doing so, the experi
https://en.wikipedia.org/wiki/Ovary%20%28botany%29
In the flowering plants, an ovary is a part of the female reproductive organ of the flower or gynoecium. Specifically, it is the part of the pistil which holds the ovule(s) and is located above or below or at the point of connection with the base of the petals and sepals. The pistil may be made up of one carpel or of several fused carpels (e.g. dicarpel or tricarpel), and therefore the ovary can contain part of one carpel or parts of several fused carpels. Above the ovary is the style and the stigma, which is where the pollen lands and germinates to grow down through the style to the ovary, and, for each individual pollen grain, to fertilize one individual ovule. Some wind pollinated flowers have much reduced and modified ovaries. Fruits A fruit is the mature, ripened ovary of a flower following double fertilization in an angiosperm. Because gymnosperms do not have an ovary but reproduce through double fertilization of unprotected ovules, they produce naked seeds that do not have a surrounding fruit, this meaning that juniper and yew "berries" are not fruits, but modified cones. Fruits are responsible for the dispersal and protection of seeds in angiosperms and cannot be easily characterized due to the differences in defining culinary and botanical fruits. Development After double fertilization and ripening, the ovary becomes the fruit, the ovules inside the ovary become the seeds of that fruit, and the egg within the ovule becomes the zygote. Double fertilization of the central cell in the ovule produces the nutritious endosperm tissue that surrounds the developing zygote within the seed. Angiosperm ovaries do not always produce a fruit after the ovary has been fertilized. Problems that can arise during the developmental process of the fruit include genetic issues, harsh environmental conditions, and insufficient energy which may be caused by competition for resources between ovaries; any of these situations may prevent maturation of the ovary. Dispersal a
https://en.wikipedia.org/wiki/Speech%20error
A speech error, commonly referred to as a slip of the tongue (Latin: , or occasionally self-demonstratingly, ) or misspeaking, is a deviation (conscious or unconscious) from the apparently intended form of an utterance. They can be subdivided into spontaneously and inadvertently produced speech errors and intentionally produced word-plays or puns. Another distinction can be drawn between production and comprehension errors. Errors in speech production and perception are also called performance errors. Some examples of speech error include sound exchange or sound anticipation errors. In sound exchange errors, the order of two individual morphemes is reversed, while in sound anticipation errors a sound from a later syllable replaces one from an earlier syllable. Slips of the tongue are a normal and common occurrence. One study shows that most people can make up to as much as 22 slips of the tongue per day. Speech errors are common among children, who have yet to refine their speech, and can frequently continue into adulthood. When errors continue past the age of 9 they are referred to as "residual speech errors" or RSEs. They sometimes lead to embarrassment and betrayal of the speaker's regional or ethnic origins. However, it is also common for them to enter the popular culture as a kind of linguistic "flavoring". Speech errors may be used intentionally for humorous effect, as with spoonerisms. Within the field of psycholinguistics, speech errors fall under the category of language production. Types of speech errors include: exchange errors, perseveration, anticipation, shift, substitution, blends, additions, and deletions. The study of speech errors has contributed to the establishment/refinement of models of speech production since Victoria Fromkin's pioneering work on this topic. Psycholinguistic explanations Speech errors are made on an occasional basis by all speakers. They occur more often when speakers are nervous, tired, anxious or intoxicated. During liv
https://en.wikipedia.org/wiki/Apple%20Loops%20Utility
The Apple Loops Utility software was a small companion utility for Soundtrack Pro, GarageBand, Logic Express, and Logic Pro, all made by Apple Inc. that allowed users to create loops of audio that could be time-stretched. Audio files converted to Apple Loops via Apple Loops Utility could also be tagged with their publishing (Author, Comments tagged at the same time, a process known but it would convert the latter to AIFF when saved with tagging information). The most recent version available without purchasing the aforementioned software was 3.0.1, which was available from Apple's Developer Web site. Version 1.4, which was the first version of the software, was available with Logic Pro or Express 7.2. It allowed multiple files to have multiple tags added to them, and it also allowed content merging to occur with Logic Audio Express. Only version 1.4 and beyond worked natively with Intel Macs. Version 1.3.1 appeared to allow edits to be made and file information to be saved, but none of the essential tagging information could be retained on an Intel Mac. Version 3.0.1, the last one released, fully supported Intel Macs up to macOS Sierra 10.12.6. External links and references Apple Loops Utility (DMG) Apple Loops Utility SDK 3.0.1 (DMG) Loops Utility Audio editors MacOS multimedia software
https://en.wikipedia.org/wiki/Routing%20Policy%20Specification%20Language
The Routing Policy Specification Language (RPSL) is a language commonly used by Internet Service Providers to describe their routing policies. The routing policies are stored at various whois databases including RIPE, RADB and APNIC. ISPs (using automated tools) then generate router configuration files that match their business and technical policies. RFC2622 describes RPSL, and replaced RIPE-181. RFC2650 provides a reference tutorial to using RPSL in practice to support IPv6 routing policies. RPSL Tools and Programs RtConfig - automatically generate router configuration files from RPSL registry entries (This software is part of the IRRToolSet) irrPT - Tools for ISPs to collect and use information from Internet Routing Registry (IRR) databases External links RIPE RPSL page Internet architecture Routing
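As an illustration of what a policy written in RPSL looks like, the following aut-num object is hypothetical: the AS numbers are taken from the range reserved for documentation (AS64496–AS64511) and the object is a minimal sketch, not a registry entry.
```
aut-num:    AS64500
as-name:    EXAMPLE-ISP
descr:      Illustrative RPSL policy using documentation AS numbers
import:     from AS64496 accept ANY          # take full routes from an upstream
import:     from AS64511 accept AS64511      # take only the customer's own routes
export:     to AS64496 announce AS64500 AS64511
export:     to AS64511 announce ANY
source:     RIPE
```
Tools such as RtConfig read objects like this from a routing registry and expand them into vendor-specific router configuration.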
https://en.wikipedia.org/wiki/Partial%20equivalence%20relation
In mathematics, a partial equivalence relation (often abbreviated as PER, in older literature also called restricted equivalence relation) is a homogeneous binary relation that is symmetric and transitive. If the relation is also reflexive, then the relation is an equivalence relation. Definition Formally, a relation R on a set X is a PER if it holds for all a, b, c in X that: if aRb, then bRa (symmetry) if aRb and bRc, then aRc (transitivity) Another more intuitive definition is that R on a set X is a PER if there is some subset Y of X such that R ⊆ Y × Y and R is an equivalence relation on Y. The two definitions are seen to be equivalent by taking Y = {x in X : xRx}. Properties and applications The following properties hold for a partial equivalence relation R on a set X: R is an equivalence relation on the subset Y = {x in X : xRx}. difunctional: the relation is the set {(a, b) : f(a) = g(b)} for two partial functions f, g : X ⇀ Y and some indicator set Y right and left Euclidean: For a, b, c in X, aRb and aRc implies bRc, and similarly for left Euclideanness: bRa and cRa imply bRc quasi-reflexive: If xRy, then xRx and yRy. None of these properties is sufficient to imply that the relation is a PER. In non-set-theory settings In type theory, constructive mathematics and their applications to computer science, constructing analogues of subsets is often problematic—in these contexts PERs are therefore more commonly used, particularly to define setoids, sometimes called partial setoids. Forming a partial setoid from a type and a PER is analogous to forming subsets and quotients in classical set-theoretic mathematics. The algebraic notion of congruence can also be generalized to partial equivalences, yielding the notion of subcongruence, i.e. a homomorphic relation that is symmetric and transitive, but not necessarily reflexive. Examples A simple example of a PER that is not an equivalence relation is the empty relation R = ∅ on a set X, if X is not empty. Kernels of partial functions If f is a partial function on a set A, then the relation ≈ defined by x ≈ y if f is defined at x, f is defined at y, and f(x) = f(y), is a partial equiva
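The "kernel of a partial function" example can be illustrated in a few lines of code: relate x and y exactly when the function is defined at both and gives equal values. Modelling a partial function as one that returns None where it is undefined is a convention chosen here for illustration.
```python
# The kernel of a partial function as a PER: x ~ y iff f is defined at both
# x and y and f(x) == f(y). "Undefined" is modelled by returning None.
def f(x):
    return 1.0 / x if x != 0 else None      # partial: undefined at 0

def related(x, y):
    return f(x) is not None and f(y) is not None and f(x) == f(y)

xs = [-2, -1, 0, 1, 2]
# Symmetric and transitive on all of xs ...
assert all(related(y, x) for x in xs for y in xs if related(x, y))
assert all(related(x, z) for x in xs for y in xs for z in xs
           if related(x, y) and related(y, z))
# ... but not reflexive: 0 is not related to itself, so this is a PER
# that is not an equivalence relation on xs.
assert not related(0, 0)
print("PER but not an equivalence relation on", xs)
```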
https://en.wikipedia.org/wiki/Separase
Separase, also known as separin, is a cysteine protease responsible for triggering anaphase by hydrolysing cohesin, which is the protein responsible for binding sister chromatids during the early stage of anaphase. In humans, separin is encoded by the ESPL1 gene. History In S. cerevisiae, separase is encoded by the esp1 gene. Esp1 was discovered by Kim Nasmyth and coworkers in 1998. In 2021, structures of human separase were determined in complex with either securin or CDK1-cyclin B1-CKS1 using cryo-EM by scientists of the University of Geneva. Function Stable cohesion between sister chromatids before anaphase and their timely separation during anaphase are critical for cell division and chromosome inheritance. In vertebrates, sister chromatid cohesion is released in 2 steps via distinct mechanisms. The first step involves phosphorylation of STAG1 or STAG2 in the cohesin complex. The second step involves cleavage of the cohesin subunit SCC1 (RAD21) by separase, which initiates the final separation of sister chromatids. In S. cerevisiae, Esp1 is coded by ESP1 and is regulated by the securin Pds1. The two sister chromatids are initially bound together by the cohesin complex until the beginning of anaphase, at which point the mitotic spindle pulls the two sister chromatids apart, leaving each of the two daughter cells with an equivalent number of sister chromatids. The proteins that bind the two sister chromatids, disallowing any premature sister chromatid separation, are a part of the cohesin protein family. One of these cohesin proteins crucial for sister chromatid cohesion is Scc1. Esp1 is a separase protein that cleaves the cohesin subunit Scc1 (RAD21), allowing sister chromatids to separate at the onset of anaphase during mitosis. Regulation When the cell is not dividing, separase is prevented from cleaving cohesin through its association with either securin or upon phosphorylation of a specific serine residue in separase by the cyclin-CDK complex. Sepa
https://en.wikipedia.org/wiki/Securin
Securin is a protein involved in control of the metaphase-anaphase transition and anaphase onset. Following bi-orientation of chromosome pairs and inactivation of the spindle checkpoint system, the underlying regulatory system, which includes securin, produces an abrupt stimulus that induces highly synchronous chromosome separation in anaphase. Securin and Separase Securin is initially present in the cytoplasm and binds to separase, a protease that degrades the cohesin rings that link the two sister chromatids. Separase is vital for onset of anaphase. This securin-separase complex is maintained when securin is phosphorylated by Cdk1, inhibiting ubiquitination. When bound to securin, separase is not functional. In addition, both securin and separase are well-conserved proteins (Figure 1). Note that separase cannot function without initially forming the securin-separase complex. This is because securin helps properly fold separase into the functional conformation. However, yeast does not appear to require securin to form functional separase as anaphase occurs in yeast with a securin deletion mutation. Role of Securin in the onset of Anaphase Basic mechanism Securin has 5 known phosphorylation sites that are targets of Cdk1; 2 sites at the N-terminal in the Ken-Box and D-box region are known to affect APC recognition and ubiquitination (Figure 2). To initiate the onset of anaphase, securin is dephosphorylated by Cdc14 and other phosphatases. Dephosphorylated securin is recognized by the Anaphase-Promoting Complex (APC) bound primarily to Cdc20 (Cdh1 is also an activating substrate of APC). The APCCdc20 complex ubiquitinates securin and targets it for degradation by 26S proteasome. This results in free separase that is able to destroy cohesin and initiate chromosome separation. Network characteristics It is thought that securin integrates multiple regulatory inputs to make separase activation switch-like, resulting in sudden, coordinated anaphase. This
https://en.wikipedia.org/wiki/Propagule
In biology, a propagule is any material that functions in propagating an organism to the next stage in its life cycle, such as by dispersal. The propagule is usually distinct in form from the parent organism. Propagules are produced by organisms such as plants (in the form of seeds or spores), fungi (in the form of spores), and bacteria (for example endospores or microbial cysts). In disease biology, pathogens are said to generate infectious propagules, the units that transmit a disease. These can refer to bacteria, viruses, fungi, or protists, and can be contained within host material. For instance, for influenza, the infectious propagules are carried in droplets of host saliva or mucus that are expelled during coughing or sneezing. In horticulture, a propagule is any plant material used for the purpose of plant propagation. In asexual reproduction, a propagule is often a stem cutting. In some plants, a leaf section or a portion of root can be used. In sexual reproduction, a propagule is a seed or spore. In micropropagation, a type of asexual reproduction, any part of the plant may be used, though it is usually a highly meristematic part such as root and stem ends or buds. See also Disseminule Gemma (botany) Plantlet Propagule pressure Seed dispersal
https://en.wikipedia.org/wiki/Pyonephrosis
Pyonephrosis (Greek pyon "pus" + nephros "kidney") is an infection of the kidneys' collecting system. Pus collects in the renal pelvis and causes distension of the kidney. It can cause kidney failure. Cause Pyonephrosis is sometimes a complication of kidney stones, which can be a source of persisting infection. It may also occur spontaneously. It can occur as a complication of hydronephrosis or pyelonephritis. Diagnosis Contrast-enhanced computed tomography (CECT) is the investigation of choice. Treatment Pyonephrosis requires drainage, best performed by ureteral stent placement or nephrostomy. Surgical treatment is nephrectomy, or nephroureterectomy if the ureter is also affected. If the second kidney is not healthy, only drainage of the kidney or puncture nephrostomy is performed. See also Pyelonephritis Nephrotic syndrome
https://en.wikipedia.org/wiki/Acorn%20MOS
The Machine Operating System (MOS) or OS is a discontinued computer operating system (OS) used in Acorn Computers' BBC computer range. It included support for four-channel sound, graphics, file system abstraction, and digital and analogue input/output (I/O) including a daisy-chained expansion bus. The system was single-tasking, monolithic and non-reentrant. Versions 0.10 to 1.20 were used on the BBC Micro, version 1.00 on the Electron, version 2 was used on the B+, and versions 3 to 5 were used in the BBC Master series. The final BBC computer, the BBC A3000, was 32-bit and ran RISC OS, which retained portions of the Acorn MOS architecture and shared a number of characteristics (e.g. "star commands" CLI, "VDU" video control codes and screen modes) with the earlier 8-bit MOS. Versions 0 to 2 of the MOS were 16 KiB in size, written in 6502 machine code, and held in read-only memory (ROM) on the motherboard. The upper quarter of the 16-bit address space (0xC000 to 0xFFFF) is reserved for its ROM code and I/O space. Versions 3 to 5 were still restricted to a 16 KiB address space, but managed to hold more code and hence more complex routines, partly because of the alternative 65C102 central processing unit (CPU) with its denser instruction set plus the careful use of paging. User interface The original MOS versions, from 0 to 2, did not have a user interface per se: applications were expected to forward operating system command lines to the OS on the user's behalf, and the BBC BASIC language ROM supplied with the BBC Micro, with a 6502 assembler built in, is the default application used for this purpose. The BBC Micro would halt with a Language? error if no ROM is present that advertises to the OS an ability to provide a user interface (such ROMs are called language ROMs). MOS version 3 onwards did feature a simple command-line interface, normally only seen when the CMOS memory did not contain a setting for the default language ROM. Application programs on ROM, and some casset
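A minimal sketch (not Acorn's code) of the address-space split described above: the upper quarter of the 16-bit address space, 0xC000 to 0xFFFF, is reserved for the MOS ROM and memory-mapped I/O, and everything below it lies outside that reserved region. The example addresses are arbitrary.

```python
# Illustrative only: classify a 16-bit address against the MOS-reserved upper quarter.
MOS_RESERVED_START = 0xC000   # start of the OS ROM and I/O space
TOP_OF_MEMORY = 0xFFFF

def region(addr: int) -> str:
    if not 0x0000 <= addr <= TOP_OF_MEMORY:
        raise ValueError("address must fit in 16 bits")
    return "MOS ROM / I/O" if addr >= MOS_RESERVED_START else "below the MOS-reserved quarter"

for a in (0x1900, 0x8000, 0xC000, 0xFE00):
    print(f"&{a:04X}: {region(a)}")
```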
https://en.wikipedia.org/wiki/Prenol
Prenol, or 3-methyl-2-buten-1-ol, is a natural alcohol. It is one of the simplest terpenoids. It is a clear colorless oil that is reasonably soluble in water and miscible with most common organic solvents. It has a fruity odor and is used occasionally in perfumery. Prenol occurs naturally in citrus fruits, cranberry, bilberry, currants, grapes, raspberry, blackberry, tomato, white bread, hop oil, coffee, arctic bramble, cloudberry and passion fruit. It is also manufactured industrially by BASF (in Ludwigshafen, Germany) and by Kuraray (in Asia) as an intermediate to pharmaceuticals and aroma compounds. Global production in 2001 was between 6,000 and 13,000 tons. Industrial production Prenol is produced industrially by the reaction of formaldehyde with isobutene, followed by the isomerization of the resulting isoprenol (3-methyl-3-buten-1-ol). Polyprenols Prenol is a building block of isoprenoid alcohols, which have the general formula: H–[CH2C(CH3)=CHCH2]n–OH The repeating C5H8 moiety in the brackets is called isoprene, and these compounds are sometimes called 'isoprenols'. They should not be confused with isoprenol, which is an isomer of prenol with a terminal double bond. The simplest isoprenoid alcohol is geraniol (n = 2); higher oligomers include farnesol (n = 3) and geranylgeraniol (n = 4). When the isoprene unit attached to the alcohol is saturated, the compound is referred to as a dolichol. Dolichols are important as glycosyl carriers in the synthesis of polysaccharides. They also play a major role in protecting cellular membranes, stabilising cell proteins and supporting the body's immune system. Prenol is polymerized by dehydration reactions; when there are at least five isoprene units (n in the above formula is greater than or equal to five), the polymer is called a polyprenol. Polyprenols can contain up to 100 isoprene units (n = 100) linked end to end with the hydroxyl group (–OH) remaining at the end. These long-chain isoprenoid alcohols are
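The general formula above fixes the composition of each isoprenoid alcohol once the number of isoprene units n is chosen: 5n carbons, 8n + 2 hydrogens and one oxygen. A short sketch (atomic masses rounded; the helper name is ours, not a standard library) derives the formula and approximate molar mass for the examples named in the text:

```python
# Illustrative sketch: molecular formula and approximate molar mass of an isoprenoid
# alcohol H-[CH2C(CH3)=CHCH2]n-OH. Each repeat unit contributes C5H8; the chain ends
# add two hydrogens and one oxygen overall.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def isoprenoid_alcohol(n: int):
    carbons, hydrogens, oxygens = 5 * n, 8 * n + 2, 1
    mass = (carbons * ATOMIC_MASS["C"] + hydrogens * ATOMIC_MASS["H"]
            + oxygens * ATOMIC_MASS["O"])
    return f"C{carbons}H{hydrogens}O", round(mass, 1)

for n, name in [(1, "prenol"), (2, "geraniol"), (3, "farnesol"), (4, "geranylgeraniol")]:
    print(name, *isoprenoid_alcohol(n))   # e.g. prenol C5H10O 86.1
```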
https://en.wikipedia.org/wiki/Topological%20string%20theory
In theoretical physics, topological string theory is a version of string theory. Topological string theory appeared in papers by theoretical physicists, such as Edward Witten and Cumrun Vafa, by analogy with Witten's earlier idea of topological quantum field theory. Overview There are two main versions of topological string theory: the topological A-model and the topological B-model. The results of the calculations in topological string theory generically encode all holomorphic quantities within the full string theory whose values are protected by spacetime supersymmetry. Various calculations in topological string theory are closely related to Chern–Simons theory, Gromov–Witten invariants, mirror symmetry, geometric Langlands Program, and many other topics. The operators in topological string theory represent the algebra of operators in the full string theory that preserve a certain amount of supersymmetry. Topological string theory is obtained by a topological twist of the worldsheet description of ordinary string theory: the operators are given different spins. The operation is fully analogous to the construction of topological field theory which is a related concept. Consequently, there are no local degrees of freedom in topological string theory. Admissible spacetimes The fundamental strings of string theory are two-dimensional surfaces. A quantum field theory known as the N = (1,1) sigma model is defined on each surface. This theory consist of maps from the surface to a supermanifold. Physically the supermanifold is interpreted as spacetime and each map is interpreted as the embedding of the string in spacetime. Only special spacetimes admit topological strings. Classically, one must choose a spacetime such that the theory respects an additional pair of supersymmetries, making the spacetime an N = (2,2) sigma model. A particular case of this is if the spacetime is a Kähler manifold and the H-flux is identically equal to zero. Generalized Kähler manifolds
https://en.wikipedia.org/wiki/Phenoxy%20herbicide
Phenoxy herbicides (or "phenoxies") are two families of chemicals that have been developed as commercially important herbicides, widely used in agriculture. They share the part structure of phenoxyacetic acid. Auxins The first group to be discovered act by mimicking the auxin growth hormone indoleacetic acid (IAA). When sprayed on broad-leaf plants they induce rapid, uncontrolled growth ("growing to death"). Thus when applied to monocotyledonous crops such as wheat or maize (corn), they selectively kill broad-leaf weeds, leaving the crops relatively unaffected. Introduced in 1946, these herbicides were in widespread use in agriculture by the middle of the 1950s. The best known phenoxy herbicides are (4-chloro-2-methylphenoxy)acetic acid (MCPA), 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T). Analogues of each of these three compounds, with an extra methyl group attached next to the carboxylic acid, were subsequently commercialised as mecoprop, dichlorprop and fenoprop. The addition of the methyl group creates a chiral centre in these molecules and biological activity is found only in the (2R)-isomer (as in dichlorprop). Other members of this group include 4-(2,4-dichlorophenoxy)butyric acid (2,4-DB) and 4-(4-chloro-2-methylphenoxy)butyric acid (MCPB), which act as propesticides for 2,4-D and MCPA respectively: that is, they are converted in plants to these active ingredients. All the auxin herbicides retain activity when applied as salts and esters since these are also capable of producing the parent acid in situ. The use of herbicides in US agriculture is mapped by the US Geological Survey. In recent survey years, 2,4-D was the most used of the auxins, with a larger quantity sprayed than the next most heavily applied, MCPA. The other auxin used in amounts comparable to 2,4-D as of 2019 is dicamba. It is a benzoic acid rather than a phenoxyacetic acid, and its use has grown rapidly since 2016 as crops genetically
https://en.wikipedia.org/wiki/ViroPharma
ViroPharma Incorporated was a pharmaceutical company that developed and sold drugs that addressed serious diseases treated by physician specialists and in hospital settings. The company focused its product development activities on viruses and human disease, including those caused by cytomegalovirus (CMV) and hepatitis C virus (HCV) infections. It was purchased by Shire in 2013, with Shire paying around $4.2 billion for the company in a deal that was finalized in January 2014. ViroPharma was a member of the NASDAQ Biotechnology Index and the S&P 600. The company had strategic relationships with GlaxoSmithKline, Schering-Plough, and Sanofi-Aventis. ViroPharma acquired Lev Pharmaceuticals in a merger in 2008. History ViroPharma Incorporated was founded in 1994 by Claude H. Nash (Chief Executive Officer), Mark A. McKinlay (Vice President, Research & Development), Marc S. Collett (Vice President, Discovery Research), Johanna A. Griffin (Vice President, Business Development), and Guy D. Diana (Vice President, Chemistry Research). None of the founders remained with the company. In November 2013, Shire plc announced that it would acquire ViroPharma for $4.2 billion. Products Marketed products Vancocin Pulvules HCl: licensed from Eli Lilly in 2004. Oral Vancocin is an antibiotic for treatment of staphylococcal enterocolitis and antibiotic-associated pseudomembranous colitis caused by Clostridium difficile. Pipeline Maribavir is an oral antiviral drug candidate licensed from GlaxoSmithKline in 2003 for the prevention and treatment of human cytomegalovirus disease in hematopoietic stem cell/bone marrow transplant patients. In February 2006, ViroPharma announced that the United States Food and Drug Administration (FDA) had granted the company fast track status for maribavir. In March 2006, the company announced that a Phase II study with maribavir demonstrated that prophylaxis with maribavir displays strong antiviral activity, as measured by statistically significant reduction in the rate
https://en.wikipedia.org/wiki/E-Bullion
e-Bullion was an Internet-based digital gold currency founded by Jim and Pamela Fayed of Moorpark, California, as part of their Goldfinger Coin & Bullion group of companies. The company was incorporated in 2000 and launched on July 4, 2001. Similar to competing systems such as e-gold, e-Bullion allowed for the instant transfer of gold and silver between user accounts. e-Bullion was a registered legal corporate entity of Panama. From 2001 to 2008 e-Bullion grew to have over one million users and substantial account transaction volume, and reserves of approximately 50,000 ounces of gold bullion. The company was a competitor to e-gold.com and goldmoney.com. In 2008, co-founder, Pamela Fayed, was murdered, leading to the indictment, trial and conviction of her husband Jim Fayed for hiring her murder. Fayed was sentenced to death, and is currently on death row in California. As a result of the murder, the U.S. Government seized all of the assets of e-Bullion, resulting in the closure of the company in August 2008. Features E-Bullion simply provided a way for users to hold and transfer balances in gold and silver. The company also offered a debit card to U.S. customers, which enabled them to convert their bullion balances to USD and withdraw at an automated teller machine (ATM) or use it for debit purchases. e-Bullion provided its own in-house currency exchange service through Goldfinger Coin & Bullion, Inc. An e-Bullion account could be funded directly via wire transfer from a bank account. e-Bullion was the first DGC to issue a debit card linked to an account. e-Bullion was the first DGC to use CRYPTOCard security tokens to protect user accounts from unauthorized access. Goldfinger Bullion Reserve Corporation, a sister company of e-Bullion, held the precious metals in bullion storage vaults located in Los Angeles, and at the Perth Mint in Australia. 2008 Murder of e-Bullion Principal The Fayeds had a troubled marriage which eventually led to divorce proceedings
https://en.wikipedia.org/wiki/Ghosting%20%28television%29
In television, a ghost is a replica of the transmitted image, offset in position, that is superimposed on top of the main image. It is often caused when a TV signal travels by two different paths to a receiving antenna, with a slight difference in timing. Analog ghosting Common causes of ghosts (in the more specific sense) are: Mismatched impedance along the communication channel, which causes unwanted reflections. The technical term for this phenomenon is ringing. Multipath distortion, because radio frequency waves may take paths of different length (by reflecting from buildings, transmission lines, aircraft, clouds, etc.) to reach the receiver. In addition, RF leaks may allow a signal to enter the set by a different path; this is most common in a large building such as a tower block or hotel where one TV antenna feeds many different rooms, each fitted with a TV aerial socket (known as pre-echo). By getting a better antenna or cable system it can be eliminated or mitigated. Note that ghosts are a problem specific to the video portion of television, largely because it uses AM for transmission. The audio portion uses FM, which has the desirable property that a stronger signal tends to overpower interference from weaker signals due to the capture effect. Even when ghosts are particularly bad in the picture, there may be little audio interference. SECAM TV uses FM for the chrominance signal, hence ghosting only affects the luma portion of its signal. TV is broadcast on VHF and UHF, which have line-of-sight propagation, and easily reflect off of buildings, mountains, and other objects. Pre-echo If the ghost is seen on the left of the main picture, then it is likely that the problem is pre-echo, which is seen in buildings with very long TV downleads where an RF leakage has allowed the TV signal to enter the tuner by a second route. For instance, plugging in an additional aerial to a TV which already has a communal TV aerial connection (or cable TV) can cause thi
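A ghost of this kind can be modelled as a delayed, attenuated copy of the signal superimposed on the original. The toy sketch below (a one-dimensional "scanline" with made-up numbers, not broadcast code) shows how a bright bar in the picture acquires a fainter copy a few samples to its right:

```python
# Minimal multipath model: add a delayed, attenuated copy of a scanline to itself.
def add_ghost(scanline, delay, strength):
    """Superimpose a copy of the scanline shifted right by `delay` samples
    and scaled by `strength` (0..1)."""
    out = list(scanline)
    for i, value in enumerate(scanline):
        j = i + delay
        if j < len(out):
            out[j] += strength * value
    return out

main = [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]          # a bright bar in the picture
print(add_ghost(main, delay=3, strength=0.4))   # a fainter bar appears 3 samples later
```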
https://en.wikipedia.org/wiki/Ribbon%20controller
A ribbon controller is a tactile sensor used to control synthesizers. It generally consists of a resistive strip that acts as a potentiometer. Because of this continuous control, ribbon controllers are often used to produce glissando effects. Early examples of the use of ribbon controllers in a musical instrument are found in the Ondes Martenot and Trautonium. In some early instruments, the slider of the potentiometer was worn as a ring by the player. In later ribbon controllers, the ring was replaced by a conductive layer that covered the resistive element. Ribbon controllers are found in early Moog synthesizers, but were omitted from most later synthesizers. The Yamaha CS-80 synthesizer is well known for its inclusion of a ribbon controller, used by Vangelis to create many of the characteristic sounds in the Blade Runner soundtrack. Although ribbon controllers are less common in later synthesizers, they were used in the Moog Liberation and Micromoog. Roland incorporated a ribbon controller in their JP-8000 synthesizer. Ribbon controllers are now available as control voltage and MIDI peripherals. An example of a modern synthesizer that uses a ribbon controller is the Swarmatron. Later, in 2010 and 2011, Korg released a series of mini-synths called the Monotron that use a ribbon controller; the line proved popular enough that it was still in production in 2023.
https://en.wikipedia.org/wiki/Cable%20Internet%20access
In telecommunications, cable Internet access, shortened to cable Internet, is a form of broadband internet access which uses the same infrastructure as cable television. Like digital subscriber line and fiber to the premises services, cable Internet access provides network edge connectivity (last mile access) from the Internet service provider to an end user. It is integrated into the cable television infrastructure analogously to DSL, which uses the existing telephone network. Cable TV networks and telecommunications networks are the two predominant forms of residential Internet access. Recently, both have seen increased competition from fiber deployments, wireless, mobile networks and satellite internet access. Hardware and bit rates Broadband cable Internet access requires a cable modem at the customer's premises and a cable modem termination system (CMTS) at a cable operator facility, typically a cable television headend. The two are connected via coaxial cable to a hybrid fibre-coaxial (HFC) network. Although access networks are referred to as last-mile technologies, cable Internet systems can typically operate over distances between the modem and the termination system far greater than a literal mile. If the HFC network is large, the cable modem termination system can be grouped into hubs for efficient management. Several standards have been used for cable internet, but the most common is DOCSIS. A cable modem at the customer is connected via coaxial cable to an optical node, and thus into an HFC network. An optical node serves many modems as the modems are connected with coaxial cable to a coaxial cable "trunk" via distribution "taps" on the trunk, which then connects to the node, possibly using amplifiers along the trunk. The optical node converts the radio-frequency (RF) signal in the coaxial cable trunk into light pulses to be sent through optical fibers in the HFC network. At the other end of the network, an optics platform or headend platform converts the light pulses into
https://en.wikipedia.org/wiki/Structural%20gene
A structural gene is a gene that codes for any RNA or protein product other than a regulatory factor (i.e. regulatory protein). A term derived from the lac operon, structural genes are typically viewed as those containing sequences of DNA corresponding to the amino acids of a protein that will be produced, as long as said protein does not function to regulate gene expression. Structural gene products include enzymes and structural proteins. Also encoded by structural genes are non-coding RNAs, such as rRNAs and tRNAs (but excluding any regulatory miRNAs and siRNAs). Placement in the genome In prokaryotes, structural genes of related function are typically adjacent to one another on a single strand of DNA, forming an operon. This permits simpler regulation of gene expression, as a single regulatory factor can affect transcription of all associated genes. This is best illustrated by the well-studied lac operon, in which three structural genes (lacZ, lacY, and lacA) are all regulated by a single promoter and a single operator. Prokaryotic structural genes are transcribed into a polycistronic mRNA and subsequently translated. In eukaryotes, structural genes are not sequentially placed. Each gene is instead composed of coding exons and interspersed non-coding introns. Regulatory sequences are typically found in non-coding regions upstream and downstream from the gene. Structural gene mRNAs must be spliced prior to translation to remove intronic sequences. This in turn lends itself to the eukaryotic phenomenon of alternative splicing, in which a single mRNA from a single structural gene can produce several different proteins based on which exons are included. Despite the complexity of this process, it is estimated that up to 94% of human genes are spliced in some way. Furthermore, different splicing patterns occur in different tissue types. An exception to this layout in eukaryotes are genes for histone proteins, which lack introns entirely. Also distinct are the rDNA
https://en.wikipedia.org/wiki/Respiratory%20tract%20infection
Respiratory tract infections (RTIs) are infectious diseases involving the respiratory tract. An infection of this type usually is further classified as an upper respiratory tract infection (URI or URTI) or a lower respiratory tract infection (LRI or LRTI). Lower respiratory infections, such as pneumonia, tend to be far more severe than upper respiratory infections, such as the common cold. Types Upper respiratory tract infection The upper respiratory tract is considered the airway above the glottis or vocal cords; sometimes, it is taken as the tract above the cricoid cartilage. This part of the tract includes the nose, sinuses, pharynx, and larynx. Typical infections of the upper respiratory tract include tonsillitis, pharyngitis, laryngitis, sinusitis, otitis media, certain influenza types, and the common cold. Symptoms of URIs can include cough, sore throat, runny nose, nasal congestion, headache, low-grade fever, facial pressure, and sneezing. Lower respiratory tract infection The lower respiratory tract consists of the trachea (windpipe), bronchial tubes, bronchioles, and the lungs. Lower respiratory tract infections are generally more severe than upper respiratory infections. LRIs are the leading cause of death among all infectious diseases. The two most common LRIs are bronchitis and pneumonia. Influenza affects both the upper and lower respiratory tracts, but more dangerous strains such as the highly pernicious H5N1 tend to bind to receptors deep in the lungs. Diagnosis Pulmonary function testing (PFT) allows evaluation and assessment of the airways and lung function, and provides benchmarks used to diagnose an array of respiratory tract infections. Methods such as gas dilution techniques and plethysmography help determine the functional residual capacity and total lung capacity. The decision to perform advanced pulmonary function testing is typically based on abnormal values in previous test results. A 2014 systematic rev
https://en.wikipedia.org/wiki/Pseudorandom%20function%20family
In cryptography, a pseudorandom function family, abbreviated PRF, is a collection of efficiently-computable functions which emulate a random oracle in the following way: no efficient algorithm can distinguish (with significant advantage) between a function chosen randomly from the PRF family and a random oracle (a function whose outputs are fixed completely at random). Pseudorandom functions are vital tools in the construction of cryptographic primitives, especially secure encryption schemes. Pseudorandom functions are not to be confused with pseudorandom generators (PRGs). The guarantee of a PRG is that a single output appears random if the input was chosen at random. On the other hand, the guarantee of a PRF is that all its outputs appear random, regardless of how the corresponding inputs were chosen, as long as the function was drawn at random from the PRF family. A pseudorandom function family can be constructed from any pseudorandom generator, using, for example, the "GGM" construction given by Goldreich, Goldwasser, and Micali. While in practice, block ciphers are used in most instances where a pseudorandom function is needed, they do not, in general, constitute a pseudorandom function family, as block ciphers such as AES are defined for only limited numbers of input and key sizes. Motivations from random functions A PRF is an efficient (i.e. computable in polynomial time), deterministic function that maps two distinct sets (domain and range) and looks like a truly random function. Essentially, a truly random function would just be composed of a lookup table filled with uniformly distributed random entries. However, in practice, a PRF is given an input string in the domain and a hidden random seed and runs multiple times with the same input string and seed, always returning the same value. Nonetheless, given an arbitrary input string, the output looks random if the seed is taken from a uniform distribution. A PRF is considered to be good if its behavior
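A sketch of the GGM idea mentioned above, under the assumption that SHA-256 with distinct suffixes can stand in for a length-doubling PRG (this is only an illustration, not a proven PRG): the secret key acts as the root seed, and each input bit selects the left or right half of the expanded seed.

```python
# Illustrative GGM-style PRF: walk a binary tree keyed by the input bits.
import hashlib

def prg(seed: bytes) -> bytes:
    """Stretch a 32-byte seed to 64 bytes (stand-in for a length-doubling PRG)."""
    return (hashlib.sha256(seed + b"\x00").digest()
            + hashlib.sha256(seed + b"\x01").digest())

def ggm_prf(key: bytes, input_bits: str) -> bytes:
    """PRF output for a bit-string input, with the secret key as the root seed."""
    seed = key
    for bit in input_bits:
        expanded = prg(seed)
        seed = expanded[:32] if bit == "0" else expanded[32:]
    return seed

key = b"\x07" * 32
print(ggm_prf(key, "0110").hex())   # deterministic for a fixed key and input
```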
https://en.wikipedia.org/wiki/Regulator%20gene
A regulator gene, regulator, or regulatory gene is a gene involved in controlling the expression of one or more other genes. The regulatory sequences on which regulator gene products act are often located at the five prime (5') side of the transcription start site of the gene they regulate. In addition, these sequences can also be found at the three prime (3') side of the transcription start site. In both cases, whether the regulatory sequence occurs before (5') or after (3') the gene it regulates, the sequence is often many kilobases away from the transcription start site. A regulator gene may encode a protein, or it may work at the level of RNA, as in the case of genes encoding microRNAs. An example of a regulator gene is a gene that codes for a repressor protein that acts on an operator (a DNA segment that binds repressor proteins, thereby blocking transcription of the gene by RNA polymerase). In prokaryotes, regulator genes often code for repressor proteins. Repressor proteins bind to operators or promoters, preventing RNA polymerase from transcribing RNA. They are usually constantly expressed so the cell always has a supply of repressor molecules on hand. Inducers cause repressor proteins to change shape or otherwise become unable to bind DNA, allowing RNA polymerase to continue transcription. Regulator genes can be located within an operon, adjacent to it, or far away from it. Other regulatory genes code for activator proteins. An activator binds to a site on the DNA molecule and causes an increase in transcription of a nearby gene. In prokaryotes, a well-known activator protein is the catabolite activator protein (CAP), involved in positive control of the lac operon. In the regulation of gene expression, studied in evolutionary developmental biology (evo-devo), both activators and repressors play important roles. Regulatory genes can also be described as positive or negative regulators, based on the environmental conditions that surround the ce
https://en.wikipedia.org/wiki/E.%20B.%20Babcock
Ernest Brown Babcock (July 10, 1877 – December 8, 1954) was an American plant geneticist who pioneered the understanding of plant evolution in terms of genetics. He is particularly known for seeking to understand, through field investigations and extensive experiments, the entire polyploid apomictic genus Crepis, in which he recognized 196 species. He published more than 100 articles and books explaining plant genetics, including the seminal textbook (with Roy Elwood Clausen) Genetics in Relation to Agriculture. He instructed Marion Elizabeth Stilwell Cave.
https://en.wikipedia.org/wiki/Genetic%20discrimination
Genetic discrimination occurs when people treat others (or are treated) differently because they have or are perceived to have a gene mutation(s) that causes or increases the risk of an inherited disorder. It may also refer to any and all discrimination based on the genotype of a person rather than their individual merits, including that related to race, although the latter would be more appropriately included under racial discrimination. Some legal scholars have argued for a more precise and broader definition of genetic discrimination: "Genetic discrimination should be defined as when an individual is subjected to negative treatment, not as a result of the individual's physical manifestation of disease or disability, but solely because of the individual's genetic composition." Genetic Discrimination is considered to have its foundations in genetic determinism and genetic essentialism, and is based on the concept of genism, i.e. distinctive human characteristics and capacities are determined by genes. Genetic discrimination takes different forms depending on the country and the protections that have been taken to limit genetic discrimination, such as GINA in the United States that protects people from being barred from working or from receiving healthcare as a result of their genetic makeup. The umbrella of genetic discrimination includes the notion of informed consent, which refers to an individual's right to make a decision about their participation in research with complete comprehension of the research study. Within the United States, genetic discrimination is an ever-evolving concept that remains prominent across different domains. Emerging technology such as direct-to-consumer genetic tests have allowed for broad genetic health information to be more accessible to the public but raises concerns about privacy. In addition, the COVID-19 pandemic has exacerbated difficulties of those with genetic conditions as they have faced discrimination within the U.S. hea
https://en.wikipedia.org/wiki/Chromomere
A chromomere, also known as an idiomere, is one of the serially aligned beads or granules of a eukaryotic chromosome, resulting from local coiling of a continuous DNA thread. Chromomeres are regions of chromatin that have been compacted through localized contraction. In areas of chromatin where transcription is absent, condensation of DNA and protein complexes results in the formation of chromomeres. Chromomeres are visible on a chromosome during the prophase of meiosis and mitosis. Giant banded (polytene) chromosomes result from repeated replication of the chromosomes and synapsis of homologs without cell division, a process called endomitosis. These chromosomes consist of more than 1000 copies of the same chromatid that are aligned and produce alternating dark and light bands when stained. The dark bands are the chromomeres. It is unknown when chromomeres first appear on the chromosome. Chromomeres can be observed best when chromosomes are highly condensed. The chromomeres are present during the leptotene phase of prophase I of meiosis. During the zygotene phase of prophase I, the chromomeres of homologs align with each other to form homologous rough pairing (homology searching). These chromomeres help provide a unique identity for each homologous pair. They appear as dense granules during the leptotene stage. There are more than 2,000 chromomeres on the 20 chromosomes of maize. Physical properties Chromomeres are organized in a discontinuous linear pattern along the condensed chromosomes (pachytene chromosomes) during the prophase stage of meiosis. The linear pattern of chromomeres is linked to the arrangement of genes along the chromosome. Chromomeres contain genes and sometimes clusters of genes within their structure. Aggregates of chromomeres are known as chromonemata. Cohesive proteins SMC3 and hRAD21 (which plays a role in sister chromatid cohesion) are found within chromomeres at high concentrations, and maintain the proper structure of chromomeres. The protein XCAP-
https://en.wikipedia.org/wiki/Spawning%20trigger
Spawning triggers are environmental cues that cause marine animals to breed. Most commonly they involve sudden changes in the environment, such as changes in temperature, salinity, and/or the abundance of food. Catfish of the genus Corydoras, for example, spawn immediately after heavy rain, the specific cues being an increase in water level and a decrease in temperature. Rising water levels give many fish access to areas further upstream that are better suited for reproduction and were not previously accessible. This is a risky strategy, however: if the fish wait too long, they may become trapped in small pockets of water and eventually die when the levels recede. Discus will breed when the temperature goes up and there is an overabundance of much-needed food such as mosquito larvae. Many fish stock up on food to ensure they make it through this exhausting period, which is very hard on their bodies, while others go without eating during the spawning process because they are so focused on their offspring. Spawning triggers allow many fish to synchronize their breeding, making it more probable that individual fish will find a mate. In most cases, if these triggers were not present, males and females would not be ready to breed at the same time and reproductive success would suffer. However, many fish do not respond to specific spawning triggers and will breed either constantly (e.g., guppies); at specific times of the year (e.g., grunion); or only at a certain point in their life cycle (e.g., eels). Some fish, such as salmon, mature over their whole lives in the ocean and then swim many miles up rivers to lay their eggs; the journey is so gruelling that some species die after spawning. Although most commonly associated with fish, spawning triggers are also present in bivalves and corals. In certain cases, aquarists can trigger spawning by duplicating the natural conditions where fish would breed. This c
https://en.wikipedia.org/wiki/Interrupt%20priority%20level
The interrupt priority level (IPL) is a part of the current system interrupt state, which indicates the interrupt requests that will currently be accepted. The IPL may be indicated in hardware by the registers in a programmable interrupt controller, or in software by a bitmask or integer value and source code of threads. Overview An integer based IPL may be as small as a single bit, with just two values: 0 (all interrupts enabled) or 1 (all interrupts disabled), as in the MOS Technology 6502. However, some architectures permit a greater range of values, where each value enables interrupt requests that specify a higher level, while blocking ones from the same or lower level. Assigning different priorities to interrupt requests can be useful in trying to balance system throughput versus interrupt latency. Some kinds of interrupts need to be responded to more quickly than others, but the amount of processing might not be large, so it makes sense to assign a higher priority to that kind of interrupt. Control of interrupt level was also used to synchronize access to kernel data structures. Thus, the level-3 scheduler interrupt handler would temporarily raise IPL to 7 before accessing any scheduler data structures, then lower back to 3 before switching process contexts. However, it was not allowed for an interrupt handler to lower IPL below that at which it was entered, since to do so could destroy the integrity of the synchronization system. Of course, multiprocessor systems add their complications, which are not addressed here. Regardless of what the hardware might support, typical UNIX-type systems only use two levels: the minimum (all interrupts disabled) and the maximum (all interrupts enabled). OpenVMS IPLs As an example of one of the more elaborate IPL-handling systems ever deployed, the VAX computer and associated VMS operating system supports 32 priority levels, from 0 to 31. Priorities 16 and above are for requests from external hardware, while values belo
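A toy model of the scheme just described (the levels, names, and guard are made up for illustration, not those of any particular system): a request is accepted only if its level exceeds the current IPL, and a handler may raise, but never lower, the IPL around a critical section.

```python
# Minimal sketch of integer interrupt priority levels.
current_ipl = 0

def accepted(request_level: int) -> bool:
    """A request is accepted only if its level is higher than the current IPL."""
    return request_level > current_ipl

def with_raised_ipl(new_level: int, critical_section):
    """Raise the IPL for the duration of a critical section, then restore it.
    Lowering the IPL below the entry level is deliberately disallowed."""
    global current_ipl
    if new_level < current_ipl:
        raise ValueError("must not lower IPL below the level at which we entered")
    saved, current_ipl = current_ipl, new_level
    try:
        critical_section()
    finally:
        current_ipl = saved

print(accepted(3))                                # True while IPL is 0
with_raised_ipl(7, lambda: print(accepted(3)))    # False while IPL is raised to 7
```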
https://en.wikipedia.org/wiki/Gravity%20model%20of%20trade
The gravity model of international trade in international economics is a model that, in its traditional form, predicts bilateral trade flows based on the economic sizes and distance between two units. Research shows that there is "overwhelming evidence that trade tends to fall with distance." The model was first introduced by Walter Isard in 1954. The basic model for trade between two countries (i and j) takes the form F_ij = G · (M_i · M_j) / D_ij. In this formula G is a constant, F stands for trade flow, D stands for the distance and M stands for the economic dimensions of the countries that are being measured. The equation can be changed into a linear form for the purpose of econometric analyses by employing logarithms. The model has been used by economists to analyse the determinants of bilateral trade flows such as common borders, common languages, common legal systems, common currencies, common colonial legacies, and it has been used to test the effectiveness of trade agreements and organizations such as the North American Free Trade Agreement (NAFTA) and the World Trade Organization (WTO) (Head and Mayer 2014). The model has also been used in international relations to evaluate the impact of treaties and alliances on trade (Head and Mayer). The model has also been applied to other bilateral flow data (also known as "dyadic" data) such as migration, traffic, remittances and foreign direct investment. Theoretical justifications and research The model has been an empirical success in that it accurately predicts trade flows between countries for many goods and services, but for a long time some scholars believed that there was no theoretical justification for the gravity equation. However, a gravity relationship can arise in almost any trade model that includes trade costs that increase with distance. The gravity model estimates the pattern of international trade. While the model's basic form consists of factors that have more to do with geography and spatiality, the gravity model h
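Because the equation becomes linear in logarithms, ln F_ij = ln G + a·ln M_i + b·ln M_j − c·ln D_ij, its parameters can be recovered by ordinary least squares. A sketch on synthetic data (the sizes, distances and coefficients are invented, not a real trade dataset):

```python
# Illustrative gravity-model estimation by OLS on the log-linearised equation.
import numpy as np

rng = np.random.default_rng(0)
n = 500
Mi, Mj = rng.lognormal(3, 1, n), rng.lognormal(3, 1, n)   # "economic sizes"
D = rng.lognormal(6, 0.5, n)                              # bilateral "distances"
true_G, a, b, c = 2.0, 1.0, 1.0, 1.0
F = true_G * Mi**a * Mj**b / D**c * rng.lognormal(0, 0.1, n)  # noisy trade flows

X = np.column_stack([np.ones(n), np.log(Mi), np.log(Mj), np.log(D)])
coef, *_ = np.linalg.lstsq(X, np.log(F), rcond=None)
print("ln G, a, b, -c ≈", np.round(coef, 2))   # should recover roughly [0.69, 1, 1, -1]
```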
https://en.wikipedia.org/wiki/6264
The 6264 is a JEDEC-standard static RAM integrated circuit. It has a capacity of 64 Kbit (8 KB). It is produced by a wide variety of different vendors, including Hitachi, Hynix, and Cypress Semiconductor. It is available in a variety of packages, such as DIP, SPDIP, and SOIC. Some versions of the 6264 can run in ultra-low-power mode and retain memory when not in use, thus making them suitable for battery backup applications. External links 6264 Datasheet (Cypress, PDF format)
https://en.wikipedia.org/wiki/Pole%E2%80%93zero%20plot
In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system such as: Stability Causal system / anticausal system Region of convergence (ROC) Minimum phase / non minimum phase A pole-zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O. A pole-zero plot is plotted in the plane of a complex frequency domain, which can represent either a continuous-time or a discrete-time system: Continuous-time systems use the Laplace transform and are plotted in the s-plane: Real frequency components are along its vertical axis (the imaginary line where s = jω) Discrete-time systems use the Z-transform and are plotted in the z-plane: Real frequency components are along its unit circle (|z| = 1) Continuous-time systems In general, a rational transfer function for a continuous-time LTI system has the form: H(s) = B(s)/A(s) = (b_M s^M + ... + b_1 s + b_0) / (a_N s^N + ... + a_1 s + a_0), where B(s) and A(s) are polynomials in s, M is the order of the numerator polynomial, b_k is the k-th coefficient of the numerator polynomial, N is the order of the denominator polynomial, and a_k is the k-th coefficient of the denominator polynomial. Either M or N or both may be zero, but in real systems, it should be the case that N ≥ M; otherwise the gain would be unbounded at high frequencies. Poles and zeros The zeros of the system are roots of the numerator polynomial: the values s such that B(s) = 0. The poles of the system are roots of the denominator polynomial: the values s such that A(s) = 0. Region of convergence The region of convergence (ROC) for a given continuous-time transfer function is a half-plane or vertical strip, either of which contains no poles. In general, the ROC is not unique, and the particular ROC
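Given numerator and denominator coefficients, the zeros and poles are simply polynomial roots, so the data for a pole–zero plot can be computed directly. A brief sketch using numpy (the example transfer function is ours, chosen only for illustration):

```python
# Illustrative sketch: poles and zeros from transfer-function coefficients, plus a
# continuous-time stability check (all poles strictly in the left half of the s-plane).
import numpy as np

def poles_and_zeros(num, den):
    """num, den: coefficient lists in descending powers of s."""
    return np.roots(num), np.roots(den)

# Example: H(s) = (s + 1) / (s^2 + 3s + 2) = (s + 1) / ((s + 1)(s + 2))
zeros, poles = poles_and_zeros([1, 1], [1, 3, 2])
print("zeros:", zeros)                    # [-1.]
print("poles:", poles)                    # [-2. -1.]
print("stable:", bool(np.all(poles.real < 0)))   # True
```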
https://en.wikipedia.org/wiki/Event%20generator
Event generators are software libraries that generate simulated high-energy particle physics events. They randomly generate events as those produced in particle accelerators, collider experiments or the early universe. Events come in different types called processes as discussed in the Automatic calculation of particle interaction or decay article. Despite the simple structure of the tree-level perturbative quantum field theory description of the collision and decay processes in an event, the observed high-energy process usually contains significant amount of modifications, like photon and gluon bremsstrahlung or loop diagram corrections, that usually are too complex to be easily evaluated in real calculations directly on the diagrammatic level. Furthermore, the non-perturbative nature of QCD bound states makes it necessary to include information that is well beyond the reach of perturbative quantum field theory, and also beyond present ability of computation in lattice QCD. And in collisional systems more complex than a few leptons and hadrons (e.g. heavy-ion collisions), the collective behavior of the system would involve a phenomenological description that also cannot be easily obtained from the fundamental field theory by a simple calculus. Use in simulations As said above, the experimental calibration involves processes that usually are too complicated to be easily evaluated in calculations directly, so any realistic test of the underlying physical process in a particle accelerator experiment, therefore, requires an adequate inclusion of these complex behaviors surrounding the actual process. Based on the fact that in most processes a factorization of the full process into individual problems is possible (which means a negligible effect from interference), these individual processes are calculated separately, and the probabilistic branching between them are performed using Monte Carlo methods. The final-state particles generated by event generators can be f
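The probabilistic branching between processes amounts to sampling a process with probability proportional to its cross-section. A toy sketch of that step (the listed processes and cross-sections are invented placeholders, not values from any real generator):

```python
# Minimal Monte Carlo process selection: pick a process with probability
# proportional to its (toy) cross-section.
import random

processes = {            # hypothetical relative cross-sections, arbitrary units
    "Z -> e+ e-": 1.0,
    "Z -> mu+ mu-": 1.0,
    "Z -> hadrons": 20.0,
}

def draw_process(rng=random):
    total = sum(processes.values())
    r = rng.uniform(0.0, total)
    running = 0.0
    for name, sigma in processes.items():
        running += sigma
        if r <= running:
            return name
    return name   # guard against floating-point rounding at the upper edge

counts = {}
for _ in range(10_000):
    chosen = draw_process()
    counts[chosen] = counts.get(chosen, 0) + 1
print(counts)   # frequencies roughly proportional to the cross-sections
```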
https://en.wikipedia.org/wiki/Sony%20Watchman
The Sony Watchman is a line of portable pocket televisions trademarked and produced by Sony. The line was introduced in 1982 and discontinued in 2000. Its name came from a portmanteau formed of "Watch" (watching television) and "man" from Sony's Walkman personal cassette audio players. There were more than 65 models of the Watchman made before its discontinuation. As the models progressed, display size increased and new features were added. Due to the switch to digital broadcasting, most models of the Sony Watchman can no longer be used to receive live television broadcasts, without the use of a digital converter box. FD-210 The initial model was introduced in 1982 as the FD-210, which had a black & white five-centimeter (2") Cathode-ray tube display. The device weighed around 650 grams (23 oz), with a measurement of 87 x 198 x 33 millimeters (3½" x 7¾" x 1¼"). The device was sold in Japan with a price of 54,800 yen. Roughly two years later, in 1984, the device was introduced to Europe and North America. Later releases Sony manufactured more than 65 models of the Watchman before its discontinuation in 2000. Upon the release of further models after the FD-210, the display size increased, and new features were introduced. The FD-3, introduced in 1987, had a built-in digital clock. The FD-30, introduced in 1984 had a built-in AM/FM Stereo radio. The FD-40/42/44/45 were among the largest Watchmen, utilizing a 4" CRT display. The FD-40 introduced a single composite A/V input. The FD-45, introduced in 1986, was water-resistant. In 1988/1989, the FDL 330S color Watchman TV/Monitor with LCD display was introduced. In 1990, the FDL-310, a Watchman with a color LCD display was introduced. The FD-280/285, made from 1990 to 1994, was the last Watchman to use a black and white CRT display. One of the last Watchmen was the FDL-22 introduced in 1998, which featured an ergonomic body which made it easier to hold, and introduced Sony's Straptenna, where the wrist strap served as
https://en.wikipedia.org/wiki/Beta-2%20transferrin
Beta-2 transferrin is a carbohydrate-free (desialated) isoform of transferrin, which is almost exclusively found in the cerebrospinal fluid. It is not found in blood, mucus or tears, thus making it a specific marker of cerebrospinal fluid, applied as an assay in cases where cerebrospinal fluid leakage is suspected. Beta-2 transferrin would also be positive in patients with perilymph fluid leaks, as it is also present in inner ear perilymph. Thus, beta-2 transferrin in otorrhea would be suggestive of either a CSF leak or a perilymph leak.
https://en.wikipedia.org/wiki/Information%20ratio
The information ratio measures and compares the active return of an investment (e.g., a security or portfolio) compared to a benchmark index relative to the volatility of the active return (also known as active risk or benchmark tracking risk). It is defined as the active return (the difference between the returns of the investment and the returns of the benchmark) divided by the tracking error (the standard deviation of the active return, i.e., the additional risk). It represents the additional amount of return that an investor receives per unit of increase in risk. The information ratio is simply the ratio of the active return of the portfolio divided by the tracking error of its return, with both components measured relative to the performance of the agreed-on benchmark. It is often used to gauge the skill of managers of mutual funds, hedge funds, etc. It measures the active return of the manager's portfolio divided by the amount of risk that the manager takes relative to the benchmark. The higher the information ratio, the higher the active return of the portfolio, given the amount of risk taken, and the better the manager. The information ratio is similar to the Sharpe ratio, the main difference being that the Sharpe ratio uses a risk-free return as benchmark (such as a U.S. Treasury security) whereas the information ratio uses a risky index as benchmark (such as the S&P500). The Sharpe ratio is useful for an attribution of the absolute returns of a portfolio, and the information ratio is useful for an attribution of the relative returns of a portfolio. Definition The information ratio is defined as: , where is the portfolio return, is the benchmark return, is the expected value of the active return, and is the standard deviation of the active return, which is an alternate definition of the aforementioned tracking error. Note in this case, is defined as excess return, not the risk-adjusted excess return or Jensen's alpha calculated using regress
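A small worked sketch of the definition (the per-period return series are made-up numbers): the information ratio is the mean active return divided by the sample standard deviation of the active return.

```python
# Illustrative information-ratio calculation from toy return series.
import numpy as np

portfolio = np.array([0.021, 0.013, -0.004, 0.017, 0.009, 0.012])
benchmark = np.array([0.018, 0.010, -0.002, 0.012, 0.011, 0.008])

active = portfolio - benchmark            # active (excess-over-benchmark) returns
tracking_error = active.std(ddof=1)       # sample standard deviation of active returns
information_ratio = active.mean() / tracking_error
print(round(information_ratio, 2))        # about 0.6 for these numbers
```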
https://en.wikipedia.org/wiki/Lefschetz%20hyperplane%20theorem
In mathematics, specifically in algebraic geometry and algebraic topology, the Lefschetz hyperplane theorem is a precise statement of certain relations between the shape of an algebraic variety and the shape of its subvarieties. More precisely, the theorem says that for a variety X embedded in projective space and a hyperplane section Y, the homology, cohomology, and homotopy groups of X determine those of Y. A result of this kind was first stated by Solomon Lefschetz for homology groups of complex algebraic varieties. Similar results have since been found for homotopy groups, in positive characteristic, and in other homology and cohomology theories. A far-reaching generalization of the hard Lefschetz theorem is given by the decomposition theorem. The Lefschetz hyperplane theorem for complex projective varieties Let X be an n-dimensional complex projective algebraic variety in CPN, and let Y be a hyperplane section of X such that U = X ∖ Y is smooth. The Lefschetz theorem refers to any of the following statements: The natural map Hk(Y; Z) → Hk(X; Z) in singular homology is an isomorphism for k < n − 1 and is surjective for k = n − 1. The natural map Hk(X; Z) → Hk(Y; Z) in singular cohomology is an isomorphism for k < n − 1 and is injective for k = n − 1. The natural map πk(Y) → πk(X) is an isomorphism for k < n − 1 and is surjective for k = n − 1. Using a long exact sequence, one can show that each of these statements is equivalent to a vanishing theorem for certain relative topological invariants. In order, these are: The relative singular homology groups Hk(X, Y; Z) are zero for k ≤ n − 1. The relative singular cohomology groups Hk(X, Y; Z) are zero for k ≤ n − 1. The relative homotopy groups πk(X, Y) are zero for k ≤ n − 1. Lefschetz's proof Solomon Lefschetz used his idea of a Lefschetz pencil to prove the theorem. Rather than considering the hyperplane section Y alone, he put it into a family of hyperplane sections Yt, where Y = Y0. Because a generic hyperplane section i
https://en.wikipedia.org/wiki/Hyperplane%20section
In mathematics, a hyperplane section of a subset X of projective space Pn is the intersection of X with some hyperplane H. In other words, we look at the subset XH of those elements x of X that satisfy the single linear condition L = 0 defining H as a linear subspace. Here L or H can range over the dual projective space of non-zero linear forms in the homogeneous coordinates, up to scalar multiplication. From a geometrical point of view, the most interesting case is when X is an algebraic subvariety; for more general cases, in mathematical analysis, some analogue of the Radon transform applies. In algebraic geometry, assuming therefore that X is V, a subvariety not lying completely in any H, the hyperplane sections are algebraic sets with irreducible components all of dimension dim(V) − 1. What more can be said is addressed by a collection of results known collectively as Bertini's theorem. The topology of hyperplane sections is studied in the topic of the Lefschetz hyperplane theorem and its refinements. Because the dimension drops by one in taking hyperplane sections, the process is potentially an inductive method for understanding varieties of higher dimension. A basic tool for that is the Lefschetz pencil.
https://en.wikipedia.org/wiki/Time%20base%20correction
Time base correction (TBC) is a technique to reduce or eliminate errors caused by mechanical instability present in analog recordings on mechanical media. Without time base correction, a signal from a videotape recorder (VTR) or videocassette recorder (VCR) cannot be mixed with other, more time-stable devices found in television studios and post-production facilities. Most broadcast quality VCRs have simple time base correctors built in though external TBCs are often used. Some high-end domestic analog video recorders and camcorders also include a TBC circuit, which typically can be switched off if required. Time base correction counteracts errors by buffering the video signal as it comes off the videotape at an unsteady rate, and releasing it at a steady rate. TBCs also allow a variable delay in the video stream. By adjusting the rate and delay using a waveform monitor and a vectorscope, the corrected signal can now match the timing of the other devices in the system. If all of the devices in a system are adjusted so their signals meet the video switcher at the same time and at the same rate, the signals can be mixed. A single master clock or sync generator provides the reference for all of the devices' clocks. Video correction As far back as 1956, professional reel-to-reel audio tape recorders relying on mechanical stability alone were stable enough that pitch distortion could be below audible level without time base correction. However, the higher sensitivity of video recordings meant that even the best mechanical solutions still resulted in detectable distortion of the video signals and difficulty locking to downstream devices. A video signal consists of picture information but also sync and subcarrier signals which allow the image to be framed up square on the monitor, reproduce colors accurately and, importantly, allow the combination and switching of two or more video signals. Types Physically there are only 4 types, dedicated IC, add-in cards for p
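The buffering idea can be caricatured in a few lines: samples arrive at an unsteady rate, sit in a buffer, and are released one per tick of a steady output clock. This is only a toy model of the principle, not how a real TBC handles video lines, fields, or sync regeneration.

```python
# Toy model of time base correction: jittery input, buffered, steady output.
import random
from collections import deque

random.seed(1)
tape_lines = list(range(60))     # 60 "video lines" still on the tape
buffer = deque()
read_index = 0

for tick in range(120):          # ticks of the steady output clock
    # Unsteady source: zero, one, or two lines may come off the tape this tick.
    for _ in range(random.choice((0, 1, 2))):
        if read_index < len(tape_lines):
            buffer.append(tape_lines[read_index])
            read_index += 1
    # Steady release: at most one line per tick, locked to the reference clock.
    if buffer:
        buffer.popleft()         # this line would be emitted here

print("lines read:", read_index, "lines still buffered:", len(buffer))
```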
https://en.wikipedia.org/wiki/Ray-tracing%20hardware
Ray-tracing hardware is special-purpose computer hardware designed for accelerating ray tracing calculations. Introduction: Ray tracing and rasterization The problem of rendering 3D graphics can be conceptually presented as finding all intersections between a set of "primitives" (typically triangles or polygons) and a set of "rays" (typically one or more per pixel). Up to 2010, all typical graphic acceleration boards, called graphics processing units (GPUs), used rasterization algorithms. The ray tracing algorithm solves the rendering problem in a different way. In each step, it finds all intersections of a ray with a set of relevant primitives of the scene. Both approaches have their own benefits and drawbacks. Rasterization can be performed using devices based on a stream computing model, one triangle at a time, and access to the complete scene is needed only once. The drawback of rasterization is that non-local effects required for an accurate simulation of a scene, such as reflections and shadows, are difficult to compute, and refractions nearly impossible. The ray tracing algorithm is inherently suitable for scaling by parallelization of individual ray renders. However, anything other than ray casting requires recursion of the ray tracing algorithm (and random access to the scene graph) to complete its analysis, since reflected, refracted, and scattered rays require that various parts of the scene be re-accessed in a way not easily predicted. But it can easily compute various kinds of physically correct effects, providing a much more realistic impression than rasterization. The complexity of a well implemented ray tracing algorithm scales logarithmically; this is due to objects (triangles and collections of triangles) being placed into BSP trees or similar structures, and only being analyzed if a ray intersects with the bounding volume of the binary space partition. Implementations Various implementations of ray tracing hardware have been created, both
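The core operation named above, intersecting one ray with one primitive, is easy to sketch for a sphere (hardware typically targets triangles, but the sphere case keeps the example short). The function and values below are illustrative only:

```python
# Illustrative ray-sphere intersection: the innermost operation a ray tracer repeats.
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t >= 0 to the nearest hit, or None if the ray misses.
    The direction vector is assumed to be normalised."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c            # quadratic 'a' is 1 for a normalised direction
    if disc < 0:
        return None                   # no real roots: the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    if t < 0:                         # nearest root is behind the ray origin
        t = (-b + math.sqrt(disc)) / 2.0
    return t if t >= 0 else None

print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))   # 4.0
```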
https://en.wikipedia.org/wiki/IEEE%201901
The IEEE 1901 Standard, established in 2010, set the first worldwide benchmark for powerline communication tailored for uses like multimedia home networks, audio-video, and the smart grid. This standard underwent an amendment in IEEE 1901a-2019, introducing improvements to the HD-PLC physical layer (wavelet) for Internet of Things (IoT) applications. It was further updated in 2020, known as IEEE 1901-2020. The IEEE Std 1901 is a standard for high speed (up to 500 Mbit/s at the physical layer) communication devices via electric power lines, often called broadband over power lines (BPL). The standard uses transmission frequencies below 100 MHz. This standard is usable by all classes of BPL devices, including BPL devices used for the connection (<1500m to the premises) to Internet access services as well as BPL devices used within buildings for local area networks, smart energy applications, transportation platforms (vehicle), and other data distribution applications (<100m between devices). The IEEE Std 1901 standard replaced a dozen previous powerline specifications. It includes a mandatory coexistence Inter-System Protocol (ISP). The IEEE 1901 ISP prevents interference when the different BPL implementations are operated within close proximity of one another. To handle multiple devices attempting to use the line at the same time, IEEE Std 1901 supports TDMA, but CSMA/CA (also used in WiFi) is most commonly implemented by devices sold. The 1901 standard is mandatory to initiate SAE J1772 electric vehicle DC charging (AC uses PWM) and the sole powerline protocol for IEEE 1905.1 heterogeneous networking. It was highly recommended in the IEEE P1909.1 smart grid standards because those are primarily for control of AC devices, which by definition always have AC power connections - thus no additional connections are required. Updates overview The IEEE 1901 Standard was a significant step in the development of powerline communication (PLC) technologies. PLC allows for
https://en.wikipedia.org/wiki/Configuration%20Menu%20Language
Configuration Menu Language (CML) was used, in Linux kernel versions prior to 2.5.45, to configure the values that determine the composition and exact functionality of the kernel. Many variations in kernel functionality are possible, and the kernel can be customized, for instance, for the specifications of the exact hardware it will run on; it can also be tuned to administrator preferences. CML was written by Raymond Chen in 1993. Its question-and-answer interface allowed systematic selection of particular behaviors without editing multiple system files. Eric S. Raymond wrote a menu-driven module named CML2 to replace it, but it was officially rejected. Linus Torvalds attributed the rejection in a 2007 lkml.org post to a preference for small incremental changes and to concern that the maintainer had not been involved in the rewrite. "You can't just...go do your own thing and expect it to be merged," he said, noting that Raymond "left with a splash" over the rejection. LinuxKernelConf replaced CML in kernel version 2.5.45, and remains in use for the 4.0 kernel.
https://en.wikipedia.org/wiki/Prostanoid
Prostanoids are active lipid mediators that regulate the inflammatory response. Prostanoids are a subclass of eicosanoids consisting of the prostaglandins (mediators of inflammatory and anaphylactic reactions), the thromboxanes (mediators of vasoconstriction), and the prostacyclins (active in the resolution phase of inflammation). Prostanoid synthesis is targeted by NSAIDs, which gives these pathways therapeutic potential. Prostanoids are present in areas of the body such as the gastrointestinal tract, urinary tract, respiratory and cardiovascular systems, reproductive tract and vascular system. Prostanoids also aid water and ion transport within cells. Because prostanoid receptors trigger responses upon activation, they may open possibilities for treatments within different organ systems. History Prostanoids were discovered through biological research conducted in the 1930s. They were first detected in semen by the Swedish physiologist Ulf von Euler, who assumed they originated from the prostate. After intensive study throughout the 1960s and 1970s, Sune K. Bergström, Bengt Ingemar Samuelsson and the British biochemist Sir John Robert Vane came to understand the function and chemical formation of prostanoids, receiving a Nobel Prize for their analysis of prostanoids. Biosynthesis of prostaglandins Cyclooxygenase (COX) catalyzes the conversion of the free essential fatty acids to prostanoids by a two-step process. In the first step, two molecules of O2 are added as two peroxide linkages and a five-membered carbon ring is formed near the middle of the fatty acid chain. This forms the short-lived, unstable intermediate Prostaglandin G (PGG). One of the peroxide linkages sheds a single oxygen, forming PGH. (See diagrams and more detail at Cyclooxygenase.) All other prostanoids originate from PGH (as PGH1, PGH2, or PGH3). The image at right shows how PGH2 (derived from arachidonic acid) is converted: By PGE synthetase into PGE2 (which in turn
https://en.wikipedia.org/wiki/Miacis
Miacis ("small point") is an extinct genus of placental mammals from the clade Carnivoraformes that lived in North America from the early to middle Eocene. Description Miacis had five-clawed feet, was about the size of a weasel (~30 cm), and lived on the North American continent. It retained some primitive characteristics such as a low skull, a long slender body, a long tail, and short legs. Miacis retained 44 teeth, although some reductions in this number were apparently in progress and some of the teeth were reduced in size. The hind limbs were longer than the forelimbs, the pelvis was dog-like in form and structure, and some specialized traits were present in the vertebrae. It had retractable claws, agile joints for climbing, and binocular vision. Miacis and related forms had brains that were relatively larger than those of the creodonts, and the larger brain size as compared with body size probably reflects an increase in intelligence. Like many other early carnivoramorphans, it was well suited for an arboreal climbing lifestyle, with needle-sharp claws and with limbs and joints resembling those of modern carnivorans. Miacis was probably a very agile forest dweller that preyed upon smaller animals, such as small mammals, reptiles, and birds, and might also have eaten eggs and fruits. Classification and phylogeny Classification History of taxonomy Since Edward Drinker Cope first described the genus Miacis in 1872, at least twenty other species have been assigned to it. However, these species share few synapomorphies other than plesiomorphic characteristics of miacids in general. This reflects the fact that Miacis has been treated as a wastebasket taxon and contains a diverse collection of species that belong to the stem group within the Carnivoraformes. Many of the species originally assigned to Miacis have since been assigned to other genera and, apart from the type species, Miacis parvivorus, the remaining species are often referred to with Miacis in quotations (e.g. "Miacis" latidens)
https://en.wikipedia.org/wiki/Park%20Royal
Park Royal is an area in London, England, partly in the London Borough of Ealing and partly in the London Borough of Brent. It is the site of the largest business park in London, but despite intensive existing use, the area, together with adjacent Old Oak Common, is intended to become the UK's largest regeneration scheme. This arises from the area's relatively central location and also from its strong and improving transport links, which will include, at Old Oak Common, HS2 and the Elizabeth line. The scale of redevelopment has led to the Park Royal and Old Oak area being described as a potential "Canary Wharf of West London". Location To the north of Park Royal are Harlesden in the northeast, West Twyford, an outlying area of Ealing, in the northwest, and a Network Rail depot at Stonebridge Park in the far north, which also has London Underground Bakerloo line tracks running through it (and Harlesden station nearby). On the eastern side, Park Royal is bounded by Acton Lane and Park Royal Road (B4492). The Central Middlesex Hospital is located here. The Grand Union Canal runs through the middle of the Park Royal industrial estate, with pedestrian access via the towpath. History The name Park Royal derives from the short-lived showgrounds opened in 1903 by the Royal Agricultural Society as a permanent exhibition site for the society's annual show. After only three years the society sold the site and returned to a touring format for its shows. With its road, rail and canal links, Park Royal was subsequently developed for industrial use, mainly during the 1930s. For many years it was a centre of engineering, with firms including Park Royal Vehicles, GKN and Landis and Gyr. Queens Park Rangers F.C. played on two grounds in Park Royal. The first was the Horse Ring, later the site of the Guinness brewery, which had a capacity of 40,000. When the Royal Agricultural Society sold the grounds in 1907, QPR moved to the Park Royal Ground, south, an almost exact replica of Ayres
https://en.wikipedia.org/wiki/Treebog
A treebog is a type of low-tech compost toilet. It consists of a raised platform above a compost pile surrounded by densely planted willow trees or other nutrient-hungry vegetation. It can be considered an example of permaculture design, as it functions as a system for converting urine and faeces to biomass without the need to handle excreta. Defecating in nature is frowned upon in most countries, as it pollutes the environment and causes health problems. High levels of open defecation are linked to high child mortality, poor nutrition, poverty, and large disparities between the rich and the poor. Human faeces normally take about a year to biodegrade outdoors. In the UK, a system like this is potentially legal, so long as it is not in a public place, e.g. on a large private estate. Etymology The term "Treebog" was coined by Jay Abrahams. Bog is a British English slang word for toilet, not to be confused with its other meaning of wetland. History The treebog is a simple method of composting wastes. Abrahams claims that from 1995 to 2011 around 1500 treebogs may have been built in Britain. In 2011, Abrahams claimed that the treebog had attracted the attention of NGOs and aid workers who hope to develop its potential for shanty towns or refugee camps: anywhere that water is scarce and the population pressure on resources is high. Plant growth A treebog is simply a controlled compost heap whose function has been enhanced by the use of moisture- or nutrient-hungry trees. Treebogs use no water, purify waste as they create a biomass resource, and also contain the organic waste material, thus preventing the spread of disease. The main requirement is that the planted species should be nutrient-hungry. It is a bonus if they can be harvested or pollarded for productive uses, e.g. willow cultivars. Apart from willows, mint will thrive around a treebog. If left unmanaged, a treebog will soon be surrounded by weed species, such as nettles. Both the solids and liquids are deposited w
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Babai
László "Laci" Babai (born July 20, 1950, in Budapest) is a Hungarian professor of computer science and mathematics at the University of Chicago. His research focuses on computational complexity theory, algorithms, combinatorics, and finite groups, with an emphasis on the interactions between these fields. Life In 1968, Babai won a gold medal at the International Mathematical Olympiad. Babai studied mathematics at the Faculty of Science of Eötvös Loránd University from 1968 to 1973, received a PhD from the Hungarian Academy of Sciences in 1975, and received a DSc from the Hungarian Academy of Sciences in 1984. He held a teaching position at Eötvös Loránd University from 1971; in 1987 he took joint positions as a professor of algebra at Eötvös Loránd and of computer science at the University of Chicago. In 1995, he began a joint appointment in the mathematics department at Chicago and gave up his position at Eötvös Loránd. Work He is the author of over 180 academic papers. His notable accomplishments include the introduction of interactive proof systems, the introduction of the term Las Vegas algorithm, and the introduction of group-theoretic methods in graph isomorphism testing. In November 2015, he announced a quasi-polynomial time algorithm for the graph isomorphism problem. He is editor-in-chief of the refereed online journal Theory of Computing. Babai was also involved in the creation of the Budapest Semesters in Mathematics program and first coined its name. Graph isomorphism in quasipolynomial time After announcing the result in 2015, Babai presented a paper at the 2016 ACM Symposium on Theory of Computing proving that the graph isomorphism problem can be solved in quasi-polynomial time. In response to an error discovered by Harald Helfgott, he posted an update in 2017. Honors In 1988, Babai won the Hungarian State Prize; in 1990 he was elected a corresponding member of the Hungarian Academy of Sciences, and in 1994 he became a full member. In 1
https://en.wikipedia.org/wiki/Tracking%20error
In finance, tracking error or active risk is a measure of the risk in an investment portfolio that is due to active management decisions made by the portfolio manager; it indicates how closely a portfolio follows the index to which it is benchmarked. The best measure is the standard deviation of the difference between the portfolio and index returns. Many portfolios are managed to a benchmark, typically an index. Some portfolios are expected to replicate, before trading and other costs, the returns of an index exactly (e.g., an index fund), while others are expected to 'actively manage' the portfolio by deviating slightly from the index in order to generate active returns. Tracking error is a measure of the deviation from the benchmark; the aforementioned index fund would have a tracking error close to zero, while an actively managed portfolio would normally have a higher tracking error. Thus the tracking error does not include any risk (return) that is merely a function of the market's movement. In addition to risk (return) from specific stock selection or industry and factor "betas", it can also include risk (return) from market timing decisions. Dividing portfolio active return by portfolio tracking error gives the information ratio, which is a risk-adjusted performance measure. Definition If tracking error is measured historically, it is called 'realized' or 'ex post' tracking error. If a model is used to predict tracking error, it is called 'ex ante' tracking error. Ex-post tracking error is more useful for reporting performance, whereas ex-ante tracking error is generally used by portfolio managers to control risk. Various types of ex-ante tracking error models exist, from simple equity models which use beta as a primary determinant to more complicated multi-factor fixed-income models. In a factor model of a portfolio, the non-systematic risk (i.e., the standard deviation of the residuals) is called "tracking error" in the investment field. The latter way t
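As a small worked example of the ex-post definition above, the Python sketch below computes tracking error as the annualised standard deviation of the active (portfolio minus benchmark) returns, and then the information ratio. The sample return series and the monthly annualisation factor are assumptions made for the illustration, not part of any standard.

```python
import numpy as np

def tracking_error(portfolio, benchmark, periods_per_year=12):
    """Ex-post tracking error: stdev of (portfolio - benchmark) returns, annualised."""
    active = np.asarray(portfolio) - np.asarray(benchmark)
    return active.std(ddof=1) * np.sqrt(periods_per_year)

def information_ratio(portfolio, benchmark, periods_per_year=12):
    """Mean annualised active return divided by tracking error."""
    active = np.asarray(portfolio) - np.asarray(benchmark)
    te = active.std(ddof=1) * np.sqrt(periods_per_year)
    return (active.mean() * periods_per_year) / te

# Hypothetical monthly returns for a fund and its benchmark index.
fund  = [0.012, -0.004, 0.021, 0.008, -0.010, 0.015]
index = [0.010, -0.006, 0.018, 0.009, -0.012, 0.013]
print(tracking_error(fund, index))      # small for a fund that tracks its index closely
print(information_ratio(fund, index))
```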