source | text |
|---|---|
https://en.wikipedia.org/wiki/Number%20%28sports%29 | In team sports, the number, often referred to as the uniform number, squad number, jersey number, shirt number, sweater number, or similar (with such naming differences varying by sport and region) is the number worn on a player's uniform, to identify and distinguish each player (and sometimes others, such as coaches and officials) from others wearing the same or similar uniforms. The number is typically displayed on the rear of the jersey, often accompanied by the surname. Sometimes it is also displayed on the front and/or sleeves, or on the player's shorts or headgear. It is used to identify the player to officials, other players, official scorers, and spectators; in some sports, it is also indicative of the player's position.
The International Federation of Football History and Statistics, an organization of association football historians, traces the origin of numbers to a 1911 association football match in Sydney, although photographic evidence exists of numbers being used in Australia as early as May 1903 in a Fitzroy v Collingwood Australian rules football match. Player numbers were used in a Queensland vs. New Zealand rugby match played on 17 July 1897, in Brisbane, Australia, as reported in the Brisbane Courier.
Association football
In association football, the first record of numbered jerseys dates back to 1911, when the Australian teams Sydney Leichhardt and HMS Powerful became the first to wear squad numbers on their backs. One year later, numbering in football was mandated in New South Wales.
In South America, Argentina was the first country to use numbered shirts. During the Scottish team Third Lanark's 1923 tour of South America, the visitors played a friendly match against a local combined team ("Zona Norte") on 10 June. Both squads were numbered from 1–11.
North America saw its first football match with squad numbers on 30 March 1924, when St. Louis Vesper Buick and Fall River F.C. (winners of St. Louis and American soccer leagues, respectively) played t |
https://en.wikipedia.org/wiki/Metal%20aromaticity | Metal aromaticity or metalloaromaticity is the concept of aromaticity, found in many organic compounds, extended to metals and metal-containing compounds. The first experimental evidence for the existence of aromaticity in metals was found in aluminium cluster compounds of the type MAl₄⁻, where M stands for lithium, sodium or copper. These anions can be generated in a helium gas by laser vaporization of an aluminium/lithium carbonate composite or a copper or sodium/aluminium alloy, separated and selected by mass spectrometry and analyzed by photoelectron spectroscopy. The evidence for aromaticity in these compounds is based on several considerations. Computational chemistry shows that these aluminium clusters consist of a tetranuclear Al₄²⁻ plane and a counterion at the apex of a square pyramid. The Al₄²⁻ unit is perfectly planar and is not perturbed by the presence of the counterion, or even by the presence of two counterions in the neutral compound M₂Al₄. In addition, its HOMO is calculated to be a doubly occupied delocalized π system, making it obey Hückel's rule. Finally, a match exists between the calculated values and the experimental photoelectron values for the energy required to remove the first four valence electrons. The first fully metal aromatic compound was a cyclogallane with a Ga₃²⁻ core discovered by Gregory Robinson in 1995.
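Hückel's rule requires 4n + 2 delocalized π electrons for some non-negative integer n; the doubly occupied π HOMO of Al₄²⁻ supplies 2 electrons (the n = 0 case). A minimal sketch of that arithmetic check (the function name and example counts are illustrative, not from the source):

```python
def is_huckel_aromatic(pi_electrons: int) -> bool:
    """Return True if a pi-electron count satisfies Hueckel's 4n + 2 rule."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# The Al4(2-) unit has one doubly occupied delocalized pi orbital: 2 pi electrons.
print(is_huckel_aromatic(2))   # True  (n = 0)
print(is_huckel_aromatic(6))   # True  (n = 1, e.g. benzene)
print(is_huckel_aromatic(4))   # False (4n counts are antiaromatic)
```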
D-orbital aromaticity is found in trinuclear tungsten and molybdenum metal clusters generated by laser vaporization of the pure metals in the presence of oxygen in a helium stream. In these clusters the three metal centers are bridged by oxygen and each metal has two terminal oxygen atoms. The first signal in the photoelectron spectrum corresponds to the removal of the lowest-energy valence electron from the anion, giving the neutral compound. This energy turns out to be comparable to that of bulk tungsten trioxide and molybdenum trioxide. The photoelectron signal is also broad, which suggests a large difference in conformation between the anion and |
https://en.wikipedia.org/wiki/Base%20excess | In physiology, base excess and base deficit refer to an excess or deficit, respectively, in the amount of base present in the blood. The value is usually reported as a concentration in units of mEq/L (mmol/L), with positive numbers indicating an excess of base and negative a deficit. A typical reference range for base excess is −2 to +2 mEq/L.
Comparison of the base excess with the reference range assists in determining whether an acid/base disturbance is caused by a respiratory, metabolic, or mixed metabolic/respiratory problem. While carbon dioxide defines the respiratory component of acid–base balance, base excess defines the metabolic component. Accordingly, measurement of base excess is defined, under a standardized pressure of carbon dioxide, by titrating back to a standardized blood pH of 7.40.
The predominant base contributing to base excess is bicarbonate. Thus, a deviation of serum bicarbonate from the reference range is ordinarily mirrored by a deviation in base excess. However, base excess is a more comprehensive measurement, encompassing all metabolic contributions.
Definition
Base excess is defined as the amount of strong acid that must be added to each liter of fully oxygenated blood to return the pH to 7.40 at a temperature of 37 °C and a pCO2 of 40 mmHg (5.3 kPa). A base deficit (i.e., a negative base excess) can be correspondingly defined by the amount of strong base that must be added.
A further distinction can be made between actual and standard base excess: actual base excess is that present in the blood, while standard base excess is the value when the hemoglobin is at 5 g/dl. The latter gives a better view of the base excess of the entire extracellular fluid.
Base excess (or deficit) is one of several values typically reported with arterial blood gas analysis that is derived from other measured data.
The term and concept of base excess were first introduced by Poul Astrup and Ole Siggaard-Andersen in 1958.
Estimation
Base excess can be estimated from the serum bicarbonate concentration ([HCO3−]) and pH. |
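As a sketch of such an estimate, the snippet below uses one commonly quoted linear approximation, BE ≈ 0.93 × [HCO3−] + 13.77 × pH − 124.58; treat the exact coefficients as an assumption, since published variants of the formula differ:

```python
def estimate_base_excess(bicarbonate_meq_l: float, ph: float) -> float:
    """Estimate base excess (mEq/L) from serum bicarbonate and pH.

    Uses one commonly quoted linear approximation; coefficients vary
    slightly between sources, so treat this as illustrative only.
    """
    return 0.93 * bicarbonate_meq_l + 13.77 * ph - 124.58

# Normal values give a base excess near zero:
print(round(estimate_base_excess(24.8, 7.40), 2))  # ~0.38 mEq/L
# Metabolic acidosis (low bicarbonate, low pH) gives a base deficit:
print(round(estimate_base_excess(15.0, 7.25), 2))  # ~-10.8 mEq/L
```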
https://en.wikipedia.org/wiki/Banked%20turn | A banked turn (or banking turn) is a turn or change of direction in which the vehicle banks or inclines, usually towards the inside of the turn. For a road or railroad this is usually due to the roadbed having a transverse down-slope towards the inside of the curve. The bank angle is the angle at which the vehicle is inclined about its longitudinal axis with respect to the horizontal.
Turn on flat surfaces
If the bank angle is zero, the surface is flat and the normal force is vertically upward. The only force keeping the vehicle turning on its path is friction, or traction. This must be large enough to provide the centripetal force, a relationship that can be expressed as an inequality, assuming the car is driving in a circle of radius r:

μmg ≥ mv²/r.

The expression on the right-hand side is the centripetal acceleration multiplied by mass, the force required to turn the vehicle. The left-hand side is the maximum frictional force, which equals the coefficient of friction μ multiplied by the normal force. Rearranging, the maximum cornering speed is

v = √(μgr).
Note that μ can be the coefficient for static or dynamic friction. In the latter case, where the vehicle is skidding around a bend, the friction is at its limit and the inequality becomes an equation. This also ignores effects such as downforce, which can increase the normal force and cornering speed.
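A quick numerical sketch of v = √(μgr) (the friction coefficient and curve radius are illustrative values):

```python
import math

def max_cornering_speed(mu: float, radius_m: float, g: float = 9.81) -> float:
    """Maximum flat-turn speed (m/s) from v = sqrt(mu * g * r)."""
    return math.sqrt(mu * g * radius_m)

# Dry asphalt (mu ~ 0.9) on a 50 m radius curve:
v = max_cornering_speed(0.9, 50.0)
print(f"{v:.1f} m/s  ({v * 3.6:.0f} km/h)")  # ~21.0 m/s (~76 km/h)
```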
Frictionless banked turn
As opposed to a vehicle riding along a flat circle, inclined edges add an additional force that keeps the vehicle in its path and prevents a car from being "dragged into" or "pushed out of" the circle (or a railroad wheel from moving sideways so as to nearly rub on the wheel flange). This force is the horizontal component of the vehicle's normal force (N). In the absence of friction, the normal force is the only one acting on the vehicle in the direction of the center of the circle. Therefore, as per Newton's second law, we can set the horizontal component of the normal force equal to mass multiplied by centripetal acceleration: N sin θ = mv²/r. |
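Combining the horizontal component N sin θ = mv²/r with the vertical balance N cos θ = mg gives the standard frictionless relation tan θ = v²/(gr); a short illustrative sketch:

```python
import math

def ideal_bank_angle_deg(speed_ms: float, radius_m: float, g: float = 9.81) -> float:
    """Frictionless ("ideal") bank angle from tan(theta) = v^2 / (g * r)."""
    return math.degrees(math.atan2(speed_ms ** 2, g * radius_m))

# A 25 m/s (90 km/h) curve of radius 200 m:
print(f"{ideal_bank_angle_deg(25.0, 200.0):.1f} degrees")  # ~17.7 degrees
```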
https://en.wikipedia.org/wiki/Radiofrequency%20ablation | Radiofrequency ablation (RFA), also called fulguration, is a medical procedure in which part of the electrical conduction system of the heart, tumor or other dysfunctional tissue is ablated using the heat generated from medium frequency alternating current (in the range of 350–500 kHz). RFA is generally conducted in the outpatient setting, using either local anesthetics or twilight anesthesia. When it is delivered via catheter, it is called radiofrequency catheter ablation.
Two important advantages of radio frequency current (over previously used low frequency AC or pulses of DC) are that it does not directly stimulate nerves or heart muscle and therefore can often be used without the need for general anesthesia, and that it is very specific for treating the desired tissue without significant collateral damage; due to this, it is gaining in popularity as an alternative for eligible patients who do not want to undergo surgery.
Documented benefits have led to RFA becoming widely used during the 21st century. RFA procedures are performed under image guidance (such as X-ray screening, CT scan or ultrasound) by an interventional pain specialist (such as an anesthesiologist), an interventional radiologist, an otolaryngologist, a gastrointestinal or surgical endoscopist, or a cardiac electrophysiologist (a subspecialty of cardiology).
Tumors
RFA may be performed to treat tumors in the lung, liver, kidney, and bone, as well as other body organs less commonly. Once the diagnosis of tumor is confirmed, a needle-like RFA probe is placed inside the tumor. The radiofrequency waves passing through the probe increase the temperature within tumor tissue, which results in destruction of the tumor. RFA can be used with small tumors, whether these arose within the organ (primary tumors) or spread to the organ (metastases). The suitability of RFA for a particular tumor depends on multiple factors.
RFA can usually be administered as an outpatient procedure, though may at times require |
https://en.wikipedia.org/wiki/Pauling%27s%20rules | Pauling's rules are five rules published by Linus Pauling in 1929 for predicting and rationalizing the crystal structures of ionic compounds.
First rule: the radius ratio rule
For typical ionic solids, the cations are smaller than the anions, and each cation is surrounded by coordinated anions which form a polyhedron. The sum of the ionic radii determines the cation-anion distance, while the cation-anion radius ratio r_C/r_A determines the coordination number (C.N.) of the cation, as well as the shape of the coordinated polyhedron of anions.
For the coordination numbers and corresponding polyhedra in the table below, Pauling mathematically derived the minimum radius ratio for which the cation is in contact with the given number of anions (considering the ions as rigid spheres). If the cation is smaller, it will not be in contact with the anions which results in instability leading to a lower coordination number.
The three diagrams at right correspond to octahedral coordination with a coordination number of six: four anions in the plane of the diagrams, and two (not shown) above and below this plane. The central diagram shows the minimal radius ratio. The cation and any two adjacent anions form a right isosceles triangle, with legs of length r_C + r_A and a hypotenuse of 2r_A, so that 2r_A = √2(r_C + r_A). Then r_C/r_A = √2 − 1 ≈ 0.414. Similar geometrical proofs yield the minimum radius ratios for the highly symmetrical cases C.N. = 3, 4 and 8.
For C.N. = 6 and a radius ratio greater than the minimum, the crystal is more stable since the cation is still in contact with six anions, but the anions are further from each other so that their mutual repulsion is reduced. An octahedron may then form with a radius ratio greater than or equal to 0.414, but as the ratio rises above 0.732, a cubic geometry becomes more stable. This explains why Na+ in NaCl, with a radius ratio of 0.55, has octahedral coordination, whereas Cs+ in CsCl, with a radius ratio of 0.93, has cubic coordination.
If the radius ratio is less than the minimum, two anions will tend to depart and the remaining four will rearra |
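A compact sketch of the first rule as a lookup over the standard hard-sphere limiting ratios (the helper name and the ionic radii used in the examples are illustrative assumptions):

```python
# Pauling's limiting radius ratios (rigid-sphere model) and the
# coordination geometry first stable at each ratio.
LIMITS = [
    (0.155, 3, "triangular"),
    (0.225, 4, "tetrahedral"),
    (0.414, 6, "octahedral"),
    (0.732, 8, "cubic"),
]

def predicted_coordination(r_cation: float, r_anion: float):
    """Predict the cation C.N. from the radius ratio per Pauling's first rule."""
    ratio = r_cation / r_anion
    best = None
    for minimum, cn, shape in LIMITS:
        if ratio >= minimum:
            best = (cn, shape)   # keep the highest C.N. still in contact
    return round(ratio, 2), best

# NaCl: r(Na+) ~ 1.02 A, r(Cl-) ~ 1.81 A -> ratio ~ 0.56 -> octahedral
print(predicted_coordination(1.02, 1.81))
# CsCl: r(Cs+) ~ 1.67 A, r(Cl-) ~ 1.81 A -> ratio ~ 0.92 -> cubic
print(predicted_coordination(1.67, 1.81))
```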
https://en.wikipedia.org/wiki/Auxiliary%20Medical%20Service | Auxiliary Medical Service (AMS) is a voluntary medical and health services provider in Hong Kong. Its mission is to provide regular services effectively and efficiently to maintain the health and well-being of the people of Hong Kong.
History
The Hong Kong Government decided to form the Auxiliary Medical Service in order to create a force that could assist the regular medical services during emergencies. The establishment of the AMS was announced in the government gazette on 22 December 1950. In early 1951 the AMS made a call for volunteers, including ordinary people who could be trained as auxiliary nurses, ambulance drivers, and other roles. As the population of Hong Kong swelled with refugees from China in the post-Chinese Communist Revolution years, many lived in substandard housing areas susceptible to fires, landslips, storms, and other disasters, for which the AMS played a role in delivering emergency medical treatment. In the 1950s, AMS worked with St. John Ambulance to establish first aid posts all around the territory.
The AMS was involved in major events like the Shek Kip Mei fire in 1953, Typhoon Wanda in 1962 and the landslides caused by heavy rainstorms in 1972. It also served during the SARS outbreak in Hong Kong in 2003 and after the 2004 Indian Ocean earthquake. Normally, it sends out volunteers to fireworks displays, marathons, and other major events.
In 1983, AMS became an independent government operational branch under the Security Department of the Government Secretariat. A public hotline for enquiries about the services of AMS and a Non-Emergency Ambulance Transport team were set up in 1995 and 1996 respectively. The Youth Ambassador Scheme was also implemented in 1997, with the objectives of encouraging young people to maintain a healthy lifestyle and promoting a sense of civic duty.
As of 2007, the number of volunteers had grown to 4,418.
Fleet
A list of vehicles used in the past and present:
Past
Land Rover Defender ambulances
Mercedes-Benz T1 E-310 ambulances
Toyota |
https://en.wikipedia.org/wiki/Archaic%20humans | A number of varieties of Homo are grouped into the broad category of archaic humans in the period that precedes, and is contemporary to, the emergence of the earliest early modern humans (Homo sapiens) around 300 ka. Among the earliest remains of H. sapiens are Omo-Kibish I (Omo I) from southern Ethiopia (195 or 233 ka), the remains from Jebel Irhoud in Morocco (about 315 ka) and Florisbad in South Africa (259 ka). The term typically includes H. antecessor, H. bodoensis, Denisovans (H. denisova), H. heidelbergensis (600–200 ka), Neanderthals (H. neanderthalensis; 430 ± 25 ka), and H. rhodesiensis (300–125 ka).
Archaic humans had a brain size averaging 1,200 to 1,400 cubic centimeters, which overlaps with the range of modern humans. Archaics are distinguished from anatomically modern humans by having a thick skull, prominent supraorbital ridges (brow ridges) and the lack of a prominent chin.
Anatomically modern humans appeared around 300,000 years ago in Africa, and from about 70,000 years ago gradually supplanted the "archaic" human varieties. Non-modern varieties of Homo are certain to have survived until after 30,000 years ago, and perhaps until as recently as 12,000 years ago. According to recent genetic studies, modern humans may have bred with two or more groups of archaic humans, including Neanderthals and Denisovans. Other studies have cast doubt on admixture being the source of the shared genetic markers between archaic and modern humans, pointing instead to an ancestral origin of the traits, which originated 500,000–800,000 years ago. In August 2023, scientists reported the discovery of an unknown ancient human hominin that may have lived 300,000 years ago in China.
Terminology and definition
The category archaic human lacks a single, agreed definition. According to one definition, Homo sapiens is a single species comprising several subspecies that include the archaics and modern humans. Under this definition, modern humans are referred to as Homo sapiens sapiens and arch |
https://en.wikipedia.org/wiki/Statistical%20literacy | Statistical literacy is the ability to understand and reason with statistics and data. The abilities to understand and reason with data, or arguments that use data, are necessary for citizens to understand material presented in media such as newspapers, television, and the Internet. However, scientists also need to develop statistical literacy so that they can both produce rigorous and reproducible research and consume it. Numeracy is an element of being statistically literate, and in some models of statistical literacy, or for some populations (e.g., students in kindergarten through 12th grade/end of secondary school), it is a prerequisite skill. Being statistically literate is sometimes taken to include having the abilities to both critically evaluate statistical material and appreciate the relevance of statistically-based approaches to all aspects of life in general, or to the evaluation, design, and/or production of scientific work.
Promoting statistical literacy
Each day people are inundated with statistical information from advertisements ("4 out of 5 dentists recommend"), news reports ("opinion polls show the incumbent leading by four points"), and even general conversation ("half the time I don't know what you're talking about"). Experts and advocates often use numerical claims to bolster their arguments, and statistical literacy is a necessary skill to help one decide what experts mean and which advocates to believe. This is important because statistics can be made to produce misrepresentations of data that may seem valid. The aim of statistical literacy proponents is to improve the public understanding of numbers and figures.
Health decisions are often manifest as statistical decision problems but few doctors or patients are well equipped to engage with these data.
Results of opinion polling are often cited by news organizations, but the quality of such polls varies considerably. Some understanding of the statistical technique of sampling is nec |
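One concrete piece of such understanding is the sampling margin of error; the sketch below uses the standard 95% simple-random-sampling approximation (real polls use more elaborate designs and weighting, so treat this as illustrative):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from n respondents,
    assuming simple random sampling: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 people with the incumbent at 52%:
moe = margin_of_error(0.52, 1000)
print(f"+/- {moe * 100:.1f} percentage points")  # ~ +/- 3.1
```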
https://en.wikipedia.org/wiki/Toxicofera | Toxicofera (Greek for "those who bear toxins") is a proposed clade of scaled reptiles (squamates) that includes the Serpentes (snakes), Anguimorpha (monitor lizards, gila monster, and alligator lizards) and Iguania (iguanas, agamas, and chameleons). Toxicofera contains about 4,600 species (nearly 60% of extant Squamata). It encompasses all venomous reptile species, as well as numerous related non-venomous species. There is little morphological evidence to support this grouping; however, it has been recovered by all molecular analyses as of 2012.
Cladistics
Toxicofera combines the following groups from traditional classification:
Suborder Serpentes (snakes)
Suborder Iguania (iguanas, agamid lizards, chameleons, etc.)
Suborder Anguimorpha, consisting of:
Family Varanidae (monitor lizards)
Family Lanthanotidae (earless monitor lizard)
Family Anguidae (alligator lizards, glass lizards, etc.)
Family Helodermatidae (Gila monster and Mexican beaded lizard)
Family Shinisauridae (Chinese crocodile lizard)
Family Xenosauridae (knob-scaled lizards)
The relationships between these extant groups and a couple of extinct taxa are shown in the following cladogram, which is based on Reeder et al. (2015; Fig. 1).
Venom
Venom in squamates has historically been considered a rarity; while it has been known in Serpentes since ancient times, the actual percentage of snake species considered venomous was relatively small (around 25%). Of the approximately 2,650 species of advanced snakes (Caenophidia), only the front-fanged species (~650) were considered venomous by the anthropocentric definition.
Following the classification of Helodermatidae in the 19th century, their venom was thought to have developed independently. In snakes, the venom gland is in the upper jaw, but in helodermatids, it is found in the lower jaw. The origin of venom in squamates was thus considered relatively recent in evolutionary terms and the result of convergent evolution among the seemingly-polyph |
https://en.wikipedia.org/wiki/Small%20Latin%20squares%20and%20quasigroups | Latin squares and quasigroups are equivalent mathematical objects, although the former has a combinatorial nature while the latter is more algebraic. The listing below will consider the examples of some very small orders, which is the side length of the square, or the number of elements in the equivalent quasigroup.
The equivalence
Given a quasigroup with n elements, its Cayley table (almost universally called its multiplication table) is an (n + 1) × (n + 1) table that includes borders: a top row of column headers and a left column of row headers. Removing the borders leaves an n × n array that is a Latin square. This process can be reversed: starting with a Latin square, introduce a bordering row and column to obtain the multiplication table of a quasigroup. While there is complete arbitrariness in how this bordering is done, the quasigroups obtained by different choices are sometimes equivalent in the sense given below.
Isotopy and isomorphism
Two Latin squares, L1 and L2, of side n are isotopic if there are three bijections from the rows, columns and symbols of L1 onto the rows, columns and symbols of L2, respectively, that map L1 to L2. Isotopy is an equivalence relation and the equivalence classes are called isotopy classes.
A stronger form of equivalence exists. Two Latin squares, L1 and L2, of side n with a common symbol set that is also the index set for the rows and columns of each square, are isomorphic if there is a bijection g of this set such that g(L1(i, j)) = L2(g(i), g(j)) for all i, j in the set. An alternate way to define isomorphic Latin squares is to say that a pair of isotopic Latin squares are isomorphic if the three bijections used to show that they are isotopic are, in fact, equal. Isomorphism is also an equivalence relation and its equivalence classes are called isomorphism classes.
An alternate representation of a Latin square is given by an orthogonal array. For a Latin square of order n this is an n² × 3 matrix with columns labeled r, c and s and whose rows correspond to a single position of the Latin square, namely, the |
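A minimal sketch of this correspondence, building the n² × 3 array of (row, column, symbol) triples from a small Latin square (the example square and helper name are illustrative):

```python
def to_orthogonal_array(square):
    """Return the (row, column, symbol) triples of a Latin square."""
    return [(r, c, square[r][c])
            for r in range(len(square))
            for c in range(len(square))]

# An order-3 Latin square (the Cayley table of Z_3 without its borders):
L = [[0, 1, 2],
     [1, 2, 0],
     [2, 0, 1]]

for triple in to_orthogonal_array(L):
    print(triple)   # 9 = 3^2 rows, with columns r, c, s
```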
https://en.wikipedia.org/wiki/Symmetrically%20continuous%20function | In mathematics, a function f: ℝ → ℝ is symmetrically continuous at a point x if

lim_{h→0} (f(x + h) − f(x − h)) = 0.
The usual definition of continuity implies symmetric continuity, but the converse is not true. For example, the function f(x) = x⁻², with f(0) defined arbitrarily (say f(0) = 0), is symmetrically continuous at x = 0, but not continuous there.
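A one-line verification of that example, assuming f(0) is given some finite value (say 0), written out in LaTeX:

```latex
% Symmetric continuity of f(x) = x^{-2} (with f(0) := 0) at x = 0:
% the symmetric difference cancels even though f blows up at 0.
\[
  \lim_{h \to 0}\bigl(f(0+h) - f(0-h)\bigr)
  = \lim_{h \to 0}\left(\frac{1}{h^{2}} - \frac{1}{(-h)^{2}}\right)
  = \lim_{h \to 0} 0
  = 0 ,
\]
% whereas ordinary continuity fails because f(h) diverges as h -> 0.
```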
Also, symmetric differentiability implies symmetric continuity, but the converse is not true, just as usual continuity does not imply differentiability.
The set of symmetrically continuous functions, with the usual scalar multiplication and addition, can easily be shown to have the structure of a vector space over ℝ, similarly to the usual continuous functions, which form a linear subspace within it. |
https://en.wikipedia.org/wiki/Second%20derivative | In calculus, the second derivative, or the second-order derivative, of a function f is the derivative of the derivative of f. Informally, the second derivative can be phrased as "the rate of change of the rate of change"; for example, the second derivative of the position of an object with respect to time is the instantaneous acceleration of the object, or the rate at which the velocity of the object is changing with respect to time. In Leibniz notation:

a = dv/dt = d²x/dt²,
where a is acceleration, v is velocity, t is time, x is position, and d denotes the instantaneous "delta" or change. The last expression d²x/dt² is the second derivative of position (x) with respect to time.
On the graph of a function, the second derivative corresponds to the curvature or concavity of the graph. The graph of a function with a positive second derivative is upwardly concave, while the graph of a function with a negative second derivative curves in the opposite way.
Second derivative power rule
The power rule for the first derivative, if applied twice, produces the second derivative power rule as follows:

d²/dx² (xⁿ) = n(n − 1)xⁿ⁻²
Notation
The second derivative of a function f is usually denoted f″. That is:

f″ = (f′)′
When using Leibniz's notation for derivatives, the second derivative of a dependent variable y with respect to an independent variable x is written

d²y/dx².

This notation is derived from the following formula:

d²y/dx² = d/dx (dy/dx).
Example
Given the function

f(x) = x³,

the derivative of f is the function

f′(x) = 3x².

The second derivative of f is the derivative of f′, namely

f″(x) = 6x.
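Numerically, the second derivative can be approximated with the standard central-difference formula f″(x) ≈ (f(x + h) − 2f(x) + f(x − h)) / h². A short illustrative sketch (the function and step size are example choices):

```python
def second_derivative(f, x: float, h: float = 1e-5) -> float:
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

f = lambda x: x ** 3               # the example function above
print(second_derivative(f, 2.0))   # ~12.0, matching f''(x) = 6x at x = 2
```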
Relation to the graph
Concavity
The second derivative of a function can be used to determine the concavity of the graph of . A function whose second derivative is positive will be concave up (also referred to as convex), meaning that the tangent line will lie below the graph of the function. Similarly, a function whose second derivative is negative will be concave down (also simply called concave), and its tangent lines will lie above the graph of the function.
Inflection points
If the second derivative of a |
https://en.wikipedia.org/wiki/Oscillatory%20integral | In mathematical analysis an oscillatory integral is a type of distribution. Oscillatory integrals make rigorous many arguments that, on a naive level, appear to use divergent integrals. It is possible to represent approximate solution operators for many differential equations as oscillatory integrals.
Definition
An oscillatory integral is written formally as

I(a)(x) = ∫ e^{iφ(x,ξ)} a(x, ξ) dξ,

where φ(x, ξ) and a(x, ξ) are functions defined on ℝⁿ × ℝᴺ with the following properties:

The function φ is real-valued, positive-homogeneous of degree 1, and infinitely differentiable away from ξ = 0. Also, we assume that φ does not have any critical points on the support of a. Such a function φ is usually called a phase function. In some contexts more general functions are considered and still referred to as phase functions.

The function a belongs to one of the symbol classes Sᵐ₁,₀(ℝⁿ × ℝᴺ) for some m ∈ ℝ. Intuitively, these symbol classes generalize the notion of positively homogeneous functions of degree m. As with the phase function φ, in some cases the function a is taken to be in more general, or just different, classes.

When m < −N, the formal integral defining I(a)(x) converges for all x, and there is no need for any further discussion of the definition of I(a). However, when m ≥ −N, the oscillatory integral is still defined as a distribution on ℝⁿ, even though the integral may not converge. In this case the distribution is defined by using the fact that a ∈ Sᵐ₁,₀(ℝⁿ × ℝᴺ) may be approximated by functions that have exponential decay in ξ. One possible way to do this is by setting

I(a)(x) = lim_{ε→0⁺} ∫ e^{iφ(x,ξ)} a(x, ξ) e^{−ε|ξ|²/2} dξ,

where the limit is taken in the sense of tempered distributions. Using integration by parts, it is possible to show that this limit is well defined, and that there exists a differential operator L such that the resulting distribution acting on any u in the Schwartz space is given by

I(a)(u) = ∫ e^{iφ(x,ξ)} L(a(x, ξ) u(x)) dx dξ,

where this integral converges absolutely. The operator L is not uniquely defined, but can be chosen in such a way that it depends only on the phase function φ, the order m of the symbol a, and N. In fact, given any integer M, it is pos |
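As a classical concrete example (standard, though not taken from the truncated text above): the Dirac delta arises as an oscillatory integral with phase φ(x, ξ) = x · ξ and symbol a ≡ 1 ∈ S⁰:

```latex
% The Dirac delta as an oscillatory integral on R^n:
% phase phi(x, xi) = <x, xi> (positive-homogeneous of degree 1 in xi),
% symbol a(x, xi) = 1, which lies in S^0.
\[
  \delta(x) \;=\; \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^{n}} e^{\,i x \cdot \xi} \, d\xi ,
\]
% understood in the sense of tempered distributions: pairing with a
% Schwartz function u recovers u(0) via the Fourier inversion formula.
```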
https://en.wikipedia.org/wiki/Iguanomorpha | Iguania is an infraorder of squamate reptiles that includes iguanas, chameleons, agamids, and New World lizards like anoles and phrynosomatids. Using morphological features as a guide to evolutionary relationships, the Iguania are believed to form the sister group to the remainder of the Squamata, which comprises nearly 11,000 named species, roughly 2,000 of which are iguanians. However, molecular information has placed Iguania well within the Squamata, as the sister taxon of the Anguimorpha and closely related to snakes. The group has been subject to debate and revision since it was classified by Charles Lewis Camp in 1923, owing to difficulties in finding adequate synapomorphic morphological characteristics. Most iguanians are arboreal, but there are several terrestrial groups. They usually have primitive fleshy, non-prehensile tongues, although the tongue is highly modified in chameleons. The group has a fossil record that extends back to the Early Jurassic (the oldest known member is Bharatagama, which lived about 190 million years ago in what is now India). Today iguanians occur in scattered fashion in Madagascar, Fiji, the Friendly Islands (Tonga) and the Western Hemisphere.
Classification
The Iguania currently include these extant families:
Clade Acrodonta
Family Agamidae – agamid lizards, Old World arboreal lizards
Family Chamaeleonidae – chameleons
Clade Pleurodonta – American arboreal lizards, chuckwallas, iguanas
Family Leiocephalidae
Genus Leiocephalus: curly-tailed lizards
Family Corytophanidae – helmet lizards
Family Crotaphytidae – collared lizards, leopard lizards
Family Hoplocercidae – dwarf and spinytail iguanas
Family Iguanidae – marine, Fijian, Galapagos land, spinytail, rock, desert, green, and chuckwalla iguanas
Family Tropiduridae – tropidurine lizards
subclade of Tropiduridae Tropidurini – neotropical ground lizards
Family Dactyloidae – anoles
Family Polychrotidae
subclade of Polychrotidae Polychrus
Family Phrynosomatidae – North American spiny lizards
Family Liolaem |
https://en.wikipedia.org/wiki/Photodegradation | Photodegradation is the alteration of materials by light. Commonly, the term is used loosely to refer to the combined action of sunlight and air, which cause oxidation and hydrolysis. Often photodegradation is intentionally avoided, since it destroys paintings and other artifacts. It is, however, partly responsible for remineralization of biomass and is used intentionally in some disinfection technologies. Photodegradation does not apply to how materials may be aged or degraded via infrared light or heat, but does include degradation in all of the ultraviolet light wavebands.
Applications
Foodstuffs
The protection of food from photodegradation is very important. Some nutrients, for example, are affected by degradation when exposed to sunlight. In the case of beer, UV radiation causes a process that entails the degradation of hop bitter compounds to 3-methyl-2-buten-1-thiol and therefore changes the taste. As amber-colored glass has the ability to absorb UV radiation, beer bottles are often made from such glass to prevent this process.
Paints, inks, and dyes
Paints, inks, and dyes that are organic are more susceptible to photodegradation than those that are not. Ceramics are almost universally colored with materials of inorganic origin, which allows the material to resist photodegradation even under the most relentless conditions and maintain its color.
Pesticides and herbicides
The photodegradation of pesticides is of great interest because of the scale of agriculture and the intensive use of chemicals. However, pesticides are selected in part not to photodegrade readily in sunlight, so that they can exert their biocidal activity. Consequently, additional modalities are implemented to enhance their photodegradation, including the use of photosensitizers, photocatalysts (e.g., titanium dioxide), and the addition of reagents such as hydrogen peroxide that generate hydroxyl radicals to attack the pesticides.
Pharmaceuticals
The photodegradation of pharma |
https://en.wikipedia.org/wiki/Computer%20network%20programming | Computer network programming involves writing computer programs that enable processes to communicate with each other across a computer network.
Connection-oriented and connectionless communications
Very generally, most communications can be divided into connection-oriented and connectionless. Whether a communication is connection-oriented or connectionless is defined by the communication protocol, and not by the API. Examples of connection-oriented protocols include TCP and SCTP, and examples of connectionless protocols include UDP, "raw IP", and ICMP.
Clients and servers
For connection-oriented communications, communication parties usually have different roles. One party is usually waiting for incoming connections; this party is usually referred to as "server". Another party is the one which initiates connection; this party is usually referred to as "client".
For connectionless communications, one party ("server") is usually waiting for an incoming packet, and another party ("client") is usually understood as the one which sends an unsolicited packet to "server".
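A minimal sketch of the connection-oriented client/server pattern described above, using Python's standard socket module (the loopback address, port, and echo behaviour are illustrative assumptions, not from the source; run the server in one process before starting the client in another):

```python
import socket

HOST, PORT = "127.0.0.1", 50007   # illustrative address and port

def run_server() -> None:
    """Connection-oriented "server": waits for an incoming connection."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)                # wait for clients
        conn, addr = srv.accept()    # blocks until a client connects
        with conn:
            data = conn.recv(1024)   # read the client's message
            conn.sendall(data)       # echo it back

def run_client() -> None:
    """Connection-oriented "client": initiates the connection."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello")
        print(cli.recv(1024))        # b'hello'
```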
Popular protocols and APIs
Network programming traditionally covers different layers of the OSI/ISO model (most application-level programming belongs to L4 and up). The table below contains some examples of popular protocols belonging to different OSI/ISO layers, and popular APIs for them.
See also
Software-defined networking
Infrastructure as code
Site reliability engineering
DevOps |
https://en.wikipedia.org/wiki/ECC%20patents | Patent-related uncertainty around elliptic curve cryptography (ECC), or ECC patents, is one of the main factors limiting its wide acceptance. For example, the OpenSSL team accepted an ECC patch only in 2005 (in OpenSSL version 0.9.8), despite the fact that it was submitted in 2002.
According to Bruce Schneier as of May 31, 2007, "Certicom certainly can claim ownership of ECC. The algorithm was developed and patented by the company's founders, and the patents are well written and strong. I don't like it, but they can claim ownership." Additionally, NSA has licensed MQV and other ECC patents from Certicom in a US$25 million deal for NSA Suite B algorithms. (ECMQV is no longer part of Suite B.)
However, according to RSA Laboratories, "in all of these cases, it is the implementation technique that is patented, not the prime or representation, and there are alternative, compatible implementation techniques that are not covered by the patents." Additionally, Daniel J. Bernstein has stated that he is "not aware of" patents that cover the Curve25519 elliptic curve Diffie–Hellman algorithm or its implementation. RFC 6090, published in February 2011, documents ECC techniques, some of which were published so long ago that even if they were patented, any such patents for these previously published techniques would now be expired.
Known patents
Certicom holds a patent on efficient GF(2ⁿ) multiplication in normal basis representation, which expired in 2016.
Certicom holds multiple patents which cover the MQV (Menezes, Qu, and Vanstone) key agreement technique:
expired in 2015
expired in 2015
expired in 2015
expired in 2015
expired in 2017
/EP0739105B1 expired in 2016
Certicom holds a patent on validating the key exchange messages using ECC to prevent a man-in-the-middle attack, which expired in 2016. Related patents also expired in 2016 and 2018.
Certicom holds two patents regarding digital signatures on a smartcard; these expired in 2017 and 2016 respectively.
Certicom holds |
https://en.wikipedia.org/wiki/Cognitive%20ergonomics | Cognitive ergonomics is a scientific discipline that studies, evaluates, and designs tasks, jobs, products, environments and systems, and how they interact with humans and their cognitive abilities. It is defined by the International Ergonomics Association as "concerned with mental processes, such as perception, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system. ... The relevant topics include mental workload, decision-making, skilled performance, human-computer interaction, human reliability, work stress and training as these may relate to human-system design." Cognitive ergonomics is responsible for how work is done in the mind, meaning that the quality of work depends on the person's understanding of situations. Situations could include the goals, means, and constraints of work. Cognitive ergonomics studies cognition in work and operational settings, in order to optimize human well-being and system performance. It is a subset of the larger field of human factors and ergonomics.
Goals
Cognitive ergonomics (sometimes known as cognitive engineering, though this was an earlier field) is an emerging branch of ergonomics. It places particular emphasis on the analysis of the cognitive processes required of operators in modern industries and similar milieus. This can be done by studying cognition in work and operational settings. It aims to ensure an appropriate interaction between human factors and the processes of everyday life, including work tasks. Some cognitive ergonomics aims are: diagnosis, workload, situation awareness, decision making, and planning. CE is used to describe how work affects the mind and how the mind affects work. Its aim is to apply general principles and good practices of cognitive ergonomics that help to avoid unnecessary cognitive load at work and improve human performance. In a practical purpose, it will aid in human nature a |
https://en.wikipedia.org/wiki/Endothelium-derived%20hyperpolarizing%20factor | In blood vessels, endothelium-derived hyperpolarizing factor (EDHF) is proposed to be a substance and/or electrical signal that is generated or synthesized in, and released from, the endothelium; its action is to hyperpolarize vascular smooth muscle cells, causing these cells to relax, thus allowing the blood vessel to expand in diameter.
Introduction
The endothelium maintains vascular homeostasis through the release of active vasodilators. Although nitric oxide (NO) is recognized as the primary factor at the level of arteries, increasing evidence for the role of another endothelium-derived vasodilator, known as endothelium-derived hyperpolarizing factor (EDHF), has accumulated in recent years. Experiments show that when NO and prostacyclin (both vasodilators) are inhibited, there is still another factor causing the vessels to dilate.
Despite the ongoing debate about its intriguingly variable nature and mechanisms of action, the contribution of EDHF to endothelium-dependent relaxation is currently appreciated as an important feature of "healthy" endothelium. Since EDHF's contribution is greatest at the level of small arteries, changes in EDHF action are of critical importance for the regulation of organ blood flow, peripheral vascular resistance, and blood pressure, in particular when production of NO is compromised. Moreover, depending on the type of cardiovascular disorder, altered EDHF responses may contribute to, or compensate for, endothelial abnormalities associated with the pathogenesis of certain diseases.
It is widely accepted that EDHF plays an important role in vascular tone, especially in microvessels. Its effect varies depending on the size of the vessel.
Pathways Of EDHF
There are two general pathways that explain EDHF:
Diffusible factors are endothelium-derived substances that are able to pass through the internal elastic layer (IEL), reach the underlying vascular smooth muscle cells at a concentration sufficient to activate ion channels, and initiate smooth muscle hyperpol |
https://en.wikipedia.org/wiki/Lactarius%20deterrimus | Lactarius deterrimus, also known as the false saffron milkcap or orange milkcap, is a species of fungus in the family Russulaceae. The fungus produces medium-sized fruit bodies (mushrooms) with orangish caps that develop green spots in old age or if injured. Its orange-coloured latex stains maroon within 30 minutes. Lactarius deterrimus is a mycorrhizal fungus that associates with Norway spruce and bearberry. The species is distributed in Europe, but has also been found in parts of Asia. A visually similar species in the United States and Mexico is not closely related to the European species. Fruit bodies appear between late June and November, usually in spruce forests. Although the fungus is edible—like all Lactarius mushrooms from the section Deliciosi—its taste is often bitter, and it is not highly valued. The fruit bodies are used as a source of food by the larvae of several insect species. Lactarius deterrimus can be distinguished from similar Lactarius species by differences in mycorrhizal host or latex colour.
Taxonomy and classification
Although the fungus is one of the most common in Central Europe, the species was not validly described until 1968, by German mycologist Frieder Gröger. Before this, L. deterrimus was regarded as a variety of L. deliciosus (L. deliciosus var. piceus, described by Miroslav Smotlacha in 1946). After Roger Heim and A. Leclair described L. semisanguifluus in 1950, the fungus was often referred to by that name. L. fennoscandicus was separated from L. deterrimus in 1998 by Annemieke T. Verbeken and Jan Vesterholt and classified as a separate species.
The epithet of deterrimus is Latin, and was chosen by Gröger to highlight the poor gustatory properties of the mushroom, such as the bitter aftertaste and often heavy maggot infestations. The superlative of "dēterior" (meaning less good) means "the worst, the poorest". The mushroom is commonly known as the "false saffron milkcap".
Several molecular phylogenetic analyses show th |
https://en.wikipedia.org/wiki/Lactarius%20deliciosus | Lactarius deliciosus, commonly known as the delicious milk cap, saffron milk cap and red pine mushroom, is one of the best known members of the large milk-cap genus Lactarius in the order Russulales. It is native to Europe, but has been accidentally introduced to other countries along with pine trees, with which the fungus is symbiotic.
Taxonomy
The species was known to Carl Linnaeus, who officially described it in the second volume of his Species Plantarum in 1753, giving it the name Agaricus deliciosus. The specific epithet is derived from Latin deliciosus, meaning "tasty". The Swedish taxonomist allegedly gave the species its epithet after smelling it and presuming it tasted good, perhaps confusing it with a Mediterranean milk cap regarded for its flavor. Dutch mycologist Christian Hendrik Persoon added the varietal epithet lactifluus in 1801, before English mycologist Samuel Frederick Gray placed it in its current genus, Lactarius, in 1821 in his The Natural Arrangement of British Plants.
It is commonly known as saffron milk-cap, red pine mushroom, or simply pine mushroom in English. An alternative North American name is orange latex milky. Its Spanish name varies (níscalo, nícalo, ...). Its Catalan name is rovelló (pl. rovellons). In the Girona area, it is called a pinetell (in Catalan) because it is collected near wild pine trees; it is typically harvested in October following the late August rains. Both this and L. deterrimus are known as "kanlıca", "çıntar" or "çam melkisi" in Turkey. In Romania, it is known as rascovi and can be found in the northern regions in the autumn season.
Description
Lactarius deliciosus has a carrot-orange cap that is convex to vase shaped, inrolled when young, often with darker orange lines in the form of concentric circles. The cap is sticky and viscid when wet, but is often dry. It has crowded decurrent gills and a squat orange stipe that is often hollow. The flesh stains a deep green color when handled. When fresh, it e |
https://en.wikipedia.org/wiki/Scheduling%20%28production%20processes%29 | Scheduling is the process of arranging, controlling and optimizing work and workloads in a production process or manufacturing process. Scheduling is used to allocate plant and machinery resources, plan human resources, plan production processes and purchase materials.
It is an important tool for manufacturing and engineering, where it can have a major impact on the productivity of a process. In manufacturing, the purpose of scheduling is to meet customer due dates and to minimize production time and costs by telling a production facility when to make a product, with which staff, and on which equipment. Production scheduling aims to maximize the efficiency of the operation, utilize the maximum resources available and reduce costs.
In some situations, scheduling can involve random attributes, such as random processing times, random due dates, random weights, and stochastic machine breakdowns. In this case, the scheduling problems are referred to as "stochastic scheduling".
Overview
Scheduling is the process of arranging, controlling and optimizing work and workloads in a production process. Companies use backward and forward scheduling to allocate plant and machinery resources, plan human resources, plan production processes and purchase materials.
Forward scheduling is planning the tasks from the date resources become available to determine the shipping date or the due date.
Backward scheduling is planning the tasks from the due date or required-by date to determine the start date and/or any changes in capacity required.
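A minimal sketch of the two directions over a chain of dependent tasks (task names, durations, and dates are illustrative assumptions):

```python
from datetime import date, timedelta

tasks = [("machining", 3), ("assembly", 2), ("packing", 1)]  # (name, days)

def forward_schedule(start: date):
    """Schedule tasks from the earliest start date; yields (name, start, end)."""
    t = start
    for name, days in tasks:
        yield name, t, t + timedelta(days=days)
        t += timedelta(days=days)

def backward_schedule(due: date):
    """Schedule tasks back from the due date; yields latest task first."""
    t = due
    for name, days in reversed(tasks):
        yield name, t - timedelta(days=days), t
        t -= timedelta(days=days)

print(list(forward_schedule(date(2024, 6, 3))))    # determines the ship date
print(list(backward_schedule(date(2024, 6, 10))))  # determines the start date
```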
The benefits of production scheduling include:
Process change-over reduction
Inventory reduction, levelling
Reduced scheduling effort
Increased production efficiency
Labour load levelling
Accurate delivery date quotes
Real time information
Accurately measure utilized man/equipment hours
Production scheduling tools greatly outperform older manual scheduling methods. These provide the production scheduler with powerful graphical interfaces which |
https://en.wikipedia.org/wiki/Reflection%20principle | In set theory, a branch of mathematics, a reflection principle says that it is possible to find sets that, with respect to any given property, resemble the class of all sets. There are several different forms of the reflection principle depending on exactly what is meant by "resemble". Weak forms of the reflection principle are theorems of ZF set theory due to Montague and Lévy, while stronger forms can be new and very powerful axioms for set theory.
The name "reflection principle" comes from the fact that properties of the universe of all sets are "reflected" down to a smaller set.
Motivation
A naive version of the reflection principle states that "for any property of the universe of all sets we can find a set with the same property". This leads to an immediate contradiction: the universe of all sets contains all sets, but there is no set with the property that it contains all sets. To get useful (and non-contradictory) reflection principles we need to be more careful about what we mean by "property" and what properties we allow.
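One precise weak form that ZF does prove is the Lévy–Montague reflection schema; a sketch of its usual statement in standard notation (not taken from the surrounding text):

```latex
% Reflection schema (one instance per formula; a theorem schema of ZF):
% for every formula phi(x_1, ..., x_n) of the language of set theory,
\[
  \forall \alpha \,\exists \beta > \alpha \;\;
  \forall x_1, \dots, x_n \in V_\beta \,
  \bigl( \varphi(x_1, \dots, x_n) \leftrightarrow \varphi^{V_\beta}(x_1, \dots, x_n) \bigr),
\]
% where V_beta is a level of the cumulative hierarchy and phi^{V_beta}
% denotes phi with all quantifiers relativized to V_beta.
```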
Reflection principles are associated with attempts to formulate the idea that no one notion, idea, or statement can capture our whole view of the universe of sets; Kurt Gödel described the principle in these terms.
Georg Cantor expressed similar views on Absolute Infinity: all cardinality properties satisfied by this "number" are already satisfied by some smaller cardinal.
To find non-contradictory reflection principles we might argue informally as follows. Suppose that we have some collection A of methods for forming sets (for example, taking powersets, subsets, the axiom of replacement, and so on). We can imagine taking all sets obtained by repeatedly applying all these methods, and forming these sets into a class X, which can be thought of as a model of some set theory. But in light of this view, V is not exhaustible by a handful of operations, otherwise it would be easily describable from below; this principle is known as inexhaustibility (of V). As a result, V is large |
https://en.wikipedia.org/wiki/Caryophyllene | Caryophyllene (), more formally (−)-β-caryophyllene, (BCP), is a natural bicyclic sesquiterpene that is a constituent of many essential oils, especially clove oil, the oil from the stems and flowers of Syzygium aromaticum (cloves), the essential oil of Cannabis sativa, copaiba, rosemary, and hops. It is usually found as a mixture with isocaryophyllene (the cis double bond isomer) and α-humulene (obsolete name: α-caryophyllene), a ring-opened isomer. Caryophyllene is notable for having a cyclobutane ring, as well as a trans-double bond in a 9-membered ring, both rarities in nature.
Caryophyllene is one of the chemical compounds that contributes to the aroma of black pepper.
Pharmacology
β-Caryophyllene acts as a full agonist of the cannabinoid receptor type 2 (CB2 receptor) in rats. β-Caryophyllene has a binding affinity of Ki = 155 nM at the CB2 receptors in mice. β-Caryophyllene has been shown to have anti-inflammatory action linked to its CB2 receptor activity in a study comparing the pain-killing effects in mice with and without CB2 receptors, with the group of mice without CB2 receptors seeing little benefit compared to the mice with functional CB2 receptors. β-Caryophyllene has the highest cannabinoid activity compared to its ring-opened isomer humulene (α-caryophyllene), which may modulate CB2 activity. To compare binding, cannabinol (CBN) binds to the CB2 receptors as a partial agonist with an affinity of Ki = 126.4 nM, while Δ9-tetrahydrocannabinol binds to the CB2 receptors as a partial agonist with an affinity of Ki = 36 nM.
Caryophyllene helps to improve cold tolerance at low ambient temperatures. Wild giant pandas frequently roll in horse manure, which contains beta-caryophyllene/caryophyllene oxide, to inhibit transient receptor potential melastatin 8 (TRPM8), an archetypical cold-activated ion channel of mammals.
β-Caryophyllene could potentially be used in the fight against cancer. For example, in an in vitro human colorectal adenocarcinoma s |
https://en.wikipedia.org/wiki/Management%20Data%20Input/Output | Management Data Input/Output (MDIO), also known as Serial Management Interface (SMI) or Media Independent Interface Management (MIIM), is a serial bus defined for the Ethernet family of IEEE 802.3 standards for the Media Independent Interface, or MII. The MII connects media access control (MAC) devices with Ethernet physical layer (PHY) circuits. The MAC device controlling the MDIO is called the Station Management Entity (SME).
Relationship with MII
MII has two signal interfaces:
A Data interface to the Ethernet MAC, for sending and receiving Ethernet frame data.
A PHY management interface, MDIO, used to read and write the control and status registers of the PHY in order to configure each PHY before operation, and to monitor link status during operation.
Electrical specification
The MDIO interface is implemented by two signals:
MDIO Interface Clock (MDC): clock driven by the MAC device to the PHY.
MDIO data: bidirectional, the PHY drives it to provide register data at the end of a read operation.
The bus only supports a single MAC as the master, and can have up to 32 PHY slaves.
The MDC can be aperiodic, with a minimum period of 400 ns, which corresponds to a maximum frequency of 2.5 MHz. Newer chips, however, allow faster accesses. For example, the DP83640 supports a 25 MHz maximum clock rate for MDC.
The MDIO requires a specific pull-up resistor of 1.5 kΩ to 10 kΩ, taking into account the total worst-case leakage current of 32 PHYs and one MAC.
Bus timing (clause 22)
Before a register access, PHY devices generally require a preamble of 32 ones to be sent by the MAC on the MDIO line. The access consists of 16 control bits, followed by 16 data bits. The control bits consist of 2 start bits, 2 access type bits (read or write), the PHY address (5 bits), the register address (5 bits), and 2 "turnaround" bits.
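As a sketch of this frame layout, the helper below packs the 32 control and data bits of a Clause 22 write into an integer (the helper name and example register value are illustrative; start bits 01, write opcode 01, and MAC-driven turnaround 10 follow the common Clause 22 convention):

```python
def mdio_c22_write_frame(phy_addr: int, reg_addr: int, data: int) -> int:
    """Assemble the 32 control+data bits of a Clause 22 write as an integer.

    Layout: ST=01, OP=01 (write), 5-bit PHY address, 5-bit register
    address, TA=10, then 16 data bits (the 32-one preamble not included).
    """
    assert 0 <= phy_addr < 32 and 0 <= reg_addr < 32 and 0 <= data < 65536
    frame = 0b01                      # ST: start bits
    frame = (frame << 2) | 0b01       # OP: write
    frame = (frame << 5) | phy_addr   # PHY address (5 bits)
    frame = (frame << 5) | reg_addr   # register address (5 bits)
    frame = (frame << 2) | 0b10       # TA: turnaround, driven by the MAC
    frame = (frame << 16) | data      # 16 data bits
    return frame                      # shifted out MSB-first on MDIO

print(f"{mdio_c22_write_frame(phy_addr=1, reg_addr=0, data=0x1140):032b}")
```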
During a write command, the MAC provides address and data. For a read command, the PHY takes over the MDIO line during the turnaround bit times, su |
https://en.wikipedia.org/wiki/Sublunary%20sphere | In Aristotelian physics and Greek astronomy, the sublunary sphere is the region of the geocentric cosmos below the Moon, consisting of the four classical elements: earth, water, air, and fire.
The sublunary sphere was the realm of changing nature. Beginning with the Moon, up to the limits of the universe, everything (to classical astronomy) was permanent, regular and unchanging—the region of aether where the planets and stars are located. Only in the sublunary sphere did the powers of physics hold sway.
Evolution of concept
Plato and Aristotle helped to formulate the original theory of a sublunary sphere in antiquity, the idea usually going hand in hand with geocentrism and the concept of a spherical Earth.
Avicenna carried forward into the Middle Ages the Aristotelian idea of generation and corruption being limited to the sublunary sphere. Medieval scholastics like Thomas Aquinas, who charted the division between celestial and sublunary spheres in his work Summa Theologica, also drew on Cicero and Lucan for an awareness of the great frontier between Nature and Sky, sublunary and aetheric spheres. The result for medieval/Renaissance mentalities was a pervasive awareness of the existence, at the Moon, of what C.S. Lewis called 'this "great divide"...from aether to air, from "heaven" to "nature", from the realm of gods (or angels) to that of daemons, from the realm of necessity to that of contingence, from the incorruptible to the corruptible'.
However, the theories of Copernicus began to challenge the sublunary/aether distinction. In their wake, Tycho Brahe's observations of a new star (nova) and of comets in the supposedly unchanging heavens further undermined the Aristotelian view (R. Curley, Scientists and Inventors of the Renaissance (2012), pp. 6–8). Thomas Kuhn saw scientists' new ability to see change in the 'incorruptible' heavens as a classic example of the new possibilities opened up by a paradigm shift.
Literary offshoots
Dante envisaged Mt Purga |
https://en.wikipedia.org/wiki/Paucimorphism | A paucimorphism is a genetic sequence variant with a rare allele frequency of 0.0005 < q < 0.05. |
https://en.wikipedia.org/wiki/Myositis | Myositis is a rare disease that involves inflammation of the muscles. It can present with a variety of symptoms such as skin involvement (i.e., rashes), muscle weakness, and other organ involvement. Systemic symptoms such as weight loss, fatigue, and low fever can also present.
Causes
Injury, medicines, infection, inherited muscle disease, or an autoimmune disorder can lead to myositis. It can also be idiopathic (no known cause).
Injury - A mild form of myositis can occur with hard exercise. A more severe form of muscle injury, called rhabdomyolysis, is also associated with myositis. This is a condition where injury to the patient's muscles causes them to quickly break down.
Medicines - A variety of different medicines can cause myositis. One of the most common drug types that can cause myositis is statins. Statins are drugs that are used to help lower high cholesterol. One of the most common side effects of statin therapy is muscle pain. Rarely, statin therapy can lead to myositis.
Infection - The most common infectious cause of myositis is viral infection, such as the common cold. Bacterial, parasitic, and fungal infections can also be responsible. Viral infections such as COVID-19 have also been shown to be a rare cause of myositis. Benign acute childhood myositis has been described in children after prodromal viral infections with different viral agents.
Inherited muscle disease - Many inherited myopathies may have secondary myositis, including calpainopathy, dysferlinopathy, fascioscapulohumeral muscular dystrophy, dystrophinopathy, and LMNA-associated myopathy.
Autoimmune - Autoimmune disease is an abnormal immune response to a functioning body part, in this case the muscles. The three main types of idiopathic myositis (known as inflammatory myopathies) that typically test positive for autoantibodies are dermatomyositis, polymyositis, and inclusion body myositis. Other autoimmune diseases, such as systemic lupus erythematosus, can also cause myositis-like symptoms.
|
https://en.wikipedia.org/wiki/Acutance | In photography, acutance describes a subjective perception of sharpness that is related to the edge contrast of an image. Acutance is related to the amplitude of the derivative of brightness with respect to space. Due to the nature of the human visual system, an image with higher acutance appears sharper even though an increase in acutance does not increase real resolution.
Historically, acutance was enhanced chemically during development of a negative (high acutance developers), or by optical means in printing (unsharp masking). In digital photography, onboard camera software and image postprocessing tools such as Photoshop or GIMP offer various sharpening facilities, the most widely used of which is known as "unsharp mask" because the algorithm is derived from the eponymous analog processing method.
In the example image, two light gray lines were drawn on a gray background. As the transition is instantaneous, the line is as sharp as can be represented at this resolution. Acutance in the left line was artificially increased by adding a one-pixel-wide darker border on the outside of the line and a one-pixel-wide brighter border on the inside of the line. The actual sharpness of the image is unchanged, but the apparent sharpness is increased because of the greater acutance.
Artificially increased acutance has drawbacks. In this somewhat overdone example, most viewers will also be able to see the borders separately from the line; they create two halos around the line, one dark and one shimmering bright.
Tools
Several image processing techniques, such as unsharp masking, can increase the acutance in real images.
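A minimal sketch of the unsharp-mask idea with NumPy and SciPy (the radius and amount parameters are illustrative): blur the image, then add back a scaled copy of the difference between the original and the blur, which steepens edge transitions without adding real detail.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, radius: float = 2.0, amount: float = 1.0) -> np.ndarray:
    """Increase acutance by adding back a scaled high-frequency residual."""
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=radius)  # low-pass version
    residual = img - blurred                      # edge (high-pass) component
    sharpened = img + amount * residual           # boost edge contrast
    return np.clip(sharpened, 0, 255)

# Example: a soft vertical gradient "edge" becomes visibly crisper (with halos).
img = np.tile(np.linspace(80, 170, 32), (32, 1))
out = unsharp_mask(img, radius=2.0, amount=1.5)
```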
Resampling
Low-pass filtering and resampling often cause overshoot, which increases acutance, but can also reduce absolute gradient, which reduces acutance. Filtering and resampling can also cause clipping and ringing artifacts. An example is bicubic interpolation, widely used in image processing for resizing images.
Definition
One definition of acu |
https://en.wikipedia.org/wiki/Ronald%20Drever | Ronald William Prest Drever (26 October 1931 – 7 March 2017) was a Scottish experimental physicist. He was a professor emeritus at the California Institute of Technology, co-founded the LIGO project, and was a co-inventor of the Pound–Drever–Hall technique for laser stabilisation, as well as the Hughes–Drever experiment. This work was instrumental in the first detection of gravitational waves in September 2015.
Drever died on 7 March 2017, aged 85, seven months before his colleagues Rainer Weiss, Kip Thorne, and Barry Barish won the Nobel Prize in Physics for their work on the observation of gravitational waves. The trio of Drever, Thorne and Weiss shared several major physics prizes in 2016, so it is widely believed that Drever would have won the Nobel Prize in the place of Barry Barish had he not died before the Nobel Committee made their decision.
Education
Drever was educated at Glasgow Academy followed by the University of Glasgow, where he was awarded a bachelor's degree in 1953 followed by a PhD in 1959 for research on orbital electron capture using proportional counters.
Career and research
After receiving his PhD from the University of Glasgow in 1959, Drever initiated the Glasgow project to detect gravitational waves in the 1960s, and he established the university's first dedicated gravitational wave research group in 1970.
Drever was later recruited to form a gravitational wave program at Caltech, and in 1984 he left Glasgow to work there full-time.
Drever's contributions to the design and implementation of the LIGO interferometers were critically important to their ability to function in the extreme sensitivity realm required for detection of gravitational waves (strain of order 10⁻²³).
Drever's final work involved the development of magnetically levitated optical tables for seismic isolation of experimental apparatus.
Honors and awards
Drever was recognized by numerous awards including:
Fellowship of the American Physical Society (199 |
https://en.wikipedia.org/wiki/Muon%20capture | Muon capture is the capture of a negative muon by a proton, usually resulting in production of a neutron and a neutrino, and sometimes a gamma photon.
Muon capture by heavy nuclei often leads to emission of particles; most often neutrons, but charged particles can be emitted as well.
Ordinary muon capture (OMC) involves capture of a negative muon from the atomic orbital without emission of a gamma photon:
μ⁻ + p → n + ν_μ
Radiative muon capture (RMC) is a radiative version of OMC, where a gamma photon is emitted:
μ⁻ + p → n + ν_μ + γ
Theoretical motivation for the study of muon capture on the proton is its connection to the proton's induced pseudoscalar form factor gp.
Practical application - Nuclear waste disposal
Muon capture is being investigated for practical application in radioactive waste disposal, for example in the artificial transmutation of large quantities of long-lived radioactive waste that have been produced globally by fission reactors. Radioactive waste can be transmuted to stable isotopes following irradiation by an incident muon (μ⁻) beam from a compact proton accelerator source. |
https://en.wikipedia.org/wiki/Apelin | Apelin (also known as APLN) is a peptide that in humans is encoded by the APLN gene. Apelin is one of two endogenous ligands for the G-protein-coupled APJ receptor that is expressed at the surface of some cell types. It is widely expressed in various organs such as the heart, lung, kidney, liver, adipose tissue, gastrointestinal tract, brain, adrenal glands, endothelium, and human plasma.
Discovery
Apelin is a peptide hormone that was identified in 1998 by Masahiko Fujino and his colleagues at Gunma University and Takeda Pharmaceutical Company. In 2013, a second peptide hormone named Elabela was found by Bruno Reversade to also act as an endogenous ligand to the APLNR.
Biosynthesis
The apelin gene encodes a pre-proprotein of 77 amino acids, with a signal peptide in the N-terminal region. After translocation into the endoplasmic reticulum and cleavage of the signal peptide, the proprotein of 55 amino acids may generate several active fragments: a 36 amino acid peptide corresponding to the sequence 42-77 (apelin 36), a 17 amino acid peptide corresponding to the sequence 61-77 (apelin 17) and a 13 amino acid peptide corresponding to the sequence 65-77 (apelin 13). This latter fragment may also undergo pyroglutamylation at its N-terminal glutamine residue. However, the presence and/or the concentrations of those peptides in human plasma have been questioned. Recently, 46 different apelin peptides ranging from apelin 55 () to apelin 12 have been identified in bovine colostrum, including C-terminally truncated isoforms.
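The fragment names follow from simple residue arithmetic on the 77-amino-acid pre-proprotein; a minimal sketch using the residue ranges quoted above (inclusive 1-based numbering assumed):

```python
# Apelin fragments as (start, end) residue positions on the 77-aa pre-proprotein,
# numbered inclusively from 1 (ranges taken from the text above).
FRAGMENTS = {"apelin 36": (42, 77), "apelin 17": (61, 77), "apelin 13": (65, 77)}

for name, (start, end) in FRAGMENTS.items():
    length = end - start + 1  # inclusive residue count
    print(f"{name}: residues {start}-{end}, {length} aa")
# Each printed length matches the number in the fragment's name.
```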
Physiological functions
The sites of receptor expression are linked to the different functions played by apelin in the organism.
Vascular
Vascular expression of the receptor participates in the control of blood pressure and its activation promotes the formation of new blood vessels (angiogenesis). The blood pressure-lowering (hypotensive) effect of apelin results from the activation of receptors expressed at the surface of endothelial cel |
https://en.wikipedia.org/wiki/Serving%20area%20interface | The serving area interface or service area interface (SAI) is an outdoor enclosure or metal box that allows access to telecommunications wiring.
Alternate names
Access point (AP)
Cabinet (cab)
B-box (breakout box)
Cross box
Cross-connect box
Jumper wire interface (JWI)
Outside plant interface (OPI)
Pedestal (ped)
Primary cross-connection point (PCP) (UK)
Secondary cross-connection point (SCP) (UK)
Telecom cabinet
Function
The SAI provides the termination of individual twisted pairs of a telephony local loop for onward connection back to the nearest telephone exchange (US: "central office" (CO)) or remote switch, or first to transmission equipment such as a subscriber loop carrier multiplexer and then to the exchange main distribution frame (MDF).
In the United Kingdom, the components from the PCP onwards to the customer are known as "D-side" (distribution side), and from the PCP back to the MDF as the "E-side" (exchange side). In the United States, the connection back to the MDF is known as the F2 (secondary distribution cable) and/or the F1 (main feeder cable) pairs.
SAIs are used in suburban and low-density urban areas, serving some of the same purposes that manholes do in high-density urban areas. Besides serving as a cross-connect point, they sometimes contain a DSLAM or, more rarely, a remote concentrator, or both.
See also
Demarcation point
Enclosure (electrical)
Fiber to the telecom enclosure
Sub-loop unbundling |
https://en.wikipedia.org/wiki/Anne-Marie%20Imafidon | Anne-Marie Osawemwenze Ore-Ofe Imafidon is a British-Nigerian social entrepreneur and computer scientist. She founded and became CEO of Stemettes in 2013, a social enterprise promoting women in STEM careers. In June 2022, she was announced as the 2022–2023 President of the British Science Association.
Early life and education
Imafidon obtained a master's degree in mathematics and computing science from the University of Oxford.
Stemettes and entrepreneurship
Imafidon is the founder and CEO of Stemettes, a social initiative promoting women in science, technology, engineering, and maths (STEM) careers.
Other work
In 2019, Imafidon hosted the Women Tech Charge podcast for The Evening Standard, where she conducted interviews with tech figures such as Jack Dorsey, and other celebrities such as Rachel Riley, and Lewis Hamilton.
She is a trustee of the Institute for the Future of Work, which researches ways to improve work and working lives.
In September 2021, Imafidon co-hosted a special episode of Channel 4's Countdown, broadcast for the channel's Black to Front Day campaign, appearing as the arithmetician. She reprised the role later that year, standing in on 60 episodes for Rachel Riley while Riley was on maternity leave.
In December 2022, Imafidon guest-edited BBC Radio 4's Today programme.
In 2022, she was announced by the British Science Association as that organisation's president for the year 2022-3.
In June 2023, Imafidon was interviewed by Jim Al-Khalili for The Life Scientific podcast, discussing diversity and equality in science, recorded at the Cheltenham Arts Festival.
Recognition
Imafidon was awarded an MBE in the 2017 New Year Honours for services to young women and the STEM sector. She was listed as one of the BBC's 100 women of 2017. In 2020, she was given a Suffrage Science award by the London Institute of Medical Sciences. She is an Honorary Fellow at Keble College, Oxford. |
https://en.wikipedia.org/wiki/Landau%20prime%20ideal%20theorem | In algebraic number theory, the prime ideal theorem is the number field generalization of the prime number theorem. It provides an asymptotic formula for counting the number of prime ideals of a number field K, with norm at most X.
Example
What to expect can be seen already for the Gaussian integers. There for any prime number p of the form 4n + 1, p factors as a product of two Gaussian primes of norm p. Primes of the form 4n + 3 remain prime, giving a Gaussian prime of norm p². Therefore, we should estimate
2r(X) + r′(√X),
where r counts primes in the arithmetic progression 4n + 1, and r′ in the arithmetic progression 4n + 3. By the quantitative form of Dirichlet's theorem on primes, each of r(Y) and r′(Y) is asymptotically
Y/(2 log Y).
Therefore, the 2r(X) term dominates, and is asymptotically
X/log X.
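A small numerical sketch of this count, written from the splitting rules above (prime ideals of Z[i] tallied by rational prime, with a deliberately naive primality test):

```python
import math

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def gaussian_prime_ideals(X: int) -> int:
    """Count prime ideals of Z[i] with norm <= X."""
    count = 0
    for p in range(2, X + 1):
        if not is_prime(p):
            continue
        if p == 2:
            count += 1        # ramified: (1 + i), norm 2
        elif p % 4 == 1:
            count += 2        # splits into two primes of norm p
        elif p * p <= X:
            count += 1        # inert: one prime of norm p^2
    return count

X = 10**5
# Both grow like X/log X; convergence to the asymptotic is slow.
print(gaussian_prime_ideals(X), round(X / math.log(X)))
```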
General number fields
This general pattern holds for number fields in general, so that the prime ideal theorem is dominated by the ideals of norm a prime number. As Edmund Landau proved in 1903, for norm at most X the same asymptotic formula
X/log X
always holds. Heuristically this is because the logarithmic derivative of the Dedekind zeta-function of K always has a simple pole with residue −1 at s = 1.
As with the Prime Number Theorem, a more precise estimate may be given in terms of the logarithmic integral function. The number of prime ideals of norm ≤ X is
Li(X) + O(X exp(−c_K √(log X))),
where c_K is a constant depending on K.
See also
Abstract analytic number theory |
https://en.wikipedia.org/wiki/Optoelectric%20nuclear%20battery | An optoelectric nuclear battery (also radiophotovoltaic device, radioluminescent nuclear battery or radioisotope photovoltaic generator) is a type of nuclear battery in which nuclear energy is converted into light, which is then used to generate electrical energy. This is accomplished by letting the ionizing radiation emitted by the radioactive isotopes hit a luminescent material (scintillator or phosphor), which in turn emits photons that generate electricity upon striking a photovoltaic cell.
The technology was developed by researchers of the Kurchatov Institute in Moscow.
Description
A beta emitter such as technetium-99 or strontium-90 is suspended in a gas or liquid containing luminescent gas molecules of the excimer type, constituting a "dust plasma". This permits a nearly lossless emission of beta electrons from the emitting dust particles. The electrons then excite the gases whose excimer line is selected for the conversion of the radioactivity into a surrounding photovoltaic layer such that a theoretical lightweight, low-pressure, high-efficiency battery can be realized. (In practice, existing designs are heavy and involve high pressure.) These nuclides are relatively low-cost radioactive waste from nuclear power reactors. The diameter of the dust particles is so small (a few micrometers) that the electrons from the beta decay leave the dust particles nearly without loss. The surrounding weakly ionized plasma consists of gases or gas mixtures (such as krypton, argon, and xenon) with excimer lines such that a considerable amount of the energy of the beta electrons is converted into this light. The surrounding walls contain photovoltaic layers with wide forbidden zones, such as diamond, which convert the optical energy generated from the radiation into electrical energy.
A German patent provides a description of an optoelectric nuclear battery, which would consist of an excimer of argon, xenon, or krypton (or a mixture of two or three of them) in a pressu |
https://en.wikipedia.org/wiki/Comparison%20of%20video%20codecs | A video codec is software or a device that provides encoding and decoding for digital video, and which may or may not include the use of video compression and/or decompression. Most codecs are typically implementations of video coding formats.
The compression may employ lossy data compression, so that quality-measurement issues become important. Shortly after the compact disc became widely available as a digital-format replacement for analog audio, it became feasible to also store and use video in digital form. A variety of technologies soon emerged to do so. The primary goal for most methods of compressing video is to produce video that most closely approximates the fidelity of the original source, while simultaneously delivering the smallest file-size possible. However, there are also several other factors that can be used as a basis for comparison.
Introduction to comparison
The following characteristics are compared in video codecs comparisons:
Video quality per bitrate (or range of bitrates). Commonly video quality is considered the main characteristic of codec comparisons. Video quality comparisons can be subjective or objective.
Performance characteristics such as compression/decompression speed, supported profiles/options, supported resolutions, supported rate control strategies, etc.
General software characteristics for example:
Manufacturer
Supported OS (Linux, macOS, Windows)
Version number
Date of release
Type of license (commercial, free, open source)
Supported interfaces (VfW, DirectShow, etc.)
Price (value for money, volume discounts, etc.)
Video quality
The quality the codec can achieve is heavily based on the compression format the codec uses. A codec is not a format, and there may be multiple codecs that implement the same compression specification; for example, MPEG-1 codecs typically do not achieve quality/size ratio comparable to codecs that implement the more modern H.264 specification. But quality/size ratio of output produced b |
https://en.wikipedia.org/wiki/Common%20squirrel%20monkey | Common squirrel monkey is the traditional common name for several small squirrel monkey species native to the tropical areas of South America. The term common squirrel monkey had been used as the common name for Saimiri sciureus before genetic research by Jessica Lynch Alfaro and others indicated S. sciureus covered at least 3 and possibly 4 species: the Guianan squirrel monkey (S. sciureus), Humboldt's squirrel monkey (S. cassiquiarensis) and Collins' squirrel monkey (S. collinsi). The Ecuadorian squirrel monkey (S. cassiquiarensis macrodon), generally regarded as a subspecies of Humboldt's squirrel monkey, had also been sometimes proposed as a separate species that had originally been included within the term "common squirrel monkey."
Range and introductions
Common squirrel monkeys are found primarily in the Amazon Basin. Before the taxon was split, it had been considered to be found within the countries of Brazil, Colombia, Ecuador, French Guiana, Guyana, Peru, Suriname and Venezuela; a small population has been introduced to Florida and many of the Caribbean Islands. However, taxonomic research in 2009 and 2015 determined that several populations that had been considered S. sciureus were actually separate species:
Guianan squirrel monkey, S. sciureus
Collins' squirrel monkey, S. collinsi
Humboldt's squirrel monkey, S. cassiquiarensis
The Ecuadorian squirrel monkey, S. cassiquiarensis macrodon, has also sometimes been regarded as a separate species.
As a result of these populations no longer being considered S. sciureus, the range of S. sciureus is now limited to Brazil and the Guianas.
A group of free-ranging individuals was spotted and photographed in 2009 at the Tijuca Forest in Rio de Janeiro – possibly the result of an illegal release or of an escape from the pet trade; by 2010, the squirrel monkey had begun to be considered as an invasive species in the Brazilian Atlantic rainforest, and concerns were expressed about its role as a predator of e |
https://en.wikipedia.org/wiki/3-Carene | 3-Carene is a bicyclic monoterpene consisting of fused cyclohexene and cyclopropane rings. It occurs as a constituent of turpentine, with a content as high as 42% depending on the source. Carene has a sweet and pungent odor, best described as a combination of fir needles, musky earth, and damp woodlands.
A colorless liquid, it is not soluble in water, but miscible with fats and oils. It is chiral, occurring naturally both as the racemate and enantio-enriched forms.
Reactions and uses
Treatment with peracetic acid gives 3,4-caranediol. Pyrolysis over ferric oxide induces rearrangement, giving p-cymene. Carene is used in the perfume industry and as a chemical intermediate.
Because carene can be found in cannabis naturally, it can also be found in cannabis distillates. Greater concentrations of carene in a distillate give it an earthier taste and smell. 3-Carene is also present in mango, giving the fruit a characteristic pine-like flavor and aroma. |
https://en.wikipedia.org/wiki/Medical%20illustration | A medical illustration is a form of biological illustration that helps to record and disseminate medical, anatomical, and related knowledge.
History
Medical illustrations have been made for hundreds (or thousands) of years, possibly since the beginning of medicine. Many illuminated manuscripts and Arabic scholarly treatises of the medieval period contained illustrations representing various anatomical systems (circulatory, nervous, urogenital), pathologies, or treatment methodologies. Many of these illustrations can look odd to modern eyes, since they reflect early reliance on classical scholarship (especially Galen) rather than direct observation, and the representation of internal structures can be fanciful. An early high-water mark was the 1543 CE publication of Andreas Vesalius's De Humani Corporis Fabrica Libri Septem, which contained more than 600 exquisite woodcut illustrations based on careful observation of human dissection.
Since the time of Leonardo da Vinci and his depictions of the human form, there have been great advancements in the art of representing the human body. The art has evolved over time from illustration to digital imaging using the technological advancements of the digital age. Berengario da Carpi was the first known anatomist to include medical illustration within his textbooks. Gray's Anatomy, originally published in 1858, is one well-known human anatomy textbook that showcases a variety of anatomy depiction techniques.
In 1895, Wilhelm Conrad Röntgen, a German physicist, discovered the X-ray. Internal imaging became a reality after the discovery of the X-ray. Since then, internal imaging has progressed to include ultrasonic, CT, and MRI imaging.
As a profession, medical illustration has a more recent history. In the late 1890s, Max Brödel, a talented artist from Leipzig, was brought to The Johns Hopkins School of Medicine in Baltimore to illustrate for Harvey Cushing, William Halsted, Howard Kelly, and other notable clinicians |
https://en.wikipedia.org/wiki/Alpha%20globulin | Alpha globulins are a group of globular proteins in plasma that are highly mobile in alkaline or electrically charged solutions. They inhibit certain blood proteases and show significant inhibitor activity.
The alpha globulins typically have molecular weights of around 93 kDa.
Examples
Alpha globulins include certain hormones, proteins that transport hormones, and other compounds, including prothrombin and HDL.
Alpha 1 globulins
α1-antitrypsin
Alpha 1-antichymotrypsin
Orosomucoid (acid glycoprotein)
Serum amyloid A
Alpha 1-lipoprotein
Protein HC
Alpha 2 globulins
Haptoglobin
Alpha-2u globulin
α2-macroglobulin
Ceruloplasmin
Thyroxine-binding globulin
Alpha 2-antiplasmin
Protein C
Alpha 2-lipoprotein
Angiotensinogen
Cortisol binding globulin
Vitamin D-binding protein |
https://en.wikipedia.org/wiki/Bone%20resorption | Bone resorption is resorption of bone tissue, that is, the process by which osteoclasts break down the tissue in bones and release the minerals, resulting in a transfer of calcium from bone tissue to the blood.
The osteoclasts are multi-nucleated cells that contain numerous mitochondria and lysosomes. These are the cells responsible for the resorption of bone. Osteoblasts are generally present on the outer layer of bone, just beneath the periosteum. Attachment of the osteoclast to the osteon begins the process. The osteoclast then induces an infolding of its cell membrane and secretes collagenase and other enzymes important in the resorption process. High levels of calcium, magnesium, phosphate and products of collagen will be released into the extracellular fluid as the osteoclasts tunnel into the mineralized bone. Osteoclasts are prominent in the tissue destruction found in psoriatic arthritis and rheumatological disorders.
The human body is in a constant state of bone remodeling. Bone remodeling is a process which maintains bone strength and ion homeostasis by replacing discrete parts of old bone with newly synthesized packets of proteinaceous matrix. Bone is resorbed by osteoclasts, and is deposited by osteoblasts in a process called ossification. Osteocyte activity plays a key role in this process. Conditions that result in a decrease in bone mass can either be caused by an increase in resorption or by a decrease in ossification. During childhood, bone formation exceeds resorption. As the aging process occurs, resorption exceeds formation.
Bone resorption rates are much higher in post-menopausal older women due to the estrogen deficiency associated with menopause. Common treatments include drugs that increase bone mineral density. Bisphosphonates, RANKL inhibitors, SERMs (selective oestrogen receptor modulators), hormone replacement therapy and calcitonin are some of the common treatments. Light weight-bearing exercise tends to eliminate the negative effects of bon |
https://en.wikipedia.org/wiki/Balloonist%20theory | Balloonist theory was a theory in early neuroscience that attempted to explain muscle movement by asserting that muscles contract by inflating with air or fluid. The Greek physician Galen believed that muscles contracted due to a fluid flowing into them, and for 1500 years afterward, it was believed that nerves were hollow and that they carried fluid. René Descartes, who was interested in hydraulics and used fluid pressure to explain various aspects of physiology such as the reflex arc, proposed that "animal spirits" flowed into muscle and were responsible for their contraction. In the model, which Descartes used to explain reflexes, the spirits would flow from the ventricles of the brain, through the nerves, and to the muscles to animate the latter.
In 1667, Thomas Willis proposed that muscles may expand by the reaction of animal spirits with vital spirits. He hypothesized that this reaction would produce air in a manner similar to the reaction that causes an explosion, causing muscles to swell and produce movement.
This theory has since been rejected by the mainstream scientific community, superseded by the empirically supported findings of modern neuroscience.
Physiological refutations of the theory
In 1667, Jan Swammerdam, a Dutch anatomist famous for working with insects, struck the first important blow against the balloonist theory. Swammerdam, who was the first to experiment on nerve-muscle preparations, showed that muscles do not increase in size when they contract (and he supposed if a substance such as animal spirits flowed into muscles, their volume should increase when they contract). Swammerdam placed severed frog thigh muscle in an airtight syringe with a small amount of water in the tip. He could thus determine whether there was a change in the volume of the muscle when it contracted by observing a change in the level of the water. When Swammerdam caused the muscle to contract by irritating the nerve, the water level d |
https://en.wikipedia.org/wiki/Broken%20diagonal | In recreational mathematics and the theory of magic squares, a broken diagonal is a set of n cells forming two parallel diagonal lines in the square. Alternatively, these two lines can be thought of as wrapping around the boundaries of the square to form a single sequence.
In pandiagonal magic squares
A magic square in which the broken diagonals have the same sum as the rows, columns, and diagonals is called a pandiagonal magic square.
Examples of broken diagonals from the number square in the image are as follows: 3,12,14,5; 10,1,7,16; 10,13,7,4; 15,8,2,9; 15,12,2,5; and 6,13,11,4.
The fact that this square is a pandiagonal magic square can be verified by checking that all of its broken diagonals add up to the same constant:
3+12+14+5 = 34
10+1+7+16 = 34
10+13+7+4 = 34
One way to visualize a broken diagonal is to imagine a "ghost image" of the panmagic square adjacent to the original.
The set of numbers {3, 12, 14, 5} of a broken diagonal, wrapped around the original square, can be seen starting with the first square of the ghost image and moving down to the left.
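A short sketch that checks the pandiagonal property by wrapping indices modulo n. The 4 × 4 square below is assumed to be the one in the article's image, reconstructed so that it reproduces the broken diagonals listed above (e.g. 3,12,14,5 and 10,1,7,16):

```python
def broken_diagonal_sums(square):
    """Sums of all n wrapped diagonals in each direction."""
    n = len(square)
    down_right = [sum(square[r][(r + s) % n] for r in range(n)) for s in range(n)]
    down_left = [sum(square[r][(s - r) % n] for r in range(n)) for s in range(n)]
    return down_right + down_left

square = [
    [ 1,  8, 13, 12],
    [14, 11,  2,  7],
    [ 4,  5, 16,  9],
    [15, 10,  3,  6],
]
print(broken_diagonal_sums(square))  # eight sums, all equal to 34
```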
In linear algebra
Broken diagonals are used in a formula to find the determinant of 3 by 3 matrices.
For a 3 × 3 matrix A, its determinant is
det(A) = a₁₁a₂₂a₃₃ + a₁₂a₂₃a₃₁ + a₁₃a₂₁a₃₂ − a₁₃a₂₂a₃₁ − a₁₁a₂₃a₃₂ − a₁₂a₂₁a₃₃.
Here, a₁₂a₂₃a₃₁ and a₁₃a₂₁a₃₂, together with their negative counterparts a₁₁a₂₃a₃₂ and a₁₂a₂₁a₃₃, are (products of the elements of) the broken diagonals of the matrix.
Broken diagonals are used in the calculation of the determinants of all matrices of size 3 × 3 or larger. This can be shown by using the matrix's minors to calculate the determinant. |
https://en.wikipedia.org/wiki/Incongruent%20melting | Incongruent melting occurs when a solid substance does not melt uniformly, so that the chemical composition of the resulting liquid is not the same as that of the original solid. During incongruent melting a new solid of different composition forms. For example, melting of orthoclase (KAlSi3O8) produces leucite (KAlSi2O6) in addition to a melt. The melt produced is richer in silica (SiO2). The proportions of leucite and melt created can be recombined to yield the bulk composition of the starting feldspar. Another mineral that can melt incongruently is enstatite (Mg2Si2O6), which produces forsterite (Mg2SiO4) in addition to a melt richer in SiO2 when melting at low pressure. Enstatite melts congruently at higher pressures between 2.5 and 5.5 kilobars.
See also
Congruent melting
Incongruent transition
Phase diagram |
https://en.wikipedia.org/wiki/Push%20email | Push email is an email system that provides an always-on capability, in which when new email arrives at the mail delivery agent (MDA) (commonly called mail server), it is immediately, actively transferred (pushed) by the MDA to the mail user agent (MUA), also called the email client, so that the end-user can see incoming email immediately. This is in contrast with systems that check for new incoming mail every so often, on a schedule. Email clients include smartphones and, less strictly, IMAP personal computer mail applications.
Comparison with polling email
Outgoing mail is generally pushed from the sender to the final mail delivery agent (and possibly via intermediate mail servers) using Simple Mail Transfer Protocol. If the receiver uses a polling email delivery protocol, the final step from the last mail delivery agent to the client is done using a poll. Post Office Protocol (POP3) is an example of a polling email delivery protocol. At login and later at intervals, the mail user agent (client) polls the mail delivery agent (server) to see if there is new mail, and if so downloads it to a mailbox on the user's computer. Extending the "push" to the last delivery step is what distinguishes push email from polling email systems.
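For contrast, a minimal polling loop using Python's standard imaplib (the host, credentials, and 60-second interval are placeholder assumptions): the client must repeatedly ask the server for new mail, which is exactly the final step that push email replaces.

```python
import imaplib
import time

HOST, USER, PASSWORD = "imap.example.com", "user", "secret"  # placeholders

def poll_for_new_mail(interval_s: int = 60) -> None:
    while True:
        conn = imaplib.IMAP4_SSL(HOST)
        conn.login(USER, PASSWORD)
        conn.select("INBOX", readonly=True)
        _, data = conn.search(None, "UNSEEN")  # ask the server: anything new?
        unseen = data[0].split()
        if unseen:
            print(f"{len(unseen)} new message(s)")
        conn.logout()
        time.sleep(interval_s)                 # idle between polls
```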
The reason that polling is often used for the last stage of mail delivery is that, although the server mail delivery agent would normally be permanently connected to the network, it does not necessarily know how to locate the client mail user agent, which may only be connected occasionally and also change network address quite often. For example, a user with a laptop on a Wi-Fi connection may be assigned different addresses from the network DHCP server periodically and have no persistent network name. When new mail arrives to the mail server, it does not know what address the client is currently assigned.
The Internet Message Access Protocol (IMAP) provides support for polling and notifications. When a client receives a notification from |
https://en.wikipedia.org/wiki/Lagrangian%20Grassmannian | In mathematics, the Lagrangian Grassmannian is the smooth manifold of Lagrangian subspaces of a real symplectic vector space V. Its dimension is n(n + 1)/2 (where the dimension of V is 2n). It may be identified with the homogeneous space
U(n)/O(n),
where U(n) is the unitary group and O(n) the orthogonal group. Following Vladimir Arnold it is denoted by Λ(n). The Lagrangian Grassmannian is a submanifold of the ordinary Grassmannian of V.
A complex Lagrangian Grassmannian is the complex homogeneous manifold of Lagrangian subspaces of a complex symplectic vector space V of dimension 2n. It may be identified with the homogeneous space of complex dimension n(n + 1)/2
Sp(n)/U(n),
where Sp(n) is the compact symplectic group.
As a homogeneous space
To see that the Lagrangian Grassmannian Λ(n) can be identified with U(n)/O(n), note that ℂⁿ is a 2n-dimensional real vector space, with the imaginary part of its usual inner product making it into a symplectic vector space. The Lagrangian subspaces of ℂⁿ are then the real subspaces of real dimension n on which the imaginary part of the inner product vanishes. An example is ℝⁿ. The unitary group U(n) acts transitively on the set of these subspaces, and the stabilizer of ℝⁿ is the orthogonal group O(n). It follows from the theory of homogeneous spaces that Λ(n) is isomorphic to U(n)/O(n) as a homogeneous space of U(n).
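As a sanity check on the dimension given in the lead, this identification yields it by a routine Lie-group dimension count:

```latex
% dim U(n) = n^2 and dim O(n) = n(n-1)/2, so
\dim \Lambda(n) \;=\; \dim U(n) - \dim O(n)
             \;=\; n^2 - \frac{n(n-1)}{2}
             \;=\; \frac{n(n+1)}{2}.
```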
Topology
The stable topology of the Lagrangian Grassmannian and complex Lagrangian Grassmannian is completely understood, as these spaces appear in the Bott periodicity theorem: Ω(Sp/U) ≃ U/O, and Ω(U/O) ≃ Z × BO – they are thus exactly the homotopy groups of the stable orthogonal group, up to a shift in indexing (dimension).
In particular, the fundamental group of U/O is infinite cyclic. Its first homology group is therefore also infinite cyclic, as is its first cohomology group, with a distinguished generator given by the square of the determinant of a unitary matrix, as a mapping to the unit circle. Arnold showed that this leads to a description of the Maslov index, introduced by V. P. Maslov.
For a |
https://en.wikipedia.org/wiki/Pneumatic%20artificial%20muscles | Pneumatic artificial muscles (PAMs) are contractile or extensional devices operated by pressurized air filling a pneumatic bladder. In an approximation of human muscles, PAMs are usually grouped in pairs: one agonist and one antagonist.
PAMs were first developed (under the name of McKibben Artificial Muscles) in the 1950s for use in artificial limbs. The Bridgestone rubber company (Japan) commercialized the idea in the 1980s under the name of Rubbertuators.
The retraction strength of the PAM is limited by the sum total strength of individual fibers in the woven shell. The exertion distance is limited by the tightness of the weave; a very loose weave allows greater bulging, which further twists individual fibers in the weave.
One example of a complex configuration of air muscles is the Shadow Dexterous Hand developed by the Shadow Robot Company, which also sells a range of muscles for integration into other projects/systems.
Advantages
PAMs are very lightweight because their main element is a thin membrane. This allows them to be directly connected to the structure they power, which is an advantage when considering the replacement of a defective muscle. If a defective muscle has to be substituted, its location will always be known and its substitution becomes easier. This is an important characteristic, since the membrane is connected to rigid endpoints, which introduces tension concentrations and therefore possible membrane ruptures.
Another advantage of PAMs is their inherent compliant behavior: when a force is exerted on the PAM, it "gives in", without increasing the force in the actuation. This is an important feature when the PAM is used as an actuator in a robot that interacts with a human, or when delicate operations have to be carried out.
In PAMs the force is not only dependent on pressure but also on their state of inflation. This is one of the major advantages; the mathematical model that supports the PAMs' functionality is a non-linear system, which |
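A sketch of that non-linearity, using one widely cited idealized static model for braided McKibben-type PAMs (the Chou–Hannaford form; the initial diameter and braid angle below are assumed for illustration, not taken from the text):

```python
import math

def mckibben_force(pressure_pa: float, contraction: float,
                   d0_m: float = 0.010, theta0_rad: float = math.radians(23)) -> float:
    """Idealized static force of a braided PAM (Chou-Hannaford form).

    contraction is the fractional shortening (L0 - L) / L0; d0_m and
    theta0_rad are the initial diameter and braid angle (assumed values).
    """
    a = 3.0 / math.tan(theta0_rad) ** 2
    b = 1.0 / math.sin(theta0_rad) ** 2
    area0 = math.pi * d0_m**2 / 4
    return pressure_pa * area0 * (a * (1 - contraction) ** 2 - b)

# At constant pressure, force falls non-linearly as the muscle shortens:
for eps in (0.0, 0.1, 0.2):
    print(eps, round(mckibben_force(3e5, eps), 1))
```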
https://en.wikipedia.org/wiki/Hydraulic%20cylinder | A hydraulic cylinder (also called a linear hydraulic motor) is a mechanical actuator that is used to give a unidirectional force through a unidirectional stroke. It has many applications, notably in construction equipment (engineering vehicles), manufacturing machinery, elevators, and civil engineering.
A hydraulic cylinder is a hydraulic actuator that provides linear motion when hydraulic energy is converted into mechanical movement. It can be likened to a muscle in that, when the hydraulic system of a machine is activated, the cylinder is responsible for providing the motion.
Operation
Hydraulic cylinders get their power from pressurized hydraulic fluid, which is incompressible. Typically oil is used as hydraulic fluid. The hydraulic cylinder consists of a cylinder barrel, in which a piston connected to a piston rod moves back and forth. The barrel is closed on one end by the cylinder bottom (also called the cap) and the other end by the cylinder head (also called the gland) where the piston rod comes out of the cylinder. The piston has sliding rings and seals. The piston divides the inside of the cylinder into two chambers, the bottom chamber (cap end) and the piston rod side chamber (rod end/head-end).
Flanges, trunnions, clevises, and lugs are common cylinder mounting options. The piston rod also has mounting attachments to connect the cylinder to the object or machine component that it is pushing or pulling.
A hydraulic cylinder is the actuator or "motor" side of this system. The "generator" side of the hydraulic system is the hydraulic pump which delivers a fixed or regulated flow of oil to the hydraulic cylinder, to move the piston. There are three types of pump widely used: hydraulic hand pump, hydraulic air pump, and hydraulic electric pump. The piston pushes the oil in the other chamber back to the reservoir. If we assume that the oil enters from the cap end, during extension stroke, and the oil pressure in the rod end/head end is approximately zero, |
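The force balance described here is simply pressure times working area, with the rod reducing the available area on the retraction side. A minimal sketch (the bore, rod diameter, and pressure values are assumed for illustration):

```python
import math

def cylinder_forces(bore_m: float, rod_m: float, pressure_pa: float):
    """Extend/retract forces of an ideal double-acting cylinder (no friction)."""
    cap_area = math.pi / 4 * bore_m**2                 # full piston area
    rod_side_area = cap_area - math.pi / 4 * rod_m**2  # annulus around the rod
    return pressure_pa * cap_area, pressure_pa * rod_side_area

# Assumed example: 100 mm bore, 56 mm rod, 210 bar working pressure.
extend_n, retract_n = cylinder_forces(bore_m=0.100, rod_m=0.056, pressure_pa=21e6)
print(f"extend ~{extend_n/1e3:.0f} kN, retract ~{retract_n/1e3:.0f} kN")
```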
https://en.wikipedia.org/wiki/List%20of%20exceptional%20set%20concepts | This is a list of exceptional set concepts. In mathematics, and in particular in mathematical analysis, it is very useful to be able to characterise subsets of a given set X as 'small', in some definite sense, or 'large' if their complement in X is small. There are numerous concepts that have been introduced to study 'small' or 'exceptional' subsets. In the case of sets of natural numbers, it is possible to define more than one concept of 'density', for example. See also list of properties of sets of reals.
Almost all
Almost always
Almost everywhere
Almost never
Almost surely
Analytic capacity
Closed unbounded set
Cofinal (mathematics)
Cofinite
Dense set
IP set
2-large
Large set (Ramsey theory)
Meagre set
Measure zero
Natural density
Negligible set
Nowhere dense set
Null set, conull set
Partition regular
Piecewise syndetic set
Schnirelmann density
Small set (combinatorics)
Stationary set
Syndetic set
Thick set
Thin set (Serre)
|
https://en.wikipedia.org/wiki/Janssen%20Pharmaceuticals | Johnson & Johnson Innovative Medicine (formerly Janssen Pharmaceuticals) is a pharmaceutical company headquartered in Beerse, Belgium, and wholly-owned by Johnson & Johnson. It was founded in 1953 by Paul Janssen.
In 1961, Janssen Pharmaceuticals was purchased by New Jersey-based American corporation Johnson & Johnson, and became part of Johnson & Johnson Pharmaceutical Research and Development (J&J PRD), now renamed to Janssen Research and Development (JRD), which conducts research and development activities related to a wide range of human medical disorders, including mental illness, neurological disorders, anesthesia and analgesia, gastrointestinal disorders, fungal infection, HIV/AIDS, allergies and cancer. Janssen and Ortho-McNeil Pharmaceutical have been placed in the Ortho-McNeil-Janssen group within Johnson & Johnson Company.
Subsidiaries
Actelion
Cilag AG
Janssen Biotech (formerly Centocor)
Janssen Vaccines (formerly Crucell)
Tibotec
History
The early roots of what would become Janssen Pharmaceuticals date back to 1933. In 1933, Constant Janssen, the father of Paul Janssen, acquired the right to distribute the pharmaceutical products of Richter, a Hungarian pharmaceutical company, for Belgium, the Netherlands and the Belgian Congo. On 23 October 1934, he founded the N.V. Produkten Richter in Turnhout. In 1937, Constant Janssen acquired an old factory building in the Statiestraat 78 in Turnhout for his growing company, which he expanded during World War II into a four-story building. Still a student, Paul Janssen assisted in the development of paracetamol (USP: acetaminophen, often sold under the trademark Tylenol) under the name Perdolan, which would later become well-known. After the war, the name for the company products was changed to Eupharma, although the company name Richter would remain until 1956.
Paul Janssen founded his own research laboratory in 1953 on the third floor of the building in the Statiestraat, still within th |
https://en.wikipedia.org/wiki/Philippine%20condiments | The generic term for condiments in the Filipino cuisine is sawsawan (Philippine Spanish: sarsa). Unlike sauces in other Southeast Asian regions, most sawsawan are not prepared beforehand, but are assembled on the table according to the preferences of the diner.
Description
In the Philippines, the common condiments aside from salt and pepper are vinegar, soy sauce, calamansi, and patis. The combination and different regional variations of these simple sauces make up the various common dipping sauces in the region.
The most common type of sawsawan is the toyomansi (or toyo't kalamansi), which is a mixture of soy sauce, calamansi, and native siling labuyo. It can also be seasoned with vinegar and patis (fish sauce). This sauce is typically served with roasted meat dishes.
A similar dipping sauce used for grilled meats like inihaw is toyo, suka, at sili (literally "soy sauce, vinegar, and chili"). It is made of soy sauce, vinegar, and siling labuyo with some opting to add diced onions and/or garlic and a seasoning of sugar and/or black pepper. For serving with grilled fish, it is typically garnished with diced tomatoes, patis (fish sauce), or more rarely, bagoong (fermented shrimp or fish).
The simplest dipping sauce, for example, is vinegar mixed with another ingredient like siling labuyo (sukang may sili), garlic (suka't bawang), soy sauce (sukang may toyo), and so on. This can be elaborated further by adding a range of spices and even fruits, resulting in dipping sauces like sinamak (spiced vinegar). Suka Pinakurat is a popular brand of spiced vinegar in the Philippines.
All of these do not have set recipes, however, and can use ingredients and proportions interchangeably according to what is available and to the preference of the diner.
Other notable ingredients added to these kinds of sawsawan include shallots, whole black peppercorns, sugar, siling haba, wansoy (cilantro), ginger, and so on. Sawsawan are also unique in that they can function as marinades.
S |
https://en.wikipedia.org/wiki/Pressure%20experiment | Pressure experiments are experiments performed at pressures lower or higher than atmospheric pressure, called low-pressure experiments and high-pressure experiments, respectively. Pressure experiments are necessary because substances behave differently at different pressures. For example, water boils at a lower temperature at lower pressures. The equipment used for pressure experiments depends on whether the pressure is to be increased or decreased and by how much. A vacuum pump is used to remove the air out of a vacuum vessel for low-pressure experiments. High pressures can be created with a piston-cylinder apparatus, up to () and . The piston is shifted with hydraulics, decreasing the volume inside the confining cylinder and increasing the pressure. For higher pressures, up to , a multi-anvil cell is used, and for even higher pressures the diamond anvil cell. The diamond anvil cell is used to create extremely high pressures, as much as a million atmospheres (), though only over a small area. The current record is , but the sample size is confined to the order of tens of micrometres (). |
https://en.wikipedia.org/wiki/Citronellol | Citronellol, or dihydrogeraniol, is a natural acyclic monoterpenoid. Both enantiomers occur in nature. (+)-Citronellol, which is found in citronella oils, including Cymbopogon nardus (50%), is the more common isomer. (−)-Citronellol is widespread, but particularly abundant in the oils of rose (18–55%) and Pelargonium geraniums.
Preparation
Several million kilograms of citronellol are produced annually. It is mainly obtained by hydrogenation of geraniol or nerol over copper chromite catalyst. Homogeneous catalysts are used for the production of enantiomers.
Uses
Citronellol is used in perfumes and as a fragrance in cleaning products. In many applications, one of the enantiomers is preferred. It is a component of citronella oil, an insect repellant.
Citronellol is used as a raw material for the production of rose oxide. It is also a precursor to many commercial and potential fragrances such as citronellol acetate, citronellyl oxyacetaldehyde, citronellyl methyl acetal, and ethyl citronellyl oxalate.
Health and safety
The United States FDA considers citronellol as generally recognized as safe (GRAS) for food use. Citronellol is subject to restrictions on its use in perfumery, as some people may become sensitised to it, but the degree to which citronellol can cause an allergic reaction in humans is disputed.
In terms of dermal safety, citronellol has been evaluated as an insect repellent.
See also
Citronellal
Geraniol
Rhodinol
Pelargonium graveolens
Perfume intolerance (allergy) |
https://en.wikipedia.org/wiki/Heritability%20of%20autism | The heritability of autism is the proportion of differences in expression of autism that can be explained by genetic variation; if the heritability of a condition is high, then the condition is considered to be primarily genetic. Autism has a strong genetic basis. Although the genetics of autism are complex, autism spectrum disorder (ASD) is explained more by multigene effects than by rare mutations with large effects.
Autism is known to have a strong genetic component, with studies consistently demonstrating a higher prevalence among siblings and in families with a history of autism. This led researchers to investigate the extent to which genetics contribute to the development of autism. Numerous studies, including twin studies and family studies, have estimated the heritability of autism to be around 80 to 90%, indicating that genetic factors play a substantial role in its etiology. Heritability estimates do not imply that autism is solely determined by genetics, as environmental factors also contribute to the development of the disorder.
Studies of twins from 1977 to 1995 estimated the heritability of autism to be more than 90%; in other words, that 90% of the differences between autistic and non-autistic individuals are due to genetic effects. When only one identical twin is autistic, the other often has learning or social disabilities. For adult siblings, the likelihood of having one or more features of the broad autism phenotype might be as high as 30%, much higher than the likelihood in controls.
Though genetic linkage analyses have been inconclusive, many association analyses have discovered genetic variants associated with autism. For each autistic individual, mutations in many genes are typically implicated. Mutations in different sets of genes may be involved in different autistic individuals. There may be significant interactions among mutations in several genes, or between the environment and mutated genes. By identifying genetic markers inherited wi |
https://en.wikipedia.org/wiki/List%20of%20Indian%20condiments | The following is a list of condiments used in Indian cuisine.
Dried powders
Ajwain
Asafetida
Black salt
Cardamom powder
Red chili powder
Coriander powder
Curry leaves
Garam masala
Ginger, ginger powder
Himalayan salt
Jira (Indian cumin seeds)
Raai
Turmeric
Chutneys
Chammanthi podi
Coriander chutney
Coconut chutney
Garlic chutney (made from fresh garlic, coconut and groundnut)
Hang curd hari mirch pudina chutney (typical north Indian)
Lime chutney (made from whole, unripe limes)
Mango (keri) chutney (made from unripe, green mangoes)
Mint chutney
Onion chutney
Saunth chutney (made from dried ginger and tamarind paste)
Tamarind chutney (Imli chutney)
Tomato chutney
Sauces
Raita (a cucumber curd side-dish)
See also
List of condiments
Achar
|
https://en.wikipedia.org/wiki/Nerol | Nerol is a monoterpenoid alcohol found in many essential oils such as lemongrass and hops. It was originally isolated from neroli oil, hence its name. This colourless liquid is used in perfumery. Like geraniol, nerol has a sweet rose odor but it is considered to be fresher. Esters and related derivatives of nerol are referred to as neryl, e.g., neryl acetate.
Isomeric with nerol is geraniol, which is the trans (E) isomer. Nerol readily loses water to form a set of C10 compounds called dipentene. Nerol can be synthesized by pyrolysis of beta-pinene, which also affords myrcene. Hydrochlorination of myrcene gives a series of isomeric chlorides.
See also
Citral
Citronellol
Geraniol
Linalool
Perfume allergy |
https://en.wikipedia.org/wiki/RNA%20polymerase%20III | In eukaryote cells, RNA polymerase III (also called Pol III) is a protein that transcribes DNA to synthesize 5S ribosomal RNA, tRNA and other small RNAs.
The genes transcribed by RNA Pol III fall in the category of "housekeeping" genes whose expression is required in all cell types and most environmental conditions. Therefore, the regulation of Pol III transcription is primarily tied to the regulation of cell growth and the cell cycle, and thus requires fewer regulatory proteins than RNA polymerase II. Under stress conditions however, the protein Maf1 represses Pol III activity. Rapamycin is another Pol III inhibitor via its direct target TOR.
Transcription
The process of transcription (by any polymerase) involves three main stages:
Initiation, requiring construction of the RNA polymerase complex on the gene's promoter
Elongation, the synthesis of the RNA transcript
Termination, the finishing of RNA transcription and disassembly of the RNA polymerase complex
Initiation
Initiation: the construction of the polymerase complex on the promoter. Pol III is unusual (compared to Pol II) in that it requires no control sequences upstream of the gene, instead normally relying on internal control sequences - sequences within the transcribed section of the gene (although upstream sequences are occasionally seen; e.g. the U6 snRNA gene has an upstream TATA box, as seen in Pol II promoters).
There are three classes of Pol III initiation, corresponding to 5S rRNA, tRNA, and U6 snRNA initiation. In all cases, the process starts with transcription factors binding to control sequences, and ends with TFIIIB (Transcription Factor for polymerase III B) being recruited to the complex and assembling Pol III. TFIIIB consists of three subunits: TATA binding protein (TBP), a TFIIB-related factor (BRF1, or BRF2 for transcription of a subset of Pol III-transcribed genes in vertebrates), and a B-double-prime (BDP1) unit. The overall architecture bears similarities to that of Pol II.
Class I
Typical s |
https://en.wikipedia.org/wiki/Histone%20H1 | Histone H1 is one of the five main histone protein families which are components of chromatin in eukaryotic cells. Though highly conserved, it is nevertheless the most variable histone in sequence across species.
Structure
Metazoan H1 proteins feature a central globular "winged helix" domain and long C- and short N-terminal tails. H1 is involved with the packing of the "beads on a string" sub-structures into a high order structure, whose details have not yet been solved. The H1 proteins found in protists and bacteria, otherwise known as nucleoproteins HC1 and HC2, lack the central domain and the N-terminal tail.
H1 is less conserved than core histones. The globular domain is the most conserved part of H1.
Function
Unlike the other histones, H1 does not make up the nucleosome "bead". Instead, it sits on top of the structure, keeping in place the DNA that has wrapped around the nucleosome. H1 is present in half the amount of the other four histones, which contribute two molecules to each nucleosome bead. In addition to binding to the nucleosome, the H1 protein binds to the "linker DNA" (approximately 20-80 nucleotides in length) region between nucleosomes, helping stabilize the zig-zagged 30 nm chromatin fiber. Much has been learned about histone H1 from studies on purified chromatin fibers. Ionic extraction of linker histones from native or reconstituted chromatin promotes its unfolding under hypotonic conditions from fibers of 30 nm width to beads-on-a-string nucleosome arrays.
It is uncertain whether H1 promotes a solenoid-like chromatin fiber, in which exposed linker DNA is shortened, or whether it merely promotes a change in the angle of adjacent nucleosomes, without affecting linker length. However, linker histones have been demonstrated to drive the compaction of chromatin fibres that had been reconstituted in vitro using synthetic DNA arrays of the strong '601' nucleosome positioning element. Nuclease digestion and DNA footprinting experiments suggest that the |
https://en.wikipedia.org/wiki/TATA-binding%20protein | The TATA-binding protein (TBP) is a general transcription factor that binds specifically to a DNA sequence called the TATA box. This DNA sequence is found about 30 base pairs upstream of the transcription start site in some eukaryotic gene promoters.
TBP gene family
TBP is a member of a small gene family of TBP-related factors. The first TBP-related factor (TRF/TRF1) was identified in the fruit fly Drosophila, but appears to be fly or insect-specific. Subsequently TBPL1/TRF2 was found in the genomes of many metazoans, whereas vertebrate genomes encode a third vertebrate family member, TBPL2/TRF3. In specific cell types or on specific promoters TBP can be replaced by one of these TBP-related factors, some of which interact with the TATA box similarly to TBP.
Role as transcription factor
TBP is a subunit of the eukaryotic general transcription factor TFIID. TFIID is the first protein to bind to DNA during the formation of the transcription preinitiation complex of RNA polymerase II (RNA Pol II). As one of the few proteins in the preinitiation complex that binds DNA in a sequence-specific manner, it helps position RNA polymerase II over the transcription start site of the gene. However, it is estimated that only 10–20% of human promoters have TATA boxes - the majority of human promoters are TATA-less housekeeping gene promoters - so TBP is probably not the only protein involved in positioning RNA polymerase II. The binding of TBP to these promoters is facilitated by housekeeping gene regulators. Interestingly, transcription initiates within a narrow region at around 30 bp downstream of the TATA box on TATA-containing promoters, while transcription start sites of TATA-less promoters are dispersed within a 200 bp region.
Binding of TFIID to the TATA box in the promoter region of the gene initiates the recruitment of other factors required for RNA Pol II to begin transcription. Some of the other recruited transcription factors include TFIIA, TFIIB, and TFIIF. Each of |
https://en.wikipedia.org/wiki/Phosphoinositide%20phospholipase%20C | Phosphoinositide phospholipase C (PLC, EC 3.1.4.11, triphosphoinositide phosphodiesterase, phosphoinositidase C, 1-phosphatidylinositol-4,5-bisphosphate phosphodiesterase, monophosphatidylinositol phosphodiesterase, phosphatidylinositol phospholipase C, PI-PLC, 1-phosphatidyl-D-myo-inositol-4,5-bisphosphate inositoltrisphosphohydrolase; systematic name 1-phosphatidyl-1D-myo-inositol-4,5-bisphosphate inositoltrisphosphohydrolase) is a family of eukaryotic intracellular enzymes that play an important role in signal transduction processes. These enzymes belong to a larger superfamily of Phospholipase C. Other families of phospholipase C enzymes have been identified in bacteria and trypanosomes. Phospholipases C are phosphodiesterases.
Phospholipase Cs participate in phosphatidylinositol 4,5-bisphosphate (PIP2) metabolism and lipid signaling pathways in a calcium-dependent manner. At present, the family consists of six sub-families comprising a total of 13 separate isoforms that differ in their mode of activation, expression levels, catalytic regulation, cellular localization, membrane binding avidity and tissue distribution. All are capable of catalyzing the hydrolysis of PIP2 into two important second messenger molecules, which go on to alter cell responses such as proliferation, differentiation, apoptosis, cytoskeleton remodeling, vesicular trafficking, ion channel conductance, endocrine function and neurotransmission.
Reaction and catalytic mechanism
All family members are capable of catalyzing the hydrolysis of PIP2, a phosphatidylinositol at the inner leaflet of the plasma membrane into the two second messengers, inositol trisphosphate (IP3) and diacylglycerol (DAG).
The chemical reaction may be expressed as:
1-phosphatidyl-1D-myo-inositol 4,5-bisphosphate + H2O → 1D-myo-inositol 1,4,5-trisphosphate + diacylglycerol
PLCs catalyze the reaction in two sequential steps. The first reaction is a phosphotransferase step that involves an intramolecular attack betw |
https://en.wikipedia.org/wiki/Atmospheric%20pressure%20discharge | An atmospheric pressure discharge is an electrical discharge in air or another gas at atmospheric pressure.
An electrical discharge in a gas forms plasma. Plasmas are sustained if there is a continuous inflow of energy to maintain the required degree of ionization by counterbalancing the recombination events that lead to extinction of the discharge. The number of recombination events per unit time and per unit volume is proportional to the density of each recombining plasma species (ions and electrons) and, thus, grows fast with the gas pressure. Therefore, compared to lower-pressure discharges, atmospheric discharges require a higher power to maintain.
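In symbols, for two-body recombination (a standard rate expression; the p² scaling assumes both charged-species densities grow roughly in proportion to the gas pressure):

```latex
% Volume rate of electron-ion recombination; n_e and n_i are the electron
% and ion densities, and \alpha is the recombination coefficient.
\left.\frac{dn_e}{dt}\right|_{\mathrm{rec}} = -\alpha\, n_e n_i \;\propto\; p^2
```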
Typical atmospheric discharges are:
DC arc
Lightning
Atmospheric-pressure glow discharge
Dielectric barrier discharge
See also
List of plasma (physics) articles |
https://en.wikipedia.org/wiki/Benchmarking%20%28hobby%29 | Benchmarking, also known as benchmark hunting, is a hobby activity in which participants find benchmarks (also known as survey markers or geodetic control points). The term "bench mark" is used only to refer to survey markers that designate a certain elevation, but hobbyists often use the term benchmarks to include triangulation stations or other reference marks. They typically then log their finds online. Like geocaching, the activity has become popular since 1995, propelled by the availability of online data on the location of survey marks (with directions for finding them) and by the rise of hobbyist-oriented websites.
History
Many survey markers in the U.S. were set over 100 years ago. There was a surge in creating these marks in the U.S. from about 1930 to 1955, in conjunction with the expansion of map-making activities across the country.
Sources of data on U.S. marks
In the U.S., about 740,000 benchmarks with the most precise elevations or coordinates (but only a small fraction of the existing survey marks) are listed in a database maintained by the National Geodetic Survey (NGS) and accessible online. The majority of marks, which were set by the U.S. Geological Survey (USGS), the Forest Service, the Corps of Engineers, and city, state, and local authorities, are not included in that database. Cadastral (land survey) marks are usually not measured for the geodetic database.
Geocaching.com, the hobbyist website, formerly had a "snapshot" of the marks that the NGS had documented by the year 2000. This data is no longer available.
Each NGS-listed mark has a permanent identifier (PID), a six-character code that can be used to call up data about that mark. By entering the PID in the NGS's online query form, a data sheet for the mark can be viewed. There is also a website which uses Google Maps to show the locations (and PIDs) of marks in each individual state of the U.S. Geocaching.com formerly had a section of
https://en.wikipedia.org/wiki/Dusty%20plasma | A dusty plasma is a plasma containing micrometre (10−6 m) to nanometre (10−9 m) sized particles suspended in it. The dust particles are charged, and together the dust and the surrounding ionized gas behave as a single plasma. Dust particles may coagulate into larger particles, resulting in "grain plasmas". Due to the additional complexity of studying plasmas with charged dust particles, dusty plasmas are also known as complex plasmas.
Dusty plasmas are encountered in:
Space plasmas
The mesosphere of the Earth
Specifically designed laboratory experiments
Dusty plasmas are interesting because the presence of particles significantly alters the charged particle equilibrium leading to different phenomena. It is a field of current research. Electrostatic coupling between the grains can vary over a wide range so that the states of the dusty plasma can change from weakly coupled (gaseous) to crystalline. Such plasmas are of interest as a non-Hamiltonian system of interacting particles and as a means to study generic fundamental physics of self-organization, pattern formation, phase transitions, and scaling.
Characteristics
The temperature of dust in a plasma may be quite different from that of its environment.
The electric potential of dust particles is typically 1–10 V (positive or negative). The potential is usually negative because the electrons are more mobile than the ions. The physics is essentially that of a Langmuir probe that draws no net current, including formation of a Debye sheath with a thickness of a few times the Debye length. If the electrons charging the dust grains are relativistic, then the dust may charge to several kilovolts. Field electron emission, which tends to reduce the negative potential, can be important due to the small size of the particles. The photoelectric effect and the impact of positive ions may actually result in a positive potential of the dust particles.
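As a rough illustration of the charge scales involved, here is a sketch assuming the grain acts as an isolated sphere with capacitance $C = 4\pi\varepsilon_0 a$ (Debye-sheath corrections and electron emission, discussed above, are neglected):

```python
import math

EPS0 = 8.854e-12    # vacuum permittivity, F/m

def grain_charge(radius_m: float, potential_v: float) -> float:
    """Charge on a spherical grain at a given surface potential, using
    the isolated-sphere capacitance Q = 4*pi*eps0*a*V (a sketch; sheath
    effects and field emission are neglected)."""
    return 4.0 * math.pi * EPS0 * radius_m * potential_v

q = grain_charge(1e-6, -3.0)     # a 1 micrometre grain charged to -3 V
print(q / -1.602e-19)            # roughly 2,000 elementary charges
```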
Dynamics
Interest in the dynamics of charged dust in plasmas was amplified by the detection of spokes in the rings of |
https://en.wikipedia.org/wiki/Military%20anti-shock%20trousers | Military anti-shock trousers (MAST), or pneumatic anti-shock garments (PASG), are medical devices used to treat severe blood loss. The device is usually applied to the patient's pelvis, abdomen, and lower parts of the body and is composed of man-made inflatable air bladders. The device is designed to transfer blood away from these lower body parts and into the upper body by applying pressure.
There is significant controversy over the use of MAST. Initial studies in the 1970s suggested that the application of MAST auto-transfused up to 20 percent of the patient's blood to the upper body. However, using human and dog models, subsequent studies in the 1980s disputed this claim, showing that less than 5 percent of the blood was actually auto-transfused by the device. In addition, use of the device may cause further complications such as compartment syndrome and lower extremity ischemia. Most modern EMS and trauma programs have abandoned their use following data from a Cochrane review which indicated no mortality or survival benefit when MAST were applied to patients in shock.
I. G. Roberts et al. sought to quantify the effect on mortality and morbidity of the use of MAST in patients following trauma, and published the data in the Cochrane Database of Systematic Reviews.
See also
Compression garment
Hemostasis
Hypovolemia
Permissive hypotension
Shock (circulatory) |
https://en.wikipedia.org/wiki/Nodule%20%28medicine%29 | In medicine, nodules are small firm lumps, usually greater than 1 cm in diameter. If filled with fluid they are referred to as cysts. Smaller (less than 0.5 cm) raised soft tissue bumps may be termed papules.
The evaluation of a skin nodule includes a description of its appearance, its location, how it feels to touch and any associated symptoms which may give clues to an underlying medical condition.
Nodules in skin include dermatofibroma and pyogenic granuloma. Nodules may form on tendons and muscles in response to injury, and are frequently found on vocal cords. They may occur in organs such as the lung, or thyroid, or be a sign in other medical conditions such as rheumatoid arthritis.
Characteristics
Nodules are small firm lumps usually greater than 1 cm in diameter, found in skin and other organs. If filled with fluid they are usually softer and referred to as cysts. Smaller (less than 0.5 cm) raised soft tissue bumps may be termed papules.
Evaluation
The evaluation of a skin nodule includes a description of its appearance, its location, how it feels to touch and any associated symptoms which may give clues to an underlying medical condition.
Often discovered unintentionally on a chest x-ray, a single nodule in the lung requires assessment to exclude cancer.
Conditions
Nodules may form on tendons and muscles in response to injury, and are frequently found on vocal cords. They occur in conditions including endometriosis, neurofibromatosis, and rheumatoid arthritis. They may also feature in Kaposi's sarcoma and gonorrhea.
Other examples |
https://en.wikipedia.org/wiki/Voja%20Antoni%C4%87 | Vojislav "Voja" Antonić (born 12 July 1952) is a Serbian inventor, journalist, and writer. He is known for creating the build-it-yourself home computer Galaksija and originating a related "Build your own computer Galaksija" initiative with Dejan Ristanović. This initiative encouraged and enlightened thousands of computer enthusiasts during the 1980s in the Socialist Federal Republic of Yugoslavia. Antonić has donated many of his personal creations to the public domain. He was also a magazine editor and contributed to a number of radio shows.
Biography
While in school, Voja Antonić developed a passion for HAM radio. He obtained a licence and a callsign to make his own transmissions. One day, the state police seized all CB-band units known to operate in the country, creating a new wave of interest in HAM radio units; this bored Voja Antonić, who decided to move on towards new digital technologies.
His first creation with a microprocessor was a Conway's Game of Life machine, which showed its state using a 16x16 matrix of red LEDs. Without a computer, Voja Antonić wrote the code on paper and entered it into the system byte by byte using rotary switches. LEDs being expensive back then, it took him months to buy and install the last of them. A replica of his machine reportedly worked flawlessly almost continuously for 40 years.
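For reference, the update rule such a machine implements is compact enough to state in a few lines. Below is a minimal sketch in Python; the 16x16 size matches the LED matrix, while the toroidal wrap-around at the edges is an assumption, since the original machine's boundary behaviour is not described here:

```python
def life_step(grid):
    """One Game of Life generation on an n x n grid of 0/1 cells.
    Toroidal wrap-around at the edges is assumed for simplicity."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            live = sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            new[i][j] = 1 if live == 3 or (grid[i][j] == 1 and live == 2) else 0
    return new

# A 16x16 grid with a "blinker" that oscillates with period 2:
grid = [[0] * 16 for _ in range(16)]
grid[8][7] = grid[8][8] = grid[8][9] = 1
grid = life_step(grid)    # the blinker is now vertical
```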
When personal computers arrived on the market, they were not available in Yugoslavia. Voja Antonić asked a friend in the USA to disassemble a TRS-80 Model I and send it to him labelled as "technical junk". He reassembled it, and his new computer passion began.
While studying at the Faculty of Dramatic Arts in the late 1970s, he started to build computer systems capable of rendering animations.
Prior to the Winter of 1981/1982, the Skiing Federation of Serbia timed the competitors using regular stopwatches and hand signaling. The upcoming Balkan competition required this to be improved and more precise. In 1981, Antonić created a sm |
https://en.wikipedia.org/wiki/Dejan%20Ristanovi%C4%87 | Dejan Ristanović (born 16 April 1963 in Belgrade) is a well-known Serbian writer and computer publicist.
In January 1981 he wrote the first article on personal computers for the popular science magazine Galaksija (Galaxy). During the following years he wrote many articles about programmable calculators and home computers.
In December 1983 he wrote a special edition of Galaksija called "Computers in Your Home" (Računari u vašoj kući), the first computer magazine in the former Yugoslavia. This issue featured complete schematic diagrams and step-by-step guides on how to build the Galaksija computer, created by Voja Antonić.
The series of special editions eventually developed into the computer magazine Računari (Computers). Ristanović was a contributor to Računari for 11 years. After that, in 1995, Ristanović founded the PC Press publishing company and the magazine PC, the first privately owned computer magazine in Serbia. Ristanović has been the editor-in-chief of PC for more than 10 years.
In 1989 he co-founded Sezam BBS, which eventually became a major BBS system and evolved into the Internet provider Sezam Pro, which merged into Orion Telecom in 2009.
Dejan Ristanović is the author of about 20 books and more than 500 magazine articles about computers, written in the Serbian and English languages. He also operates the www.ti59.com nostalgia home page of TI-59 programmable calculators.
Dejan Ristanović is an alumnus of the Mathematical Gymnasium Belgrade, having graduated in 1981.
https://en.wikipedia.org/wiki/Weather%20spotting | Weather spotting is observing weather for the purpose of reporting to a larger group or organization. Examples include National Weather Service (NWS) co-op observers and Skywarn storm spotters.
Storm spotters
A storm spotter is a specific type of weather spotter. In the U.S., these volunteers are usually trained by the National Weather Service or local Skywarn group, and are given a phone number, internet outlet, or amateur radio frequency to report to if a severe weather event, such as a tornado, severe thunderstorm, or flash flood occurs where the spotter is located. They add ground truth information to remote sensing technology such as weather radar. Canwarn is the national storm spotting program of Canada, Skywarn Europe covers about a dozen countries (including the U.K., which is also covered by TORRO), and Australia also has a program organized by the Bureau of Meteorology.
National Weather Service Coop Observers
The National Weather Service Cooperative Observer Program (COOP) is a network of around 11,000 volunteers that record official weather observations across the United States. Data is taken from a multitude of geographic regions and topography, and sent to the National Weather Service and National Climatic Data Center (NCDC) for official records. In making these reports, observers use a specialized set of jargon and slang to describe their observations.
Cooperative weather observers often double as storm spotters. Some are also river and coastal watchers, typically reporting gauge readings.
Media weather spotters
Since New England experiences harsh winters, several regional television stations use weather spotters for up-to-date snowfall amounts and reports. WHDH-TV's network, launched by former meteorologist Todd Gross, is the largest in New England with close to 300 spotters. The former name of the group was "WHDHwx - The 7NEWS Weather Spotter Group." In December 2005, the group's name was switched to "NEWeather - Todd Gross' Weather Spotter N |
https://en.wikipedia.org/wiki/Saxon%20math | Saxon math, developed by John Saxon (1923–1996), is a teaching method for incremental learning of mathematics created in the 1980s. It involves teaching a new mathematical concept every day and constantly reviewing old concepts. Early editions were faulted for providing very few opportunities to practice the new material before plunging into a review of all previous material. Newer editions typically split the day's work evenly between practicing the new material and reviewing old material. It uses a steady review of all previous material, with a focus on students who struggle with retaining the math they previously learned. However, it has sometimes been criticized for its heavy emphasis on rote rather than conceptual learning.
The Saxon Math 1 to Algebra 1/2 (the equivalent of a Pre-Algebra book) curriculum is designed so that students complete assorted mental math problems, learn a new mathematical concept, practice problems relating to that lesson, and solve a variety of problems. Daily practice problems include relevant questions from the current day's lesson as well as cumulative problems. This daily cycle is interrupted for tests and additional topics. From Algebra 1/2 on, the higher level books remove the mental math problems and incorporate testing more frequently.
Saxon Publishers has also published a phonics and spelling curriculum. This curriculum, authored by Lorna Simmons and first published in 2005, follows the same incremental principles as the Saxon Math curriculum.
The Saxon math program has a specific set of products to support homeschoolers, including solution keys and ready-made tests, which makes it popular among some homeschool families. It has also been adopted as an alternative to reform mathematics programs in public and private schools. Saxon teaches memorization of algorithms, unlike many reform texts.
Relation to Common Core
In some reviews, such as ones performed by the nonprofit curriculum rating site EdReports.org, Saxon M |
https://en.wikipedia.org/wiki/TORQUE | The Terascale Open-source Resource and Queue Manager (TORQUE) is a distributed resource manager providing control over batch jobs and distributed compute nodes. TORQUE can integrate with the non-commercial Maui Cluster Scheduler or the commercial Moab Workload Manager to improve overall utilization, scheduling and administration on a cluster.
The TORQUE community has extended the original Portable Batch System (PBS) to improve scalability, fault tolerance, and functionality. Contributors include NCSA, OSC, USC, the US DOE, Sandia, PNNL, UB, TeraGrid and other HPC organizations. As of June 2018, TORQUE is no longer open-source: although its developers had previously described it as open-source software under the OpenPBS version 2.3 license, it was considered non-free under the Debian Free Software Guidelines due to license issues.
Feature set
TORQUE provides enhancements over standard OpenPBS in the following areas:
Fault Tolerance
Additional failure conditions checked/handled.
Node health check script support.
Scheduling Interface
Extended query interface providing the scheduler with additional and more accurate information.
Extended control interface allowing the scheduler increased control over job behavior and attributes.
Allows the collection of statistics for completed jobs.
Scalability
Significantly improved server to worker nodes' Machine Oriented Mini-server (MOM) communication model.
Ability to handle larger clusters. (over 15 TF/2,500 processors)
Ability to handle larger jobs. (over 2000 processors)
Ability to support larger server messages.
Usability
Extensive logging additions.
More human readable logging. (i.e. no more "error 15038 on command 42")
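As a concrete illustration of how such a resource manager is driven, here is a minimal sketch of submitting a batch job through qsub; the job name, resource requests, and mpirun invocation are illustrative assumptions, not TORQUE defaults:

```python
import subprocess

# A minimal TORQUE/PBS batch script (the directives below request a
# hypothetical 2-node, 8-cores-per-node job with a 1-hour wall time).
JOB_SCRIPT = """#!/bin/bash
#PBS -N example_job
#PBS -l nodes=2:ppn=8
#PBS -l walltime=01:00:00
#PBS -j oe
cd "$PBS_O_WORKDIR"
mpirun ./my_program
"""

def submit(script: str) -> str:
    """Pipe the script to qsub; qsub prints the new job's id on stdout."""
    result = subprocess.run(["qsub"], input=script, text=True,
                            capture_output=True, check=True)
    return result.stdout.strip()

# job_id = submit(JOB_SCRIPT)
```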
See also
Beowulf cluster
HTCondor
Maui Cluster Scheduler
Open Source Cluster Application Resources (OSCAR)
Portable Batch System
Slurm Workload Manager
Univa Grid Engine |
https://en.wikipedia.org/wiki/Pseudoconvexity | In mathematics, more precisely in the theory of functions of several complex variables, a pseudoconvex set is a special type of open set in the n-dimensional complex space Cn. Pseudoconvex sets are important, as they allow for classification of domains of holomorphy.
Let $G \subset \mathbb{C}^n$ be a domain, that is, an open connected subset. One says that $G$ is pseudoconvex (or Hartogs pseudoconvex) if there exists a continuous plurisubharmonic function $\varphi$ on $G$ such that the set
$$\{ z \in G : \varphi(z) < x \}$$
is a relatively compact subset of $G$ for all real numbers $x$. In other words, a domain is pseudoconvex if $G$ has a continuous plurisubharmonic exhaustion function. Every (geometrically) convex set is pseudoconvex. However, there are pseudoconvex domains which are not geometrically convex.
When $G$ has a $C^2$ (twice continuously differentiable) boundary, this notion is the same as Levi pseudoconvexity, which is easier to work with. More specifically, with a $C^2$ boundary, it can be shown that $G$ has a defining function, i.e., that there exists $\rho : \mathbb{C}^n \to \mathbb{R}$ which is $C^2$ so that $G = \{\rho < 0\}$, and $\partial G = \{\rho = 0\}$. Now, $G$ is pseudoconvex iff for every $p \in \partial G$ and $w$ in the complex tangent space at $p$, that is,
$$\nabla \rho(p) \cdot w = \sum_{i=1}^{n} \frac{\partial \rho(p)}{\partial z_i} w_i = 0,$$
we have
$$\sum_{i,j=1}^{n} \frac{\partial^2 \rho(p)}{\partial z_i \partial \bar{z}_j} w_i \bar{w}_j \geq 0.$$
The definition above is analogous to definitions of convexity in real analysis.
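A standard concrete check of the Levi condition is the unit ball with defining function $\rho(z) = \|z\|^2 - 1$. Since

$$\frac{\partial^2 \rho}{\partial z_i \partial \bar{z}_j} = \delta_{ij}, \qquad \sum_{i,j=1}^{n} \frac{\partial^2 \rho}{\partial z_i \partial \bar{z}_j} w_i \bar{w}_j = \sum_{j=1}^{n} |w_j|^2 \geq 0,$$

the Levi form is positive semidefinite on every tangent vector, and the ball is (Levi) pseudoconvex.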
If $G$ does not have a $C^2$ boundary, the following approximation result can be useful.
Proposition 1. If $G$ is pseudoconvex, then there exist bounded, strongly Levi pseudoconvex domains $G_k \subset G$ with smooth ($C^\infty$) boundary which are relatively compact in $G$, such that $G = \bigcup_{k=1}^{\infty} G_k$.
This is because once we have a $\varphi$ as in the definition, we can actually find a $C^\infty$ exhaustion function.
The case n = 1
In one complex dimension, every open domain is pseudoconvex. The concept of pseudoconvexity is thus more useful in dimensions higher than 1.
See also
Analytic polyhedron
Eugenio Elia Levi
Holomorphically convex hull
Stein manifold |
https://en.wikipedia.org/wiki/Domain%20of%20holomorphy | In mathematics, in the theory of functions of several complex variables, a domain of holomorphy is a domain which is maximal in the sense that there exists a holomorphic function on this domain which cannot be extended to a bigger domain.
Formally, an open set $\Omega$ in the $n$-dimensional complex space $\mathbb{C}^n$ is called a domain of holomorphy if there do not exist non-empty open sets $U \subset \Omega$ and $V \subset \mathbb{C}^n$, where $V$ is connected, $V \not\subset \Omega$ and $U \subset \Omega \cap V$, such that for every holomorphic function $f$ on $\Omega$ there exists a holomorphic function $g$ on $V$ with $f = g$ on $U$.
In the case $n = 1$, every open set is a domain of holomorphy: we can define a holomorphic function with zeros accumulating everywhere on the boundary of the domain, which must then be a natural boundary for a domain of definition of its reciprocal. For $n \geq 2$ this is no longer true, as it follows from Hartogs' lemma.
Equivalent conditions
For a domain $\Omega \subset \mathbb{C}^n$ the following conditions are equivalent:
$\Omega$ is a domain of holomorphy
$\Omega$ is holomorphically convex
$\Omega$ is pseudoconvex
$\Omega$ is Levi convex - for every sequence $S_n \subseteq \Omega$ of analytic compact surfaces such that $S_n \to S$, $\partial S_n \to \Gamma$ for some set $\Gamma \subseteq \Omega$ we have $S \subseteq \Omega$ ($\Omega$ cannot be "touched from inside" by a sequence of analytic surfaces)
$\Omega$ has the local Levi property - for every point $x \in \partial \Omega$ there exist a neighbourhood $U$ of $x$ and a function $f$ holomorphic on $U \cap \Omega$ such that $f$ cannot be extended to any neighbourhood of $x$
The implications between these conditions are standard results (for the implication from domain of holomorphy to pseudoconvexity, see Oka's lemma). The main difficulty lies in proving that a pseudoconvex domain is a domain of holomorphy, i.e. constructing a global holomorphic function which admits no extension from non-extendable functions defined only locally. This is called the Levi problem (after E. E. Levi) and was first solved by Kiyoshi Oka, and then by Lars Hörmander using methods from functional analysis and partial differential equations (a consequence of the $\bar{\partial}$-problem).
Properties
If $\Omega_1, \ldots, \Omega_n$ are domains of holomorphy, then their intersection $\Omega = \bigcap_{j=1}^{n} \Omega_j$ is also a domain of holomorphy.
If $\Omega_1 \subseteq \Omega_2 \subseteq \cdots$ is an ascending sequence of domains of holomorphy, then their union $\Omega = \bigcup_{n=1}^{\infty} \Omega_n$ is also a domain of holomorphy (see the Behnke–Stein theorem).
If $\Omega_1$ and $\Omega_2$ are domains
https://en.wikipedia.org/wiki/Barrer | Barrer is a non-SI unit of gas permeability used in the membrane technology and contact lens industry. It is named after Richard Barrer.
Definition
$$1\ \text{barrer} = 10^{-10}\ \frac{\text{cm}^3_{\text{STP}} \cdot \text{cm}}{\text{cm}^2 \cdot \text{s} \cdot \text{cmHg}}$$
Here the 'cm3STP' is the standard cubic centimeter, which is a unit of amount of gas rather than a unit of volume. It represents the number of gas molecules or moles that would occupy one cubic centimeter at standard temperature and pressure, as calculated via the ideal gas law.
The cm corresponds in the permeability equations to the thickness of the material whose permeability is being evaluated, the cm3STPcm−2s−1 to the flux of gas through the material, and the cmHg to the pressure drop across the material. That is, the barrer measures the rate of fluid flow through an area of material of a given thickness, driven by a given pressure difference. See Darcy's law.
In SI units the barrer can be expressed as:
$$1\ \text{barrer} \approx 3.35 \times 10^{-16}\ \frac{\text{mol} \cdot \text{m}}{\text{m}^2 \cdot \text{s} \cdot \text{Pa}}$$
To convert to CGS permeability unit, one must use the following:
Where M is the molecular weight of the penetrant gas (g/mol).
Another commonly used unit is the Gas Permeance Unit (GPU), used in the measurement of gas permeance. Permeance is the ratio of the permeability to the thickness of the membrane:
$$1\ \text{GPU} = 10^{-6}\ \frac{\text{cm}^3_{\text{STP}}}{\text{cm}^2 \cdot \text{s} \cdot \text{cmHg}}$$
Or in SI units:
$$1\ \text{GPU} \approx 3.35 \times 10^{-10}\ \frac{\text{mol}}{\text{m}^2 \cdot \text{s} \cdot \text{Pa}}$$
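A small sketch of the conversion arithmetic above, assuming the conventional STP molar volume of 22,414 cm3/mol and 1 cmHg = 1333.22 Pa:

```python
# Converting barrer to SI units (a sketch under the stated assumptions).
MOL_PER_CM3_STP = 1.0 / 22414.0     # mol per standard cubic centimetre
PA_PER_CMHG = 1333.22               # pascals per cmHg

def barrer_to_si(value_barrer: float) -> float:
    """Convert barrer to mol*m/(m2*s*Pa)."""
    cgs = value_barrer * 1e-10      # cm3(STP)*cm/(cm2*s*cmHg)
    return cgs * MOL_PER_CM3_STP * 1e-2 / (1e-4 * PA_PER_CMHG)

print(barrer_to_si(1.0))            # ~3.35e-16 mol*m/(m2*s*Pa)
```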
https://en.wikipedia.org/wiki/Pseudoconvex%20function | In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex. Informally, a differentiable function is pseudoconvex if it is increasing in any direction where it has a positive directional derivative. The property must hold in all of the function domain, and not only for nearby points.
Formal definition
Consider a differentiable function $f : X \to \mathbb{R}$, defined on a (nonempty) convex open set $X$ of the finite-dimensional Euclidean space $\mathbb{R}^n$. This function is said to be pseudoconvex if the following property holds: for all $x, y \in X$, if $\nabla f(x) \cdot (y - x) \geq 0$ then $f(y) \geq f(x)$.
Equivalently: if $f(y) < f(x)$ then $\nabla f(x) \cdot (y - x) < 0$.
Here $\nabla f$ is the gradient of $f$, defined by:
$$\nabla f = \left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \right).$$
Note that the definition may also be stated in terms of the directional derivative of $f$, in the direction given by the vector $v = y - x$. This is because, as $f$ is differentiable, this directional derivative is given by:
$$f'(x; v) = \nabla f(x) \cdot v.$$
Properties
Relation to other types of "convexity"
Every convex function is pseudoconvex, but the converse is not true. For example, the function $f(x) = x + x^3$ is pseudoconvex but not convex. Similarly, any pseudoconvex function is quasiconvex; but the converse is not true, since the function $f(x) = x^3$ is quasiconvex but not pseudoconvex. This can be summarized schematically as: convex $\Rightarrow$ pseudoconvex $\Rightarrow$ quasiconvex.
To see that $f(x) = x^3$ is not pseudoconvex, consider its derivative at $x = 0$: $f'(0) = 0$. Then, if $f$ were pseudoconvex, we should have $f(y) \geq f(0) = 0$ for every $y$.
In particular it should be true for $y = -1$. But it is not, as $f(-1) = -1 < 0$.
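The defining implication can also be checked numerically on sampled points; a minimal sketch (the grid and the two test functions are chosen only for illustration):

```python
def violates_pseudoconvexity(f, df, xs):
    """Search sampled pairs (x, y) for a violation of the implication
    df(x) * (y - x) >= 0  =>  f(y) >= f(x)."""
    for x in xs:
        for y in xs:
            if df(x) * (y - x) >= 0 and f(y) < f(x):
                return (x, y)
    return None

xs = [i / 10 for i in range(-30, 31)]                       # grid on [-3, 3]
print(violates_pseudoconvexity(lambda x: x + x**3,          # pseudoconvex
                               lambda x: 1 + 3 * x**2, xs))  # -> None
print(violates_pseudoconvexity(lambda x: x**3,              # not pseudoconvex
                               lambda x: 3 * x**2, xs))      # -> (0.0, -3.0)
```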
Sufficient optimality condition
For any differentiable function, we have Fermat's theorem as a necessary condition of optimality, which states that: if $f$ has a local minimum at $x^*$ in an open domain, then $x^*$ must be a stationary point of $f$ (that is: $\nabla f(x^*) = 0$).
Pseudoconvexity is of great interest in the area of optimization, because the converse is also true for any pseudoconvex function. That is: if $x^*$ is a stationary point of a pseudoconvex function $f$, then $f$ has a global minimum at $x^*$. Note also that the result guarantees a global minimum
https://en.wikipedia.org/wiki/Energy%20systems%20language | The energy systems language, also referred to as energese, or energy circuit language, or generic systems symbols, is a modelling language used for composing energy flow diagrams in the field of systems ecology. It was developed by Howard T. Odum and colleagues in the 1950s during studies of the tropical forests funded by the United States Atomic Energy Commission.
Design intent
The design intent of the energy systems language was to facilitate the generic depiction of energy flows through any scale system while encompassing the laws of physics, and in particular, the laws of thermodynamics (see energy transformation for an example).
In particular H.T. Odum aimed to produce a language which could facilitate the intellectual analysis, engineering synthesis and management of global systems such as the geobiosphere, and its many subsystems. Within this aim, H.T. Odum had a strong concern that many abstract mathematical models of such systems were not thermodynamically valid. Hence he used analog computers to make system models: electronic circuits are of value for modelling natural systems, which are assumed to obey the laws of energy flow, because the circuits themselves, like natural systems, also obey the known laws of energy flow, with electricity as the energy form. However Odum was interested not only in the electronic circuits themselves, but also in how they might be used as formal analogies for modeling other systems which also had energy flowing through them. As a result, Odum did not restrict his inquiry to the analysis and synthesis of any one system in isolation. The discipline that is most often associated with this kind of approach, together with the use of the energy systems language, is known as systems ecology.
General characteristics
When applying the electronic circuits (and schematics) to modeling ecological and economic systems, Odum believed that generic categories, or characteristic modules, could |
https://en.wikipedia.org/wiki/Convex%20analysis | Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.
Convex sets
A subset $C$ of some vector space $X$ is convex if it satisfies any of the following equivalent conditions:
If $0 \leq r \leq 1$ is real and $x, y \in C$ then $r x + (1 - r) y \in C$.
If $0 < r < 1$ is real and $x, y \in C$ with $x \neq y$, then $r x + (1 - r) y \in C$.
Throughout, $f : X \to [-\infty, \infty]$ will be a map valued in the extended real numbers $[-\infty, \infty] = \mathbb{R} \cup \{\pm\infty\}$ with a domain $X$ that is a convex subset of some vector space.
The map $f$ is a convex function if
$$f(r x + (1 - r) y) \leq r f(x) + (1 - r) f(y)$$
holds for any real $0 < r < 1$ and any $x, y \in X$ with $x \neq y$. If this remains true of $f$ when the defining inequality is replaced by the strict inequality
$$f(r x + (1 - r) y) < r f(x) + (1 - r) f(y)$$
then $f$ is called strictly convex.
Convex functions are related to convex sets. Specifically, the function $f$ is convex if and only if its epigraph
$$\operatorname{epi} f = \{ (x, r) \in X \times \mathbb{R} : f(x) \leq r \}$$
is a convex set. The epigraphs of extended real-valued functions play a role in convex analysis that is analogous to the role played by graphs of real-valued functions in real analysis. Specifically, the epigraph of an extended real-valued function provides geometric intuition that can be used to help formulate or prove conjectures.
The domain of a function $f : X \to [-\infty, \infty]$ is denoted by $\operatorname{domain} f$ while its effective domain is the set
$$\operatorname{dom} f = \{ x \in X : f(x) < \infty \}.$$
The function $f$ is called proper if $\operatorname{dom} f \neq \varnothing$ and $f(x) > -\infty$ for all $x \in X$. Alternatively, this means that there exists some $x$ in the domain of $f$ at which $f(x) \in \mathbb{R}$, and $f$ is also never equal to $-\infty$. In words, a function is proper if its domain is not empty, it never takes on the value $-\infty$, and it also is not identically equal to $+\infty$. If $f$ is a proper convex function then there exist some vector $b$ and some $r \in \mathbb{R}$ such that
$$f(x) \geq x \cdot b - r$$
for every $x$,
where $x \cdot b$ denotes the dot product of these vectors.
Convex conjugate
The convex conjugate of an extended real-valued function $f : X \to [-\infty, \infty]$ (not necessarily convex) is the function $f^* : X^* \to [-\infty, \infty]$ from the (continuous) dual space $X^*$ of $X$, given by
$$f^*(x^*) = \sup_{x \in X} \{ \langle x^*, x \rangle - f(x) \},$$
where the brackets $\langle \cdot, \cdot \rangle$ denote the canonical duality $\langle x^*, x \rangle = x^*(x)$. The biconjugate of $f$ is the map $f^{**} = (f^*)^* : X \to [-\infty, \infty]$ defined by $f^{**}(x) = \sup_{x^* \in X^*} \{ \langle x, x^* \rangle - f^*(x^*) \}$ for every $x \in X$.
If $\operatorname{Func}(X; Y)$ denotes the set of $Y$-valued functions on $X$, then the map $\operatorname{Func}(X; [-\infty, \infty]) \to \operatorname{Func}(X^*; [-\infty, \infty])$ defined by $f \mapsto f^*$ is called the Legendre-Fenchel transform.
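For intuition, the conjugate can be approximated numerically by taking the supremum over a finite grid; a sketch (for $f(x) = x^2$ the exact conjugate is $f^*(y) = y^2/4$):

```python
# f*(y) = sup_x ( x*y - f(x) ), approximated over a finite grid.
xs = [i / 100 for i in range(-500, 501)]    # grid on [-5, 5]

def conjugate(f, y):
    return max(x * y - f(x) for x in xs)

print(conjugate(lambda x: x * x, 2.0))      # ~1.0, matching y**2 / 4
```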
Subdifferential set and the Fenchel-Young inequality
If $f : X \to [-\infty, \infty]$ and $x \in X$ then the subdifferential set is
$$\partial f(x) := \{ x^* \in X^* : f(z) \geq f(x) + \langle x^*, z - x \rangle \text{ for all } z \in X \}.$$
For example, |
https://en.wikipedia.org/wiki/Subharmonic%20function | In mathematics, subharmonic and superharmonic functions are important classes of functions used extensively in partial differential equations, complex analysis and potential theory.
Intuitively, subharmonic functions are related to convex functions of one variable as follows. If the graph of a convex function and a line intersect at two points, then the graph of the convex function is below the line between those points. In the same way, if the values of a subharmonic function are no larger than the values of a harmonic function on the boundary of a ball, then the values of the subharmonic function are no larger than the values of the harmonic function also inside the ball.
Superharmonic functions can be defined by the same description, only replacing "no larger" with "no smaller". Alternatively, a superharmonic function is just the negative of a subharmonic function, and for this reason any property of subharmonic functions can be easily transferred to superharmonic functions.
Formal definition
Formally, the definition can be stated as follows. Let $G$ be a subset of the Euclidean space $\mathbb{R}^n$ and let
$$\varphi \colon G \to \mathbb{R} \cup \{ -\infty \}$$
be an upper semi-continuous function. Then, $\varphi$ is called subharmonic if for any closed ball $\overline{B}(x, r)$ of center $x$ and radius $r$ contained in $G$ and every real-valued continuous function $h$ on $\overline{B}(x, r)$ that is harmonic in $B(x, r)$ and satisfies $\varphi(y) \leq h(y)$ for all $y$ on the boundary $\partial B(x, r)$, we have $\varphi(y) \leq h(y)$ for all $y \in B(x, r)$.
Note that by the above, the function which is identically −∞ is subharmonic, but some authors exclude this function by definition.
A function $u$ is called superharmonic if $-u$ is subharmonic.
Properties
A function is harmonic if and only if it is both subharmonic and superharmonic.
If $\varphi$ is $C^2$ (twice continuously differentiable) on an open set $G$ in $\mathbb{R}^n$, then $\varphi$ is subharmonic if and only if one has $\Delta \varphi \geq 0$ on $G$, where $\Delta$ is the Laplacian.
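For instance, this criterion shows that $\varphi(x) = \|x\|^2 = x_1^2 + \cdots + x_n^2$ is subharmonic on all of $\mathbb{R}^n$, since $\Delta \varphi = 2n \geq 0$.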
The maximum of a subharmonic function cannot be achieved in the interior of its domain unless the function is constant, which is called the maximum principle. However, the minimum of a subharmonic f |
https://en.wikipedia.org/wiki/Gavilan%20SC | The Gavilan SC is a laptop computer that was the first ever to be marketed as a "laptop". The computer ran on an Intel 8088 microprocessor running at 5 MHz and sported a touchpad for a pointing device, one of the first computers to do so. The laptop was developed by Manuel "Manny" Fernandez and released by the Gavilan Computer Corporation, the company he founded and owned, in May 1983.
History
The brainchild of Gavilan Computer Corporation founder Manuel (Manny) Fernandez, the Gavilan was introduced in May 1983, at approximately the same time as the similar Sharp PC-5000. It came to market a year after the GRiD Compass, with which it shared several pioneering details, notably a clamshell design, in which the screen folds shut over the keyboard.
The Gavilan, however, was more affordable than the GRiD, at a list price of around US$4000. Unlike the GRiD, it was equipped with a floppy disk drive and used the MS-DOS operating system, although it was only partially IBM PC-compatible. Powered by a 5 MHz Intel 8088 processor, it was equipped with a basic graphical user interface, stored in its 48 KB of ROM. The operating system used a FORTH-like interpreter to generate very compact code. An internal 300-baud modem was standard. A compact printer that attached to the rear of the machine was an option.
The machine's included software was a terminal program, MS-DOS, and MBasic (a version of the BASIC programming language). An Office Pack of four applications—Sorcim SuperCalc and SuperWriter, and pfs:File and pfs:Report—was optional.
It was far smaller than competing IBM compatible portables, such as the Compaq Portable, which were the size of a portable sewing machine and weighed more than twice the Gavilan's 4 kg (9 lb), and unlike the Gavilan they could not run off batteries. Gavilan claimed the SC could run up to nine hours on its built-in nickel-cadmium batteries.
Jack Hall, an award-winning industrial designer, was chosen to work out the ergonomics, mechanics and ov |
https://en.wikipedia.org/wiki/Coherent%20backscattering | In physics, coherent backscattering is observed when coherent radiation (such as a laser beam) propagates through a medium which has a large number of scattering centers (such as milk or a thick cloud) of size comparable to the wavelength of the radiation.
The waves are scattered many times while traveling through the medium. Even for incoherent radiation, the scattering typically reaches a local maximum in the direction of backscattering. For coherent radiation, however, the peak is two times higher.
Coherent backscattering is very difficult to detect and measure for two reasons. The first is fairly obvious, that it is difficult to measure the direct backscatter without blocking the beam, but there are methods for overcoming this problem. The second is that the peak is usually extremely sharp around the backward direction, so that a very high level of angular resolution is needed for the detector to see the peak without averaging its intensity out over the surrounding angles where the intensity can undergo large dips. At angles other than the backscatter direction, the light intensity is subject to numerous essentially random fluctuations called speckles.
This is one of the most robust interference phenomena that survives multiple scattering, and it is regarded as an aspect of a quantum mechanical phenomenon known as weak localization (Akkermans et al. 1986). In weak localization, interference of the direct and reverse paths leads to a net reduction of light transport in the forward direction. This phenomenon is typical of any coherent wave which is multiple scattered. It is typically discussed for light waves, for which it is similar to the weak localization phenomenon for electrons in disordered semi-conductors and often seen as the precursor to Anderson (or strong) localization of light. Weak localization of light can be detected since it is manifested as an enhancement of light intensity in the backscattering direction. This substantial enhancement is called |
https://en.wikipedia.org/wiki/Conjunctive%20grammar | Conjunctive grammars are a class of formal grammars studied in formal language theory. They extend the basic type of grammars, the context-free grammars, with a conjunction operation. Besides explicit conjunction, conjunctive grammars allow implicit disjunction represented by multiple rules for a single nonterminal symbol, which is the only logical connective expressible in context-free grammars. Conjunction can be used, in particular, to specify intersection of languages. A further extension of conjunctive grammars known as Boolean grammars additionally allows explicit negation.
The rules of a conjunctive grammar are of the form
$$A \to \alpha_1 \,\&\, \ldots \,\&\, \alpha_m$$
where $A$ is a nonterminal and $\alpha_1$, ..., $\alpha_m$ are strings formed of symbols in $\Sigma$ and $V$ (finite sets of terminal and nonterminal symbols respectively). Informally, such a rule asserts that every string $w$ over $\Sigma$ that satisfies each of the syntactical conditions represented by $\alpha_1$, ..., $\alpha_m$ therefore satisfies the condition defined by $A$.
Formal definition
A conjunctive grammar is defined by the 4-tuple $G = (V, \Sigma, R, S)$ where
$V$ is a finite set; each element $v \in V$ is called a nonterminal symbol or a variable. Each variable represents a different type of phrase or clause in the sentence. Variables are also sometimes called syntactic categories.
$\Sigma$ is a finite set of terminals, disjoint from $V$, which make up the actual content of the sentence. The set of terminals is the alphabet of the language defined by the grammar $G$.
$R$ is a finite set of productions, each of the form $A \to \alpha_1 \,\&\, \ldots \,\&\, \alpha_m$ for some $A$ in $V$ and $\alpha_i \in (V \cup \Sigma)^*$. The members of $R$ are called the rules or productions of the grammar.
$S$ is the start variable (or start symbol), used to represent the whole sentence (or program). It must be an element of $V$.
It is common to list all right-hand sides for the same left-hand side on the same line, using | (the pipe symbol) to separate them. Rules $A \to \alpha$ and $A \to \beta$ can hence be written as $A \to \alpha \mid \beta$.
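As an illustration, the standard example in the literature on conjunctive grammars specifies the non-context-free language $\{ a^n b^n c^n : n \geq 0 \}$ by intersecting two context-free conditions:

S → AB & DC
A → aA | ε
B → bBc | ε
C → cC | ε
D → aDb | ε

Here the conjunct $AB$ generates $a^i b^n c^n$ (equal numbers of b's and c's) and the conjunct $DC$ generates $a^m b^m c^j$ (equal numbers of a's and b's); a string is derived from $S$ only if it satisfies both conditions, which forces all three counts to coincide.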
Two equivalent formal definitions of the language specified by a conjunctive grammar exist. One definition is based upon representing the gram
https://en.wikipedia.org/wiki/Boolean%20grammar | Boolean grammars, introduced by Alexander Okhotin, are a class of formal grammars studied in formal language theory. They extend the basic type of grammars, the context-free grammars, with conjunction and negation operations. Besides these explicit operations, Boolean grammars allow implicit disjunction represented by multiple rules for a single nonterminal symbol, which is the only logical connective expressible in context-free grammars. Conjunction and negation can be used, in particular, to specify intersection and complement of languages. An intermediate class of grammars known as conjunctive grammars allows conjunction and disjunction, but not negation.
The rules of a Boolean grammar are of the form
$$A \to \alpha_1 \,\&\, \ldots \,\&\, \alpha_m \,\&\, \neg\beta_1 \,\&\, \ldots \,\&\, \neg\beta_n$$
where $A$ is a nonterminal, and $\alpha_1$, ..., $\alpha_m$, $\beta_1$, ..., $\beta_n$ are strings formed of symbols in $\Sigma$ and $V$. Informally, such a rule asserts that every string $w$ over $\Sigma$ that satisfies each of the syntactical conditions represented by $\alpha_1$, ..., $\alpha_m$ and none of the syntactical conditions represented by $\beta_1$, ..., $\beta_n$ therefore satisfies the condition defined by $A$.
There exist several formal definitions of the language generated by a Boolean grammar. They have one thing in common: if the grammar is represented as a system of language equations with union, intersection, complementation and concatenation, the languages generated by the grammar must be the solution of this system. The semantics differ in details, some define the languages using language equations, some draw upon ideas from the field of logic programming. However, these nontrivial issues of formal definition are mostly irrelevant for practical considerations, and one can construct grammars according to the given informal semantics. The practical properties of the model are similar to those of conjunctive grammars, while the descriptional capabilities are further improved. In particular, some practically useful properties inherited from context-free grammars, such as efficient parsing algorithms, are retained.
https://en.wikipedia.org/wiki/Branch%20and%20cut | Branch and cut is a method of combinatorial optimization for solving integer linear programs (ILPs), that is, linear programming (LP) problems where some or all the unknowns are restricted to integer values. Branch and cut involves running a branch and bound algorithm and using cutting planes to tighten the linear programming relaxations. Note that if cuts are only used to tighten the initial LP relaxation, the algorithm is called cut and branch.
Algorithm
This description assumes the ILP is a maximization problem.
The method solves the linear program without the integer constraint using the regular simplex algorithm. When an optimal solution is obtained, and this solution has a non-integer value for a variable that is supposed to be integer, a cutting plane algorithm may be used to find further linear constraints which are satisfied by all feasible integer points but violated by the current fractional solution. These inequalities may be added to the linear program, such that resolving it will yield a different solution which is hopefully "less fractional".
At this point, the branch and bound part of the algorithm is started. The problem is split into multiple (usually two) versions. The new linear programs are then solved using the simplex method and the process repeats. During the branch and bound process, non-integral solutions to LP relaxations serve as upper bounds and integral solutions serve as lower bounds. A node can be pruned if an upper bound is lower than an existing lower bound. Further, when solving the LP relaxations, additional cutting planes may be generated, which may be either global cuts, i.e., valid for all feasible integer solutions, or local cuts, meaning that they are satisfied by all solutions fulfilling the side constraints from the currently considered branch and bound subtree.
The algorithm is summarized below.
Add the initial ILP to $L$, the list of active problems
Set $x^* = \text{null}$ and $v^* = -\infty$
while $L$ is not empty
Select and remove (de-queue) a proble
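The listing above is cut off, but its structure is the standard branch-and-bound loop. Below is a minimal runnable sketch of that skeleton on a toy 0-1 knapsack (maximization), with the node's upper bound given by the fractional (LP) relaxation; the cut-generation step that distinguishes branch and cut is omitted, and the data are hypothetical:

```python
# Branch-and-bound skeleton for a 0-1 knapsack (maximization).
values, weights, capacity = [60, 100, 120], [10, 20, 30], 50

def lp_bound(fixed, free_cap):
    """Fractional-knapsack (LP relaxation) bound over the unfixed items."""
    bound = sum(values[i] for i in fixed if fixed[i])
    items = sorted((i for i in range(len(values)) if i not in fixed),
                   key=lambda i: values[i] / weights[i], reverse=True)
    for i in items:
        take = min(1.0, free_cap / weights[i])
        bound += take * values[i]
        free_cap -= take * weights[i]
        if free_cap <= 0:
            break
    return bound

best = 0
stack = [{}]                       # list of active subproblems (fixings)
while stack:
    fixed = stack.pop()
    used = sum(weights[i] for i in fixed if fixed[i])
    if used > capacity:
        continue                   # infeasible node
    if lp_bound(fixed, capacity - used) <= best:
        continue                   # pruned: upper bound below incumbent
    if len(fixed) == len(values):  # all variables integral: new incumbent
        best = max(best, sum(values[i] for i in fixed if fixed[i]))
        continue
    i = len(fixed)                 # branch on the next undecided variable
    stack.append({**fixed, i: 1})
    stack.append({**fixed, i: 0})
print(best)                        # 220 (taking the second and third items)
```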
https://en.wikipedia.org/wiki/Indian%20National%20Mathematical%20Olympiad | The Indian National Mathematical Olympiad (INMO) is a high school mathematics competition held annually in India since 1989. It is the third tier in the Indian team selection procedure for the International Mathematical Olympiad and is conducted by the Homi Bhabha Centre for Science Education (HBCSE) under the aegis of the National Board of Higher Mathematics (NBHM).
The Mathematical Olympiad Program is a five stage process conducted under the aegis of National Board for Higher Mathematics (NBHM). The first stage PRMO is conducted by the Mathematics Teachers’ Association (India). All the remaining stages are organized by Homi Bhabha Centre for Science Education (HBCSE).
Eligibility and participant selection process
The INMO, conducted by the MO Cell, is held on the third Sunday of January at 30 centers across the country. Prospective candidates first need to write the Pre-Regional Mathematical Olympiad (known as PRMO or Pre-RMO) and then the Regional Mathematical Olympiad (RMO) of their respective state or region. Around thirty students are selected from each region; in all, the best-performing students from the RMO (approximately 900) qualify for the second stage, the INMO.
Structure of the examination
The Indian National Mathematics Olympiad is the national level Olympiad which is conducted to select students for the International Mathematical Olympiad Training Camp, which is further conducted to select the Indian team for the International Mathematical Olympiad. It is similar to the USAMO conducted in the USA. The exam structure varies from year to year. From 2024 onwards, the INMO consists of 6 problems to be solved over a span of 4.5 hours. The topics asked are generally those taught at the high school level, except calculus. The difficulty of the problems tends to be higher than what is done in schools, with a strong focus on the application of concepts. The topics generally covered are number theory, geometry, combinatorics and algebra.
Further stag |
https://en.wikipedia.org/wiki/Cholinergic%20crisis | A cholinergic crisis is an over-stimulation at a neuromuscular junction due to an excess of acetylcholine (ACh), as a result of the inactivity of the AChE enzyme, which normally breaks down acetylcholine.
Symptoms and diagnosis
As a result of cholinergic crisis, the muscles stop responding to the high synaptic levels of ACh, leading to flaccid paralysis, respiratory failure, and other signs and symptoms reminiscent of organophosphate poisoning. Other symptoms include increased sweating, salivation, bronchial secretions along with miosis (constricted pupils).
This crisis may be masked by the concomitant use of atropine along with cholinesterase inhibitor medication in order to prevent side effects. Flaccid paralysis resulting from cholinergic crisis can be distinguished from myasthenia gravis by the use of the drug edrophonium (Tensilon), as it only worsens the paralysis caused by cholinergic crisis but strengthens the muscle response in the case of myasthenia gravis. (Edrophonium is a cholinesterase inhibitor, hence increases the concentration of acetylcholine present).
Some of the symptoms of increased cholinergic stimulation include:
Salivation: stimulation of the salivary glands
Lacrimation: stimulation of the lacrimal glands (tearing)
Urination: relaxation of the internal sphincter muscle of urethra, and contraction of the detrusor muscles
Defecation
Gastrointestinal distress: Smooth muscle tone changes causing gastrointestinal problems, including cramping
Emesis: Vomiting
Miosis: constriction of the pupils of the eye via stimulation of the pupillary constrictor muscles
Muscle spasm: stimulation of skeletal muscle (due to nicotinic acetylcholine receptor stimulation)
Cause
Cholinergic crisis, sometimes known by the mnemonic "SLUDGE syndrome" (Salivation, Lacrimation, Urination, Defecation, Gastrointestinal distress and Emesis), can be a consequence of:
Contamination with - or excessive exposure to - certain chemicals including:
nerve agents, |
https://en.wikipedia.org/wiki/Coriolis%20frequency | The Coriolis frequency ƒ, also called the Coriolis parameter or Coriolis coefficient, is equal to twice the rotation rate Ω of the Earth multiplied by the sine of the latitude $\varphi$: $f = 2\Omega \sin\varphi$.
The rotation rate of the Earth (Ω = 7.2921 × 10−5 rad/s) can be calculated as 2π / T radians per second, where T is the rotation period of the Earth, which is one sidereal day (23 h 56 min 4.1 s). In the midlatitudes, the typical value for $f$ is about 10−4 rad/s. Inertial oscillations on the surface of the Earth have this frequency. These oscillations are the result of the Coriolis effect.
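The computation is a one-liner; a minimal sketch:

```python
import math

OMEGA = 7.2921e-5    # Earth's rotation rate, rad/s (one sidereal day)

def coriolis_parameter(latitude_deg: float) -> float:
    """f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(latitude_deg))

print(coriolis_parameter(45.0))   # ~1.03e-4 rad/s, a typical midlatitude value
```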
Explanation
Consider a body (for example a fixed volume of atmosphere) moving along at a given latitude $\varphi$ with velocity $\boldsymbol{v}$ in the Earth's rotating reference frame. In the local reference frame of the body, the vertical direction is parallel to the radial vector pointing from the center of the Earth to the location of the body, and the horizontal direction is perpendicular to this vertical direction and in the meridional direction. The Coriolis force (proportional to $2\boldsymbol{\Omega} \times \boldsymbol{v}$), however, is perpendicular to the plane containing both the Earth's angular velocity vector $\boldsymbol{\Omega}$ (where $|\boldsymbol{\Omega}| = \Omega$) and the body's own velocity in the rotating reference frame $\boldsymbol{v}$. Thus, the Coriolis force is always at an angle with the local vertical direction. Its local horizontal component is $-f\,\hat{\boldsymbol{k}} \times \boldsymbol{v}$, where $\hat{\boldsymbol{k}}$ is the local vertical unit vector. This force acts to move the body along longitudes or in the meridional directions.
Equilibrium
Suppose the body is moving with a velocity $v$ such that the centripetal and Coriolis (due to $f$) forces on it are balanced. This gives
$$\frac{v^2}{r} = f v,$$
where $r$ is the radius of curvature of the path of the object. Replacing $v = \omega r$, where $\omega$ is the angular frequency of the motion, we obtain $\omega = f$, with $f = 2\Omega \sin\varphi$ and $\Omega$ the magnitude of the spin rate of the Earth.
Thus the Coriolis parameter, $f$, is the angular velocity or frequency required to maintain a body at a fixed circle of latitude or zonal region. If the Coriolis parameter is large, the effect of the Earth's rotation on the body is significant since it will need a larger an
https://en.wikipedia.org/wiki/Distribution%20list | A distribution list is a feature of email client programs that allows a user to maintain a list of email addresses and send messages to all of them at once. This can be referred to as an electronic mailshot. Sending mail using a distribution list differs from an electronic mailing list or the email option found in an Internet forum as it is usually for one-way traffic and not for coordinating a discussion. A distribution list is an email equivalent of a postal mailing list. It can also be called a "distro".
https://en.wikipedia.org/wiki/Automatically%20switched%20optical%20network | Automatically Switched Optical Network (ASON) is a concept for the evolution of transport networks which allows for dynamic policy-driven control of an optical or SDH network based on signaling between a user and components of the network. Its aim is to automate the resource and connection management within the network. The IETF defines ASON as an alternative/supplement to NMS based connection management.
The need for ASON
In an optical network without ASON, whenever a user requires more bandwidth, there is a request for a new connection from the user to the service provider. The service provider must then manually plan and configure the route in the network. This is not only time consuming, but also wastes bandwidth if the user sparingly uses the connection. Bandwidth is increasingly becoming a precious resource and expectations from future optical networks are that they should be able to efficiently handle resources as quickly as possible. ASON fulfills some of the requirements of optical networks such as:
Fast and automatic end-to-end provisioning
Fast and efficient re-routing
Support of different clients, but optimized for IP
Dynamic set up of connections
Support of optical virtual private networks (OVPNs)
Support of different levels of quality of service
(These requirements are not restricted to optical networks and can be applied to any transport network, including SDH Networks.)
Logical architecture of an ASON
The logical architecture of an ASON can be divided into three planes:
Transport plane
Control plane
Management plane
The Transport Plane contains a number of switches (optical or otherwise) responsible for transporting user data via connections. These switches are connected to each other via PI (Physical Interface).
The Control Plane is responsible for the actual resource and connection management within an ASON network. It consists of a series of OCC (Optical Connection Controllers), interconnected via NNIs (Network to Network Interfa |
https://en.wikipedia.org/wiki/Microfabrication | Microfabrication is the process of fabricating miniature structures of micrometre scales and smaller. Historically, the earliest microfabrication processes were used for integrated circuit fabrication, also known as "semiconductor manufacturing" or "semiconductor device fabrication". In the last two decades microelectromechanical systems (MEMS), microsystems (European usage), micromachines (Japanese terminology) and their subfields, microfluidics/lab-on-a-chip, optical MEMS (also called MOEMS), RF MEMS, PowerMEMS, BioMEMS and their extension into nanoscale (for example NEMS, for nano electro mechanical systems) have re-used, adapted or extended microfabrication methods. Flat-panel displays and solar cells are also using similar techniques.
Miniaturization of various devices presents challenges in many areas of science and engineering: physics, chemistry, materials science, computer science, ultra-precision engineering, fabrication processes, and equipment design. It is also giving rise to various kinds of interdisciplinary research. The major concepts and principles of microfabrication are microlithography, doping, thin films, etching, bonding, and polishing.
Fields of use
Microfabricated devices include:
integrated circuits (“microchips”) (see semiconductor manufacturing)
microelectromechanical systems (MEMS) and microoptoelectromechanical systems (MOEMS)
microfluidic devices (ink jet print heads)
solar cells
flat panel displays (see AMLCD and thin-film transistors)
sensors (microsensors) (biosensors, nanosensors)
power MEMS, fuel cells, energy harvesters/scavengers
Origins
Microfabrication technologies originate from the microelectronics industry, and the devices are usually made on silicon wafers even though glass, plastics and many other substrates are in use. Micromachining, semiconductor processing, microelectronic fabrication, semiconductor fabrication, MEMS fabrication and integrated circuit technology are terms used instead of microfabrication,
https://en.wikipedia.org/wiki/Secondary%20electrons | Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary".
Applications
Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM.
For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence.
See also
Delta ray
Everhart-Thornley detector |
https://en.wikipedia.org/wiki/Farnesol | Farnesol is a natural 15-carbon organic compound which is an acyclic sesquiterpene alcohol. Under standard conditions, it is a colorless liquid. It is hydrophobic, and thus insoluble in water, but miscible with oils.
Farnesol is produced from 5-carbon isoprene compounds in both plants and animals. Phosphate-activated derivatives of farnesol are the building blocks of possibly all acyclic sesquiterpenoids. These compounds are doubled to form 30-carbon squalene, which is the precursor for steroids in plants, animals, and fungi. Farnesol and its derivatives are important starting compounds for natural and artificial organic synthesis.
Uses
Farnesol is present in many essential oils such as citronella, neroli, cyclamen, lemon grass, tuberose, rose, musk, balsam, and tolu. It is used in perfumery to emphasize the odors of sweet, floral perfumes. It enhances perfume scent by acting as a co-solvent that regulates the volatility of the odorants. It is especially used in lilac perfumes.
Farnesol is a natural pesticide for mites and is a pheromone for several other insects.
In a 1994 report released by five top cigarette companies, farnesol was listed as one of 599 additives to cigarettes. It is a flavoring ingredient.
Natural source and synthesis
Farnesol is produced from isoprene compounds in both plants and animals. When geranyl pyrophosphate reacts with isopentenyl pyrophosphate, the result is the 15-carbon farnesyl pyrophosphate, which is an intermediate in the biosynthesis of sesquiterpenes such as farnesene. Oxidation can then provide sesquiterpenoids such as farnesol.
In industry, farnesol can be synthesized from linalool.
History of the name
Farnesol is found in a flower extract with a long history of use in perfumery. The pure substance farnesol was named (c. 1900–1905) after the Farnese acacia tree (Vachellia farnesiana), since the flowers from the tree were the commercial source of the floral essence in which the chemical was identified. This particular |
https://en.wikipedia.org/wiki/Weyl%20curvature%20hypothesis | The Weyl curvature hypothesis, which arises in the application of Albert Einstein's general theory of relativity to physical cosmology, was introduced by the British mathematician and theoretical physicist Roger Penrose in an article in 1979 in an attempt to provide explanations for two of the most fundamental issues in physics. On the one hand, one would like to account for a universe which on its largest observational scales appears remarkably spatially homogeneous and isotropic in its physical properties (and so can be described by a simple Friedmann–Lemaître model); on the other hand, there is the deep question on the origin of the second law of thermodynamics.
Penrose suggests that the resolution of both of these problems is rooted in a concept of the entropy content of gravitational fields. Near the initial cosmological singularity (the Big Bang), he proposes, the entropy content of the cosmological gravitational field was extremely low (compared to what it theoretically could have been), and started rising monotonically thereafter. This process manifested itself e.g. in the formation of structure through the clumping of matter to form galaxies and clusters of galaxies. Penrose associates the initial low entropy content of the universe or the past hypothesis with the effective vanishing of the Weyl curvature tensor of the cosmological gravitational field near the Big Bang. From then on, he proposes, its dynamical influence gradually increased, thus being responsible for an overall increase in the amount of entropy in the universe, and so inducing a cosmological arrow of time.
The Weyl curvature represents such gravitational effects as tidal fields and gravitational radiation. Mathematical treatments of Penrose's ideas on the Weyl curvature hypothesis have been given in the context of isotropic initial cosmological singularities. Penrose views the Weyl curvature hypothesis as a physically more credible alternative to cosmic inflation (a h
https://en.wikipedia.org/wiki/ARINC%20429 | ARINC 429, the "Mark 33 Digital Information Transfer System (DITS)," is the ARINC technical standard for the predominant avionics data bus used on most higher-end commercial and transport aircraft. It defines the physical and electrical interfaces of a two-wire data bus and a data protocol to support an aircraft's avionics local area network.
Technical description
Medium and signaling
ARINC 429 is a data transfer standard for aircraft avionics. It uses a self-clocking, self-synchronizing data bus protocol (Tx and Rx are on separate ports). The physical connection wires are twisted pairs carrying balanced differential signaling. Data words are 32 bits in length and most messages consist of a single data word. Messages are transmitted at either 12.5 or 100 kbit/s to other system elements that are monitoring the bus messages. The transmitter constantly transmits either 32-bit data words or the NULL state (0 Volts). A single wire pair is limited to one transmitter and no more than 20 receivers. The protocol allows for self-clocking at the receiver end, thus eliminating the need to transmit clocking data. ARINC 429 is an alternative to MIL-STD-1553.
Bit numbering, transmission order, and bit significance
The ARINC 429 unit of transmission is a fixed-length 32-bit frame, which the standard refers to as a 'word'. The bits within an ARINC 429 word are serially identified from Bit Number 1 to Bit Number 32 or simply Bit 1 to Bit 32. The fields and data structures of the ARINC 429 word are defined in terms of this numbering.
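A sketch of assembling such a word in software, using the conventional field layout (label in bits 1–8, SDI in bits 9–10, data in bits 11–29, SSM in bits 30–31, odd parity in bit 32); mapping Bit 1 to the least significant bit of an integer is a representational choice made here, not something mandated by the standard:

```python
def build_word(label: int, sdi: int, data: int, ssm: int) -> int:
    """Assemble a 32-bit ARINC 429 word using the common field layout:
    label in bits 1-8, SDI in bits 9-10, data in bits 11-29, SSM in
    bits 30-31, odd parity in bit 32 (Bit 1 = least significant bit)."""
    word = (label & 0xFF)
    word |= (sdi & 0x3) << 8
    word |= (data & 0x7FFFF) << 10
    word |= (ssm & 0x3) << 29
    if bin(word).count("1") % 2 == 0:   # make the total number of 1s odd
        word |= 1 << 31
    return word

w = build_word(label=0o205, sdi=0, data=12345, ssm=0b11)
assert bin(w).count("1") % 2 == 1       # odd parity holds
```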
While it is common to illustrate serial protocol frames progressing in time from right to left, a reversed ordering is commonly practiced within the ARINC standard. Even though ARINC 429 word transmission begins with Bit 1 and ends with Bit 32, it is common to diagram and describe ARINC 429 words in the order from Bit 32 to Bit 1.
In simplest terms, while the transmission order of bits (from the first transmitted bit to the last transmitted bit) always runs from Bit 1 to Bit 32, the significance of the bits differs by field: Bit 1, the first bit transmitted, is conventionally the most significant bit of the label, whereas the other fields are transmitted least significant bit first.
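As a hedged illustration of this numbering, here is a minimal Python sketch. The field layout it assumes (label in Bits 1–8, SDI in Bits 9–10, data in Bits 11–29, SSM in Bits 30–31, parity in Bit 32) and odd parity are the usual ARINC 429 conventions rather than something defined in the excerpt above, and the word is assumed to be held with Bit 1 at machine bit position 0:

```python
# Sketch of ARINC 429 bit numbering. Assumes the word is held in a Python
# int with Bit 1 at machine bit position 0, and the conventional layout:
# label = Bits 1-8, SDI = Bits 9-10, data = Bits 11-29, SSM = Bits 30-31,
# parity = Bit 32.

def bit(word: int, n: int) -> int:
    """Return Bit n (ARINC numbering, 1-based) of a 32-bit word."""
    return (word >> (n - 1)) & 1

def fields(word: int) -> dict:
    label_raw = word & 0xFF               # Bits 1-8, transmitted first
    # Bit 1 is conventionally the label's most significant bit, so the
    # customary (octal) label value reads the eight bits in reverse:
    label = int(f"{label_raw:08b}"[::-1], 2)
    return {
        "label":  label,
        "sdi":    (word >> 8) & 0x3,      # Bits 9-10
        "data":   (word >> 10) & 0x7FFFF, # Bits 11-29
        "ssm":    (word >> 29) & 0x3,     # Bits 30-31
        "parity": bit(word, 32),          # Bit 32, transmitted last
    }

def parity_ok(word: int) -> bool:
    """ARINC 429 uses odd parity over all 32 bits."""
    return bin(word).count("1") % 2 == 1
```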
https://en.wikipedia.org/wiki/Post-transcriptional%20modification | Post-transcriptional modification or co-transcriptional modification is a set of biological processes common to most eukaryotic cells by which an RNA primary transcript is chemically altered following transcription from a gene to produce a mature, functional RNA molecule that can then leave the nucleus and perform any of a variety of different functions in the cell. There are many types of post-transcriptional modifications achieved through a diverse class of molecular mechanisms.
One example is the conversion of precursor messenger RNA transcripts into mature messenger RNA that is subsequently capable of being translated into protein. This process includes three major steps that significantly modify the chemical structure of the RNA molecule: the addition of a 5' cap, the addition of a 3' polyadenylated tail, and RNA splicing. Such processing is vital for the correct translation of eukaryotic genomes because the initial precursor mRNA produced by transcription often contains both exons (coding sequences) and introns (non-coding sequences); splicing removes the introns and links the exons directly, while the cap and tail facilitate the transport of the mRNA to a ribosome and protect it from molecular degradation.
Post-transcriptional modifications may also occur during the processing of other transcripts which ultimately become transfer RNA, ribosomal RNA, or any of the other types of RNA used by the cell.
mRNA processing
5' processing
Capping
Capping of the pre-mRNA involves the addition of 7-methylguanosine (m7G) to the 5' end. To achieve this, the terminal 5' phosphate must first be removed, which is done by the enzyme RNA triphosphatase, leaving a diphosphate 5' end. The enzyme guanosyl transferase then catalyses the capping reaction: the diphosphate 5' end attacks the alpha phosphorus atom of a GTP molecule, adding the guanine residue in a 5'-5' triphosphate link. Finally, the enzyme (guanine-N7-)-methyltransferase ("cap MTase") transfers a methyl group from S-adenosyl methionine to the guanine ring, completing the m7G cap.
https://en.wikipedia.org/wiki/Ultramicrotomy | Ultramicrotomy is a method for cutting specimens into extremely thin slices, called ultra-thin sections, that can be studied and documented at different magnifications in a transmission electron microscope (TEM). It is used mostly for biological specimens, but sections of plastics and soft metals can also be prepared. Sections must be very thin because the 50 to 125 kV electrons of the standard electron microscope cannot pass through biological material much thicker than 150 nm. For best resolutions, sections should be from 30 to 60 nm. This is roughly equivalent to splitting a 0.1 mm-thick human hair into 2,000 slices along its diameter, or cutting a single red blood cell into 100 slices.
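To check the arithmetic: 0.1 mm is 100,000 nm, and 100,000 nm ÷ 2,000 slices = 50 nm per slice; likewise a red blood cell, roughly 7.5 µm across, cut into 100 slices gives sections about 75 nm thick, both on the order of the section thicknesses quoted above.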
Ultramicrotomy process
Ultra-thin sections of specimens are cut using a specialized instrument called an "ultramicrotome". The ultramicrotome is fitted with either a diamond knife, for most biological ultra-thin sectioning, or a glass knife, often used for initial cuts. There are numerous other pieces of equipment involved in the ultramicrotomy process. Before selecting an area of the specimen block to be ultra-thin sectioned, the technician examines semithin or "thick" sections, ranging from 0.5 to 2 μm. These thick sections are also known as survey sections and are viewed under a light microscope to determine whether the right area of the specimen is in position for thin sectioning. "Ultra-thin" sections, from 50 to 100 nm thick, can then be viewed in the TEM.
Tissue sections obtained by ultramicrotomy are compressed by the cutting force of the knife. In addition, interference microscopy of the cut surface of the blocks reveals that the sections are often not flat. With Epon or Vestopal as embedding medium the ridges and valleys usually do not exceed 0.5 μm in height, i.e., 5–10 times the thickness of ordinary sections (1).
A small sample is taken from the specimen to be investigated. Specimens may be from biological matter, like animal or plant tissue, or from inorganic material.
https://en.wikipedia.org/wiki/John%20Robert%20Anderson%20%28psychologist%29 | John Robert Anderson (born August 27, 1947) is a Canadian-born American psychologist. He is currently professor of Psychology and Computer Science at Carnegie Mellon University.
Biography
Anderson obtained a B.A. from the University of British Columbia in 1968, and a Ph.D. in Psychology from Stanford in 1972. He became an assistant professor at Yale in 1972. He moved to the University of Michigan in 1973 as a Junior Fellow (and married Lynne Reder, who was a graduate student there) and returned to Yale in 1976 with tenure. He was promoted to full professor at Yale in 1977 but moved to Carnegie Mellon University in 1978. From 1988 to 1989, he served as president of the Cognitive Science Society. He was elected to the American Academy of Arts and Sciences and the National Academy of Sciences and has received a series of awards:
1968: Governor General's Gold Medal: Graduated as top student in Arts and Sciences at University of British Columbia
1978: Early Career Award of the American Psychological Association
1989–1994: Research Scientist Award, NIMH
1994: American Psychological Association's Distinguished Scientific Career Award
1999: Elected to the National Academy of Sciences
1999: Fellow of American Academy of Arts and Sciences
2004: The David E. Rumelhart Prize, for Contributions to the Formal Analysis of Human Cognition
2005: Howard Crosby Warren Medal for outstanding achievement in Experimental Psychology in the United States and Canada, Society of Experimental Psychologists
2006: Inaugural Dr. A.H. Heineken Prize for Cognitive Science awarded by the Royal Netherlands Academy of Arts and Sciences
2011: Benjamin Franklin Medal in Computer and Cognitive Science, Franklin Institute "for the development of the first large-scale computational theory of the process by which humans perceive, learn and reason, and its application to computer tutoring systems."
2016: Atkinson Prize from the National Academy of Sciences.
Research
In cognitive psychology, John Anderson is best known for his ACT-R theory of cognition and its computational implementations.
https://en.wikipedia.org/wiki/Toda%20lattice | The Toda lattice, introduced by Morikazu Toda in 1967, is a simple model for a one-dimensional crystal in solid state physics. It is famous because it is one of the earliest examples of a non-linear completely integrable system.
It is given by a chain of particles with nearest neighbor interaction, described by the Hamiltonian
$$H(p,q) = \sum_{n \in \mathbb{Z}} \left( \frac{p(n,t)^2}{2} + V\big(q(n+1,t) - q(n,t)\big) \right)$$
and the equations of motion
$$\frac{d}{dt} p(n,t) = -\frac{\partial H}{\partial q(n,t)} = e^{-(q(n,t)-q(n-1,t))} - e^{-(q(n+1,t)-q(n,t))}, \qquad \frac{d}{dt} q(n,t) = \frac{\partial H}{\partial p(n,t)} = p(n,t),$$
where $q(n,t)$ is the displacement of the $n$-th particle from its equilibrium position, $p(n,t)$ is its momentum (mass $m = 1$), and the Toda potential is $V(r) = e^{-r} + r - 1$.
Soliton solutions
Soliton solutions are solitary waves spreading in time with no change to their shape and size and interacting with each other in a particle-like way. The general N-soliton solution of the equation is
$$q_N(n,t) = q_+ + \log \frac{\det\big(\mathbb{I} + C_N(n,t)\big)}{\det\big(\mathbb{I} + C_N(n-1,t)\big)},$$
where
$$C_N(n,t) = \left( \frac{\sqrt{\gamma_i \gamma_j}}{1 - e^{-(\kappa_i + \kappa_j)}} \, e^{-(\kappa_i + \kappa_j) n - (\sigma_i \sinh(\kappa_i) + \sigma_j \sinh(\kappa_j)) t} \right)_{1 \le i,j \le N},$$
with $\kappa_j, \gamma_j > 0$ and $\sigma_j \in \{\pm 1\}$.
Integrability
The Toda lattice is a prototypical example of a completely integrable system. To see this one uses Flaschka's variables
$$a(n,t) = \tfrac{1}{2} e^{-(q(n+1,t)-q(n,t))/2}, \qquad b(n,t) = -\tfrac{1}{2} p(n,t),$$
such that the Toda lattice reads
$$\dot a(n,t) = a(n,t)\big(b(n+1,t) - b(n,t)\big), \qquad \dot b(n,t) = 2\big(a(n,t)^2 - a(n-1,t)^2\big).$$
To show that the system is completely integrable, it suffices to find a Lax pair, that is, two operators L(t) and P(t) in the Hilbert space of square summable sequences such that the Lax equation
$$\frac{d}{dt} L(t) = [P(t), L(t)]$$
(where [L, P] = LP − PL is the Lie commutator of the two operators) is equivalent to the time derivative of Flaschka's variables. The choice of operators acting on a sequence $f$,
$$L(t) f(n) = a(n,t) f(n+1) + a(n-1,t) f(n-1) + b(n,t) f(n), \qquad P(t) f(n) = a(n,t) f(n+1) - a(n-1,t) f(n-1),$$
satisfies the Lax equation and implies that the operators L(t) for different t are unitarily equivalent.
The Lax equation has the consequence that the eigenvalues of L(t) are invariant in time. These eigenvalues constitute independent integrals of motion; therefore the Toda lattice is completely integrable.
In particular, the Toda lattice can be solved by virtue of the inverse scattering transform for the Jacobi operator L. The main result implies that arbitrary (sufficiently fast) decaying initial conditions asymptotically for large t split into a sum of solitons and a decaying dispersive part.
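The isospectral property lends itself to a quick numerical illustration. The sketch below is not from the source: it assumes a periodic chain (indices taken mod N, so the Lax operator becomes a finite symmetric matrix), uses NumPy/SciPy, and the names a, b, rhs, and lax_matrix are chosen here for illustration. It integrates the flow in Flaschka's variables and checks that the eigenvalues of L stay fixed:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 8
rng = np.random.default_rng(0)
a0 = 0.5 * np.exp(rng.normal(size=N))  # a(n) > 0 by construction
b0 = rng.normal(size=N)

def rhs(t, y):
    a, b = y[:N], y[N:]
    adot = a * (np.roll(b, -1) - b)         # a' = a (b_{n+1} - b_n)
    bdot = 2.0 * (a**2 - np.roll(a, 1)**2)  # b' = 2 (a_n^2 - a_{n-1}^2)
    return np.concatenate([adot, bdot])

def lax_matrix(a, b):
    """Periodic symmetric Jacobi (Lax) matrix built from Flaschka variables."""
    L = np.diag(b.astype(float))
    for n in range(N):
        L[n, (n + 1) % N] += a[n]
        L[(n + 1) % N, n] += a[n]
    return L

sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([a0, b0]),
                rtol=1e-10, atol=1e-12)
aT, bT = sol.y[:N, -1], sol.y[N:, -1]
ev0 = np.sort(np.linalg.eigvalsh(lax_matrix(a0, b0)))
evT = np.sort(np.linalg.eigvalsh(lax_matrix(aT, bT)))
print(np.max(np.abs(evT - ev0)))  # tiny (integration error): spectrum conserved
```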
See also
Lax pair
Lie bialgebra
Poisson–Lie group |
https://en.wikipedia.org/wiki/Offset%20%28computer%20science%29 | In computer science, an offset within an array or other data structure object is an integer indicating the distance (displacement) between the beginning of the object and a given element or point, presumably within the same object. The concept of a distance is valid only if all elements of the object are of the same size (typically given in bytes or words).
For example, if A is an array of characters containing "abcdef", the fourth element containing the character 'd' has an offset of three from the start of A.
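A minimal Python rendering of this example:

```python
A = "abcdef"
offset = 3               # distance from the start of A, counted in elements
assert A[offset] == "d"  # the fourth element sits at offset three
```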
In assembly language
In computer engineering and low-level programming (such as assembly language), an offset usually denotes the number of address locations added to a base address in order to get to a specific absolute address. In this (original) meaning of offset, only the basic address unit, usually the 8-bit byte, is used to specify the offset's size. In this context an offset is sometimes called a relative address.
In IBM System/360 instructions, a 12-bit offset embedded within certain instructions provided a range of 0 to 4095 bytes. For example, within an unconditional branch instruction (X'47F0Fxxx'), the xxx 12-bit hexadecimal offset provided the byte offset from the base register (15) to branch to. An odd offset would cause a program check (unless the base register itself also contained an odd address), since instructions had to be aligned on half-word boundaries to execute without a program or hardware interrupt.
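A sketch of how such an instruction word unpacks, assuming the standard System/360 RX layout (8-bit opcode, 4-bit mask, 4-bit index, 4-bit base, 12-bit displacement); the displacement 0x867 below is illustrative, not from the source:

```python
word = 0x47F0F867             # X'47F0F867', an unconditional branch
opcode = (word >> 24) & 0xFF  # 0x47: BC (branch on condition)
mask   = (word >> 20) & 0xF   # 0xF: all conditions, i.e. branch always
index  = (word >> 16) & 0xF   # 0x0: no index register
base   = (word >> 12) & 0xF   # 0xF: base register 15
disp   = word & 0xFFF         # 0x867: byte offset (0-4095) from the base
# Branch target = (contents of register 15) + disp; the target must fall on
# a half-word boundary, hence the program check on odd effective addresses.
```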
The previous example describes an indirect way to address a memory location in the format segment:offset. For example, assume we want to refer to memory location 0xF867. One way this can be accomplished is by first defining a segment with beginning address 0xF000 and then an offset of 0x0867, so that 0xF000 + 0x0867 = 0xF867. In the segment register the segment is stored shifted right by one hexadecimal digit (as 0x0F00), and the processor shifts it back left before adding the offset to form the final absolute memory address. Note that the same absolute address can be reached with many different segment:offset pairs.
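In code, the rule described here (the x86 real-mode convention that a segment value is shifted left one hexadecimal digit before the offset is added) looks like this; the alternative pair in the second assertion is illustrative:

```python
def physical(segment: int, offset: int) -> int:
    # Real-mode style: shift the segment by one hex digit (4 bits), add offset.
    return (segment << 4) + offset

assert physical(0x0F00, 0x0867) == 0xF867  # segment beginning at 0xF000
assert physical(0x0F80, 0x0067) == 0xF867  # a different pair, same address
```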
An offset is not always relative to the base address of the module.
https://en.wikipedia.org/wiki/SINIX | SINIX is a discontinued variant of the Unix operating system from Siemens Nixdorf Informationssysteme. SINIX supersedes SIRM OS and Pyramid Technology's DC/OSx. Following X/Open's acceptance that its requirements for the use of the UNIX trademark were met, version 5.44 and subsequent releases were published as Reliant UNIX by Fujitsu Siemens Computers.
Features
In some versions of SINIX (5.2x) the user could emulate the behaviour of a number of different versions of Unix (known as universes). These included System V.3, System III or BSD. Each universe had its own command set, libraries and header files.
Xenix-based SINIX
The original SINIX was a modified version of Xenix and ran on Intel 80186 processors. For some years Siemens used National Semiconductor NS32x32 CPUs (up to SINIX 5.2x) and Intel 80486 CPUs (SINIX 5.4x, non-MIPS) in their MX series.
System V-based SINIX
Later versions of SINIX based on System V were designed for the:
SNI RM-200, RM-300, RM-400 and RM-600 servers running on the MIPS processor (SINIX-N, SINIX-O, SINIX-P, SINIX-Y)
SNI PC-MX2, MX300-05/-10/-15/-30, Siemens MX500-75/-85 running NS320xx (SINIX-H)
PC-MXi, MX300-45 on the Intel X86 processor (SINIX-L)
SNI WX-200 and other IBM-compatible i386 PCs on the Intel 80386 and newer processors (SINIX-Z)
The last release under the SINIX name was version 5.43 in 1995.
Reliant UNIX
The last Reliant UNIX versions were registered as UNIX 95 compliant (XPG4 hard branding).
The last release of Reliant UNIX was version 5.45.
See also
BS2000
VM2000
External links
Siemens Business Services - SINIX patches and support
The SINIX operating system
Sven Mascheck, SINIX V5.20 Universes