https://en.wikipedia.org/wiki/PKCS%2011 | In cryptography, PKCS #11 is one of the Public-Key Cryptography Standards, and also refers to the programming interface to create and manipulate cryptographic tokens (a token where the secret is a cryptographic key).
Detail
The PKCS #11 standard defines a platform-independent API to cryptographic tokens, such as hardware security modules (HSM) and smart cards, and names the API itself "Cryptoki" (from "cryptographic token interface" and pronounced as "crypto-key", although "PKCS #11" is often used to refer to the API as well as the standard that defines it).
The API defines most commonly used cryptographic object types (RSA keys, X.509 certificates, DES/Triple DES keys, etc.) and all the functions needed to use, create/generate, modify and delete those objects.
Usage
Most commercial certificate authority (CA) software uses PKCS #11 to access the CA signing key or to enroll user certificates. Cross-platform software that needs to use smart cards and HSMs, such as Mozilla Firefox and OpenSSL (via an extension), uses PKCS #11. Software written for Microsoft Windows may use the platform-specific MS-CAPI API instead. Both Oracle Solaris and Red Hat Enterprise Linux also contain implementations for use by applications.
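As a minimal sketch of what application access through the API can look like, the following uses the third-party python-pkcs11 bindings; the module path, token label, and PIN are illustrative assumptions, not fixed values:

```python
import pkcs11

# Load a PKCS #11 module (path is an assumption; SoftHSM2 used for illustration)
lib = pkcs11.lib('/usr/lib/softhsm/libsofthsm2.so')
token = lib.get_token(token_label='DEMO')  # hypothetical token label

# Open an authenticated session and use the token to generate and use a key
with token.open(user_pin='1234') as session:
    key = session.generate_key(pkcs11.KeyType.AES, 256)  # AES-256 secret key
    iv = session.generate_random(128)                    # 128-bit IV
    ciphertext = key.encrypt(b'attack at dawn', mechanism_param=iv)
```

The same calls work unchanged against any vendor's module, which is the point of the standard.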
Relationship to KMIP
The Key Management Interoperability Protocol (KMIP) defines a wire protocol that has similar functionality to the PKCS#11 API.
The two standards were originally developed independently but are now both governed by an OASIS technical committee. It is the stated objective of both the PKCS#11 and KMIP committees to align the standards where practicable. For example, the PKCS#11 Sensitive and Extractable attributes are being added to KMIP version 1.4. There is considerable overlap between members of the two technical committees.
History
The PKCS#11 standard originated from RSA Security along with its other PKCS standards in 1994. In 2013, RSA contributed the latest draft revision of th |
https://en.wikipedia.org/wiki/Elementary%20reaction | An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s):
A → products
At constant temperature, the rate of such a reaction is proportional to the concentration of the species A:
rate = k[A]
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s):
A + B → products
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B:
rate = k[A][B]
The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
According to collision theory, the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred to as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis–Menten approximations.
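As a small numeric illustration of the rate laws above (all values are assumed for the example):

```python
# Elementary rate laws: unimolecular rate = k[A]; bimolecular rate = k[A][B].
k_uni = 0.5                   # s^-1 (assumed)
k_bi = 2.5e-3                 # L mol^-1 s^-1 (assumed)
conc_A, conc_B = 0.10, 0.05   # mol/L (assumed)

print(k_uni * conc_A)            # unimolecular: 5.0e-02 mol L^-1 s^-1
print(k_bi * conc_A * conc_B)    # bimolecular: 1.25e-05 mol L^-1 s^-1
```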
Notes
Chemical kinetics
Phy |
https://en.wikipedia.org/wiki/Computation%20history | In computer science, a computation history is a sequence of steps taken by an abstract machine in the process of computing its result. Computation histories are frequently used in proofs about the capabilities of certain machines, and particularly about the undecidability of various formal languages.
Formally, a computation history is a (normally finite) sequence of configurations of a formal automaton. Each configuration fully describes the status of the machine at a particular point. To be valid, certain conditions must hold:
the first configuration must be a valid initial configuration of the automaton and
each transition between adjacent configurations must be valid according to the transition rules of the automaton.
In addition, to be complete, a computation history must be finite and
the final configuration must be a valid terminal configuration of the automaton.
The definitions of "valid initial configuration", "valid transition", and "valid terminal configuration" vary for different kinds of formal machines.
A deterministic automaton has exactly one computation history for a given initial configuration, though the history may be infinite and therefore incomplete.
Finite State Machines
For a finite state machine M, a configuration is simply the current state of the machine, together with the remaining input. The first configuration must be the initial state of M and the complete input. A transition from a configuration (S, I) to a configuration (T, I′) is allowed if I = aI′ for some input symbol a and if M has a transition from S to T on input a. The final configuration must have the empty string as its remaining input; whether M has accepted or rejected the input depends on whether the final state is an accepting state.
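A minimal sketch of this definition for a hypothetical two-state automaton, recording each (state, remaining input) configuration:

```python
# Hypothetical DFA: transitions (state, symbol) -> state; q0 is initial and accepting.
delta = {("q0", "a"): "q1", ("q1", "b"): "q0"}
start, accepting = "q0", {"q0"}

def computation_history(word):
    state, rest = start, word
    configs = [(state, rest)]            # first configuration: initial state + full input
    while rest:
        state = delta[(state, rest[0])]  # KeyError here would mean no valid transition
        rest = rest[1:]                  # remaining input shrinks by one symbol
        configs.append((state, rest))
    return configs, state in accepting   # final configuration has empty remaining input

print(computation_history("abab"))
# ([('q0', 'abab'), ('q1', 'bab'), ('q0', 'ab'), ('q1', 'b'), ('q0', '')], True)
```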
Turing Machines
Computation histories are more commonly used in reference to Turing machines. The configuration of a single-tape Turing machine consists of the contents of the tape, the position of the read/write head on the tape, and the current stat |
https://en.wikipedia.org/wiki/Neuroanthropology | Neuroanthropology is the study of the relationship between culture and the brain. This field of study emerged from a 2008 conference of the American Anthropological Association. It is based on the premise that lived experience leaves identifiable patterns in brain structure, which then feed back into cultural expression. The exact mechanisms are so far ill defined and remain speculative.
Overview
Neuroanthropology explores how the brain gives rise to culture, how culture influences brain development, structure and function, and the pathways followed by the co-evolution of brain and culture. Moreover, neuroanthropologists consider how new findings in the brain sciences help us understand the interactive effects of culture and biology on human development and behavior. In one way or another, neuroanthropologists ground their research and explanations in how the human brain develops, how it is structured and how it functions within the genetic and cultural limits of its biology (see Biogenetic structuralism and related website).
"Neuroanthropology" is a broad term, intended to embrace all dimensions of human neural activity, including emotion, perception, cognition, motor control, skill acquisition, and a range of other issues. Interests include the evolution of the hominid brain, cultural development and the brain, the biochemistry of the brain and alternative states of consciousness, human universals, how culture influences perception, how the brain structures experience, and so forth. In comparison to previous ways of doing psychological or cognitive anthropology, it remains open and heterogeneous, recognizing that not all brain systems function in the same way, so culture will not take hold of them in identical fashion.
Anthropology and neuroscience
Cultural neuroscience is another area that focuses on society's impact on the brain, but with a different focus. For example, studies in cultural neuroscience focus on differences in brain development across cultur |
https://en.wikipedia.org/wiki/North%20Pacific%20Exploring%20and%20Surveying%20Expedition | The North Pacific Exploring and Surveying Expedition, also known as the Rodgers-Ringgold Expedition was a United States scientific and exploring project from 1853 to 1856.
Commander Cadwalader Ringgold (1802–1867) led the expedition until he was relieved of command in Hong Kong by a commission convened by Commodore Matthew Perry. Lt. John Rodgers (1812–1882) then commanded the expedition until its conclusion.
The expedition under Ringgold
Ringgold sailed on USS Porpoise, a ship he had commanded during the U.S. Exploring Expedition years before. USS John Hancock, commanded by Lt. John Rodgers, and three other vessels, including USS Vincennes, were the other ships in the expedition.
Porpoise joined the squadron at Hampton Roads, and with it, stood out to sea 11 June 1853. After stopping at Funchal, Madeira Islands; Porto Praya; and Simonstown, False Bay; the expedition arrived in Batavia, Dutch East Indies, on the island of Java, on 12 December, and in China in March 1854.
Five months were devoted to surveying the waters surrounding the large islands off the coast of Southeast Asia. Early in May 1854, John Hancock, with Rodgers commanding, departed for Hong Kong, where she arrived 24 May. The squadron operated from that port as its base throughout the summer, surveying nearby coasts, islands and rivers. At this time China was plagued by rebellion and pirates endangering foreigners and threatening their property. The American ships helped protect American citizens and interests. While steaming up the Canton River, two armed boats from John Hancock were fired upon by rebel batteries, which Hancock's cannon promptly silenced.
From Hong Kong to the United States
In July 1854, Ringgold became sick with malaria and was sent home, according to at least one source. However, Nathaniel Philbrick, in his book "Sea of Glory" about the U.S. Exploring Expedition, writes that in the later expedition Ringgold "began to act strangely" once in China, keeping his ships in |
https://en.wikipedia.org/wiki/Anti%E2%80%93computer%20forensics | Anti–computer forensics or counter-forensics are techniques used to obstruct forensic analysis.
Definition
Anti-forensics has only recently been recognized as a legitimate field of study.
One of the more widely known and accepted definitions comes from Marc Rogers. One of the earliest detailed presentations of anti-forensics, in Phrack Magazine in 2002, defines anti-forensics as "the removal, or hiding, of evidence in an attempt to mitigate the effectiveness of a forensics investigation".
A more abbreviated definition is given by Scott Berinato in his article entitled, The Rise of Anti-Forensics. "Anti-forensics is more than technology. It is an approach to criminal hacking that can be summed up like this: Make it hard for them to find you and impossible for them to prove they found you." Neither author takes into account using anti-forensics methods to ensure the privacy of one's personal data.
Sub-categories
Anti-forensics methods are often broken down into several sub-categories to make classification of the various tools and techniques simpler. One of the more widely accepted subcategory breakdowns was developed by Dr. Marcus Rogers. He has proposed the following sub-categories: data hiding, artifact wiping, trail obfuscation and attacks against the CF (computer forensics) processes and tools. Direct attacks against forensic tools have also been called counter-forensics.
Purpose and goals
Within the field of digital forensics, there is much debate over the purpose and goals of anti-forensic methods. The conventional wisdom is that anti-forensic tools are purely malicious in intent and design. Others believe that these tools should be used to illustrate deficiencies in digital forensic procedures, digital forensic tools, and forensic examiner education. This sentiment was echoed at the 2005 Blackhat Conference by anti-forensic tool authors, James Foster and Vinnie Liu. They stated that by exposing these issues, forensic investigators will have to work h |
https://en.wikipedia.org/wiki/Scottish%20National%20Dictionary | The Scottish National Dictionary (SND) was published by the Scottish National Dictionary Association (SNDA) from 1931 to 1976 and documents the Modern (Lowland) Scots language. The original editor, William Grant, was the driving force behind the collection of Scots vocabulary. A wide range of sources were used by the editorial team in order to represent the full spectrum of Scottish vocabulary and cultural life.
Literary sources of words and phrases up to the mid-twentieth century were thoroughly investigated, as were historical records, both published and unpublished, of Parliament, Town Councils, Kirk Sessions and Presbyteries and Law Courts. More ephemeral sources such as domestic memoirs, household account books, diaries, letters and the like were also read for the dictionary, as well as a wide range of local and national newspapers and magazines, which often shed light on regional vocabulary and culture.
Perhaps because Scots has often been perceived as inappropriate for formal situations (including formal written text) during the period from 1700 to the present day, many words and expressions that were in regular everyday use did not appear in print. In order to redress this imbalance and fully appreciate the linguistic oral heritage of Scots, field-workers for the dictionary collected personal quotations across the country.
David Murison became editor of the dictionary in 1946, after William Grant's death. He greatly increased the number and range of written sources and expanded the coverage of oral material. He improved the layout and clarity of the entries, revealing the healthy position of modern Scots usage in spite of centuries of neglect. Murison was therefore instrumental in encouraging the study of modern Scots and fostering respect for it as a language. He was responsible for the completion of Volume III, and for overall control of Volumes IV to X.
In 1985, the one-volume Concise Scots Dictionary based on the SND and DOST was published (editor-in |
https://en.wikipedia.org/wiki/Point%20source | A point source is a single identifiable localised source of something. A point source has negligible extent, distinguishing it from other source geometries. Sources are called point sources because in mathematical modeling, these sources can usually be approximated as a mathematical point to simplify analysis.
The actual source need not be physically small, if its size is negligible relative to other length scales in the problem. For example, in astronomy, stars are routinely treated as point sources, even though they are in actuality much larger than the Earth.
In three dimensions, the density of something leaving a point source decreases in proportion to the inverse square of the distance from the source, if the distribution is isotropic, and there is no absorption or other loss.
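A small sketch of the inverse-square fall-off for an isotropic source (the power value is assumed for illustration):

```python
import math

P = 100.0                  # total emitted power in watts (assumed)
for r in (1.0, 2.0, 4.0):  # distances in metres
    I = P / (4.0 * math.pi * r ** 2)   # power per unit area on a sphere of radius r
    print(f"r = {r} m: I = {I:.3f} W/m^2")
# Doubling the distance cuts the intensity by a factor of four.
```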
Mathematics
In mathematics, a point source is a singularity from which flux or flow is emanating. Although singularities such as this do not exist in the observable universe, mathematical point sources are often used as approximations to reality in physics and other fields.
Visible electromagnetic radiation (light)
Generally, a source of light can be considered a point source if the resolution of the imaging instrument is too low to resolve the source's apparent size. There are two types of light source: the point source and the extended source.
Mathematically, an object may be considered a point source if its angular size, θ, is much smaller than the resolving power of the telescope:
θ ≪ λ/D, where λ is the wavelength of light and D is the telescope diameter.
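A quick check of this criterion for a Sun-like star at ten light-years, viewed through a small telescope (all numbers are assumed for illustration):

```python
R_star = 6.96e8             # stellar radius, m (Sun-like, assumed)
d = 10 * 9.46e15            # 10 light-years in metres
theta = 2 * R_star / d      # angular diameter in radians (small-angle approximation)

wavelength = 550e-9         # visible light, m
D = 0.2                     # telescope aperture, m (assumed)
print(theta, wavelength / D)
# ~1.5e-08 rad versus ~2.8e-06 rad: the star is far below the
# resolving power, so it behaves as a point source.
```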
Examples:
Light from a distant star seen through a small telescope
Light passing through a pinhole or other small aperture, viewed from a distance much greater than the size of the hole
Light from a street light in a large-scale study of light pollution or street illumination
Other electromagnetic radiation
Radio wave sources which are smaller than one radio wavelength are also generally treated as point sources. Radio emissions g |
https://en.wikipedia.org/wiki/Parry%E2%80%93Daniels%20map | In mathematics, the Parry–Daniels map is a function studied in the context of dynamical systems. Typical questions concern the existence of an invariant or ergodic measure for the map.
It is named after the English mathematician Bill Parry and the British statistician Henry Daniels, who independently studied the map in papers published in 1962.
Definition
Given an integer n ≥ 1, let Σ denote the n-dimensional simplex in Rn+1 given by
Σ = {x = (x0, x1, ..., xn) ∈ Rn+1 : 0 ≤ xi ≤ 1 for each i and x0 + x1 + ... + xn = 1}.
Let π be a permutation such that
xπ(0) ≤ xπ(1) ≤ ... ≤ xπ(n).
Then the Parry–Daniels map
is defined by |
https://en.wikipedia.org/wiki/Rubber%20elasticity | Rubber elasticity refers to a property of crosslinked rubber: it can be stretched by up to a factor of 10 from its original length and, when released, returns very nearly to its original length. This can be repeated many times with no apparent degradation to the rubber. Rubber is a member of a larger class of materials called elastomers and it is difficult to overestimate their economic and technological importance. Elastomers have played a key role in the development of new technologies in the 20th century and make a substantial contribution to the global economy. Rubber elasticity is produced by several complex molecular processes and its explanation requires a knowledge of advanced mathematics, chemistry and statistical physics, particularly the concept of entropy. Entropy may be thought of as a measure of the thermal energy that is stored in a molecule.
Common rubbers, such as polybutadiene and polyisoprene (also called natural rubber), are produced by a process called polymerization. Very long molecules (polymers) are built up sequentially by adding short molecular backbone units through chemical reactions. A rubber polymer follows a random, zigzag path in three dimensions, intermingling with many other rubber molecules. An elastomer is created by the addition of a few percent of a cross linking molecule such as sulfur. When heated, the crosslinking molecule causes a reaction that chemically joins (bonds) two of the rubber molecules together at some point (a crosslink). Because the rubber molecules are so long, each one participates in many crosslinks with many other rubber molecules forming a continuous molecular network.
As a rubber band is stretched, some of the network chains are forced to become straight and this causes a decrease in their entropy. It is this decrease in entropy that gives rise to the elastic force in the network chains.
History
Following its introduction to Europe from the New World in the late 15th century, natural rubber ( |
https://en.wikipedia.org/wiki/Standard%20linear%20solid%20model | The standard linear solid (SLS), also known as the Zener model, is a method of modeling the behavior of a viscoelastic material using a linear combination of springs and dashpots to represent elastic and viscous components, respectively. Often, the simpler Maxwell model and the Kelvin–Voigt model are used. These models often prove insufficient, however; the Maxwell model does not describe creep or recovery, and the Kelvin–Voigt model does not describe stress relaxation. SLS is the simplest model that predicts both phenomena.
Definition
Materials undergoing strain are often modeled with mechanical components, such as springs (restorative force component) and dashpots (damping component).
Connecting a spring and damper in series yields a model of a Maxwell material while connecting a spring and damper in parallel yields a model of a Kelvin–Voigt material. In contrast to the Maxwell and Kelvin–Voigt models, the SLS is slightly more complex, involving elements both in series and in parallel. Springs, which represent the elastic component of a viscoelastic material, obey Hooke's law:
σ = Eε
where σ is the applied stress, E is the Young's modulus of the material, and ε is the strain. The spring represents the elastic component of the model's response.
Dashpots represent the viscous component of a viscoelastic material. In these elements, the applied stress varies with the time rate of change of the strain:
σ = η dε/dt
where η is the viscosity of the dashpot component.
Solving the model
In order to model this system, the following physical relations must be realized:
For parallel components: σtot = σ1 + σ2, and εtot = ε1 = ε2.
For series components: σtot = σ1 = σ2, and εtot = ε1 + ε2.
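A minimal numerical sketch of these rules applied to the SLS in the Maxwell representation described below (a lone spring E1 in parallel with a spring E2 and dashpot η in series); all parameter values are assumed, and the simulation shows stress relaxation under constant strain:

```python
import numpy as np

E1, E2, eta = 1.0, 2.0, 5.0   # moduli (Pa) and viscosity (Pa*s), illustrative
eps0 = 0.01                   # constant applied strain (relaxation test)

dt, T = 1e-3, 20.0
t = np.arange(0.0, T, dt)
sigma2 = E2 * eps0            # Maxwell-arm stress: its spring loads instantly at t = 0
sigma = np.empty_like(t)
for i in range(len(t)):
    sigma[i] = E1 * eps0 + sigma2          # parallel rule: total stress is the sum
    sigma2 += dt * (-(E2 / eta) * sigma2)  # series arm at constant strain: dσ2/dt = -(E2/η)σ2

print(sigma[0])    # ~0.030 = (E1 + E2)*eps0, the instantaneous response
print(sigma[-1])   # ~0.010 = E1*eps0: the dashpot has relaxed the Maxwell arm
# Analytic check: σ(t) = E1*eps0 + E2*eps0*exp(-E2*t/eta)
```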
Maxwell representation
This model consists of two systems in parallel. The first, referred to as the Maxwell arm, contains a spring (E = E2) and dashpot (viscosity η) in series. The other system contains only a spring (E = E1).
These relationships help relate the various stresses and strains in the overall system and the Maxwell arm:
where the subscripts , , a |
https://en.wikipedia.org/wiki/Intel%20iPSC | The Intel Personal SuperComputer (Intel iPSC) was a product line of parallel computers in the 1980s and 1990s.
The iPSC/1 was superseded by the Intel iPSC/2, and then the Intel iPSC/860.
iPSC/1
In 1984, Justin Rattner became manager of the Intel Scientific Computers group in Beaverton, Oregon. He hired a team that included mathematician Cleve Moler.
Internally, the iPSC connected its processors in a hypercube topology inspired by the Caltech Cosmic Cube research project.
For that reason, it was configured with a power-of-two number of nodes, corresponding to the corners of hypercubes of increasing dimension.
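In a d-dimensional hypercube the nodes carry d-bit labels, and each node links to the nodes whose labels differ from its own in exactly one bit; a short sketch:

```python
def neighbours(node: int, d: int) -> list[int]:
    """Hypercube neighbours of a node: flip each of the d label bits in turn."""
    return [node ^ (1 << k) for k in range(d)]

print(neighbours(0, 5))   # [1, 2, 4, 8, 16] -- five links per node in an iPSC/d5
```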
Intel announced the iPSC/1 in 1985, with 32 to 128 nodes connected with Ethernet into a hypercube. The system was managed by a personal computer of the PC/AT era running Xenix, the "cube manager". Each node had a 80286 CPU with 80287 math coprocessor, 512K of RAM, and eight Ethernet ports (seven for the hypercube interconnect, and one to talk to the cube manager).
A message passing interface called NX that was developed by Paul Pierce evolved throughout the life of the iPSC line.
Because only the cube manager had connections to the outside world, developing and debugging applications was difficult.
The basic models were the iPSC/d5 (five-dimension hypercube with 32 nodes), iPSC/d6 (six dimensions with 64 nodes), and iPSC/d7 (seven dimensions with 128 nodes).
Each cabinet had 32 nodes, and prices ranged up to about half a million dollars for the four-cabinet iPSC/d7 model.
Extra memory (iPSC-MX) and vector processor (iPSC-VX) models were also available, in the three sizes. A four-dimensional hypercube was also available (iPSC/d4), with 16 nodes.
iPSC/1 was called the first parallel computer built from commercial off-the-shelf parts. This allowed it to reach the market about the same time as its competitor from nCUBE, even though the nCUBE project had started earlier.
Each iPSC cabinet was (overall) 127 cm x 41 cm x 43 cm. Total computer perform |
https://en.wikipedia.org/wiki/Rosenfeld%27s%20law | Rosenfeld's law is an axiom relating physics to economics, stating that the amount of energy required to produce one dollar of GDP has decreased by about one percent per year since 1845.
The original quote by Arthur H. Rosenfeld is:
From 1845 to the present, the amount of energy required to produce the same amount of gross national product has steadily decreased at the rate of about 1 percent per year. This is not quite as spectacular as Moore's Law of integrated circuits, but it has been tested over a longer period of time. One percent per year yields a factor of 2.7 when compounded over 100 years. It took 56 BTUs (59,000 joules) of energy consumption to produce one (1992) dollar of GDP in 1845. By 1998, the same dollar required only 12.5 BTUs (13,200 joules).
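The arithmetic in the quote is easy to verify, using only the figures given above:

```python
import math

print(1.01 ** 100)                           # ~2.70: 1%/year compounded over a century
rate = math.log(56 / 12.5) / (1998 - 1845)   # average annual decline implied by 56 -> 12.5 BTU
print(rate)                                  # ~0.0098, i.e. about 1 percent per year
```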
Notes
Economic indicators
Economics laws
Eponyms
Energy economics |
https://en.wikipedia.org/wiki/Phrenicocolic%20ligament | A fold of peritoneum, the phrenicocolic ligament is continued from the left colic flexure to the thoracic diaphragm opposite the tenth and eleventh ribs; it passes below and serves to support the spleen, and therefore has received the name of sustentaculum lienis.
The phrenicocolic ligament is also called Hensing's ligament after Friedrich Wilhelm Hensing (1719–1745), a German professor for medicine in Giessen.
Clinical significance
Knowledge of the basic anatomy and variations of the suspensory ligaments of the spleen is essential in open surgery or laparoscopic splenectomy. Moreover, during some surgical procedures it is often necessary to exert a certain degree of traction on the spleen and on its peritoneal insertions. Unfortunately this traction may result in a rupture of the fibrous capsule of the organ, resulting in severe bleeding that is very difficult to control. Particularly hazardous is downward traction on the phrenicocolic ligament (this maneuver may be necessary for the mobilization of the splenic flexure). This ligament marks the site where the colon exits the peritoneal cavity: the phrenicocolic ligament is thus an important point of intersection in abdominal anatomy and, consequently, a crucial point for the spread of abdominal disease. |
https://en.wikipedia.org/wiki/TAP%20Pharmaceuticals | TAP Pharmaceuticals was formed in 1977 as a joint venture between the two global pharmaceutical companies, Abbott Laboratories and Takeda Pharmaceutical Co. and was dissolved in 2008; its two most lucrative products were proton-pump inhibitor lansoprazole (Prevacid) and the prostate cancer drug, leuprorelin (Lupron). The intention of the joint venture was to get products that Takeda had discovered developed, approved, and marketed in the US and Canada.
The company was established at a time when Japanese pharmaceutical companies were seeking partnerships to access the US market. These efforts were supported by the Japanese government at the time to help the national economy compete in higher technology, as countries like South Korea and Taiwan were beginning to catch up with Japan in commodity production. Japanese pharmaceutical companies were especially strong in the fields of generating analogs of known cephalosporin antibiotics, cancer drugs, and cardiovascular drugs.
The first products for which TAP filed new drug applications were two cephalosporins, cefmenoxime (Cefmax) and cefsulodin (Cefonomil), estazolam for sleep disorders, and leuprorelin; leuprorelin was the first to be approved, in 1985.
In 1998 Takeda established its own US R&D and sales force, for the diabetes drug pioglitazone (Actos).
In 2000, TAP withdrew its new drug application for apomorphine (branded as "Uprima") as a treatment for erectile dysfunction after an FDA review panel raised questions about the drug's safety, due to many clinical trial subjects fainting after taking the drug.
In 2001, the US Department of Justice, states attorneys general, and TAP Pharmaceutical Products settled criminal and civil charges against TAP related to federal and state Medicare fraud and illegal marketing of the drug leuprorelin. TAP paid a total of $875 million, which was a record high at the time.
The $875 million settlement broke down to $290 million for violating the Prescription Drug Marketing Act, $559 |
https://en.wikipedia.org/wiki/Gastrophrenic%20ligament | The postero-superior surface of the stomach is covered by peritoneum, except over a small area close to the cardiac orifice; this area is limited by the lines of attachment of the gastrophrenic ligament, and lies in apposition with the diaphragm, and frequently with the upper portion of the left suprarenal gland. |
https://en.wikipedia.org/wiki/Coronary%20ligament | The coronary ligament of the liver refers to parts of the peritoneal reflections that hold the liver to the inferior surface of the diaphragm.
Structure
The convex diaphragmatic surface of the liver (anterior, superior and a little posterior) is connected to the concavity of the inferior surface of the diaphragm by reflections of peritoneum. The coronary ligament is the largest of these, having an anterior (frontal) layer and a posterior (back) layer.
The diaphragmatic surface of the liver that is in direct contact with the diaphragm (just beyond the peritoneal reflections) has no peritoneal covering, and is termed the bare area of the liver.
The anterior layer of the coronary ligament is formed by the reflection of the peritoneum from the upper margin of the bare area of the liver to the under surface of the diaphragm.
The posterior layer of the coronary ligament is reflected from the lower margin of the bare area and is continuous with the right layer of the lesser omentum.
The anterior and posterior layers converge on the right and left sides of the liver to form the right triangular ligament and the left triangular ligament, respectively. In between the two sides of the anterior layer, the reflection of peritoneum has an inferior continuation termed the falciform ligament. The falciform ligament contains the round ligament of liver.
Additional images |
https://en.wikipedia.org/wiki/Thoracoepigastric%20vein | The thoracoepigastric vein runs along the lateral aspect of the trunk between the superficial epigastric vein below and the lateral thoracic vein above and establishes an important communication between the femoral vein and axillary vein. This is an especially important vein when the inferior vena cava (IVC) becomes obstructed, by providing a means of collateral venous return. It creates a cavocaval anastomosis by connecting with superficial epigastric veins arising from femoral vein just below inguinal ligament.
Clinical significance
The thoracoepigastric vein is unique in that it drains to both the superior vena cava (SVC) and the inferior vena cava (IVC). Hence, it serves as an anastomotic caval-caval link between the two. Furthermore, the thoracoepigastric vein is connected to the portal vein via the paraumbilical vein and thereby serves as a portocaval anastomosis as well. When a patient experiences portal hypertension, there can be congestion (backup) of blood that enters into the caval system via the thoracoepigastric vein. When this occurs, there can be an externally visible dilation of the paraumbilical (and perhaps even the thoracoepigastric) veins, which leads to the appearance of "Caput Medusae". Caput Medusae is a clinical sign that the physician recognizes by the characteristic appearance of distended veins emanating from the umbilicus of the patient. The shape of these veins and their arrangement around the umbilicus is said to resemble the snake-like hair of the mythological Greek monster Medusa. "Caput Medusae" [Latin] means "Head of Medusa". |
https://en.wikipedia.org/wiki/Tetralemma | The tetralemma is a figure that features prominently in the logic of India.
Definition
It states that with reference to any logical proposition X, there are four possibilities:
X (affirmation)
¬X (negation)
X ∧ ¬X (both)
¬(X ∨ ¬X) (neither)
Catuskoti
The history of fourfold negation, the Catuskoti (Sanskrit), is evident in the logico-epistemological tradition of India, given the categorical nomenclature Indian logic in Western discourse. Subsumed within the auspice of Indian logic, 'Buddhist logic' has been particularly focused in its employment of the fourfold negation, as evidenced by the traditions of Nagarjuna and the Madhyamaka, particularly the school of Madhyamaka given the retroactive nomenclature of Prasangika by the Tibetan Buddhist logico-epistemological tradition. Though the tetralemma was also used as a form of inquiry rather than logic in the Nasadiya Sukta of the Rigveda (creation hymn), it seems to have rarely been used as a tool of logic before Buddhism.
See also
De Morgan's laws
Paraconsistent logic
Prasangika
Two-truths doctrine
Catuṣkoṭi, a similar concept in Indian philosophy |
https://en.wikipedia.org/wiki/Tree%20rearrangement | Tree rearrangements are deterministic algorithms devoted to search for optimal phylogenetic tree structure. They can be applied to any set of data that are naturally arranged into a tree, but have most applications in computational phylogenetics, especially in maximum parsimony and maximum likelihood searches of phylogenetic trees, which seek to identify one among many possible trees that best explains the evolutionary history of a particular gene or species.
Basic tree rearrangements
The simplest tree-rearrangement, known as nearest-neighbor interchange, exchanges the connectivity of four subtrees within the main tree. Because there are three possible ways of connecting four subtrees, and one is the original connectivity, each interchange creates two new trees. Exhaustively searching the possible nearest-neighbors for each possible set of subtrees is the slowest but most optimizing way of performing this search. An alternative, more wide-ranging search, subtree pruning and regrafting (SPR), selects and removes a subtree from the main tree and reinserts it elsewhere on the main tree to create a new node. Finally, tree bisection and reconnection (TBR) detaches a subtree from the main tree at an interior node and then attempts all possible connections between edges of the two trees thus created. The increasing complexity of the tree rearrangement technique correlates with increasing computational time required for the search, although not necessarily with their performance.
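As a minimal sketch of the nearest-neighbor interchange just described: around any internal edge of an unrooted binary tree sit four subtrees A, B, C, D, and the two alternative pairings are the NNI neighbours (the nested-tuple tree encoding here is a hypothetical choice for illustration):

```python
def nni(a, b, c, d):
    """NNI neighbours of the tree ((a,b),(c,d)) around its central edge.

    Four subtrees admit exactly three pairings; one is the original,
    so each interchange yields the two others.
    """
    return [((a, c), (b, d)), ((a, d), (b, c))]

print(nni("A", "B", "C", "D"))
# [(('A', 'C'), ('B', 'D')), (('A', 'D'), ('B', 'C'))]
```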
SPR can be further divided into uSPR (unrooted SPR) and rSPR (rooted SPR). uSPR is applied to unrooted trees and proceeds as follows: break any edge, then join one end of the broken edge (selected arbitrarily) to any other edge in the tree. rSPR is applied to rooted trees*, and proceeds: break any edge except the edge leading to the root node, then join the end of the broken edge that is furthest from the root to any other edge of the tree.
* In this example the root of the tree is |
https://en.wikipedia.org/wiki/Violin%20acoustics | Violin acoustics is an area of study within musical acoustics concerned with how the sound of a violin is created as the result of interactions between its many parts. These acoustic qualities are similar to those of other members of the violin family, such as the viola.
The energy of a vibrating string is transmitted through the bridge to the body of the violin, which allows the sound to radiate into the surrounding air. Both ends of a violin string are effectively stationary, allowing for the creation of standing waves. A range of simultaneously produced harmonics each affect the timbre, but only the fundamental frequency is heard. The frequency of a note can be raised by increasing the string's tension, or decreasing its length or mass. The number of harmonics present in the tone can be reduced, for instance by using the left hand to shorten the string length. The loudness and timbre of each of the strings is not the same, and the material used affects sound quality and ease of articulation. Violin strings were originally made from catgut but are now usually made of steel or a synthetic material. Most strings are wound with metal to increase their mass while avoiding excess thickness.
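The tension/length/mass relationship is that of an ideal string, f = (1/2L)·√(T/μ); a quick sketch with assumed, violin-like values:

```python
import math

L = 0.325      # vibrating string length, m (typical violin scale, assumed)
T = 50.0       # string tension, N (assumed)
mu = 0.42e-3   # linear mass density, kg/m (assumed)

f = (1.0 / (2.0 * L)) * math.sqrt(T / mu)
print(f"{f:.0f} Hz")   # ~531 Hz: raising T or lowering L or mu raises the pitch
```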
During a bow stroke, the string is pulled until the string's tension causes it to return, after which it receives energy again from the bow. Violin players can control bow speed, the force used, the position of the bow on the string, and the amount of hair in contact with the string. The static forces acting on the bridge, which supports one end of the strings' playing length, are large: dynamic forces acting on the bridge force it to rock back and forth, which causes the vibrations from the strings to be transmitted. A violin's body is strong enough to resist the tension from the strings, but also light enough to vibrate properly. It is made of two arched wooden plates with ribs around the sides and has two f-holes on either side of the bridge. It acts as a sound box to |
https://en.wikipedia.org/wiki/Atomic%20ratio | The atomic ratio is a measure of the ratio of atoms of one kind (i) to another kind (j). A closely related concept is the atomic percent (or at.%), which gives the percentage of one kind of atom relative to the total number of atoms. The molecular equivalents of these concepts are the molar fraction, or molar percent.
Atoms
Mathematically, the atomic percent is
atomic percent (i) = (Ni / Ntot) × 100%
where Ni is the number of atoms of interest and Ntot is the total number of atoms, while the atomic ratio is
atomic ratio (i : j) = Ni : Nj.
For example, the atomic percent of hydrogen in water (H2O) is (2/3) × 100 ≈ 66.67%, while the atomic ratio of hydrogen to oxygen is H : O = 2 : 1.
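The same example as a short computation:

```python
n_H, n_O = 2, 1                     # atoms per formula unit of H2O
print(100.0 * n_H / (n_H + n_O))    # 66.67 atomic percent hydrogen
print(n_H / n_O)                    # 2.0, i.e. an atomic ratio H : O = 2 : 1
```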
Isotopes
Another application is in radiochemistry, where this may refer to isotopic ratios or isotopic abundances. Mathematically, the isotopic abundance is
isotopic abundance (i) = Ni / Ntot
where Ni is the number of atoms of the isotope of interest and Ntot is the total number of atoms, while the atomic ratio is defined as above.
For example, the isotopic ratio of deuterium (D) to hydrogen (H) in heavy water is roughly D : H = 1 : 7000 (corresponding to an isotopic abundance of 0.00014%).
Doping in laser physics
In laser physics, however, the atomic ratio may refer to the doping ratio or the doping fraction.
For example, theoretically, a 100% doping ratio of Yb : Y3Al5O12 is pure Yb3Al5O12.
The doping fraction equals NYb / (NY + NYb).
See also
Table of concentration measures |
https://en.wikipedia.org/wiki/Pseudoelasticity | Pseudoelasticity, sometimes called superelasticity, is an elastic (reversible) response to an applied stress, caused by a phase transformation between the austenitic and martensitic phases of a crystal. It is exhibited in shape-memory alloys.
Overview
Pseudoelasticity arises from the reversible motion of domain boundaries during the phase transformation, rather than just bond stretching or the introduction of defects in the crystal lattice (thus it is not true superelasticity but rather pseudoelasticity). Even if the domain boundaries do become pinned, they may be reversed through heating. Thus, a pseudoelastic material may return to its previous shape (hence, shape memory) after the removal of even relatively high applied strains. One special case of pseudoelasticity is called the Bain correspondence. This involves the austenite/martensite phase transformation between a face-centered cubic (FCC) crystal lattice and a body-centered tetragonal (BCT) crystal structure.
Superelastic alloys belong to the larger family of shape-memory alloys. When mechanically loaded, a superelastic alloy deforms reversibly to very high strains (up to 10%) by the creation of a stress-induced phase. When the load is removed, the new phase becomes unstable and the material regains its original shape. Unlike shape-memory alloys, no change in temperature is needed for the alloy to recover its initial shape.
Superelastic devices take advantage of their large, reversible deformation and include antennas, eyeglass frames, and biomedical stents.
Nickel titanium (Nitinol) is an example of an alloy exhibiting superelasticity.
Size effects
Recently, there has been interest in discovering materials exhibiting superelasticity at the nanoscale for MEMS (microelectromechanical systems) applications. The ability to control the martensitic phase transformation has already been reported, but superelastic behavior has been observed to show size effects at the nanoscale.
Qualitatively speaking, supere |
https://en.wikipedia.org/wiki/Hydrophobin | Hydrophobins are a group of small (~100 amino acids) cysteine-rich proteins that were discovered in filamentous fungi, both lichenized and non-lichenized. Similar proteins were later also found in bacteria. Hydrophobins are known for their ability to form a hydrophobic (water-repellent) coating on the surface of an object. They were first discovered and separated in Schizophyllum commune in 1991. Based on differences in hydropathy patterns and biophysical properties, they can be divided into two categories: class I and class II. Hydrophobins can self-assemble into a monolayer on hydrophilic:hydrophobic interfaces such as a water:air interface. The class I monolayer contains the same core structure as amyloid fibrils, and is positive to Congo red and thioflavin T. The monolayer formed by class I hydrophobins has a highly ordered structure, and can only be dissociated by concentrated trifluoroacetic acid or formic acid. Monolayer assembly involves large structural rearrangements with respect to the monomer.
Fungi make complex aerial structures and spores even in aqueous environments.
Hydrophobins have been identified in lichens as well as non-lichenized ascomycetes and basidiomycetes; whether they exist in other groups is not known. Hydrophobins are generally found on the outer surface of conidia and of the hyphal wall, and may be involved in mediating contact and communication between the fungus and its environment. Some family members contain multiple copies of the domain.
Hydrophobins have been found to be structurally and functionally similar to cerato-platanins, another group of small cysteine-rich proteins, which also contain a high percentage of hydrophobic amino acids, and are also associated with hyphal growth.
This family of proteins includes the rodlet proteins of Neurospora crassa (gene eas) and Emericella nidulans (gene rodA); these proteins are the main component of the hydrophobic sheath covering the surface of many fungal spores.
Genomic sequencing of two fungi f |
https://en.wikipedia.org/wiki/Noise%20margin | In electrical engineering, noise margin is the maximum voltage amplitude of extraneous signal that can be algebraically added to the noise-free worst-case input level without causing the output voltage to deviate from the allowable logic voltage level. It is commonly used in at least two contexts as follows:
In communications system engineering, noise margin is the ratio by which the signal exceeds the minimum acceptable amount. It is normally measured in decibels.
In a digital circuit, the noise margin is the amount by which the signal exceeds the threshold for a proper '0' (logic low) or '1' (logic high). For example, a digital circuit might be designed to swing between 0.0 and 1.2 volts, with anything below 0.2 volts considered a '0', and anything above 1.0 volts considered a '1'. Then the noise margin for a '0' would be the amount that a signal is below 0.2 volts, and the noise margin for a '1' would be the amount by which a signal exceeds 1.0 volt. In this case noise margins are measured as an absolute voltage, not a ratio. Noise margins for CMOS chips are usually much greater than those for TTL because the VOH min is closer to the power supply voltage and VOL max is closer to zero.
Real digital inverters do not switch instantaneously from a logic high (1) to a logic low (0); there is some capacitance. While an inverter is transitioning from a logic high to low, there is an undefined region where the voltage can be considered neither high nor low; the noise margins quantify how far valid signals stay from this region. There are two noise margins to consider: noise margin high (NMH) and noise margin low (NML). The equations are as follows: NMH ≡ VOH - VIH and NML ≡ VIL - VOL. Typically, in a CMOS inverter, VOH will equal VDD and VOL will equal the ground potential, as mentioned above.
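A minimal sketch of these two definitions, using typical LS-TTL threshold values (the specific numbers are assumptions for illustration):

```python
VOH, VIH = 2.7, 2.0   # minimum output-high / input-high thresholds, volts (assumed)
VIL, VOL = 0.8, 0.5   # maximum input-low / output-low thresholds, volts (assumed)

NMH = VOH - VIH       # margin protecting a logic '1'
NML = VIL - VOL       # margin protecting a logic '0'
print(NMH, NML)       # ~0.7 V and ~0.3 V with these numbers
```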
VIH is defined as the highest input voltage at which the slope of the voltage transfer |
https://en.wikipedia.org/wiki/Split-flap%20display | A split-flap display, or sometimes simply a flap display, is a digital electromechanical display device that presents changeable alphanumeric text, and occasionally fixed graphics.
Often used as a public transport timetable in airports or railway stations, as such they are often called Solari boards after Italian display manufacturer Solari di Udine, or in Central European countries they are called Pragotron after the Czech manufacturer.
Split-flap displays were once commonly used in consumer digital clocks known as flip clocks.
Description
Each character position or graphic position has a collection of flaps on which the characters or graphics are painted or silkscreened. These flaps are precisely rotated to show the desired character or graphic. These displays are often found in railway stations and airports, where they serve as flight information display systems and typically display departure or arrival information.
Sometimes the flaps are large and display whole words, and in other installations there are several smaller flaps, each displaying a single character.
Flip-dot displays and LED display boards may be used instead of split-flap displays in most applications. Their output can be changed by reprogramming instead of replacement of physical parts but they suffer from lower readability. They also can refresh more quickly, as a split-flap display often must cycle through many states.
Advantages to these displays include:
high visibility and a wide viewing angle in most lighting conditions
little or no power consumption while the display remains static
a distinct metallic flapping sound that draws attention when the information is updated
The Massachusetts Bay Transportation Authority has designed the new LED replacements for its aging Solari boards at North Station and South Station to emit an electronically generated flapping noise to cue passengers to train boarding updates.
Many game shows of the 1970s used this type of display for the contestant podiu |
https://en.wikipedia.org/wiki/Spin%20diffusion | Spin diffusion describes a situation wherein the individual nuclear spins undergo continuous exchange of energy. This permits polarization differences within the sample to be reduced on a timescale much shorter than relaxation effects.
Spin diffusion is a process by which magnetization can be exchanged spontaneously between spins. The process is driven by dipolar coupling, and is therefore related to internuclear distances. Spin diffusion has been used to study many structural problems in the past, ranging from domain sizes in polymers and disorder in glassy materials to high-resolution crystal structure determination of small molecules and proteins.
In solid-state nuclear magnetic resonance, spin diffusion plays a major role in cross polarization (CP) experiments. As mentioned before, by transferring the magnetization (and thus the population) from nuclei with different values of the spin-lattice relaxation time (T1), the overall time for the experiment is reduced. This is a very common practice when the sample contains hydrogen. Another desirable effect is that the signal-to-noise ratio (S/N) is increased by up to a theoretical factor of γA/γB, where γ is the gyromagnetic ratio.
Notes
Quantum field theory
Nuclear magnetic resonance |
https://en.wikipedia.org/wiki/Tracking%20%28particle%20physics%29 | In particle physics, tracking is the process of reconstructing the trajectory (or track) of electrically charged particles in a particle detector known as a tracker. The particles entering such a tracker leave a precise record of their passage through the device, by interaction with suitably constructed components and materials. The presence of a calibrated magnetic field, in all or part of the tracker, allows the local momentum of the charged particle to be directly determined from the reconstructed local curvature of the trajectory for known (or assumed) electric charge of the particle.
Generally, track reconstruction is divided into two stages. First, track finding is performed, in which detector hits believed to originate from the same track are grouped together. Second, track fitting is performed: a curve is mathematically fitted to the found hits, and from this fit the momentum is obtained.
Identification and reconstruction of trajectories from the digitised output of a modern tracker can, in the simplest cases, in the absence of a magnetic field and absorbing/scattering material, be achieved via straight-line segment fits. A simple helical model, to determine momentum in the presence of a magnetic field, might be sufficient in less simple cases, through to a complete (e.g.) Kalman Filter process, to provide a detailed reconstructed local model throughout the complete track in the most complex cases.
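A minimal sketch of the simplest case above: fitting a straight-line track to a handful of toy hits by least squares (the hit coordinates are invented for the example):

```python
import numpy as np

x_hits = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # detector-layer positions, cm
y_hits = np.array([0.02, 0.51, 0.98, 1.53, 1.99])  # measured hit positions, cm (toy data)

slope, intercept = np.polyfit(x_hits, y_hits, 1)   # least-squares line fit
print(slope, intercept)   # ~0.50 and ~0.00: the reconstructed track direction
```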
This reconstruction of trajectory plus momentum allows projection to/through other detectors, which measure other important properties of the particle such as energy or particle type (Calorimeter, Cherenkov Detector). These reconstructed charged particles can be used to identify and reconstruct secondary decays, including those arising from 'unseen' neutral particles, as can be done for B-tagging (in experiments like CDF or at the LHC) and to fully reconstruct events (as in many current particle physics exper |
https://en.wikipedia.org/wiki/Newtonian%20material | With regard to materials science, a material is said to be "Newtonian" if it exhibits a linear relationship between stress and strain rate.
See also
Stress
Strain
Classical mechanics |
https://en.wikipedia.org/wiki/British%20Colloquium%20for%20Theoretical%20Computer%20Science | The British Colloquium for Theoretical Computer Science (BCTCS) is an organisation, founded in 1985, that represents the interests of Theoretical Computer Science in the UK, e.g. through representation on academic boards and providing commentary and evidence in response to consultations from public bodies. The BCTCS operates under the direction of an Organising Committee, with an Executive consisting of a President, Secretary and Treasurer. The current President is Barnaby Martin.
The purpose of BCTCS is:
to provide a platform from which the interests and future well-being of British theoretical computer science may be advanced;
to offer a forum in which UK-based researchers in all aspects of theoretical computer science can meet, present research findings, and discuss recent developments in the field; and
to foster an environment within which PhD students undertaking research in theoretical computer science may gain experience in presenting their work in a formal arena, broaden their outlook on the subject, and benefit from contact with established researchers in the community.
In pursuit of these aims, the BCTCS organises an annual Conference for UK-based researchers in theoretical computer science. A central aspect of the annual BCTCS Conference is the training of PhD students. The scope of the annual BCTCS Conference includes all aspects of theoretical computer science, including algorithms, complexity, semantics, formal methods, concurrency, types, languages and logics. An emphasis on breadth, together with the inherently mathematical nature of theoretical computer science, means that BCTCS always actively solicits both computer scientists and mathematicians as participants at its annual Conference, and offers an environment within which the two communities can meet and exchange ideas.
The Annual BCTCS Conference is primarily for the benefit of UK-based researchers. However, to promote British theoretical computer science in the wider community, partic |
https://en.wikipedia.org/wiki/John%20V.%20Tucker | John Vivian Tucker (born 4 February 1952) is a British computer scientist and expert on computability theory, also known as recursion theory. Computability theory is about what can and cannot be computed by people and machines. His work has focused on generalising the classical theory to deal with all forms of discrete/digital and continuous/analogue data; and on using the generalisations as formal methods for system design; based on abstract data types and on the interface between algorithms and physical equipment.
Biography
Born in Cardiff, Wales, he was educated at Bridgend Boys' Grammar School, where he was taught mathematics, logic and computing. He read mathematics at the University of Warwick (BA in 1973), and studied mathematical logic and the foundations of computing at the University of Bristol (MSc in 1974, PhD in 1977). He has held posts at Oslo University, the CWI Amsterdam, and at Bristol and Leeds universities, before returning to Wales as Professor of Computer Science at Swansea University in 1989. In addition to theoretical computer science, Tucker also lectures on the history of computing and on the history of science and technology in Wales.
Tucker founded the British Colloquium for Theoretical Computer Science in 1985 and served as its president from its inception until 1992. He is a Fellow of the British Computer Society and editor of several international scientific journals and monograph series. At Swansea, he has been Head of Computer Science (1994–2008), Head of Physical Sciences (2007–11) and Deputy Pro Vice Chancellor (2011–2019). He is Member of Academia Europaea.
Outside of Computer Science, Tucker has been a Trustee of the Welsh think-tank, the Institute of Welsh Affairs and the chair of the Swansea Bay branch. He is also a Trustee of the South Wales Institute of Engineers Educational Trust, and the Gower Society.
Professor Tucker is married to Dr. T.E. Rihll, formerly a Reader in Ancient History at Swansea University.
In the early 1990s, |
https://en.wikipedia.org/wiki/Alcatel-Lucent | Alcatel–Lucent S.A. was a multinational telecommunications equipment company, headquartered in Boulogne-Billancourt, France. It was formed in 2006 by the merger of France-based Alcatel and U.S.-based Lucent, the latter being a successor of AT&T's Western Electric and a holding company of Bell Labs.
In 2014, the Alcatel-Lucent group split into two: Alcatel-Lucent Enterprise, providing enterprise communication services, and Alcatel-Lucent, selling to communications operators. The enterprise business was sold to China Huaxin Post and Telecom Technologies in the same year, and in 2016 Nokia acquired the remainder of Alcatel-Lucent.
The company focused on fixed, mobile and converged networking hardware, IP technologies, software and services, with operations in more than 130 countries. In 2014, it was named Industry Group Leader for the Technology Hardware & Equipment sector in the Dow Jones Sustainability Indices review, and listed in the 2014 Thomson Reuters Top 100 Global Innovators for the fourth consecutive year. Alcatel-Lucent also owned Bell Laboratories, one of the largest research and development facilities in the communications industry, whose employees have been awarded nine Nobel Prizes and which holds in excess of 29,000 patents.
On 3 November 2016, Nokia completed the acquisition of the company and it was merged into their Nokia Networks division. Bell Labs was maintained as an independent subsidiary of Nokia.
The Alcatel-Lucent brand has been retired by Nokia, but it survives in the form of Alcatel-Lucent Enterprise, the enterprise division of Alcatel-Lucent that was sold to China Huaxin in 2014.
History
Alcatel-Lucent was formed when Alcatel (originally short for the Société Alsacienne de Constructions Atomiques, de Télécommunications et d'Électronique, a small company in Mulhouse absorbed by CGE in 1966) merged with Lucent Technologies on 1 December 2006. However, the predecessors of the company have been a part of telecommunications industry since t |
https://en.wikipedia.org/wiki/M.%20C.%20Escher%20in%20popular%20culture | There are numerous references to Dutch graphic artist M.C. Escher in popular culture. |
https://en.wikipedia.org/wiki/Spouge%27s%20approximation | In mathematics, Spouge's approximation is a formula for computing an approximation of the gamma function. It was named after John L. Spouge, who defined the formula in a 1994 paper. The formula is a modification of Stirling's approximation, and has the form
Γ(z + 1) = (z + a)^(z + 1/2) e^(−z − a) (c0 + Σ_{k=1}^{a−1} ck/(z + k) + εa(z))
where a is an arbitrary positive integer and the coefficients are given by
c0 = √(2π)
ck = ((−1)^(k−1) / (k − 1)!) (a − k)^(k − 1/2) e^(a − k),  k = 1, 2, ..., a − 1.
Spouge has proved that, if Re(z) > 0 and a > 2, the relative error in discarding εa(z) is bounded by
a^(−1/2) (2π)^(−(a + 1/2)).
The formula is similar to the Lanczos approximation, but has some distinct features. Whereas the Lanczos formula exhibits faster convergence, Spouge's coefficients are much easier to calculate and the error can be set arbitrarily low. The formula is therefore feasible for arbitrary-precision evaluation of the gamma function. However, special care must be taken to use sufficient precision when computing the sum due to the large size of the coefficients ck, as well as their alternating sign. For example, for a = 49, one must compute the sum using about 65 decimal digits of precision in order to obtain the promised 40 decimal digits of accuracy.
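A short implementation sketch of the formula as reconstructed above (the choice a = 12 is an arbitrary illustration, not a recommended value):

```python
import math

def gamma_spouge(x: float, a: int = 12) -> float:
    """Approximate Gamma(x) for x > 0 via Spouge's formula."""
    z = x - 1.0                       # the formula computes Gamma(z + 1)
    s = math.sqrt(2.0 * math.pi)      # c_0
    for k in range(1, a):
        c_k = ((-1) ** (k - 1) / math.factorial(k - 1)
               * (a - k) ** (k - 0.5) * math.exp(a - k))
        s += c_k / (z + k)
    return (z + a) ** (z + 0.5) * math.exp(-(z + a)) * s

print(gamma_spouge(5.0))   # ~24.0, i.e. 4!
print(math.gamma(5.0))     # reference value from the standard library
```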
See also
Stirling's approximation
Lanczos approximation |
https://en.wikipedia.org/wiki/Computational%20RAM | Computational RAM (C-RAM) is random-access memory with processing elements integrated on the same chip. This enables C-RAM to be used as a SIMD computer. It also can be used to more efficiently use memory bandwidth within a memory chip. The general technique of doing computations in memory is called Processing-In-Memory (PIM).
Overview
The most influential implementations of computational RAM came from The Berkeley IRAM Project. Vector IRAM (V-IRAM) combines DRAM with a vector processor integrated on the same chip.
Reconfigurable Architecture DRAM (RADram) is DRAM with reconfigurable computing FPGA logic elements integrated on the same chip.
SimpleScalar simulations show that RADram (in a system with a conventional processor) can give orders of magnitude better performance on some problems than traditional DRAM (in a system with the same processor).
Some embarrassingly parallel computational problems are already limited by the von Neumann bottleneck between the CPU and the DRAM.
Some researchers expect that, for the same total cost, a machine built from computational RAM will run orders of magnitude faster than a traditional general-purpose computer on these kinds of problems.
As of 2011, the "DRAM process" (few layers; optimized for high capacitance) and the "CPU process" (optimized for high frequency; typically twice as many BEOL layers as DRAM; since each additional layer reduces yield and increases manufacturing cost, such chips are relatively expensive per square millimeter compared to DRAM) are distinct enough that there are three approaches to computational RAM:
starting with a CPU-optimized process and a device that uses much embedded SRAM, add an additional process step (making it even more expensive per square millimeter) to allow replacing the embedded SRAM with embedded DRAM (eDRAM), giving ≈3x area savings on the SRAM areas (and so lowering net cost per chip).
starting with a system with a separate CPU chip and DRAM chip(s), add small amounts of |
https://en.wikipedia.org/wiki/P-matrix | In mathematics, a P-matrix is a complex square matrix in which every principal minor is positive. A closely related class is that of P₀-matrices, which are the closure of the class of P-matrices, in which every principal minor is ≥ 0.
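The definition can be tested directly, if expensively: a brute-force check of all principal minors (a sketch of my own, assuming numpy; the number of minors grows exponentially with the dimension, so this is only sensible for small matrices):

```python
from itertools import combinations
import numpy as np

def is_p_matrix(A, tol=1e-12):
    """True if every principal minor of A is positive (up to tolerance)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            # principal minor: determinant of the submatrix on rows
            # and columns indexed by `idx`
            if np.linalg.det(A[np.ix_(idx, idx)]) <= tol:
                return False
    return True

print(is_p_matrix([[2, -1], [-1, 2]]))  # True: minors 2, 2 and det 3
print(is_p_matrix([[0, 1], [1, 0]]))    # False: zero diagonal minors
```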
Spectra of P-matrices
By a theorem of Kellogg, the eigenvalues of P- and P₀-matrices are bounded away from a wedge about the negative real axis as follows:
If {u₁, ..., u_n} are the eigenvalues of an n-dimensional P-matrix, where n > 1, then |arg(u_i)| < π − π/n, i = 1, ..., n.
If {u₁, ..., u_n}, with u_i ≠ 0 for all i, are the eigenvalues of an n-dimensional P₀-matrix, then |arg(u_i)| ≤ π − π/n, i = 1, ..., n.
Remarks
The class of nonsingular M-matrices is a subset of the class of P-matrices. More precisely, all matrices that are both P-matrices and Z-matrices are nonsingular M-matrices. The class of sufficient matrices is another generalization of P-matrices.
The linear complementarity problem LCP(M, q) has a unique solution for every vector q if and only if M is a P-matrix. This implies that if M is a P-matrix, then M is a Q-matrix.
If the Jacobian of a function is a P-matrix, then the function is injective on any rectangular region of ℝⁿ.
A related class of interest, particularly with reference to stability, is that of P^(−)-matrices, sometimes also referred to as N−P-matrices. A matrix A is a P^(−)-matrix if and only if (−A) is a P-matrix (similarly for P₀-matrices). Since σ(A) = −σ(−A), the eigenvalues of these matrices are bounded away from the positive real axis.
See also
Hurwitz matrix
Linear complementarity problem
M-matrix
Q-matrix
Z-matrix
Perron–Frobenius theorem
Notes |
https://en.wikipedia.org/wiki/Streblomastix | Streblomastix is a symbiotic protist that lives in the stomach of termites and other insects, where, together with other protists, it helps to digest wood.
Streblomastix engages in a relationship similar to that of the bacterial endosymbionts of ruminant animals such as the cow.
Motility
Streblomastix moves by beating its anterior flagella.
Morphology
This protozoan looks like a spade, with the stomach at the posterior end and the flagella at the anterior end.
These organisms measure around 100 micrometers in length. |
https://en.wikipedia.org/wiki/Long%20terminal%20repeat | A long terminal repeat (LTR) is a pair of identical sequences of DNA, several hundred base pairs long, which occur in eukaryotic genomes on either end of a series of genes or pseudogenes that form a retrotransposon or an endogenous retrovirus or a retroviral provirus. All retroviral genomes are flanked by LTRs, while there are some retrotransposons without LTRs. Typically, an element flanked by a pair of LTRs will encode a reverse transcriptase and an integrase, allowing the element to be copied and inserted at a different location of the genome. Copies of such an LTR-flanked element can often be found hundreds or thousands of times in a genome. LTR retrotransposons comprise about 8% of the human genome.
The first LTR sequences were found by A.P. Czernilofsky and J. Shine in 1977 and 1980.
Transcription
The LTR-flanked sequences are partially transcribed into an RNA intermediate, followed by reverse transcription into complementary DNA (cDNA) and ultimately dsDNA (double-stranded DNA) with full LTRs. The LTRs then mediate integration of the DNA via an LTR specific integrase into another region of the host chromosome.
Retroviruses such as human immunodeficiency virus (HIV) use this basic mechanism.
Dating retroviral insertions
As 5' and 3' LTRs are identical upon insertion, the difference between paired LTRs can be used to estimate the age of ancient retroviral insertions. This method of dating is used by paleovirologists, though it fails to take into account confounding factors such as gene conversion and homologous recombination.
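As a back-of-envelope sketch of the dating idea (the divergence and substitution rate below are invented for illustration; the factor of 2 reflects that both LTRs accumulate substitutions independently after insertion):

```python
# T = d / (2r): age from LTR-pair divergence d and host substitution rate r
divergence = 0.02        # fraction of sites differing between the paired LTRs
subst_rate = 2.5e-9      # assumed substitutions per site per year
age_years = divergence / (2 * subst_rate)
print(f"estimated insertion age: {age_years:.2e} years")   # 4.00e+06
```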
HIV-1
The HIV-1 LTR is 634 bp in length and, like other retroviral LTRs, is segmented into the U3, R, and U5 regions. U3 and U5 have been further subdivided according to transcription factor sites and their impact on LTR activity and viral gene expression. The multi-step process of reverse transcription results in the placement of two identical LTRs, each consisting of a U3, R, and U5 region, at either end of the proviral DNA. The |
https://en.wikipedia.org/wiki/Crescendo%20Networks | Crescendo Networks, Ltd. was a privately held computer networking company headquartered in Sunnyvale, California with regional offices in EMEA and APAC. Crescendo Networks is not to be confused with Crescendo Communications, Inc. a CDDI/FDDI network equipment manufacturer that Cisco Systems Inc. acquired in 1993.
Founded in 2002, Crescendo Networks manufactured and sold application delivery controllers which accelerate and optimize website and web application performance.
In August 2011, the company's assets were acquired by F5 Networks through liquidation proceedings in Israel. A number of key Crescendo employees joined F5's office in Tel Aviv.
Products
AppBeat DC is a web-facing application delivery controller that helps web applications serve more users and traffic, faster, with fewer resources. It accelerates web applications by offloading and consolidating common CPU intensive tasks. The product's purpose-built platform uses separate and dedicated hardware engines for each feature with multiple network processors and Field Programmable Gate Arrays (FPGAs) to deliver features such as TCP multiplexing and connection management, load balancing, data compression, caching, SSL acceleration, and more.
AppBeat SC is an application service controller designed to monitor multiple AppBeat DC application delivery controllers and provide insight into application behavior. It monitors and analyzes web applications and the state of the servers running them. The product gives users the option to be alerted via Twitter, as well as email or SNMP, if any degradation to their network is detected. |
https://en.wikipedia.org/wiki/Translational%20efficiency | In cell biology, translational efficiency or translation efficiency is the rate of mRNA translation into proteins within cells.
It has been measured in proteins produced per mRNA per hour. Several RNA elements within mRNAs have been shown to affect the rate, including miRNA and protein binding sites. RNA structure may also affect translational efficiency through altered protein or microRNA binding.
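As a toy illustration of the unit (all numbers invented):

```python
# translational efficiency in proteins per mRNA per hour
proteins_made = 12_000   # proteins synthesized during the window
mrna_count = 50          # transcripts of the gene
hours = 2.0
print(proteins_made / mrna_count / hours, "proteins per mRNA per hour")  # 120.0
```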
See also
List of cis-regulatory RNA elements
Transterm
UTRdb |
https://en.wikipedia.org/wiki/Intertemporal%20CAPM | Within mathematical finance, the Intertemporal Capital Asset Pricing Model, or ICAPM, is an alternative to the CAPM provided by Robert Merton. It is a linear factor model with wealth as a state variable that forecasts changes in the distribution of future returns or income.
In the ICAPM, investors are solving lifetime consumption decisions when faced with more than one uncertainty. The main difference between the ICAPM and the standard CAPM is the additional state variables that acknowledge the fact that investors hedge against shortfalls in consumption or against changes in the future investment opportunity set.
Continuous time version
Merton considers a continuous time market in equilibrium.
The state variable (X) follows a Brownian motion:
The investor maximizes his Von Neumann–Morgenstern utility:
where T is the time horizon and B[W(T),T] the utility from wealth (W).
The investor has the following constraint on wealth (W).
Let w_i be the weight invested in asset i. Then:
where r_i is the return on asset i.
The change in wealth is:
We can use dynamic programming to solve the problem. For instance, if we consider a series of discrete time problems:
Then, a Taylor expansion gives:
where is a value between t and t+dt.
Assuming that returns follow a Brownian motion:
with:
Then canceling out terms of second and higher order:
Using the Bellman equation, we can restate the problem:
subject to the wealth constraint previously stated.
Using Ito's lemma we can rewrite:
and the expected value:
After some algebra, we have the following objective function:
where is the risk-free return.
First order conditions are:
In matrix form, we have:
where α is the vector of expected returns, Ω the covariance matrix of returns, 1 a unity vector, and cov_rX the covariance between returns and the state variable. The optimal weights are:
Notice that the intertemporal model provides the same weights as the CAPM. Expected returns can be expressed as follows:
where m is the market portfolio and h a |
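The entry truncates here, but the two-beta pricing relation it leads up to (expected excess return as a market-beta term plus a hedge-portfolio term for the state variable) can be illustrated numerically. A toy sketch of my own, with all figures assumed rather than taken from the article:

```python
# E[R_i] - r_f = beta_im * (E[R_m] - r_f) + beta_ih * (E[R_h] - r_f)
r_f = 0.02
market_premium = 0.06    # E[R_m] - r_f, assumed
hedge_premium = 0.01     # E[R_h] - r_f, assumed premium of the hedge portfolio

def icapm_expected_return(beta_market, beta_hedge):
    return r_f + beta_market * market_premium + beta_hedge * hedge_premium

print(icapm_expected_return(1.2, -0.5))   # 0.02 + 0.072 - 0.005 = 0.087
```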
https://en.wikipedia.org/wiki/Thursday%20Night%20Football | Thursday Night Football (often abbreviated as TNF) is the branding used for broadcasts of National Football League (NFL) games that air primarily on Thursday nights. Most of the games kick off at 8:15 Eastern Time (8:20 prior to 2022 and 8:25 prior to 2018).
In the past, games in the package also aired occasionally on Saturdays in the later portion of the season, as well as select games from the NFL International Series (branded since 2017 as NFL Network Special).
Debuting on November 23, 2006, the telecasts were originally part of NFL Network's Run to the Playoffs package, which consisted of eight total games broadcast on Thursday and Saturday nights (five on Thursdays, and three on Saturdays, originally branded as Saturday Night Football) during the latter portion of the season. Since 2012, the TNF package has begun during the second week of the NFL season; the NFL Kickoff Game and the NFL on Thanksgiving are both broadcast as part of NBC Sports' Sunday Night Football contract and are not included in Thursday Night Football, although the Thanksgiving primetime game was previously part of the package from 2006 until 2011.
In 2014, the NFL shifted the package to a new model to increase its prominence. The entire TNF package would be produced by a separate rightsholder, who would hold rights to simulcast a portion of the package on their respective network. CBS was the first rightsholder under this model, airing nine games on broadcast television, and producing the remainder of the package to air exclusively on NFL Network to satisfy its carriage agreements. The package was also extended to Week 16 of the season, and included a new Saturday doubleheader split between CBS and NFL Network. On January 18, 2015, CBS and NFL Network extended the same arrangement for a second season. In 2016 and 2017, the NFL continued with a similar arrangement, but adding NBC as a second rightsholder alongside CBS, with each network airing five games on broadcast |
https://en.wikipedia.org/wiki/Invariant%20measure | In mathematics, an invariant measure is a measure that is preserved by some function. The function may be a geometric transformation. For example, circular angle is invariant under rotation, hyperbolic angle is invariant under squeeze mapping, and a difference of slopes is invariant under shear mapping.
Ergodic theory is the study of invariant measures in dynamical systems. The Krylov–Bogolyubov theorem proves the existence of invariant measures under certain conditions on the function and space under consideration.
Definition
Let (X, Σ) be a measurable space and let f be a measurable function from X to itself. A measure μ on (X, Σ) is said to be invariant under f if, for every measurable set A in Σ, μ(f⁻¹(A)) = μ(A).
In terms of the pushforward measure, this states that f∗(μ) = μ.
The collection of measures (usually probability measures) on X that are invariant under f is sometimes denoted M_f(X). The collection of ergodic measures, E_f(X), is a subset of M_f(X). Moreover, any convex combination of two invariant measures is also invariant, so M_f(X) is a convex set; E_f(X) consists precisely of the extreme points of M_f(X).
In the case of a dynamical system (X, T, φ), where (X, Σ) is a measurable space as before, T is a monoid and φ : T × X → X is the flow map, a measure μ on (X, Σ) is said to be an invariant measure if it is an invariant measure for each map φ_t : X → X. Explicitly, μ is invariant if and only if μ(φ_t⁻¹(A)) = μ(A) for every t ∈ T and A ∈ Σ.
Put another way, μ is an invariant measure for a sequence of random variables (Z_t) (perhaps a Markov chain or the solution to a stochastic differential equation) if, whenever the initial condition Z_0 is distributed according to μ, so is Z_t for any later time t.
When the dynamical system can be described by a transfer operator, then the invariant measure is an eigenvector of the operator corresponding to an eigenvalue of 1, this being the largest eigenvalue as given by the Frobenius–Perron theorem.
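For a finite state space the transfer operator is just the transition matrix of a Markov chain, and the eigenvector statement can be checked directly (a minimal sketch, assuming numpy; the matrix is an arbitrary example):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])              # row-stochastic transition matrix

w, V = np.linalg.eig(P.T)               # left eigenvectors of P
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()                      # normalize to a probability measure

print(pi)       # ~ [0.8333, 0.1667]
print(pi @ P)   # unchanged: pi is invariant (eigenvalue 1)
```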
Examples
Consider the real line ℝ with its usual Borel σ-algebra; fix a ∈ ℝ and consider the translation map T_a : ℝ → ℝ given by T_a(x) = x + a. Then one-dimensional Lebesgue measure λ is an invariant measure for T_a.
More generally, |
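Invariance of a measure under a concrete map can also be checked numerically. A Monte Carlo sketch of my own (not from the article) for the doubling map T(x) = 2x mod 1, which preserves Lebesgue measure on [0, 1): the mass of an interval A should match the mass of its preimage T⁻¹(A):

```python
import random

def preimage_mass(a, b, n=200_000):
    """Fraction of uniform samples x with T(x) = 2x mod 1 in [a, b)."""
    return sum(a <= (2 * random.random()) % 1.0 < b for _ in range(n)) / n

a, b = 0.25, 0.6
print("mu(A)         =", b - a)               # 0.35
print("mu(T^-1(A)) ~=", preimage_mass(a, b))  # ~ 0.35
```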
https://en.wikipedia.org/wiki/Krylov%E2%80%93Bogolyubov%20theorem | In mathematics, the Krylov–Bogolyubov theorem (also known as the existence of invariant measures theorem) may refer to either of two related fundamental theorems within the theory of dynamical systems. The theorems guarantee the existence of invariant measures for certain "nice" maps defined on "nice" spaces and were named after the Russian-Ukrainian mathematicians and theoretical physicists Nikolay Krylov and Nikolay Bogolyubov, who proved the theorems.
Formulation of the theorems
Invariant measures for a single map
Theorem (Krylov–Bogolyubov). Let (X, T) be a compact, metrizable topological space and F : X → X a continuous map. Then F admits an invariant Borel probability measure.
That is, if Borel(X) denotes the Borel σ-algebra generated by the collection T of open subsets of X, then there exists a probability measure μ : Borel(X) → [0, 1] such that for any subset A ∈ Borel(X), μ(F⁻¹(A)) = μ(A).
In terms of the push forward, this states that F∗(μ) = μ.
Invariant measures for a Markov process
Let X be a Polish space and let P_t, t ≥ 0, be the transition probabilities for a time-homogeneous Markov semigroup on X, i.e. Pr[X_t ∈ A | X_0 = x] = P_t(x, A).
Theorem (Krylov–Bogolyubov). If there exists a point x ∈ X for which the family of probability measures { P_t(x, ·) | t > 0 } is uniformly tight and the semigroup (P_t) satisfies the Feller property, then there exists at least one invariant measure for (P_t), i.e. a probability measure μ on X such that (P_t)∗(μ) = μ for all t > 0.
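For intuition about the first theorem: its proof averages the empirical measures μ_N = (1/N) Σ_{k<N} δ_{F^k(x)} along an orbit and takes a weak-* limit point, which is then F-invariant. The sketch below (my own illustration, not part of the article) approximates such a measure for the logistic map F(x) = 4x(1 − x) by histogramming a long orbit:

```python
def orbit_histogram(x0=0.1234, n_iter=100_000, bins=10):
    """Empirical distribution of the orbit of x0 under F(x) = 4x(1-x)."""
    counts = [0] * bins
    x = x0
    for _ in range(n_iter):
        counts[min(int(x * bins), bins - 1)] += 1
        x = 4.0 * x * (1.0 - x)
    return [c / n_iter for c in counts]

# The histogram approximates the known invariant density
# 1 / (pi * sqrt(x * (1 - x))), heaviest near the endpoints.
print(orbit_histogram())
```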
See also
For the 1st theorem: Ya. G. Sinai (Ed.) (1997): Dynamical Systems II. Ergodic Theory with Applications to Dynamical Systems and Statistical Mechanics. Berlin, New York: Springer-Verlag. . (Section 1).
For the 2nd theorem: G. Da Prato and J. Zabczyk (1996): Ergodicity for Infinite Dimensional Systems. Cambridge Univ. Press. . (Section 3).
Notes
Ergodic theory
Theorems in dynamical systems
Probability theorems
Random dynamical systems
Theorems in measure theory |
https://en.wikipedia.org/wiki/Dickman%20function | In analytic number theory, the Dickman function or Dickman–de Bruijn function ρ is a special function used to estimate the proportion of smooth numbers up to a given bound.
It was first studied by actuary Karl Dickman, who defined it in his only mathematical publication, which is not easily available, and later studied by the Dutch mathematician Nicolaas Govert de Bruijn.
Definition
The Dickman–de Bruijn function ρ(u) is a continuous function that satisfies the delay differential equation
u ρ′(u) + ρ(u − 1) = 0
with initial condition ρ(u) = 1 for 0 ≤ u ≤ 1.
Properties
Dickman proved that, when a is fixed, we have
Ψ(x, x^(1/a)) ∼ x ρ(a),
where Ψ(x, y) is the number of y-smooth (or y-friable) integers below x.
Ramaswami later gave a rigorous proof that, for fixed a, Ψ(x, x^(1/a)) is asymptotic to x ρ(a), with the error bound
Ψ(x, x^(1/a)) = x ρ(a) + O(x / log x)
in big O notation.
Applications
The main purpose of the Dickman–de Bruijn function is to estimate the frequency of smooth numbers at a given size. This can be used to optimize various number-theoretical algorithms such as P-1 factoring and can be useful in its own right.
It can be shown that
ρ(u) = u^(−u(1 + o(1))),
which is related to the estimate below.
The Golomb–Dickman constant has an alternate definition in terms of the Dickman–de Bruijn function.
Estimation
A first approximation might be ρ(u) ≈ u^(−u). A better estimate is
ρ(u) ∼ (1 / (ξ √(2π u))) · exp(−u ξ + Ei(ξ)),
where Ei is the exponential integral and ξ is the positive root of
e^ξ − 1 = u ξ.
A simple upper bound is ρ(x) ≤ 1/x!.
Computation
For each interval [n − 1, n] with n an integer, there is an analytic function ρ_n such that ρ_n(u) = ρ(u) on that interval. For 0 ≤ u ≤ 1, ρ(u) = 1. For 1 ≤ u ≤ 2, ρ(u) = 1 − ln u. For 2 ≤ u ≤ 3,
ρ(u) = 1 − (1 − ln(u − 1)) ln u + Li₂(1 − u) + π²/12,
with Li₂ the dilogarithm. Other ρ_n can be calculated using infinite series.
An alternate method is computing lower and upper bounds with the trapezoidal rule; a mesh of progressively finer sizes allows for arbitrary accuracy. For high precision calculations (hundreds of digits), a recursive series expansion about the midpoints of the intervals is superior.
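A rough double-precision sketch of the grid-based approach (my own, far from the high-precision methods described above): tabulate ρ by integrating the delay differential equation u ρ′(u) = −ρ(u − 1) with the trapezoidal rule, using the fact that the delayed value ρ(u − 1) is already on the grid:

```python
def dickman_table(u_max=5.0, h=1e-4):
    """Values of the Dickman rho function on the grid u = 0, h, 2h, ..."""
    m = round(1.0 / h)            # grid points per unit interval
    n = round(u_max / h)
    rho = [1.0] * (m + 1)         # rho(u) = 1 on [0, 1]
    for i in range(m, n):
        u0, u1 = i * h, (i + 1) * h
        f0 = rho[i - m] / u0      # rho'(u0) = -rho(u0 - 1) / u0
        f1 = rho[i + 1 - m] / u1  # delayed value, already computed
        rho.append(rho[i] - 0.5 * h * (f0 + f1))
    return rho

rho, m = dickman_table(), round(1.0 / 1e-4)
print(rho[2 * m])   # ~ 0.30685 = 1 - ln 2
print(rho[3 * m])   # ~ 0.04861
```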
Extension
Friedlander defines a two-dimensional analog of . This function is used to estimate a function similar to de Bru |
https://en.wikipedia.org/wiki/Breyer%20Animal%20Creations | Breyer Animal Creations (commonly referred to as simply Breyer) is primarily a manufacturer of model horses. Founded in 1950, the company, now a division of Reeves International, Inc, specializes in model horses made from cellulose acetate, a form of plastic, and produces other animal models from the same material as well. Less well known are its porcelain horse figures, which are aimed at the adult collector market. The company also produces model tack accessories and horse-related structures, such as stables, barns, and grooming implements in scale to its model horses.
History
Breyer Animal Creations was founded in 1950 in Chicago, Illinois, as Breyer Molding Company. It gained recognition when commissioned by F.W. Woolworth to create a horse statue (now known as the #57 Western Horse) to adorn a mantel clock. The statue was approximately 1:9 scale, and the model was retained as payment for molding the parts. Orders began to roll in for the horse only, and the Breyer Animal Creations company was founded. Since then, Breyer has become a leader in producing model horses.
In 1984, Reeves International acquired Breyer Animal Creations and spent the next 20 years completing its transformation from toy distribution to manufacturing. Model horses are sold through independent distributors and the Breyer website.
While Breyer products were originally manufactured in the United States, production now takes place in China.
Production processes
Each horse is cast in a two to three piece mold. Both halves are then put together and the seams are sanded and polished. Markings and color patterns are usually obtained by using a stencil known as a mask, although most older models were airbrushed by hand, with markings such as undefined socks or a bald face merely left unpainted. Most detailing, such as eye-whites (common on 1950s and 1960s models and is now enjoying a resurgence in modern models), brands, or other individual markings are painstakingly hand-painted. Sometimes, a variatio |
https://en.wikipedia.org/wiki/ISPConfig | ISPConfig is a widely used open source hosting control panel for Linux, licensed under BSD license and developed by the company ISPConfig UG. The ISPConfig project was started in autumn 2005 by Till Brehm from the German company projektfarm GmbH.
Overview
Using the dashboard, administrators have the ability to manage websites, email addresses, MySQL and MariaDB databases, FTP accounts, Shell accounts and DNS records through a web-based interface. The software has 4 login levels: administrator, reseller, client, and email-user, each with a different set of permissions.
Operating Systems
ISPConfig is only available on Linux, with CentOS, Debian, and Ubuntu being among the supported distributions.
Features
The following services and features are supported:
Manage single or multiple servers from one control panel.
Web server management for Apache HTTP Server and Nginx.
Mail server management (with virtual mail users) with spam and antivirus filter using Postfix (software) and Dovecot (software).
DNS server management (BIND, PowerDNS).
Configuration mirroring and clusters.
Administrator, reseller, client and mail-user login.
Virtual server management for OpenVZ Servers.
Website statistics using Webalizer and AWStats
See feature list for a more detailed list.
See also
Web hosting control panel
Comparison of web hosting control panels |
https://en.wikipedia.org/wiki/Bubble%20raft | A bubble raft is an array of bubbles. It demonstrates materials' microstructural and atomic length-scale behavior by modelling the {111} plane of a close-packed crystal. A material's observable and measurable mechanical properties strongly depend on its atomic and microstructural configuration and characteristics. This fact is intentionally ignored in continuum mechanics, which assumes a material to have no underlying microstructure and to be uniform and semi-infinite throughout.
Bubble rafts assemble bubbles on a water surface, often with the help of amphiphilic soaps. These assembled bubbles act like atoms, diffusing, slipping, ripening, straining, and otherwise deforming in a way that models the behavior of the {111} plane of a close-packed crystal. The ideal (lowest energy) state of the assembly would undoubtedly be a perfectly regular single crystal, but just as in metals, the bubbles often form defects, grain boundaries, and multiple crystals.
History of bubble rafts
The concept of bubble raft modelling was first presented in 1947 by Nobel Laureate Sir William Lawrence Bragg and John Nye of Cambridge University's Cavendish Laboratory in Proceedings of the Royal Society A. Legend claims that Bragg conceived of bubble raft models while pouring oil into his lawn mower. He noticed that bubbles on the surface of the oil assembled into rafts resembling the {111} plane of close-packed crystals. Nye and Bragg later presented a method of generating and controlling bubbles on the surface of a glycerine-water-oleic acid-triethanolamine solution, in assemblies of 100,000 or more sub-millimeter sized bubbles. In their paper, they go on at length about the microstructural phenomena observed in bubble rafts and hypothesized in metals.
Dynamics
Bubble rafts exhibit complex dynamics, as illustrated in the video. This is triggered by rupture of a first bubble, driven by thermal fluctuations and a cascade of subsequent bursting bubbles, which can give rise to self-organized |
https://en.wikipedia.org/wiki/Behavioral%20script | In the behaviorism approach to psychology, behavioral scripts are a sequence of expected behaviors for a given situation. Scripts include default standards for the actors, props, setting, and sequence of events that are expected to occur in a particular situation. The classic script example involves an individual dining at a restaurant. This script has several components: props including tables, menus, food, and money, as well as roles including customers, servers, chefs, and a cashier. The sequence of expected events for this script begins with a hungry customer entering the restaurant, ordering, eating, and paying, and ends with the customer exiting. People continually follow scripts, which are acquired through habit, practice and simple routine. Following a script can be useful because it saves the time and mental effort of deciding on appropriate behavior each time a situation is encountered.
Psychology
Semantic memory builds schemas and scripts. With this, semantic memory is known as the knowledge that people gain from experiencing events in the everyday world. This information is then organized into a concept that people can understand in their own way. Semantic memory relates to scripts because scripts are made through the knowledge that one gains through these everyday experiences and habituation.
There have been many empirical research studies conducted in order to test the validity of the script theory. One such study, conducted by Bower, Black, and Turner in 1979, asked participants to read 18 different scenarios, all of which represented a doctor’s office script. The participants were later asked to complete either a recall task or a recognition task. In the recall task, the participants were asked to remember as much as they could about each scenario. Here, the participants tended to recall certain parts of the stories that were not actually present, but that were parts of the scripts that the stories represented. In the recognitio |
https://en.wikipedia.org/wiki/Photoinhibition | Photoinhibition is light-induced reduction in the photosynthetic capacity of a plant, alga, or cyanobacterium. Photosystem II (PSII) is more sensitive to light than the rest of the photosynthetic machinery, and most researchers define the term as light-induced damage to PSII. In living organisms, photoinhibited PSII centres are continuously repaired via degradation and synthesis of the D1 protein of the photosynthetic reaction center of PSII. Photoinhibition is also used in a wider sense, as dynamic photoinhibition, to describe all reactions that decrease the efficiency of photosynthesis when plants are exposed to light.
History
The first measurements of photoinhibition were published in 1956 by Bessel Kok. Even in the very first studies, it was obvious that plants have a repair mechanism that continuously repairs photoinhibitory damage. In 1966, Jones and Kok measured the action spectrum of photoinhibition and found that ultraviolet light is highly photoinhibitory. The visible-light part of the action spectrum was found to have a peak in the red-light region, suggesting that chlorophylls act as photoreceptors of photoinhibition. In the 1980s, photoinhibition became a popular topic in photosynthesis research, and the concept of a damaging reaction counteracted by a repair process was re-invented. Research was stimulated by a paper by Kyle, Ohad and Arntzen in 1984, showing that photoinhibition is accompanied by selective loss of a 32-kDa protein, later identified as the PSII reaction center protein D1. The photosensitivity of PSII from which the oxygen evolving complex had been inactivated with chemical treatment was studied in the 1980s and early 1990s. A paper by Imre Vass and colleagues in 1992 described the acceptor-side mechanism of photoinhibition. Measurements of production of singlet oxygen by photoinhibited PSII provided further evidence for an acceptor-side-type mechanism. The concept of a repair cycle that continuously repairs photoinhibitory damage, evo |
https://en.wikipedia.org/wiki/Generic%20Model%20Organism%20Database | The Generic Model Organism Database (GMOD) project provides biological research communities with a toolkit of open-source software components for visualizing, annotating, managing, and storing biological data. The GMOD project is funded by the United States National Institutes of Health, National Science Foundation and the USDA Agricultural Research Service.
History
The GMOD project was started in the early 2000s as a collaboration between several model organism databases (MODs) who shared a need to create similar software tools for processing data from sequencing projects. MODs, or organism-specific databases, describe genome and other information about important experimental organisms in the life sciences and capture the large volumes of data and information being generated by modern biology. Rather than each group designing their own software, four major MODs (FlyBase, Saccharomyces Genome Database, Mouse Genome Database, and WormBase) worked together to create applications that provide functionality needed by all MODs, such as software to help manage the data within the MOD, and to help users access and query the data.
The GMOD project works to keep software components interoperable. To this end, many of the tools use a common input/output file format or run off a Chado schema database.
Chado database schema
The Chado schema aims to cover many of the classes of data frequently used by modern biologists, from genetic data to phylogenetic trees to publications to organisms to microarray data to IDs to RNA/protein expression. Chado makes extensive use of controlled vocabularies to type all entities in the database; for example: genes, transcripts, exons, transposable elements, etc., are stored in a feature table, with the type provided by Sequence Ontology. When a new type is added to the Sequence Ontology, the feature table requires no modification, only an update of the data in the database. The same is largely true of analysis data that can be stored in Chado |
https://en.wikipedia.org/wiki/Convention%20for%20the%20Protection%20of%20Individuals%20with%20Regard%20to%20Automatic%20Processing%20of%20Personal%20Data | The Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data is a 1981 Council of Europe treaty that protects the right to privacy of individuals, taking account of the increasing flow across frontiers of personal data undergoing automatic processing.
All members of the Council of Europe have ratified the treaty. Being non–Council of Europe states, Argentina, Cabo Verde, Mauritius, Mexico, Morocco, Senegal, Tunisia, and Uruguay have acceded to the treaty.
Since 1985, this data protection convention has been updated, and a new instrument on artificial intelligence has been added. The Council of Europe approved a proposed modernization of the agreement in 2018. The modernization included an obligation to report data breaches when they occur, additional accountability for those who store data, and new rights for individuals with regard to algorithmic decision-making.
See also
General Data Protection Regulation
Directive 95/46/EC on the protection of personal data
Data privacy
Data Privacy Day
Information privacy
List of Council of Europe treaties |
https://en.wikipedia.org/wiki/Non-exact%20solutions%20in%20general%20relativity | Non-exact solutions in general relativity are solutions of Albert Einstein's field equations of general relativity which hold only approximately. These solutions are typically found by treating the gravitational field, g, as a background space-time, γ, (which is usually an exact solution) plus some small perturbation, h. Then one is able to solve the Einstein field equations as a series in h, dropping higher order terms for simplicity.
A common example of this method results in the linearised Einstein field equations. In this case we expand the full space-time metric about the flat Minkowski metric, η_μν:
g_μν = η_μν + h_μν,   |h_μν| ≪ 1,
and drop all terms which are of second or higher order in h.
See also
Exact solutions in general relativity
Linearized gravity
Post-Newtonian expansion
Parameterized post-Newtonian formalism
Numerical relativity |
https://en.wikipedia.org/wiki/Velocity%20Micro | Velocity Micro is a privately held boutique computer manufacturer located in Richmond, Virginia (USA), specializing in custom high-performance gaming computers, professional workstations, and high-performance computer solutions. Its extended product line includes gaming PCs, notebooks, CAD workstations, digital media creation workstations, home and home office PCs, home entertainment media centers, Tesla-based supercomputers, and business solutions. All products are custom assembled by hand and supported at the company's headquarters.
History
Velocity Micro traces its origins to 1992 when founder Randy Copeland began designing and producing high-performance computer systems to run CAD software and other demanding applications. These computer systems were custom-built to facilitate the design process and tailored to the extreme needs of each client. Velocity Micro was officially founded in 1997 as an extension of this highly individualized, high-performance computing philosophy.
In 2001, Copeland accepted the opportunity to appear in Maximum PC's boutique roundup article entitled "Minor League, Major Performance". The quote which appeared in that February 2002 issue – "put together with the kind of care and craftsmanship the behemoth manufacturers can't offer" – propelled Velocity Micro forward and is still used by the company today.
In May 2007, Velocity Micro acquired former competing boutique builder, Overdrive PC, known for their extreme overclocking capabilities they term "HyperClocking." Since the acquisition, Velocity Micro has incorporated HyperClocking into many of its extreme gaming systems. Overdrive PC remains a separate brand under Velocity Micro ownership.
In 2010, Velocity Micro entered the eReader and tablet computer markets with the release of the first Cruz products: the Cruz Reader and the Cruz Tablet (T100). These Android-based devices featured 7" full-color screens. The Cruz Reader utilized a Resistive touchscreen, whereas the Cruz Tablet m |
https://en.wikipedia.org/wiki/Rhodanese | Rhodanese is a mitochondrial enzyme that detoxifies cyanide (CN−) by converting it to thiocyanate (SCN−, also known as "rhodanate"). In enzymatology, the common name is listed as thiosulfate sulfurtransferase (). It catalyzes the following reaction:
thiosulfate + cyanide → sulfite + thiocyanate
Structure and mechanism
This reaction takes place in two steps, which can be followed in the crystallographically determined structure of rhodanese. In the first step, thiosulfate is reduced by the thiol group on cysteine-247 to form a persulfide and a sulfite. In the second step, the persulfide reacts with cyanide to produce thiocyanate, regenerating the cysteine thiol.
Rhodanese shares an evolutionary relationship with a large family of proteins, including
Cdc25 phosphatase catalytic domain.
non-catalytic domains of eukaryotic dual-specificity MAPK-phosphatases
non-catalytic domains of yeast PTP-type MAPK-phosphatases
non-catalytic domains of yeast Ubp4, Ubp5, Ubp7
non-catalytic domains of mammalian Ubp-Y
Drosophila heat shock protein HSP-67BB
several bacterial cold-shock and phage shock proteins
plant senescence associated proteins
catalytic and non-catalytic domains of rhodanese
Rhodanese has an internal duplication. This domain is found as a single copy in other proteins, including phosphatases and ubiquitin C-terminal hydrolases.
Clinical relevance
This reaction is important for the treatment of exposure to cyanide, since the thiocyanate formed is around 1/200 as toxic. The use of thiosulfate solution as an antidote for cyanide poisoning is based on the activation of this enzymatic cycle.
Human proteins
The human mitochondrial rhodanese is TST.
The following other human genes match the "Rhodanese-like" domain on InterPro, but are not the rhodanese with its catalytic activity (see also the list of related families in #Structure and mechanism):
M-phase inducer phosphatase: CDC25A; CDC25B; CDC25C;
Dual specificity protein phos |
https://en.wikipedia.org/wiki/BIT%20predicate | In mathematics and computer science, the BIT predicate, sometimes written BIT(i, j), is a predicate that tests whether the jth bit of the number i (starting from the least significant digit) equals 1 when i is written as a binary number. Its mathematical applications include modeling the membership relation of hereditarily finite sets, and defining the adjacency relation of the Rado graph. In computer science, it is used for efficient representations of set data structures using bit vectors, in defining the private information retrieval problem from communication complexity, and in descriptive complexity theory to formulate logical descriptions of complexity classes.
History
The BIT predicate was first introduced in 1937 by Wilhelm Ackermann to define the Ackermann coding, which encodes hereditarily finite sets as natural numbers. The BIT predicate can be used to perform membership tests for the encoded sets: BIT(i, j) is true if and only if the set encoded by j is a member of the set encoded by i.
Ackermann denoted the predicate using a Fraktur font to distinguish it from the notation that he used for set membership (short for the German word for "element"). The notation BIT(i, j) and the name "the BIT predicate" come from the work of Ronald Fagin and Neil Immerman, who applied this predicate in computational complexity theory as a way to encode and decode information in the late 1980s and early 1990s.
Description and implementation
The binary representation of a number n is an expression for n as a sum of distinct powers of two,
n = Σ_i b_i 2^i,
where each bit b_i in this expression is either 0 or 1. It is commonly written in binary notation as just the sequence of these bits. Given this expansion for n, the BIT predicate BIT(n, i) is defined to equal b_i. It can be calculated from the formula
BIT(n, i) = ⌊n / 2^i⌋ mod 2,
where ⌊ ⌋ is the floor function and mod is the modulo function.
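A minimal sketch of the formula (the function name is mine); the floor/mod computation is the same as a shift-and-mask on the binary representation:

```python
def bit_predicate(n: int, i: int) -> bool:
    """True iff bit i of n (least significant = bit 0) equals 1."""
    return (n // 2**i) % 2 == 1     # equivalently: (n >> i) & 1 == 1

# Ackermann coding: 6 = 0b110 encodes the set whose members are
# encoded by 1 and 2.
print([j for j in range(4) if bit_predicate(6, j)])   # [1, 2]
```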
The BIT predicate is a primitive recursive function. As a binary relation (producing true and false values rather than 1 and 0 respectively), the BIT predicate is asymmetric: there do not exist two numbers i and j for which both BIT(i, j) and BIT(j, i) are true. |
https://en.wikipedia.org/wiki/Goal%20node%20%28computer%20science%29 | In computer science, a goal node is a node in a graph that meets defined criteria for success or termination.
Heuristical artificial intelligence algorithms, like A* and B*, attempt to reach such nodes in optimal time by estimating the distance to the goal node. The heuristic assigns the goal node itself a distance of 0 and all other nodes positive distance values. |
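A minimal sketch of the idea (the graph, edge costs and heuristic values below are invented; the relevant property is that the heuristic assigns the goal node 0 and other nodes positive estimates):

```python
import heapq

def a_star(adj, h, start, goal):
    """adj: node -> [(neighbor, cost)]; h: node -> estimated distance to goal."""
    frontier = [(h(start), 0, start)]     # entries are (g + h, g, node)
    best = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for nxt, cost in adj.get(node, []):
            if g + cost < best.get(nxt, float("inf")):
                best[nxt] = g + cost
                heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt))
    return None

adj = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)]}
h = lambda n: {"A": 2, "B": 1, "C": 0}[n]   # h(goal) = 0, positive elsewhere
print(a_star(adj, h, "A", "C"))             # 2, via A -> B -> C
```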
https://en.wikipedia.org/wiki/Map%20algebra | Map algebra is an algebra for manipulating geographic data, primarily fields. Developed by Dr. Dana Tomlin and others in the late 1970s, it is a set of primitive operations in a geographic information system (GIS) which allows one or more raster layers ("maps") of similar dimensions to produce a new raster layer (map) using mathematical or other operations such as addition, subtraction etc.
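As an illustrative sketch (not from the article; assumes numpy and invented layer values), a "local" map-algebra operation combines co-registered raster layers cell by cell to produce a new output layer:

```python
import numpy as np

elevation = np.array([[120,  95],
                      [101, 130]])        # input raster 1
slope     = np.array([[  3,  10],
                      [  4,   2]])        # input raster 2, same grid

# local operation: flag cells with elevation > 100 and slope < 5
suitable = (elevation > 100) & (slope < 5)
print(suitable.astype(int))
# [[1 0]
#  [1 1]]
```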
History
Prior to the advent of GIS, the overlay principle had developed as a method of literally superimposing different thematic maps (typically an isarithmic map or a chorochromatic map) drawn on transparent film (e.g., cellulose acetate) to see the interactions and find locations with specific combinations of characteristics. The technique was largely developed by landscape architects and city planners, starting with Warren Manning and further refined and popularized by Jaqueline Tyrwhitt, Ian McHarg and others during the 1950s and 1960s.
In the mid-1970s, landscape architecture student C. Dana Tomlin developed some of the first tools for overlay analysis in raster as part of the IMGRID project at the Harvard Laboratory for Computer Graphics and Spatial Analysis, which he eventually transformed into the Map Analysis Package (MAP), a popular raster GIS during the 1980s. While a graduate student at Yale University, Tomlin and Joseph K. Berry re-conceptualized these tools as a mathematical model, which by 1983 they were calling "map algebra." This effort was part of Tomlin's development of cartographic modeling, a technique for using these raster operations to implement the manual overlay procedures of McHarg. Although the basic operations were defined in his 1983 PhD dissertation, Tomlin had refined the principles of map algebra and cartographic modeling into their current form by 1990. Although the term cartographic modeling has not gained as wide an acceptance as synonyms such as suitability analysis, suitability modeling and multi-criteria decision making, "map algeb |
https://en.wikipedia.org/wiki/Metro%20WSIT | Web Services Interoperability Technology (WSIT) is an open-source project started by Sun Microsystems to develop the next-generation of Web service technologies. It provides interoperability between Java Web Services and Microsoft's Windows Communication Foundation (WCF).
It consists of Java programming language APIs that enable advanced WS-* features to be used in a way that is compatible with Microsoft's Windows Communication Foundation as used by .NET. The interoperability between different products is accomplished by implementing a number of Web Services specifications, like JAX-WS that provides interoperability between Java Web Services and Microsoft Windows Communication Foundation.
WSIT is currently under development as part of Eclipse Metro.
WSIT is a series of extensions to the basic SOAP protocol, and so uses JAX-WS and JAXB. It is not a new protocol such as the binary DCOM.
WSIT implements the WS-I specifications, including:
Metadata
WS-MetadataExchange
WS-Transfer
WS-Policy
Security
WS-Security
WS-SecureConversation
WS-Trust
WS-SecurityPolicy
Messaging
WS-ReliableMessaging
WS-RMPolicy
Transactions
WS-Coordination
WS-AtomicTransaction
See also
JAX-WS |
https://en.wikipedia.org/wiki/Scooter%20%28talking%20baseball%29 | Scooter was an animated character used by Fox Sports during Major League Baseball games. The character, a baseball with human facial characteristics, was voiced by Tom Kenny (best known for his work as the voice of SpongeBob SquarePants) and was designed by Fox to explain different types of pitches with the education of children in mind.
Critical reaction
Scooter debuted in the 2004 baseball season on April 16, during a game between the New York Yankees and Boston Red Sox. While Fox Sports television chairman David Hill called Scooter "really cute and really terrific," the character garnered few positive reactions otherwise, with Sports Illustrated writer John Donovan warning: "purists everywhere, grab the barf bag," and Sports Illustrated media writer Richard Deitsch using Scooter as an example of "how technology does not always help society." The Sporting News reported polling their staff with the question "What best summarizes your feelings for Scooter, FOX's talking baseball?", and 45% of respondents chose the answer "Send him to a slow, painful death." Despite the negative reactions, Scooter would still be used in televised baseball games until after the 2006 World Series.
Peter Puck comparisons
Some television historians have noticed the similarities between Scooter and Peter Puck, an animated hockey puck that was used by Hockey Night in Canada and NHL on NBC in the 1970s to explain the rules of hockey to viewers. However, Peter Puck was well loved by viewers and is often looked at with nostalgia, whereas Scooter has been met with little but derision. |
https://en.wikipedia.org/wiki/Nucellar%20embryony | Nucellar embryony (notated Nu+) is a form of seed reproduction that occurs in certain plant species, including many citrus varieties. Nucellar embryony is a type of apomixis, in which nucellar embryos are eventually formed from the nucellus tissue of the ovule, independent of meiosis and sexual reproduction. During the development of seeds in plants that possess this genetic trait, the nucellus tissue which surrounds the megagametophyte can produce nucellar cells, also termed initial cells. These additional embryos (polyembryony) are genetically identical to the parent plant, rendering them clones. By contrast, zygotic seedlings are sexually produced and inherit genetic material from both parents. Most angiosperms reproduce sexually through double fertilization. Unlike nucellar embryony, double fertilization occurs via the syngamy of sperm and egg cells, producing a triploid endosperm and a diploid zygotic embryo. In nucellar embryony, embryos are formed asexually from the nucellus tissue. Zygotic and nucellar embryos can occur in the same seed (polyembryony), and a zygotic embryo can divide to produce multiple embryos. The nucellar embryonic initial cells form, divide, and expand. Once the zygotic embryo becomes dominant, the initial cells stop dividing and expanding. Following this stage, the zygotic embryo continues to develop and the initial cells continue to develop as well, forming nucellar embryos. The nucellar embryos generally end up outcompeting the zygotic embryo, rendering the zygotic embryo dormant. The polyembryonic seed is then formed by the many adventitious embryos within the ovule. The nucellar embryos produced via apomixis inherit the mother plant's genetics, making them desirable for citrus propagation, research, and breeding.
Nucellar embryony outside of citrus varieties
Nucellar embryos have also been found in polyembryonic Mango varieties, where generally one of the embryos is zygotic and the rest |
https://en.wikipedia.org/wiki/Ionic%20atmosphere | Ionic atmosphere is a concept employed in Debye–Hückel theory which explains the electrolytic conductivity behaviour of solutions. It can be generally defined as the region around a charged entity within which entities of the opposite charge are attracted.
Asymmetry, or relaxation effect
If an electrical potential is applied to an electrolytic solution, a positive ion will move towards the negative electrode and drag along an entourage of negative ions with it. The more concentrated the solution, the closer these negative ions are to the positive ion and thus the greater the resistance experienced by the positive ion. This influence on the speed of an ion is known as the "asymmetry effect" because the ionic atmosphere moving around the ion is not symmetrical; the charge density behind is greater than in front, slowing the motion of the ion. The time required to form a new ionic atmosphere in front of the moving ion, or for the old ionic atmosphere behind it to fade away, is known as the time of relaxation. The asymmetry of the ionic atmosphere does not arise in the case of the Debye–Falkenhagen effect, owing to the high-frequency dependence of conductivity.
Electrophoretic effect
This is another factor which slows the motion of ions within a solution. It is the tendency of the applied potential to move the ionic atmosphere itself. This drags the solvent molecules along because of the attractive forces between ions and solvent molecules. As a result, the central ion at the centre of the ionic atmosphere is influenced to move towards the pole opposite its ionic atmosphere. This inclination retards its motion.
Limits to the model
The model of ionic atmosphere is less adequate for concentrated ionic solutions near saturation. These solutions as well as molten salts or ionic liquids have a structure similar to the crystalline lattice where water molecules are located between ions. |
https://en.wikipedia.org/wiki/MCEF | MCEF or Major Cdk9-interacting elongation factor is a transcription factor related to Af4. It is the fourth member of the Af4 family (AFF) of transcription factors, involved in numerous pathologies, including Acute Lymphoblastic Leukemia (ALL), abnormal CNS development, breast cancer and azoospermia.
Because it apparently interacts with the species-specific human co-factor (P-TEFb) for HIV-1 transcription, and because it can repress HIV-1 replication, MCEF (also known as AFF4 or AF5q31) may have future therapeutic uses.
MCEF was originally cloned and named by Mario Clemente Estable of Ryerson University, while he was a post-doctoral fellow in the laboratory of Robert G. Roeder, at the Rockefeller University.
See also
Transcription factors |
https://en.wikipedia.org/wiki/Pituicyte | Pituicytes are glial cells of the posterior pituitary. Their main role is to assist in the storage and release of neurohypophysial hormones.
Structure
Pituicytes are located in the pars nervosa of the posterior pituitary and interspersed with unmyelinated axons and Herring bodies. They generally stain dark purple with an H&E stain and are among the easiest structures to identify in the region. Pituicytes have an irregular and branched shape which resembles that of another type of glial cell: the astrocyte. Like astrocytes, their cytoplasm contains specific intermediate filaments made up of glial fibrillary acidic protein (GFAP).
Function
Pituicytes are similar to astrocytes, another type of glial cell. Their main role is to assist in the storage and release of hormones of the posterior pituitary. Pituicytes surround axonal endings and regulate hormone secretion by releasing their processes from these endings.
Clinical significance
Pituicytomas are rare tumors that arise from pituicytes. They may be mistaken for the much more common pituitary adenoma, as well as craniopharyngioma and meningioma. Symptoms from the mass effect of the tumor usually include vision disorders, and less often headaches, hypopituitarism (decreased function of the pituitary gland), fatigue, and decreased libido.
See also
List of distinct cell types in the adult human body |
https://en.wikipedia.org/wiki/Argentaffin | Argentaffin refers to cells which take up silver stain.
Enteroendocrine cells are sometimes also called "argentaffins", because they take up this stain. An argentaffin cell is any enteroendocrine cell, a hormone-secreting cell present throughout the digestive tract.
This is a property of melanin, and a special stain can be applied to identify these granules. The Fontana–Masson stain uses the fact that such cells can reduce silver salts to metallic silver (brownish-black in colour) without the aid of a reducing agent, which is the defining property of argentaffin cells.
Argentaffin cells
An argentaffin cell is one of the round or partly flattened cells occurring in the lining tissue of the digestive tract and containing granules thought to be of secretory function. These epithelial cells, though common throughout the digestive tract, are most concentrated in the small intestine and appendix. The cells are located randomly within the mucous membrane lining of the intestine and in tube-like depressions in that lining known as the Lieberkühn glands. Their granules contain a chemical called serotonin, which stimulates smooth muscle contractions. Functionally, it is believed that serotonin diffuses out of the argentaffin cells into the walls of the digestive tract, where neurons leading to the muscles are stimulated to produce the wavelike contractions of peristalsis. Peristaltic movements encourage the passage of food substances through the intestinal tract.
The mucosa of the bronchi contains numerous neuroendocrine cells which are the bronchial counterparts of the argentaffin cells of the alimentary canal.
Notes
Staining |
https://en.wikipedia.org/wiki/APUD%20cell | APUD cells (DNES cells) constitute a group of apparently unrelated endocrine cells, which were named by the scientist A.G.E. Pearse, who developed the APUD concept in the 1960s based on calcitonin-secreting parafollicular C cells of dog thyroid. These cells share the common function of secreting a low molecular weight polypeptide hormone. There are several different types which secrete the hormones secretin, cholecystokinin and several others. The name is derived from an acronym, referring to the following:
Amine Precursor Uptake – for high uptake of amine precursors including 5-hydroxytryptophan (5-HTP) and dihydroxyphenylalanine (DOPA).
Decarboxylase – for high content of the enzyme amino acid decarboxylase (for conversion of precursors to amines).
Cells in APUD system
Adenohypophysis
Neurons of Hypothalamus
Chief Cells of Parathyroid
Adrenal Medullary Cells
Glomus cells in Carotid Body
Melanocytes of Skin
Cells of Pineal Gland
Renin producing cells in the kidney
See also
Apudoma
Enteroendocrine cell
Neuroendocrine cell
List of human cell types derived from the germ layers |
https://en.wikipedia.org/wiki/Plasma%20speaker | Plasma speakers or ionophones are a form of loudspeaker which varies air pressure via an electrical plasma instead of a solid diaphragm. The plasma arc heats the surrounding air, causing it to expand. When the electrical signal driving the plasma, taken from the output of an audio amplifier, is varied, the size of the plasma varies in turn, modulating the expansion of the surrounding air and creating sound waves.
The plasma is typically in the form of a glow discharge and acts as a massless radiating element. The technique is a much later development of physics principles demonstrated by William Duddell's "singing arc" of 1900, and can be related to modern ion thruster spacecraft propulsion.
The term ionophone was used by Dr. Siegfried Klein who developed a plasma tweeter that was licensed for commercial production by DuKane with the Ionovac and Fane Acoustics with the Ionofane in the late 1940s and 1950s.
The effect takes advantage of several physical principles: Firstly, ionization of gases causes their electrical resistance to drop significantly, making them conductive. This resulting plasma can be made to vibrate sympathetically with alternating electric fields and magnetic fields. Secondly, the involved plasma, itself a field of ions, has a relatively negligible mass. Thus the air remains mechanically coupled with the massless plasma allowing it to radiate a potentially ideal reproduction of the sound source when the electric or magnetic field is modulated with an audio frequency signal.
Comparison to conventional loudspeakers
Conventional loudspeaker transducer designs use the input electrical audio frequency signal to vibrate a significant mass: In a dynamic loudspeaker this driver is coupled to a stiff speaker cone—a diaphragm which pushes air at audio frequencies. But the inertia inherent in its mass resists acceleration—and all changes in cone position. Additionally, speaker cones will eventually suffer tensile fatigue from the repeated shaking of sonic vibrati |
https://en.wikipedia.org/wiki/Centroacinar%20cell | Centroacinar cells are spindle-shaped cells in the exocrine pancreas. They represent an extension of the intercalated duct into each pancreatic acinus. These cells are commonly known as duct cells, and secrete an aqueous bicarbonate solution under stimulation by the hormone secretin. They also secrete mucin.
The intercalated ducts take the bicarbonate to intralobular ducts which become lobular ducts. These lobular ducts finally converge to form the main pancreatic duct.
See also
List of human cell types derived from the germ layers
List of distinct cell types in the adult human body |
https://en.wikipedia.org/wiki/Perisinusoidal%20space | The perisinusoidal space (or space of Disse) is a location in the liver between a hepatocyte and a sinusoid. It contains the blood plasma. Microvilli of hepatocytes extend into this space, allowing proteins and other plasma components from the sinusoids to be absorbed by the hepatocytes. Fenestration and discontinuity of the endothelium facilitates this transport. This space may be obliterated in liver disease, leading to decreased uptake by hepatocytes of nutrients and wastes such as bilirubin.
The perisinusoidal space also contains hepatic stellate cells (also known as Ito cells), which store fat and fat-soluble vitamins (including vitamin A). A variety of insults that cause inflammation can result in these cells transforming into myofibroblasts, resulting in collagen production, fibrosis, and cirrhosis.
The Space of Disse was named after German anatomist Joseph Disse (1852–1912). |
https://en.wikipedia.org/wiki/Snell%20Limited | Snell Limited, branded as Snell Advanced Media or SAM, was a British company that designed and developed solutions for the media production market including applications for central operations, live production, post production, playout and media management. They were headquartered in Newbury, UK.
SAM delivered agile technology across Live Production, Production, Editing & Finishing, Playout & Delivery, and Infrastructure & Image Processing, all running under enterprise-wide Management & Workflow automation.
Snell Limited, owned by the private equity firm LDC, was created from the merger of Snell & Wilcox and Pro-Bel in 2009. In March 2014 Snell was acquired by another company owned by LDC, Quantel Ltd. After LDC's replacement of Snell CEO Simon Derry with Quantel CEO Ray Cross, the process of merging the companies began. LDC had previously, in 2006, appointed Cross to replace Quantel CEO Richard Taylor. However, within a year LDC replaced Cross with new CEO Tim Thorsteinson. The company was rebranded as Snell Advanced Media in September 2015. After the rebrand, SAM continued to carry the Quantel name on its Quantel Rio, formerly Pablo Rio, line of post-production solutions.
History
Snell & Wilcox was founded by engineer Roderick Snell in 1973.
Around the turn of the 21st century, Snell & Wilcox created the SW2 and SW4 "Zone Plate" test cards for use with their TPG20 and TPG21 test pattern generators.
In 2007 Snell & Wilcox Ltd. was awarded the Queen's Award for Technological Innovation for the Kahuna multiformat SD/HD production switcher.
In April 2009 Snell & Wilcox merged with Pro-Bel. The company took the name Snell. In March 2014 it was announced that Snell had been acquired by Quantel.
After the departure of Quantel CEO Ray Cross, Tim Thorsteinson was appointed CEO in February 2015. Thorsteinson had held senior roles in the media technology industry: he was twice CEO of Grass Valley, President of the Broadcast Communications division of Harris Corp., and Presiden |
https://en.wikipedia.org/wiki/Electronic%20Yellow%20Pages | Electronic Yellow Pages are online versions of traditional printed business directories produced by telephone companies around the world. Typical functionalities of online yellow pages include alphabetical listings of businesses and the ability to search the business database by name, business type, or location. Since Electronic Yellow Pages are not limited by space considerations, they often contain far more comprehensive business information, such as vicinity maps, company profiles, product information, and more.
An advantage of Electronic Yellow Pages is that they can be updated in real time, so listed businesses are not constrained by the once-a-year publishing cycle of the printed version. This leads to greater accuracy of the listings, since contact information may change at any time.
Before the popularity of the internet, business telephone numbers in the United Kingdom could be looked up by accessing a remote computer terminal by modem. The initial prototype of this was superseded in 1990 by a commercial service. This service allowed business listings to be searched by name, business classification, and locality, and a free-text field was provided to allow "unstructured text" searching of adverts. This dialup service was available via the Prestel and "BT Gold" services. The Electronic Yellow Pages service was superseded in the mid-1990s by the internet service www.yell.com. A similar system called Phonebase, for published residential phone numbers, was discontinued in the 1990s, being superseded by a web-based search interface.
History
The first true online Yellow Pages was created by an independent YP publisher in Seattle, Washington, called Banana Pages. This was the first print directory registered with both the YPPA (Yellow Pages Publishers Association) and the ADP (Association of Directory Publishers) to place its listings online. The Yellow Pages product was the brainchild of the brothers who co-owned the company, Peter and John Richards. |
https://en.wikipedia.org/wiki/Hassall%27s%20corpuscles | Hassall's corpuscles (or thymic corpuscles or bodies) are structures found in the medulla of the human thymus, formed from eosinophilic type VI epithelial reticular cells arranged concentrically. These concentric corpuscles are composed of a central mass, consisting of one or more granular cells, and of a capsule formed of epithelioid cells. They vary in size, with diameters from 20 to more than 100 μm, and tend to grow larger with age. They can be spherical or ovoid, and their epithelial cells contain keratohyalin and bundles of cytoplasmic fibres. Later studies indicate that Hassall's corpuscles differentiate from medullary thymic epithelial cells after these lose autoimmune regulator (AIRE) expression, making them an example of thymic mimetic cells. They are named for Arthur Hill Hassall, who discovered them in 1846.
The function of Hassall's corpuscles is currently unclear, and the absence of this structure in the thymus of most murine species (except for the New Zealand White Mouse strain) has previously restricted mechanistic dissection. It is known that Hassall's corpuscles are a potent source of the cytokine TSLP. In vitro, TSLP directs the maturation of dendritic cells, and increases the ability of dendritic cells to convert naive thymocytes to a Foxp3+ regulatory T cell lineage. It is unknown if this is the physiological function of Hassall's corpuscles in vivo.
In the past decade, researchers found tissue-specific self-antigens in Hassall's corpuscles and revealed their role in the pathogenesis of diseases such as type 1 diabetes, rheumatoid arthritis, multiple sclerosis, autoimmune thyroiditis, Goodpasture's syndrome, and others. They also discovered that Hassall's corpuscles synthesize chemokines affecting different cell populations in thymic medulla. Despite this, the information on the relationship between Hassall's corpuscles with other cell types of thymic medulla (dendritic, myoid, neuroendocrine cells, thymocytes, macrophages, eosinophils, etc.) |
https://en.wikipedia.org/wiki/Schwartz%20kernel%20theorem | In mathematics, the Schwartz kernel theorem is a foundational result in the theory of generalized functions, published by Laurent Schwartz in 1952. It states, in broad terms, that the generalized functions introduced by Schwartz (Schwartz distributions) have a two-variable theory that includes all reasonable bilinear forms on the space of test functions. The space itself consists of smooth functions of compact support.
Statement of the theorem
Let X and Y be open sets in ℝ^n.
Every distribution k ∈ D′(X × Y) defines a
continuous linear map K : D(Y) → D′(X) such that
⟨Kφ, ψ⟩ = ⟨k, ψ ⊗ φ⟩   (∗)
for every φ ∈ D(Y) and ψ ∈ D(X).
Conversely, for every such continuous linear map K
there exists one and only one distribution k ∈ D′(X × Y) such that (∗) holds.
The distribution k is the kernel of the map K.
Note
Given a distribution k ∈ D′(X × Y), one can always write the linear map K informally as
(Kφ)(x) = ∫_Y k(x, y) φ(y) dy,
so that
⟨Kφ, ψ⟩ = ∫_X ∫_Y k(x, y) φ(y) ψ(x) dy dx.
Integral kernels
In the theory of integral operators, the traditional kernel functions of two variables have been expanded in scope to include their generalized-function analogues, which are allowed to be more singular in a serious way; this yields a large class of operators from the test function space D to its dual space D′ of distributions. The point of the theorem is to assert that this extended class of operators can be characterised abstractly, as containing all operators subject to a minimum continuity condition. A bilinear form on the space of test functions arises by pairing the image distribution with a test function.
A simple example is that the natural embedding of the test function space D(X) into D′(X), sending every test function φ to the corresponding distribution [φ], corresponds to the delta distribution
δ(x − y) concentrated at the diagonal of the underlying Euclidean space, in terms of the Dirac delta function δ. While this is at most an observation, it shows how the distribution theory adds to the scope. Integral operators are not so 'singular'; another way to put it is that for a continuous kernel, only compact operators are created on a space such as the continuous functions on [0, 1]. The operator is far from comp |
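As a worked instance of the pairing (∗) above, the diagonal delta kernel reproduces the identity embedding; in LaTeX notation (a standard computation, spelled out for concreteness):
\langle K\varphi, \psi \rangle
  = \int_X \int_X \delta(x - y)\,\varphi(y)\,\psi(x)\,\mathrm{d}y\,\mathrm{d}x
  = \int_X \varphi(x)\,\psi(x)\,\mathrm{d}x
  = \langle \varphi, \psi \rangle.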
https://en.wikipedia.org/wiki/Predator%20%28video%20game%29 | Predator is a 1987 side-scrolling action game based on the film of the same title, and the first game based on the franchise.
Gameplay
The player starts off with no weapons and must collect them as the game progresses.
MSX Version
The MSX version was developed by Klon and is an action-platformer. The player takes the role of Dutch Schaefer. He can use his fists, a submachine gun, or mines as weapons, with each weapon having a limited supply of ammunition. Every time the player exits a level, a map screen appears, on which they may enter levels adjacent to their current level.
NES Version
The NES version was developed by Pack-In-Video and is based loosely on the MSX version, even borrowing most of its graphics and music. The player takes on the role of "Dutch" Schaefer and must make it from the beginning to the end of each level. The player starts out with his fists, but can also collect a machine gun, a laser gun (the only weapon that can damage the Predator), and fragmentation grenades. Unlike in the MSX version, the player has infinite ammo for each weapon, can only carry one weapon at a time, and loses any carried weapon when moving from one level to the next. The laser and grenades can be used to break certain walls and ground. Some levels have two exits, one of which will warp the player ahead several levels. At the end of every few levels, a Predator will appear, which the player must neutralize before proceeding to the next level.
Exclusive to the NES version is the "Big Mode", named for its sprites, which are larger than those in the normal action stages. Here, the game takes place in an auto-scrolling environment where the screen scrolls to the right. Dutch must shoot blue and red bubbles to collect weapon power-ups, whilst avoiding touching the bubbles and taking damage. At the end of each level, he must fight the Predator in a boss battle to proceed to the next level. These levels also act |
https://en.wikipedia.org/wiki/Pasko%20Rakic | Pasko Rakic (born May 15, 1933) is a Yugoslav-born American neuroscientist, who presently works in the Yale School of Medicine Department of Neuroscience in New Haven, Connecticut. His main research interest is in the development and evolution of the human brain. He was the founder and served as Chairman of the Department of Neurobiology at Yale, and was founder and Director of the Kavli Institute for Neuroscience. He is best known for elucidating the mechanisms involved in the development and evolution of the cerebral cortex. In 2008, Rakic shared the inaugural Kavli Prize in Neuroscience. He is currently the Dorys McConell Duberg Professor of Neuroscience, leads an active research laboratory, and serves on advisory boards and scientific councils of a number of institutions and research foundations.
Early life and education
Rakic was born on May 15, 1933, in Ruma (formerly Kingdom of Yugoslavia). His father, Toma Rakić, was Croatian, originally from Pula (Istria, at that time part of Italy), but emigrated to Yugoslavia, where in the town of Novi Sad (Bačka) he studied to become an accountant and tax official. His mother, Juliana Todorić, of Serbian and Slovakian descent, was born in Dubrovnik (Dalmatia) and moved to Ruma, where the two met and married in 1929.
Due to the nature of his father's job as Director of Regional Tax Services, the family moved to a different town every few years. Eventually, their daughter, Vera, and son, Pasko, completed Gymnasium (high school) in the town of Sremska Mitrovica. Vera went on to graduate in mathematics from Belgrade University, and Pasko obtained his medical degree (MD) from the University of Belgrade School of Medicine, where he embarked on a career as a neurosurgeon.
His research career began in 1962, with a Fulbright Fellowship at Harvard University in Boston, MA, where he met professor Paul Yakovlev, who introduced him to the joy of studying human brain development, which inspired him to abandon neurosurgery. In 1966 |
https://en.wikipedia.org/wiki/International%20Union%20of%20Food%20Science%20and%20Technology | The International Union of Food Science and Technology (IUFoST) is the global scientific organization and voice for food science and technology, representing more than 300,000 food scientists, engineers and technologists through its work in more than 100 countries. It is a voluntary, non-profit association of national food science organizations. IUFoST is the only elected scientific representative of food science and technology in the International Science Council (ISC), elected by its peers across scientific disciplines. It is the only global representative of food science and technology to notable organizations such as the World Health Organization (WHO), the Food and Agriculture Organization (FAO) of the United Nations, the United Nations Development Programme (UNDP), and the Codex Alimentarius.
Background
The feasibility of establishing an international organization of food scientists and technologists dedicated to the nutritional needs of the people of the world was informally explored during the First International Congress of Food Science and Technology (1962) held in London. The President of the Congress was Lord Rank who crystallised informal discussions that had already been taking place among a number of food scientists from around the world when he stated in his presidential message: "If the potentialities of ... food science and technology are to ... culminate and nutritionally adequate, then there must be international collaboration." From the Congress emerged the International Committee of Food Science and Technology.
The work of this Committee culminated in the formal inauguration of the International Union of Food Science and Technology during the Third International Congress of Food Science and Technology convened in 1970 in Washington, DC, USA. The 1970 meeting in Washington, DC, USA was referred to as "SOS/70" with SOS referring to Science and Survival.
NATO's Involvement in the Conception of IUFoST
In 1960, several British scientific societies and the |
https://en.wikipedia.org/wiki/Five-hundred-meter%20Aperture%20Spherical%20Telescope | The Five-hundred-meter Aperture Spherical radio Telescope (FAST), nicknamed Tianyan (lit. "Sky's/Heaven's Eye"), is a radio telescope located in the Dawodang depression, a natural basin in Pingtang County, Guizhou, southwest China. FAST has a 500 m diameter dish constructed in a natural depression in the landscape. It is the world's largest filled-aperture radio telescope and the second-largest single-dish aperture, after the sparsely filled RATAN-600 in Russia.
It has a novel design, using an active surface made of 4,500 metal panels which form a moving parabola shape in real time. The cabin containing the feed antenna, suspended on cables above the dish, can move automatically by using winches to steer the instrument to receive signals from different directions. It observes at wavelengths of 10 cm to 4.3 m.
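For reference, the quoted wavelength range maps directly to an observing frequency band via f = c/λ; a minimal Python sketch, added here for illustration:
C = 299_792_458.0  # speed of light in m/s

def wavelength_to_frequency(wavelength_m):
    """Convert a free-space wavelength in metres to a frequency in hertz."""
    return C / wavelength_m

# FAST's stated observing range: 10 cm to 4.3 m
for lam in (0.10, 4.3):
    print(f"{lam} m -> {wavelength_to_frequency(lam) / 1e6:.1f} MHz")
# about 3 GHz down to roughly 70 MHz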
Construction of FAST began in 2011. It observed first light in September 2016. After three years of testing and commissioning, it was declared fully operational on 11 January 2020.
The telescope made its first discovery, of two new pulsars, in August 2017. The new pulsars, PSR J1859-01 and PSR J1931-02, also referred to as FAST pulsar #1 and #2 (FP1 and FP2), were detected on 22 and 25 August 2017; they are 16,000 and 4,100 light-years away, respectively. Parkes Observatory in Australia independently confirmed the discoveries on 10 September 2017. By September 2018, FAST had discovered 44 new pulsars, and by 2021, 500.
History
The telescope was first proposed in 1994. The project was approved by the National Development and Reform Commission (NDRC) in July 2007. A 65-person village was relocated from the valley to make room for the telescope and an additional 9,110 people living within a 5 km radius of the telescope were relocated to create a radio-quiet area. The Chinese government spent around $269 million in poverty relief funds and bank loans for the relocation of the local residents, while the construction of the telescope itself cost .
On 26 Decembe |
https://en.wikipedia.org/wiki/Diastasis%20%28pathology%29 | In pathology, diastasis is the separation of parts of the body that are normally joined, such as the separation of certain abdominal muscles during pregnancy, or of adjacent bones without fracture.
See also
Diastasis recti
Diastasis symphysis pubis
Compare with:
Diastasis (physiology) |
https://en.wikipedia.org/wiki/Lib%20Sh | Sh was an early metaprogramming language for programmable GPUs. It offered a general-purpose programming language, following a stream-processing model. Programs written in Sh could either run on CPUs or GPUs, obviating the need to write programs in a mix of two programming languages as was the case with earlier GPU programming systems such as Cg or HLSL.
As of August 2006, it is no longer maintained. RapidMind Inc. was formed to commercialize the research behind Sh; RapidMind was then bought by Intel, and Sh development ceased as well.
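The following toy Python sketch illustrates the general metaprogramming idea behind such systems, recording operations into a program object that can later be run on different backends; it is purely illustrative, and none of these names correspond to Sh's actual C++ API:
class Program:
    """Toy staged program: records operations now, runs them on a backend later."""
    def __init__(self):
        self.ops = []  # recorded (name, args) pairs

    def add(self, name, *args):
        self.ops.append((name, args))
        return self

    def run(self, backend, stream):
        # A real system would compile the recorded ops into GPU code;
        # here we simply interpret them element-wise on the CPU.
        out = []
        for x in stream:
            for name, args in self.ops:
                x = backend[name](x, *args)
            out.append(x)
        return out

cpu_backend = {"scale": lambda x, k: x * k,
               "offset": lambda x, k: x + k}

prog = Program().add("scale", 2.0).add("offset", 1.0)
print(prog.run(cpu_backend, [1.0, 2.0, 3.0]))  # [3.0, 5.0, 7.0]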
See also
BrookGPU
CUDA
Close to Metal
OpenCL
RapidMind |
https://en.wikipedia.org/wiki/Self-reconfiguring%20modular%20robot | Modular self-reconfiguring robotic systems or self-reconfigurable modular robots are autonomous kinematic machines with variable morphology. Beyond conventional actuation, sensing and control typically found in fixed-morphology robots, self-reconfiguring robots are also able to deliberately change their own shape by rearranging the connectivity of their parts, in order to adapt to new circumstances, perform new tasks, or recover from damage.
For example, a robot made of such components could assume a worm-like shape to move through a narrow pipe, reassemble into something with spider-like legs to cross uneven terrain, then form a third arbitrary object (like a ball or wheel that can spin itself) to move quickly over a fairly flat terrain; it can also be used for making "fixed" objects, such as walls, shelters, or buildings.
In some cases this involves each module having two or more connectors for attaching several modules together. Modules can contain electronics, sensors, computer processors, memory and power supplies; they can also contain actuators that are used for manipulating their location in the environment and in relation to each other. A feature found in some cases is the ability of the modules to automatically connect and disconnect themselves to and from each other, and to form into many objects or perform many tasks by moving or manipulating the environment.
By saying "self-reconfiguring" or "self-reconfigurable" it means that the mechanism or device is capable of utilizing its own system of control such as with actuators or stochastic means to change its overall structural shape. Having the quality of being "modular" in "self-reconfiguring modular robotics" is to say that the same module or set of modules can be added to or removed from the system, as opposed to being generically "modularized" in the broader sense. The underlying intent is to have an indefinite number of identical modules, or a finite and relatively small set of identical modules, in a mesh or m |
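A minimal way to model reconfiguration in software is to treat modules as nodes in a connectivity graph whose edges are made and broken as the robot changes shape; the Python sketch below is illustrative only and assumes nothing about any particular hardware platform:
class ModularRobot:
    """Modules as graph nodes; docking and undocking edit the edge set."""
    def __init__(self):
        self.links = {}  # module id -> set of docked neighbours

    def add_module(self, m):
        self.links.setdefault(m, set())

    def connect(self, a, b):
        self.links[a].add(b)
        self.links[b].add(a)

    def disconnect(self, a, b):
        self.links[a].discard(b)
        self.links[b].discard(a)

# Reconfigure a three-module chain (worm) into a closed loop:
robot = ModularRobot()
for m in ("m0", "m1", "m2"):
    robot.add_module(m)
robot.connect("m0", "m1")
robot.connect("m1", "m2")  # chain: m0-m1-m2
robot.connect("m2", "m0")  # close the loop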
https://en.wikipedia.org/wiki/Verilog-A | Verilog-A is an industry standard modeling language for analog circuits. It is the continuous-time subset of Verilog-AMS. A few commercial applications may export MEMS designs in Verilog-A format.
History
Verilog-A was created out of a need to standardize the Spectre behavioral language in face of competition from VHDL (an IEEE standard), which was absorbing analog capability from other languages (e.g. MAST). Open Verilog International (OVI, the body that originally standardized Verilog) agreed to support the standardization, provided that it was part of a plan to create Verilog-AMS — a single language covering both analog and digital design. Verilog-A was an all-analog subset of Verilog-AMS that was the first phase of the project.
There was considerable delay (possibly procrastination) between the first Verilog-A language reference manual and the full Verilog-AMS, and in that time Verilog moved to the IEEE, leaving Verilog-AMS behind at Accellera.
The email log from AD 2000 can be found here.
Standard Availability
The Verilog-A standard does not exist stand-alone; it is part of the complete Verilog-AMS standard. Its LRM is available at the Accellera website. However, the initial and subsequent releases can be found here, with what will probably be the final release here, since future work will leverage the new net-type capabilities in SystemVerilog. Built-in types like "wreal" in Verilog-AMS will become user-defined types in SystemVerilog, more in line with the VHDL methodology.
Compatibility with the C programming language
A subset of Verilog-A can be translated automatically to the C programming language using the Automatic Device Model Synthesizer (ADMS). This feature is used, for example, to translate the BSIM Verilog-A transistor models, which are no longer released in C, for use in simulators like ngspice.
Code example
This example gives a first demonstration of modeling in Verilog-A (the original listing breaks off after the module header; the body below is a minimal assumed completion, a simple linear resistor):
`include "constants.vams"
`include "disciplines.vams"
module example(p, n);
  inout p, n;  electrical p, n;  // two-terminal electrical ports
  analog V(p, n) <+ 1k * I(p, n);  // branch contribution: V = R*I with R = 1 kOhm
endmodule |
https://en.wikipedia.org/wiki/Rubens%20tube | A Rubens tube, also known as a standing wave flame tube, or simply flame tube, is a physics apparatus for demonstrating acoustic standing waves in a tube. Invented by German physicist Heinrich Rubens in 1905, it graphically shows the relationship between sound waves and sound pressure, as a primitive oscilloscope. Today, it is used only occasionally, typically as a demonstration in physics education.
Overview
A length of pipe is perforated along the top and sealed at both ends - one seal is attached to a small speaker or frequency generator, the other to a supply of a flammable gas (propane tank). The pipe is filled with the gas, and the gas leaking from the perforations is lit. If a suitable constant frequency is used, a standing wave can form within the tube. When the speaker is turned on, the standing wave will create points with oscillating (higher and lower) pressure and points with constant pressure (pressure nodes) along the tube. Where there is oscillating pressure due to the sound waves, less gas will escape from the perforations in the tube, and the flames will be lower at those points. At the pressure nodes, the flames are higher. At the end of the tube gas molecule velocity is zero and oscillating pressure is maximal, thus low flames are observed. It is possible to determine the wavelength from the flame minimum and maximum by simply measuring with a ruler.
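For example, since adjacent flame minima sit at pressure antinodes half a wavelength apart, a ruler measurement plus the drive frequency yields the speed of sound in the gas; a minimal Python sketch with made-up illustrative readings:
def speed_of_sound(frequency_hz, minima_spacing_m):
    """Adjacent flame minima are half a wavelength apart."""
    wavelength = 2.0 * minima_spacing_m
    return frequency_hz * wavelength

# Hypothetical readings: a 440 Hz drive tone with flame minima every 0.29 m
print(speed_of_sound(440.0, 0.29))  # ~255 m/s, plausible for propane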
Explanation
Since the time-averaged pressure is equal at all points of the tube, it is not straightforward to explain the different flame heights. The flame height is proportional to the gas flow, as shown in the figure. Based on Bernoulli's principle, the gas flow is proportional to the square root of the pressure difference between the inside and outside of the tube; this is shown in the figure for a tube without a standing sound wave. Based on this argument, the flame height depends non-linearly on the local, time-dependent pressure. The time average of the flow is reduced at the points with osci |
https://en.wikipedia.org/wiki/Natural%20fertility | Natural fertility is the fertility that exists without birth control. "Control" here means parity-specific control: behaviour that is modified once the number of children born to the parents approaches their desired maximum. Natural fertility tends to decrease as a society modernizes. Women in a pre-modernized society have typically given birth to a large number of children by the time they are 50 years old, while women in a post-modernized society bear only a small number by the same age. However, during modernization natural fertility rises, before family planning is practiced.
Historical populations have traditionally honored the idea of natural fertility by displaying fertility symbols.
Birth control
Natural fertility is a concept developed by the French historical demographer Louis Henry to refer to the level of fertility that would prevail in a population that makes no conscious effort to limit, regulate, or control fertility, so that fertility depends only on physiological factors affecting fecundity. In contrast, populations that practice birth control will have lower fertility levels as a result of delaying first births (a lengthened interval between menarche and first pregnancy), extended intervals between births, or stopping child-bearing at a certain age. Such control does not assume the use of artificial means of fertility regulation or modern contraceptive methods but can result from the use of traditional means of contraception or pregnancy prevention (e.g., coitus interruptus). Many social norms or practices affect fertility regulation including celibacy, the age at marriage and the timing and frequency of sexual intercourse, including periods of prescribed sexual abstinence. Breastfeeding has also been used to space births in areas without birth control. Ansley Coale and other demographers have developed several methods for measuring the extent of such fertility control, in which the idea of a natural level of fertility is an essential component.
When women have access to birth |
https://en.wikipedia.org/wiki/CDC%20160%20series | The CDC 160 series was a series of minicomputers built by Control Data Corporation. The CDC 160 and CDC 160-A were 12-bit minicomputers built from 1960 to 1965; the CDC 160G was a 13-bit minicomputer, with an extended version of the CDC 160-A instruction set, and a compatibility mode in which it did not use the 13th bit. The 160 was designed by Seymour Cray - reportedly over a long three-day weekend.
It fit into the desk where its operator sat.
The 160 architecture uses ones' complement arithmetic with end-around carry.
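In ones' complement addition, a carry out of the most significant bit is wrapped around and added back into the least significant bit; a minimal 12-bit sketch in Python:
MASK = 0o7777  # twelve bits, all ones (also the ones'-complement pattern for negative zero)

def oc_add(a, b):
    """Add two 12-bit ones'-complement words with end-around carry."""
    s = a + b
    if s > MASK:            # carry out of bit 11...
        s = (s & MASK) + 1  # ...wraps back into bit 0
    return s & MASK

def oc_neg(a):
    """Negation is the bitwise complement in ones' complement."""
    return a ^ MASK

print(oct(oc_add(5, oc_neg(3))))  # 5 + (-3) = 2, printed as 0o2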
NCR joint-marketed the 160-A under its own name for several years in the 1960s.
Overview
A publishing company that purchased a CDC 160-A described it as "a single user machine with no batch processing capability. Programmers and/or users would go to the computer room, sit at the console, load the paper tape bootstrap and start up a program."
The CDC 160-A was a simple piece of hardware, and yet provided a variety of features which were scaled-down capabilities found only on larger systems. It was therefore an ideal platform for introducing neophyte programmers to the sophisticated concepts of low-level input/output (I/O) and interrupt systems.
All 160 systems had a paper-tape reader and a punch, and most had an IBM Electric typewriter modified to act as a computer terminal. Memory on the 160 was 4096 12-bit words. The CPU had a 12-bit ones' complement accumulator but no multiply or divide. There was a full complement of instructions and several addressing modes. Indirect addressing was almost as good as index registers. The instruction set supported both relative (to the current P register) and absolute addressing. The original instruction set did not have a subroutine call instruction and could only address one bank of memory.
In the 160-A model, a "return jump" and a memory bank-switch instruction were added. Return-jump allowed simple subroutine calls, and bank switching allowed other 4K banks of memory to be addressed, albeit clumsily, up to a |
https://en.wikipedia.org/wiki/Picture%20language | In formal language theory, a picture language is a set of pictures, where a picture is a 2D array of characters over some alphabet.
For example, one can define the language of rectangles composed of a single character a. This language contains pictures such as:
The study of picture languages was initially motivated by the problems of pattern recognition and image processing, but two-dimensional patterns also appear in the study of cellular automata and other parallel computing models. Some formal systems have been created to define picture languages, such as array grammars and tiling systems. |
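A picture over the one-letter alphabet {a} belongs to this rectangle language exactly when it is a non-empty rectangular array filled with a; a minimal membership check in Python:
def is_rectangle_of(picture, ch="a"):
    """True iff the picture is a non-empty m-by-n array of the character ch."""
    if not picture or not picture[0]:
        return False
    width = len(picture[0])
    return all(len(row) == width and set(row) == {ch} for row in picture)

print(is_rectangle_of(["aaa", "aaa"]))  # True: a 2 x 3 picture
print(is_rectangle_of(["aaa", "aba"]))  # False: wrong character
print(is_rectangle_of(["aa", "aaa"]))   # False: ragged rows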
https://en.wikipedia.org/wiki/Orthopterists%27%20Society | The Orthopterists' Society (formerly the Pan American Acridological Society) is an international scientific organization devoted to facilitating communication and research among persons interested in Orthoptera and related organisms. (The Orthoptera include grasshoppers, locusts, crickets, katydids and other insects.) The Society currently has 330 members from 43 countries on six continents. The Society publishes the refereed biannual Journal of Orthoptera Research, which carries papers on all aspects of the biology of these insects, from ecology and taxonomy to physiology, endocrinology, cytogenetics, and control measures.
The Society was founded in 1976 by some 35 orthopterists who met at San Martín de los Andes, Argentina. Its Constitution and Bylaws were adopted in 1977, and it was accorded tax-exempt status by the United States government in 1978. The meetings held since San Martín have been at Bozeman (US), Maracay, (Venezuela), Saskatoon (Canada), Valsaín (Spain), Hilo (US), Cairns (Australia), Montpellier (France), and Canmore (Canada).
Graduate students and young researchers can apply for the Orthopterists' Society Research Fund.
External links
The Orthopterists' Society
Journal of Orthoptera Research
Entomological societies
Orthoptera |
https://en.wikipedia.org/wiki/Stall%20%28engine%29 | A stall is the slowing or stopping of a process and in the case of an engine refers to a sudden stopping of the engine turning, usually brought about accidentally.
It is commonly applied to the phenomenon whereby an engine abruptly ceases operating and stops turning. It might be due to the engine not getting enough air, fuel, or electric spark; to fuel starvation; to a mechanical failure; or to a sudden increase in engine load. This increase in engine load is common in vehicles with a manual transmission when the clutch is released too suddenly.
The ways in which a car can stall are usually down to the driver, especially with a manual transmission. For instance, if a driver takes their foot off the clutch too quickly while stationary then the car will stall; taking the foot off the clutch slowly will stop this from happening. Stalling also happens when the driver forgets to depress the clutch and/or change to neutral while coming to a stop. Stalling can be dangerous, especially in heavy traffic.
A car fitted with an automatic transmission can also have its engine stalled when the vehicle is travelling in the direction opposite to the selected gear. For example, if the selector is in the 'D' position and the car is moving backwards (on a hill steep enough to overcome the torque from the torque converter), the engine will stall, because it is being forced to turn opposite to its running direction: if the car rolls backward fast enough, the force from the rotating wheels is transmitted back through the transmission and acts as a sudden load on the engine.
Digital electronic fuel injection and ECU-controlled ignition systems have greatly reduced stalling in modern engines.
See also
Stall torque |
https://en.wikipedia.org/wiki/Amanita%20calyptroderma | Amanita calyptroderma, also known as coccora, coccoli or the Pacific amanita, is a white-spored mushroom that fruits naturally in the coastal forests of the western United States during the fall, winter, and spring.
Description
This mushroom's cap is about 10–25 cm in diameter, usually orange-brown in color (but sometimes white), and partially covered by a thick white patch of universal veil. It has white, close gills. Its cream-colored stalk is about 10–20 cm in length and 2–4 cm in width, adorned with a partial veil. It has a partially hollow stem (filled with a stringy white pith), and a large, sacklike volva at the base of the stalk.
The spores of this species, which are white, do not change color when placed in a solution of Melzer's reagent, and thus are termed inamyloid. This characteristic, in combination with the skirt-like annulus and the absence of a bulb at the base of the stalk, places this mushroom in the section Caesareae.
Distribution and habitat
This mushroom occurs in conifer forests, forming mycorrhizae with madrone (Arbutus menziesii) in the southern part of its range (Central California northwards to Washington). However, in the northern part of its range (Washington to southern Canada), its preferred host is Douglas fir (Pseudotsuga menziesii).
Edibility
Experienced mushroom hunters regard this mushroom as a good edible species, but caution must be exercised when collecting A. calyptroderma for the table, since it can be confused with other species in the genus Amanita. This genus contains some of the deadliest mushrooms in the world, most notably A. phalloides and A. ocreata.
Similar species
Amanita vernicoccora is a closely related edible species, which fruits in hilly or mountainous areas from late winter to spring. Otherwise similar in appearance, its cap is yellow. A. caesarea is also related and edible.
The deadly poisonous A. phalloides is similar in appearance.
See also
List of Amanita species |
https://en.wikipedia.org/wiki/Honest%20leftmost%20branch | In set theory, an honest leftmost branch of a tree T on ω × γ is a branch (maximal chain) ƒ ∈ [T] such that for each branch g ∈ [T], one has ∀ n ∈ ω : ƒ(n) ≤ g(n). Here, [T] denotes the set of branches of maximal length of T, ω is the smallest infinite ordinal (represented by the natural numbers N), and γ is some other ordinal.
See also
scale (computing)
Suslin set |
https://en.wikipedia.org/wiki/Diagram%20%28category%20theory%29 | In category theory, a branch of mathematics, a diagram is the categorical analogue of an indexed family in set theory. The primary difference is that in the categorical setting one has morphisms that also need indexing. An indexed family of sets is a collection of sets, indexed by a fixed set; equivalently, a function from a fixed index set to the class of sets. A diagram is a collection of objects and morphisms, indexed by a fixed category; equivalently, a functor from a fixed index category to some category.
The universal functor of a diagram is the diagonal functor; its right adjoint is the limit of the diagram and its left adjoint is the colimit. The natural transformation from the diagonal functor to some arbitrary diagram is called a cone.
Definition
Formally, a diagram of type J in a category C is a (covariant) functor D : J → C.
The category J is called the index category or the scheme of the diagram D; the functor is sometimes called a J-shaped diagram. The actual objects and morphisms in J are largely irrelevant; only the way in which they are interrelated matters. The diagram D is thought of as indexing a collection of objects and morphisms in C patterned on J.
Although, technically, there is no difference between an individual diagram and a functor or between a scheme and a category, the change in terminology reflects a change in perspective, just as in the set theoretic case: one fixes the index category, and allows the functor (and, secondarily, the target category) to vary.
One is most often interested in the case where the scheme J is a small or even finite category. A diagram is said to be small or finite whenever J is.
A morphism of diagrams of type J in a category C is a natural transformation between functors. One can then interpret the category of diagrams of type J in C as the functor category CJ, and a diagram is then an object in this category.
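For instance, if J is the arrow category 0 → 1, then a diagram of type J in C is simply a morphism of C, and a morphism of diagrams is a commuting square; in LaTeX notation,
f : A \to B, \qquad g : A' \to B', \qquad
(a : A \to A',\; b : B \to B') \quad \text{with} \quad g \circ a = b \circ f.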
Examples
Given any object A in C, one has the constant diagram, which is the diagram that maps all o |
https://en.wikipedia.org/wiki/Suslin%20cardinal | In mathematics, a cardinal λ < Θ is a Suslin cardinal if there exists a set P ⊂ 2ω such that P is λ-Suslin but P is not λ'-Suslin for any λ' < λ. It is named after the Russian mathematician
Mikhail Yakovlevich Suslin (1894–1919).
See also
Suslin representation
Suslin line
AD+ |
https://en.wikipedia.org/wiki/Leaf%20angle%20distribution | The leaf angle distribution (or LAD) of a plant canopy refers to the mathematical description of the angular orientation of the leaves in the vegetation. Specifically, if each leaf is conceptually represented by a small flat plate, its orientation can be described with the zenith and the azimuth angles of the surface normal to that plate. If the leaf has a complex structure and is not flat, it may be necessary to approximate the actual leaf by a set of small plates, in which case there may be a number of leaf normals and associated angles. The LAD describes the statistical distribution of these angles.
Examples of leaf angle distributions
Different plant canopies exhibit different LADs: For instance, grasses and willows have their leaves largely hanging vertically (such plants are said to have an erectophile LAD), while oaks tend to maintain their leaves more or less horizontally (these species are known as having a planophile LAD). In some tree species, leaves near the top of the canopy follow an erectophile LAD while those at the bottom of the canopy are more planophile. This may be interpreted as a strategy by that plant species to maximize exposure to light, an important constraint to growth and development. Yet other species (notably sunflower) are capable of reorienting their leaves throughout the day to optimize exposure to the Sun: this is known as heliotropism.
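Commonly quoted model densities g(θ) for the leaf-normal zenith angle θ on [0, π/2] can be written down directly; the forms below follow de Wit's classical types and are stated as an illustrative assumption rather than taken from the text above (Python):
import math

def planophile(theta):   # mostly horizontal leaves
    return (2.0 / math.pi) * (1.0 + math.cos(2.0 * theta))

def erectophile(theta):  # mostly vertical leaves
    return (2.0 / math.pi) * (1.0 - math.cos(2.0 * theta))

def spherical(theta):    # leaf normals distributed like a sphere's surface
    return math.sin(theta)

# Each density integrates to 1 over [0, pi/2] (quick midpoint-rule check):
n = 100_000
h = (math.pi / 2.0) / n
for g in (planophile, erectophile, spherical):
    total = sum(g((i + 0.5) * h) for i in range(n)) * h
    print(g.__name__, round(total, 4))  # ~1.0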
Importance of LAD
The LAD of a plant canopy has a significant impact on the reflectance, transmittance and absorption of solar light in the vegetation layer, and thus also on its growth and development. LAD can also serve as a quantitative index to monitor the state of the plants, as wilting usually results in more erectophile LADs. Models of radiation transfer need to take this distribution into account to predict, for instance, the albedo or the productivity of the canopy.
Measuring LAD
Accurately measuring the statistical properties of leaf angle distributions is not a trivial matter, especi |
https://en.wikipedia.org/wiki/Szilassi%20polyhedron | In geometry, the Szilassi polyhedron is a nonconvex polyhedron, topologically a torus, with seven hexagonal faces.
Coloring and symmetry
The 14 vertices and 21 edges of the Szilassi polyhedron form an embedding of the Heawood graph onto the surface of a torus.
Each face of this polyhedron shares an edge with each other face. As a result, it requires seven colours to colour all adjacent faces. This example shows that, on surfaces topologically equivalent to a torus, some subdivisions require seven colours, providing the lower bound for the seven colour theorem. The other half of the theorem states that all toroidal subdivisions can be coloured with seven or fewer colours.
The Szilassi polyhedron has an axis of 180-degree symmetry. This symmetry swaps three pairs of congruent faces, leaving one unpaired hexagon that has the same rotational symmetry as the polyhedron.
Complete face adjacency
The tetrahedron and the Szilassi polyhedron are the only two known polyhedra in which each face shares an edge with each other face.
If a polyhedron with f faces is embedded onto a surface with h holes, in such a way that each face shares an edge with each other face, it follows by some manipulation of the Euler characteristic that
h = (f − 3)(f − 4) / 12.
This equation is satisfied for the tetrahedron with h = 0 and f = 4, and for the Szilassi polyhedron with h = 1 and f = 7.
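The manipulation is short enough to spell out: every pair of faces meets along one edge and, as in these polyhedra, three faces meet at each vertex, so in LaTeX notation,
e = \binom{f}{2} = \frac{f(f-1)}{2}, \qquad v = \frac{2e}{3} = \frac{f(f-1)}{3},
and substituting into the Euler formula v - e + f = 2 - 2h gives
h = \frac{f^2 - 7f + 12}{12} = \frac{(f-3)(f-4)}{12}.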
The next possible solution, h = 6 and f = 12, would correspond to a polyhedron with 44 vertices and 66 edges. However, it is not known whether such a polyhedron can be realized geometrically without self-crossings (rather than as an abstract polytope). More generally this equation can be satisfied precisely when f is congruent to 0, 3, 4, or 7 modulo 12.
History
The Szilassi polyhedron is named after Hungarian mathematician Lajos Szilassi, who discovered it in 1977. The dual to the Szilassi polyhedron, the Császár polyhedron, was discovered earlier by Ákos Császár; it has seven vertices, 21 edges connecting every pair of vertices, and |
https://en.wikipedia.org/wiki/Dimensional%20Insight | Dimensional Insight is a software company specializing in the development and marketing of business intelligence and analytics software. Its flagship product, Diver Platform, delivers information in the form of reports, charts, and analytical applications.
History
Dimensional Insight was founded in 1989 by Frederick A. Powers and Stanley R. Zanarotti. It has subsidiaries in the People's Republic of China, Hong Kong, Panama, Germany, Norway and the Netherlands.
Products
Dimensional Insight's flagship product, Diver Platform (Diver), is a business intelligence and analytics suite that allows users to combine multiple internal and external data sources into a single data feed that enables secure, role-based access to data via interactive dashboards and reports. Diver can also be used to send email alerts, to create PDF files, or to download data into Microsoft Excel files.
In addition, Dimensional Insight develops specific applications to meet reporting and analytics needs of both healthcare as well as supplier, manufacture, and distributor organizations. These applications include a revenue and expense tracking tool, productivity analysis tools, and a suite of executive dashboards, as well as physician scorecards, surgery scorecards, and a certified Meaningful Use solution.
The company emerged as the leader in the two top-right quadrants presented in Dresner Advisory Services' Wisdom of Crowds Business Intelligence Market Study in 2020. Diver was awarded first place in the Business Intelligence segment in the 2015 Best in KLAS report. It won Best in KLAS in healthcare business intelligence and analytics again in 2019 and 2020, scoring highly in the report for its corporate culture, customer relationships, and overall value.
See also
Business intelligence tools |
https://en.wikipedia.org/wiki/Suslin%20representation | In mathematics, a Suslin representation of a set of reals (more precisely, elements of Baire space) is a tree whose projection is that set of reals. More generally, a subset A of κ^ω is λ-Suslin if there is a tree T on κ × λ such that A = p[T].
By a tree on κ × λ we mean here a subset T of the union of κ^i × λ^i for all i ∈ ℕ (or i < ω in set-theoretic notation).
Here, p[T] = { f | ∃g : (f,g) ∈ [T] } is the projection of T,
where [T] = { (f, g ) | ∀n ∈ ω : (f(n), g(n)) ∈ T } is the set of branches through T.
Since [T] is a closed set for the product topology on κ^ω × λ^ω, where κ and λ are equipped with the discrete topology (and all closed sets in κ^ω × λ^ω arise in this way from some tree on κ × λ), λ-Suslin subsets of κ^ω are projections of closed subsets of κ^ω × λ^ω.
When one talks of Suslin sets without specifying the space, one usually means Suslin subsets of ℝ, which descriptive set theorists usually take to be the set ω^ω.
See also
Suslin cardinal
Suslin operation
External links
R. Ketchersid, The strength of an ω₁-dense ideal on ω₁ under CH, 2004.
Set theory |
https://en.wikipedia.org/wiki/Soy%20nut | Soy nuts are soybeans soaked in water, drained, and then baked or roasted. They can be used in place of nuts and are high in protein and dietary fiber. Soy nuts along with various soy products are common in vegan and plant-based diets all over the world as soy is a complete protein and is inexpensive to purchase. |
https://en.wikipedia.org/wiki/Metallic%20path%20facilities | A metallic path facility (MPF) is the unshielded twisted pair of copper wires that runs from a main distribution frame (MDF) at a local telephone exchange to the customer. In this variant, both broadband and voice (baseband) services, together potentially with a video-on-demand service, are provided to the end user by a single communications provider. MPF services are typically delivered through use of an MSAN.
Shared metallic path facility (SMPF) is based on the same technology as MPF, but denotes a variant whereby an Internet Service Provider (ISP) provides a broadband service to the end user but hands the voice (baseband) service back to the PTT/ILEC. Hence the provision of services over the end user's copper wires might be shared between two providers. With SMPF, the non-incumbent service provider can purchase the voice service wholesale from the PTT/ILEC, allowing the former to control the customer relationship for both broadband and voice services. In the UK at least, this service is called Wholesale Line Rental (WLR). SMPF services are typically delivered through use of a DSLAM.
Both terms are commonly used, for example by Ofcom and Openreach in the UK, to denote a local-loop unbundling service, designed to ensure a former monopoly player (deemed to have Significant Market Power, or SMP) allows a level playing field or Equivalence of Inputs.
See also
Local loop
Local-loop unbundling
Main distribution frame
Telephone exchange |
https://en.wikipedia.org/wiki/State%20Plane%20Coordinate%20System | The State Plane Coordinate System (SPCS) is a set of 125 geographic zones or coordinate systems designed for specific regions of the United States. Each state contains one or more state plane zones, the boundaries of which usually follow county lines. There are 108 zones in the contiguous US, with 10 more in Alaska, 5 in Hawaii, one for Puerto Rico and US Virgin Islands, and one for Guam. The system is widely used for geographic data by state and local governments. Its popularity is due to at least two factors. First, it uses a simple Cartesian coordinate system to specify locations rather than a more complex spherical coordinate system (the geographic coordinate system of latitude and longitude). By using the Cartesian coordinate system's simple XY coordinates, "plane surveying" methods can be used, speeding up and simplifying calculations. Second, the system is highly accurate within each zone (error less than 1:10,000). Outside a specific state plane zone accuracy rapidly declines, thus the system is not useful for regional or national mapping.
Most state plane zones are based on either a transverse Mercator projection or a Lambert conformal conic projection. The choice between the two map projections is based on the shape of the state and its zones. States that are long in the east–west direction are typically divided into zones that are also long east–west. These zones use the Lambert conformal conic projection, because it is good at maintaining accuracy along an east–west axis, due to the projection cone intersecting the earth's surface along two lines of latitude. Zones that are long in the north–south direction use the transverse Mercator projection because it is better at maintaining accuracy along a north–south axis, due to the circumference of the projection cylinder being oriented along a meridian of longitude. The panhandle of Alaska, whose maximum dimension is on a diagonal, uses an Oblique Mercator projection, which minimizes the combined error in th |
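As an illustration, coordinate libraries expose state plane zones as predefined coordinate reference systems, so converting geographic coordinates is a short script; the Python sketch below uses the pyproj library, and the EPSG code 2227 is assumed to denote NAD83 / California zone 3 in US survey feet (verify the code for your own zone):
from pyproj import Transformer

# Geographic coordinates (WGS84 lat/lon) -> an assumed state plane zone.
to_state_plane = Transformer.from_crs("EPSG:4326", "EPSG:2227", always_xy=True)

lon, lat = -121.8863, 37.3382  # roughly San Jose, CA
easting, northing = to_state_plane.transform(lon, lat)
print(f"E = {easting:.1f} ftUS, N = {northing:.1f} ftUS")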
https://en.wikipedia.org/wiki/Adobe%20Soundbooth | Soundbooth was a digital audio editor by Adobe Systems Incorporated for Windows XP, Windows Vista, 7 and Mac OS X. Adobe has described it as being "in the spirit of SoundEdit 16 and Cool Edit 2000". Adobe also has a more powerful program called Adobe Audition, which replaced Soundbooth as of Adobe Creative Suite 5.5 Production Premium. Soundbooth, discontinued in 2011, was aimed at creative professionals who do not specialize in audio or people who need a simple editing program and do not require the full features of Adobe Audition. Due to Intel-specific code, Adobe stated that the Mac OS X version would only be available for machines using Intel processors. Soundbooth CS4 was the first version to support 64-bit officially.
Key features
Creation of the Adobe Sound Document allows Adobe Flash to create multi-track audio projects in Soundbooth. Soundbooth also features dynamic linking that allows video sequences from Adobe After Effects and Adobe Premiere Pro to be played in Soundbooth without having to first be rendered, a feature that is expected to save users time.
Comparing Soundbooth to Audition
The major difference between the programs is that Soundbooth uses a task-based interface and Adobe Audition uses a tool-based interface. Another difference is that Soundbooth uses royalty-free scores and sound effects whereas Adobe Audition uses music loops and allows for low latency multi-track recording.
Criticism
Many users have commented on the lack of simple features that were found in programs like Sound Edit 16 and Cool Edit Pro; for example, the ability to create a new file or to "reverse" a sound.
The lack of simple batch processing makes it a chore to speed up, clean up (pops or crackles), or apply a pitch change to all of the files in a project. Each chapter, track, or MP3 file must be opened, processed, and saved independently, contrary to customer expectations of features that freeware has provided for many years.
In response, Durin Gle |
https://en.wikipedia.org/wiki/Myrmecophily | Myrmecophily is the term applied to positive interspecies associations between ants and a variety of other organisms, such as plants, other arthropods, and fungi. Myrmecophily refers to mutualistic associations with ants, though in its more general use, the term may also refer to commensal or even parasitic interactions.
The term "myrmecophile" is used mainly for animals that associate with ants. An estimated 10,000 species of ants (Formicidae) are known, with a higher diversity in the tropics. In most terrestrial ecosystems, ants are ecologically and numerically dominant, being the main invertebrate predators. As a result, ants play a key role in controlling arthropod richness, abundance, and community structure. Some evidence shows that the evolution of myrmecophilous interactions has contributed to the abundance and ecological success of ants, by ensuring a dependable and energy-rich food supply, thus providing a competitive advantage for ants over other invertebrate predators. Most myrmecophilous associations are opportunistic, unspecialized, and facultative (meaning both species are capable of surviving without the interaction), though obligate mutualisms (those in which one or both species are dependent on the interaction for survival) have also been observed for many species.
As ant nests grow, they are more likely to house more and greater varieties of myrmecophiles. This is partly because larger colonies have greater specializations, so more diversity of ecology within the nests, allowing for more diversity and population sizes among the myrmecophiles.
Myrmecophile
A "myrmecophile" is an organism that lives in association with ants.
Myrmecophiles may have various roles in their host ant colony. Many consume waste materials in the nests, such as dead ants, dead larvae, or fungi growing in the nest. Some myrmecophiles, however, feed on the stored food supplies of ants, and a few are predatory on ant eggs, larvae, or pupae. Others benefit the ants by |