https://en.wikipedia.org/wiki/Brander%E2%80%93Spencer%20model
|
The Brander–Spencer model is an economic model in international trade originally developed by James Brander and Barbara Spencer in the early 1980s. The model illustrates a situation where, under certain assumptions, a government can subsidize domestic firms to help them in their competition against foreign producers and in doing so enhances national welfare. This conclusion stands in contrast to results from most international trade models, in which government non-interference is socially optimal.
The basic model is a variation on the Stackelberg–Cournot "leader and follower" duopoly game. Alternatively, the model can be portrayed in game-theoretic terms as initially a game with multiple Nash equilibria, with government having the capability of affecting the payoffs to switch to a game with just one equilibrium. Although it is possible for the national government to increase a country's welfare in the model through export subsidies, the policy is of the beggar-thy-neighbor type. This also means that if all governments simultaneously attempted to follow the policy prescription of the model, all countries would wind up worse off.
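To make the equilibrium-switching argument concrete, here is a minimal sketch (all payoff numbers are hypothetical) that enumerates the pure-strategy Nash equilibria of the entry game described below, with and without an export subsidy to the domestic firm:

```python
from itertools import product

def nash_equilibria(payoffs):
    """Pure-strategy Nash equilibria of a 2x2 entry game.
    payoffs[(a, b)] = (domestic profit, foreign profit) for actions
    a, b in {'enter', 'stay out'}."""
    actions = ['enter', 'stay out']
    eq = []
    for a, b in product(actions, actions):
        d, f = payoffs[(a, b)]
        # Equilibrium: no profitable unilateral deviation for either firm
        if all(payoffs[(a2, b)][0] <= d for a2 in actions) and \
           all(payoffs[(a, b2)][1] <= f for b2 in actions):
            eq.append((a, b))
    return eq

# Hypothetical payoffs: one entrant profits, two entrants both lose.
base = {('enter', 'enter'): (-5, -5), ('enter', 'stay out'): (100, 0),
        ('stay out', 'enter'): (0, 100), ('stay out', 'stay out'): (0, 0)}
print(nash_equilibria(base))        # two equilibria: exactly one firm enters

# A subsidy of 10 to the domestic firm makes entering dominant for it,
# leaving a unique equilibrium in which only the domestic firm enters.
subsidized = {k: (d + (10 if k[0] == 'enter' else 0), f)
              for k, (d, f) in base.items()}
print(nash_equilibria(subsidized))  # [('enter', 'stay out')]
```

Note how a subsidy smaller than the domestic firm's equilibrium profit is enough to eliminate the equilibrium in which the foreign firm enters, which is the mechanism the model turns on.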
The model was part of the "New Trade Theory" that was developed in the late 1970s and early 1980s, which incorporated then recent developments from literature on industrial organization into theories of international trade. In particular, like in many other New Trade Theory models, economies of scale (in this case, in the form of fixed entry costs) play an important role in the Brander–Spencer model.
Entry game version
A simplified version of the model was popularized by Paul Krugman in the 1990s in his book Peddling Prosperity. In this setup there are two firms, one foreign and one domestic, which are considering entering a new export market in a third country (or possibly the whole world). The demand in the export market is such that if only one firm enters, it will make a profit, but if both enter, each will make a loss, perhaps because
|
https://en.wikipedia.org/wiki/Elongated%20triangular%20gyrobicupola
|
In geometry, the elongated triangular gyrobicupola is one of the Johnson solids (J36). As the name suggests, it can be constructed by elongating a "triangular gyrobicupola," or cuboctahedron, by inserting a hexagonal prism between its two halves, which are congruent triangular cupolae (J3). Rotating one of the cupolae through 60 degrees before the elongation yields the elongated triangular orthobicupola (J35).
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:

$V = \left(\frac{5\sqrt{2}}{3} + \frac{3\sqrt{3}}{2}\right)a^3 \approx 4.955a^3$

$A = \left(12 + 2\sqrt{3}\right)a^2 \approx 15.464a^2$
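The formulas above are reconstructed here (treat them as assumptions), but they can be sanity-checked numerically: the volume is the cuboctahedron (the two cupola halves) plus the inserted hexagonal prism, and the surface consists of 8 equilateral triangles and 12 squares:

```python
from math import sqrt

a = 1.0  # edge length

# Volume: cuboctahedron (the two joined cupolae) plus the hexagonal prism
v_cuboctahedron = (5 * sqrt(2) / 3) * a**3
v_hex_prism = (3 * sqrt(3) / 2) * a**3   # regular-hexagon area times height a
volume = v_cuboctahedron + v_hex_prism

# Surface: 8 equilateral triangles and 12 squares
area = 8 * (sqrt(3) / 4) * a**2 + 12 * a**2

print(f"V = {volume:.3f} a^3")  # ~4.955
print(f"A = {area:.3f} a^2")    # ~15.464
```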
Related polyhedra and honeycombs
The elongated triangular gyrobicupola forms space-filling honeycombs with tetrahedra and square pyramids.
|
https://en.wikipedia.org/wiki/Pseudosphere
|
In geometry, a pseudosphere is a surface with constant negative Gaussian curvature.
A pseudosphere of radius $R$ is a surface in $\mathbb{R}^3$ having curvature $-1/R^2$ at each point. Its name comes from the analogy with the sphere of radius $R$, which is a surface of curvature $1/R^2$. The term was introduced by Eugenio Beltrami in his 1868 paper on models of hyperbolic geometry.
Tractroid
The same surface can be also described as the result of revolving a tractrix about its asymptote.
For this reason the pseudosphere is also called a tractroid. As an example, the (half) pseudosphere (with radius 1) is the surface of revolution of the tractrix parametrized by $t \mapsto (t - \tanh t,\ \operatorname{sech} t)$, $t \ge 0$.
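A short symbolic check of this surface of revolution (a sketch using SymPy; the parametrization above is assumed) computes its Gaussian curvature from the first and second fundamental forms, which is expected to simplify to −1:

```python
import sympy as sp

t, th = sp.symbols('t theta', positive=True)

# Revolve the tractrix (t - tanh t, sech t) about its asymptote (the z-axis)
r = sp.Matrix([sp.sech(t) * sp.cos(th),
               sp.sech(t) * sp.sin(th),
               t - sp.tanh(t)])

r_t, r_th = r.diff(t), r.diff(th)

# First fundamental form
E, F, G = r_t.dot(r_t), r_t.dot(r_th), r_th.dot(r_th)

# Unit normal and second fundamental form
n = r_t.cross(r_th)
n = n / sp.sqrt(n.dot(n))
L = r_t.diff(t).dot(n)
M = r_t.diff(th).dot(n)
N = r_th.diff(th).dot(n)

K = sp.simplify((L * N - M**2) / (E * G - F**2))
print(K)  # Gaussian curvature; should reduce to -1
```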
It is a singular space (the equator is a singularity), but away from the singularities, it has constant negative Gaussian curvature and therefore is locally isometric to a hyperbolic plane.
The name "pseudosphere" comes about because it has a two-dimensional surface of constant negative Gaussian curvature, just as a sphere has a surface with constant positive Gaussian curvature.
Just as the sphere has at every point a positively curved geometry of a dome the whole pseudosphere has at every point the negatively curved geometry of a saddle.
As early as 1693 Christiaan Huygens found that the volume and the surface area of the pseudosphere are finite, despite the infinite extent of the shape along the axis of rotation. For a given edge radius $R$, the area is $4\pi R^2$, just as it is for the sphere, while the volume is $\frac{2}{3}\pi R^3$ and therefore half that of a sphere of that radius.
Universal covering space
The half pseudosphere of curvature −1 is covered by the interior of a horocycle. In the Poincaré half-plane model one convenient choice is the portion of the half-plane with $y \ge 1$. Then the covering map is periodic in the $x$ direction with period $2\pi$, and takes the horocycles to the meridians of the pseudosphere and the vertical geodesics to the tractrices that generate the pseudosphere. This mapping is a local isometry, and thus exhibits the portion of the upper h
|
https://en.wikipedia.org/wiki/Structure%20%28mathematical%20logic%29
|
In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it.
Universal algebra studies structures that generalize the algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures of first-order theories with no relation symbols. Model theory has a different scope that encompasses more arbitrary first-order theories, including foundational structures such as models of set theory.
From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic, cf. also Tarski's theory of truth or Tarskian semantics.
For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as "interpretations", whereas the term "interpretation" generally has a different (although related) meaning in model theory, see interpretation (model theory).
In database theory, structures with no functions are studied as models for relational databases, in the form of relational models.
History
In the context of mathematical logic, the term "model" was first applied in 1940 by the philosopher Willard Van Orman Quine, in a reference to mathematician Richard Dedekind (1831 – 1916), a pioneer in the development of set theory. Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it.
Definition
Formally, a structure can be defined as a triple $\mathcal{A} = (A, \sigma, I)$ consisting of a domain $A$, a signature $\sigma$, and an interpretation function $I$ that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature $\sigma$, one can refer to it as a $\sigma$-structure.
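As an illustration, a small $\sigma$-structure can be encoded directly as a domain plus an interpretation for each symbol (a hypothetical encoding, not standard notation; the symbol names are chosen for the example). Here the domain is the integers modulo 5:

```python
# A sketch of a sigma-structure (A, sigma, I): domain, signature, interpretation
domain = set(range(5))

signature = {
    'zero': ('constant',),
    'add':  ('function', 2),   # binary function symbol
    'leq':  ('relation', 2),   # binary relation symbol
}

interpretation = {
    'zero': 0,
    'add':  lambda x, y: (x + y) % 5,      # addition mod 5 on the domain
    'leq':  lambda x, y: x <= y,           # relation given by its indicator
}

# Evaluate the term add(add(3, 4), zero) in this structure
value = interpretation['add'](interpretation['add'](3, 4),
                              interpretation['zero'])
print(value)                             # 2
print(interpretation['leq'](value, 3))   # True: the relation holds of (2, 3)
```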
Domain
The domain of a struct
|
https://en.wikipedia.org/wiki/Mir-394%20microRNA%20precursor%20family
|
In molecular biology, mir-394 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms.
See also
MicroRNA
|
https://en.wikipedia.org/wiki/Tristichopterus
|
Tristichopterus, with a maximum length of sixty centimetres, is the smallest genus in the family of prehistoric lobe-finned fish Tristichopteridae, which is believed to have originated in the north and dispersed over the course of the Upper Devonian into Gondwana. Tristichopterus currently has only one named species, first described by Egerton in 1861. The Tristichopterus node is thought to have originated during the Givetian stage of the Devonian. Tristichopterus was thought by Egerton to be unique for its time period as a fish with ossified vertebral centra, breaking the persistent-notochord rule of most Devonian fish, but this was later reinspected and shown to be only partial ossification by Dr. R. H. Traquair. Tristichopterus alatus closely resembles Eusthenopteron, and this sparked some debate after its discovery as to whether it was a separate taxon.
Geology
It is believed that Tristichopterus originated in the Laurasian continent along with the similar Eusthenopteron, and that later derived members, like Eusthenodon, of the Tristichopteridae family achieved wider distribution into Gondwana. The modern day geographical locations that Tristichopterus is thought to have lived in are Australia, Western Europe, and Greenland.
Historical Information and Discovery
The first two specimens of Tristichopterus were dug up in the Old Red Sandstone of the John o’ Groats group in Caithness by Charles William Peach and described by Sir Philip Egerton in 1861. Much confusion has surrounded this taxon, as the first specimens lacked head, fin, and dentition osteology. Egerton's original classification placed it in the same family as Dipterus within the Coelacanthi.
In 1864 and 1865 Peach obtained further specimens of the genus with clear paired fin, head, and dentition osteology that prevented its placement within the Coelacanthi clade with Dipterus. Ramsay Traquair in 1875 instead included Tristichopterus in the Cyclopteridae family. Later Tristichopterus was
|
https://en.wikipedia.org/wiki/User%20interface%20specification
|
A user interface specification (UI specification) is a document that captures the details of the software user interface into a written document. The specification covers all possible actions that an end user may perform and all visual, auditory and other interaction elements.
Purpose
The UI specification is the main source of implementation information for how the software should work. Beyond implementation, a UI specification should consider usability, localization, and demo limits. A UI spec may also be used by those within the organization responsible for marketing, graphic design, and software testing. As future designers might continue or build on top of existing work, a UI specification should consider forward compatibility constraints in order to assist the implementation team.
The UI specification can be regarded as the document that bridges the gap between the product management functions and implementation. One of the main purposes of a UI specification is to process the product requirements into a more detailed format. The level of detail and document type varies depending on the needs and design practices of the organization. Small-scale prototypes might require only modest documentation with high-level details.
In general, the goal of requirement specifications are to describe what a product is capable of, whereas the UI specification details how these requirements are implemented in practice.
The process
Before UI specification is created, a lot of work is done already for defining the application and desired functionality.
Usually there are requirements for the software which are the basis for use case creation and use case prioritizing. A UI specification is only as good as the process by which it has been created, so let's consider the steps in the process:
Use case definition
Use cases are then used as basis for drafting the UI concept (which can contain for example main views of the software, some textual explanations about the vie
|
https://en.wikipedia.org/wiki/Reinsurance%20to%20close
|
Reinsurance to close (RITC) is a business transaction whereby the estimated future liabilities of an insurance company are reinsured into another, in order that the profitability of the former can be finally determined. It is most closely associated with the Lloyd's of London insurance market that comprises numerous competing "syndicates", and in order to close each accounting year and declare a profit or loss, each syndicate annually "reinsures to close" its books. In most cases, the liabilities are simply reinsured into the subsequent accounting year of the same syndicate, however, in some circumstances the RITC may be made to a different syndicate or even to a company outside of the Lloyd's market.
History
At Lloyd's, traditionally each year of each syndicate is a separate enterprise, and the profitability of each year is determined essentially by payments for known liabilities (claims) and money reserved for unknown liabilities that may emerge in the future on claims that have been incurred but not reported (IBNR). The estimation of the quantity of IBNR is difficult and can be inaccurate.
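A stylized sketch of the mechanics with hypothetical figures: the closing year's result is premium income less paid claims, less the RITC premium paid to the successor year to take over the remaining (largely IBNR) liabilities.

```python
# Hypothetical figures for a syndicate year of account closing at 36 months
premiums_written = 10_000_000   # premium income for the year of account
claims_paid = 6_500_000         # known claims paid over the three years
ibnr_estimate = 2_000_000       # reserve for incurred-but-not-reported claims

# The RITC premium transfers the outstanding liabilities (including IBNR)
# to the successor year of account
ritc_premium = ibnr_estimate

result = premiums_written - claims_paid - ritc_premium
print(f"Declared profit: {result:,}")  # 3,500,000 - 2,000,000 = 1,500,000
```

The difficulty noted above is visible here: if the IBNR estimate turns out to be too low, the loss falls on the members of the year that accepted the RITC, not on the closed year.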
Capital providers typically "joined" their syndicate for one calendar year only, and at the end of the year the syndicate as an ongoing trading entity was effectively disbanded. However, usually the syndicate re-formed for the next calendar year with more or less the same capital membership. In this way, a syndicate could have a continuous existence for many years, but each year was accounted for separately. Since some claims can take time to be reported and then paid, the profitability of each syndicate took time to realise. The practice at Lloyd's was to wait three years from the beginning of the year in which the business was written before "closing" the year and declaring a result. For example, for the 1984 year a syndicate would ordinarily declare its result at 31 December 1986. The syndicate's 1984 members would therefore be paid any profit during 1987 (in proportion to
|
https://en.wikipedia.org/wiki/STEM%20pipeline
|
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It is the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
|
https://en.wikipedia.org/wiki/Symbol%20%28formal%29
|
A logical symbol is a fundamental concept in logic, tokens of which may be marks or a configuration of marks which form a particular pattern. Although the term "symbol" in common use refers at some times to the idea being symbolized, and at other times to the marks on a piece of paper or chalkboard which are being used to express that idea; in the formal languages studied in mathematics and logic, the term "symbol" refers to the idea, and the marks are considered to be a token instance of the symbol. In logic, symbols serve as the basic tokens from which formal expressions are built.
Overview
Symbols of a formal language need not be symbols of anything. For instance there are logical constants which do not refer to any idea, but rather serve as a form of punctuation in the language (e.g. parentheses). Symbols of a formal language must be capable of being specified without any reference to any interpretation of them.
A symbol or string of symbols may constitute a well-formed formula if it is consistent with the formation rules of the language.
In a formal system a symbol may be used as a token in formal operations. The set of formal symbols in a formal language is referred to as an alphabet (hence each symbol may be referred to as a "letter").
A formal symbol as used in first-order logic may be a variable (member from a universe of discourse), a constant, a function (mapping to another member of universe) or a predicate (mapping to T/F).
Formal symbols are usually thought of as purely syntactic structures, composed into larger structures using a formal grammar, though sometimes they may be associated with an interpretation or model (a formal semantics).
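A minimal sketch of this syntactic view: symbols are uninterpreted tokens, and a formal grammar composes them into well-formed formulas. The toy propositional language below (its alphabet and formation rules are assumptions chosen for illustration) checks well-formedness without any appeal to meaning:

```python
# Alphabet of a toy propositional language: variables, connectives, parentheses
VARIABLES = {'p', 'q', 'r'}

def is_wff(s: str) -> bool:
    """Formation rules: a variable, ~F, or (F&G) / (F|G) for wffs F, G."""
    if s in VARIABLES:
        return True
    if s.startswith('~'):
        return is_wff(s[1:])
    if s.startswith('(') and s.endswith(')'):
        body = s[1:-1]
        depth = 0
        for i, ch in enumerate(body):
            depth += (ch == '(') - (ch == ')')
            # split at a top-level binary connective
            if depth == 0 and ch in '&|':
                return is_wff(body[:i]) and is_wff(body[i + 1:])
    return False

print(is_wff('(p&~q)'))  # True: derivable from the formation rules
print(is_wff('p&'))      # False: not derivable
```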
Can words be modeled as formal symbols?
The move to view units in natural language (e.g. English) as formal symbols was initiated by Noam Chomsky (it was this work that resulted in the Chomsky hierarchy in formal languages). The generative grammar model looked upon syntax as autonomous from semantics. Building on these models, the
|
https://en.wikipedia.org/wiki/T/TCP
|
T/TCP (Transactional Transmission Control Protocol) was a variant of the Transmission Control Protocol (TCP).
It was an experimental TCP extension for efficient transaction-oriented (request/response) service.
It was developed to fill the gap between TCP and UDP, by Bob Braden in 1994.
Its definition can be found in RFC 1644 (which obsoletes RFC 1379). It is faster than TCP, and its delivery reliability is comparable to that of TCP.
T/TCP suffers from several major security problems as described by Charles Hannum in September 1996. It has not gained widespread popularity.
RFC 1379 and RFC 1644 that define T/TCP were moved to Historic Status in May 2011 by RFC 6247 for security reasons.
Alternatives
TCP Fast Open is a more recent alternative.
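For comparison, here is a client-side sketch of TCP Fast Open on Linux (assumptions: a Linux kernel with net.ipv4.tcp_fastopen enabled, a Python build exposing socket.MSG_FASTOPEN, and a placeholder host and payload). Like T/TCP, it lets request data ride on the connection-opening exchange:

```python
import socket

# sendto() with MSG_FASTOPEN sends the payload in the SYN itself
# (after an initial connection has provisioned a Fast Open cookie).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.sendto(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n",
             socket.MSG_FASTOPEN, ("example.org", 80))
    print(s.recv(4096)[:80])
finally:
    s.close()
```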
See also
TCP Cookie Transactions
Further reading
Richard Stevens, Gary Wright, "TCP/IP Illustrated: TCP for transactions, HTTP, NNTP, and the UNIX domain protocols" (Volume 3 of TCP/IP Illustrated). Addison-Wesley, 1996, 2000. Part 1 "TCP for Transactions", Chapters 1–12, pages 1–159.
|
https://en.wikipedia.org/wiki/Muscle%20hypertrophy
|
Muscle hypertrophy or muscle building involves a hypertrophy or increase in size of skeletal muscle through a growth in size of its component cells. Two factors contribute to hypertrophy: sarcoplasmic hypertrophy, which focuses more on increased muscle glycogen storage; and myofibrillar hypertrophy, which focuses more on increased myofibril size. It is the primary focus of bodybuilding-related activities.
Hypertrophy stimulation
A range of stimuli can increase the volume of muscle cells. These changes occur as an adaptive response that serves to increase the ability to generate force or resist fatigue in anaerobic conditions.
Strength training
Strength training (resistance training) causes neural and muscular adaptations which increase the capacity of an athlete to exert force through voluntary muscular contraction: After an initial period of neuro-muscular adaptation, the muscle tissue expands by creating sarcomeres (contractile elements) and increasing non-contractile elements like sarcoplasmic fluid.
Muscular hypertrophy can be induced by progressive overload (a strategy of progressively increasing resistance or repetitions over successive bouts of exercise to maintain a high level of effort). However, the precise mechanisms are not clearly understood; currently accepted hypotheses involve some combination of mechanical tension, metabolic fatigue, and muscular damage.
Muscular hypertrophy plays an important role in competitive bodybuilding and strength sports like powerlifting, American football, and Olympic weightlifting.
Anaerobic training
The best approach to specifically achieve muscle growth remains controversial (as opposed to focusing on gaining strength, power, or endurance); it is generally considered that consistent anaerobic strength training will produce hypertrophy over the long term, in addition to its effects on muscular strength and endurance. Muscular hypertrophy can be increased through strength training and other short-duration, high-i
|
https://en.wikipedia.org/wiki/Azodicarbonamide
|
Azodicarbonamide, ADCA, ADA, or azo(bis)formamide, is a chemical compound with the molecular formula C2H4N4O2. It is a yellow to orange-red, odorless, crystalline powder. It is sometimes called a 'yoga mat' chemical because of its widespread use in foamed plastics. It was first described by John Bryden in 1959.
Synthesis
It is prepared in two steps, the first being treatment of urea with hydrazine to form biurea, as described in this idealized equation:

2 CO(NH2)2 + N2H4 → H2N−CO−NH−NH−CO−NH2 + 2 NH3
Oxidation with chlorine or chromic acid then yields azodicarbonamide; with chlorine:

H2N−CO−NH−NH−CO−NH2 + Cl2 → H2N−CO−N=N−CO−NH2 + 2 HCl
Applications
Blowing agent
The principal use of azodicarbonamide is in the production of foamed plastics as a blowing agent. The thermal decomposition of azodicarbonamide produces nitrogen, carbon monoxide, carbon dioxide, and ammonia gases, which are trapped in the polymer as bubbles to form a foamed article.
Azodicarbonamide is used in plastics, synthetic leather, and other industries and can be pure or modified. Modification affects the reaction temperatures. Pure azodicarbonamide generally reacts around 200 °C. In the plastic, leather, and other industries, modified azodicarbonamide (average decomposition temperature 170 °C) contains additives that accelerate the reaction or react at lower temperatures.
An example of the use of azodicarbonamide as a blowing agent is found in the manufacture of vinyl (PVC) and EVA-PE foams, where it forms bubbles upon breaking down into gas at high temperature. Vinyl foam is springy and does not slip on smooth surfaces. It is useful for carpet underlay and floor mats. Commercial yoga mats made of vinyl foam have been available since the 1980s; the first mats were cut from carpet underlay.
Food additive
As a food additive, azodicarbonamide is used as a flour bleaching agent and a dough conditioner. It reacts with moist flour as an oxidizing agent. The main reaction product is biurea, which is stable during baking. Secondary reaction products include semicarbazide and ethyl carbamate. It is known by the E number E927. Many restauran
|
https://en.wikipedia.org/wiki/Trichothiodystrophy
|
Trichothiodystrophy (TTD) is an autosomal recessive inherited disorder characterised by brittle hair and intellectual impairment. The word breaks down into tricho – "hair", thio – "sulphur", and dystrophy – "wasting away" or literally "bad nourishment". TTD is associated with a range of symptoms connected with organs of the ectoderm and neuroectoderm. TTD may be subclassified into four syndromes: approximately half of all patients with trichothiodystrophy have photosensitivity, which divides the classification into syndromes with or without photosensitivity: BIDS and PBIDS, and IBIDS and PIBIDS. Modern usage is TTD-P (photosensitive) and TTD (nonphotosensitive).
Presentation
Features of TTD can include photosensitivity, ichthyosis, brittle hair and nails, intellectual impairment, decreased fertility and short stature. A more subtle feature associated with this syndrome is a "tiger tail" banding pattern in hair shafts, seen in microscopy under polarized light. The acronyms PIBIDS, IBIDS, BIDS and PBIDS give the initials of the words involved. BIDS syndrome, also called Amish brittle hair brain syndrome and hair-brain syndrome, is an autosomal recessive inherited disease. It is nonphotosensitive. BIDS is characterized by brittle hair, intellectual impairment, decreased fertility, and short stature. There is a photosensitive syndrome, PBIDS.
BIDS is associated with the gene MPLKIP (TTDN1). IBIDS syndrome, following the acronym from ichthyosis, brittle hair and nails, intellectual impairment and short stature, is the Tay syndrome or sulfur-deficient brittle hair syndrome, first described by Tay in 1971. (Chong Hai Tay was the Singaporean doctor who was the first doctor in South East Asia to have a disease named after him.) Tay syndrome should not be confused with the Tay–Sachs disease. It is an autosomal recessive congenital disease. In some cases, it can be diagnosed prenatally. IBIDS syndrome is nonphotosensitive.
Cause
The photosensitive form is referred to as PIBIDS, and
|
https://en.wikipedia.org/wiki/Standardized%20rate
|
Standardized rates are a statistical measure of any rates in a population. These are adjusted rates that take into account the vital differences between populations that may affect their birthrates or death rates.
Examples
The most common are birth, death and unemployment rates. For example, in a community made up of primarily young couples, the birthrate might appear to be high when compared to that of other populations. However, by calculating the standardized birthrate (that is, by comparing the same age group in other populations), a more realistic picture of childbearing capacity will be developed.
Formula
The formula for a standardized rate is as follows:

$$\text{standardized rate} = \frac{\sum_i \left(r_i \times P_i\right)}{\sum_i P_i}$$

where $r_i$ is the crude rate for age group $i$ and $P_i$ is the standard population for age group $i$.
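A small sketch of direct standardization with hypothetical age-group data:

```python
# Hypothetical age-specific death rates (per 1,000) and a standard population
age_groups = ['0-19', '20-39', '40-59', '60+']
crude_rates = [0.5, 1.0, 4.0, 30.0]              # per 1,000, by age group
standard_pop = [25_000, 30_000, 25_000, 20_000]  # standard population by group

standardized = (sum(r * p for r, p in zip(crude_rates, standard_pop))
                / sum(standard_pop))
print(f"Standardized rate: {standardized:.2f} per 1,000")
```

Because every population is weighted by the same standard population, two communities with different age structures can be compared on an equal footing.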
See also
Mortality ratio
|
https://en.wikipedia.org/wiki/Telomere
|
A telomere (; ) is a region of repetitive nucleotide sequences associated with specialized proteins at the ends of linear chromosomes. Telomeres are a widespread genetic feature most commonly found in eukaryotes. In most, if not all species possessing them, they protect the terminal regions of chromosomal DNA from progressive degradation and ensure the integrity of linear chromosomes by preventing DNA repair systems from mistaking the very ends of the DNA strand for a double-strand break.
Discovery
The existence of a special structure at the ends of chromosomes was independently proposed in 1938 by Hermann Joseph Müller, studying the fruit fly Drosophila melanogaster, and in 1939 by Barbara McClintock, working with maize. Müller observed that the ends of irradiated fruit fly chromosomes did not present alterations such as deletions or inversions. He hypothesized the presence of a protective cap, which he coined "telomeres", from the Greek telos (end) and meros (part).
In the early 1970s, Soviet theorist Alexei Olovnikov first recognized that chromosomes could not completely replicate their ends; this is known as the "end replication problem". Building on this, and accommodating Leonard Hayflick's idea of limited somatic cell division, Olovnikov suggested that DNA sequences are lost every time a cell replicates until the loss reaches a critical level, at which point cell division ends. According to his theory of marginotomy DNA sequences at the ends of telomeres are represented by tandem repeats, which create a buffer that determines the number of divisions that a certain cell clone can undergo. Furthermore, it was predicted that a specialized DNA polymerase (originally called a tandem-DNA-polymerase) could extend telomeres in immortal tissues such as germ line, cancer cells and stem cells. It also followed from this hypothesis that organisms with circular genome, such as bacteria, do not have the end replication problem and therefore do not age.
In 1975–1977, E
|
https://en.wikipedia.org/wiki/Chemisorption
|
Chemisorption is a kind of adsorption which involves a chemical reaction between the surface and the adsorbate. New chemical bonds are generated at the adsorbent surface. Examples include macroscopic phenomena that can be very obvious, like corrosion, and subtler effects associated with heterogeneous catalysis, where the catalyst and reactants are in different phases. The strong interaction between the adsorbate and the substrate surface creates new types of electronic bonds.
Chemisorption contrasts with physisorption, which leaves the chemical species of the adsorbate and surface intact. It is conventionally accepted that the energetic threshold separating the binding energy of "physisorption" from that of "chemisorption" is about 0.5 eV per adsorbed species.
Due to specificity, the nature of chemisorption can greatly differ, depending on the chemical identity and the surface structural properties.
The bond between the adsorbate and adsorbent in chemisorption is either ionic or covalent.
Uses
An important example of chemisorption is in heterogeneous catalysis which involves molecules reacting with each other via the formation of chemisorbed intermediates. After the chemisorbed species combine (by forming bonds with each other) the product desorbs from the surface.
Self-assembled monolayers
Self-assembled monolayers (SAMs) are formed by chemisorbing reactive reagents with metal surfaces. A famous example involves thiols (RS-H) adsorbing onto the surface of gold. This process forms strong Au-SR bonds and releases H2. The densely packed SR groups protect the surface.
Gas-surface chemisorption
Adsorption kinetics
Chemisorption follows the general adsorption process. The first stage is for the adsorbate particle to come into contact with the surface. The particle needs to be trapped on the surface by not possessing enough energy to leave the gas–surface potential well. If it elastically collides with the surface, then it would
|
https://en.wikipedia.org/wiki/Preferential%20alignment
|
Preferential alignment is a criterion for the orientation of a molecule or atom. Preferential alignment can be related to the formation of crystal structure in an amorphous material.
For a polymer material with liquid crystals, the liquid crystals are molecules shaped like rigid rods. Just as logs being floated down a river tend to travel parallel to the direction of the river, liquid crystals have a preferential alignment with each other. At high temperatures, this alignment is disrupted and the material is said to be in the isotropic state. At lower temperatures, the alignment will take place and the liquid crystals are said to be in the nematic state [Hoong.C.C].
Crystallography
|
https://en.wikipedia.org/wiki/ARP%20cache
|
An ARP cache is a collection of Address Resolution Protocol entries (mostly dynamic) that are created when an IP address is resolved to a MAC address (so the computer can effectively communicate with the IP address).
An ARP cache has the disadvantage of potentially being used by hackers and cyber attackers (an ARP cache poisoning attack). An ARP cache can help attackers hide behind a fake IP address. On the other hand, ARP caches may also help prevent attacks by "distinguish[ing] between low level IP and IP based vulnerabilities".
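On Linux, the kernel's ARP cache can be inspected directly (a sketch assuming the conventional /proc/net/arp table layout, where each row maps a resolved IP address to a MAC address on an interface):

```python
# Read the ARP cache from /proc/net/arp (Linux)
with open('/proc/net/arp') as f:
    lines = f.read().splitlines()

for line in lines[1:]:          # skip the header row
    fields = line.split()
    ip, mac, device = fields[0], fields[3], fields[5]
    print(f"{ip:<16} {mac}  ({device})")
```

Inspecting these entries (or comparing them against known-good values) is one simple way to spot the spoofed mappings that ARP cache poisoning introduces.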
|
https://en.wikipedia.org/wiki/Congenital%20hypofibrinogenemia
|
Congenital hypofibrinogenemia is a rare disorder in which one of the three genes responsible for producing fibrinogen, a critical blood clotting factor, is unable to make a functional fibrinogen glycoprotein because of an inherited mutation. In consequence, liver cells, the normal site of fibrinogen production, make small amounts of this critical coagulation protein, blood levels of fibrinogen are low, and individuals with the disorder may develop a coagulopathy, i.e. a diathesis or propensity to experience episodes of abnormal bleeding. However, individuals with congenital hypofibrinogenemia may also have episodes of abnormal blood clot formation, i.e. thrombosis. This seemingly paradoxical propensity to develop thrombosis in a disorder causing a decrease in a critical protein for blood clotting may be due to the function of fibrin (the split product of fibrinogen that is the basis for forming blood clots) to promote the lysis or disintegration of blood clots. Lower levels of fibrin may reduce the lysis of early fibrin strand depositions and thereby allow these depositions to develop into clots.
Congenital hypofibrinogenemia must be distinguished from: a) congenital afibrinogenemia, a rare disorder in which blood fibrinogen levels are either exceedingly low or undetectable due to mutations in both fibrinogen genes; b) congenital hypodysfibrinogenemia, a rare disorder in which one or more genetic mutations cause low levels of blood fibrinogen, at least some of which is dysfunctional and thereby contributes to excessive bleeding; and c) acquired hypofibrinogenemia, a non-hereditary disorder in which blood fibrinogen levels are low because of e.g. severe liver disease or because of excessive fibrinogen consumption resulting from, e.g. disseminated intravascular coagulation.
Certain gene mutations causing congenital hypofibrinogenemia disrupt the ability of liver cells to secrete fibrinogen. In these instances, the un-mutated gene maintains blood fibrinogen at reduce
|
https://en.wikipedia.org/wiki/Potential%20well
|
A potential well is the region surrounding a local minimum of potential energy. Energy captured in a potential well is unable to convert to another type of energy (kinetic energy in the case of a gravitational potential well) because it is captured in the local minimum of a potential well. Therefore, a body may not proceed to the global minimum of potential energy, as it would naturally tend to do due to entropy.
Overview
Energy may be released from a potential well if sufficient energy is added to the system such that the local maximum is surmounted. In quantum physics, potential energy may escape a potential well without added energy due to the probabilistic characteristics of quantum particles; in these cases a particle may be imagined to tunnel through the walls of a potential well.
The graph of a 2D potential energy function is a potential energy surface that can be imagined as the Earth's surface in a landscape of hills and valleys. Then a potential well would be a valley surrounded on all sides with higher terrain, which thus could be filled with water (e.g., be a lake) without any water flowing away toward another, lower minimum (e.g. sea level).
In the case of gravity, the region around a mass is a gravitational potential well, unless the density of the mass is so low that tidal forces from other masses are greater than the gravity of the body itself.
A potential hill is the opposite of a potential well, and is the region surrounding a local maximum.
Quantum confinement
Quantum confinement can be observed once the diameter of a material is of the same magnitude as the de Broglie wavelength of the electron wave function. When materials are this small, their electronic and optical properties deviate substantially from those of bulk materials.
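A quick sketch of the scale involved: comparing an electron's de Broglie wavelength with a confining dimension L, using the idealized particle-in-a-box ground-state energy $E_1 = h^2/(8mL^2)$ (a standard textbook model used here purely for illustration):

```python
from math import sqrt

h = 6.626e-34     # Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # J per eV

def debroglie_wavelength(kinetic_energy_ev: float) -> float:
    """lambda = h / sqrt(2 m E) for a free electron."""
    return h / sqrt(2 * m_e * kinetic_energy_ev * e)

def box_ground_state_ev(L: float) -> float:
    """Particle-in-a-box ground state E_1 = h^2 / (8 m L^2), in eV."""
    return h**2 / (8 * m_e * L**2) / e

print(f"lambda at 0.05 eV: {debroglie_wavelength(0.05)*1e9:.1f} nm")
for L in (100e-9, 10e-9, 2e-9):   # shrinking the confining dimension
    print(f"L = {L*1e9:>5.0f} nm -> E1 = {box_ground_state_ev(L):.4f} eV")
```

The confinement energy grows as $1/L^2$, so it only becomes significant once L approaches the few-nanometer de Broglie scale, which is the regime described above.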
A particle behaves as if it were free when the confining dimension is large compared to the wavelength of the particle. During this state, the bandgap remains at its original energy due to a continuous energy stat
|
https://en.wikipedia.org/wiki/Common%20coding%20theory
|
Common coding theory is a cognitive psychology theory describing how perceptual representations (e.g. of things we can see and hear) and motor representations (e.g. of hand actions) are linked. The theory claims that there is a shared representation (a common code) for both perception and action. More importantly, seeing an event activates the action associated with that event, and performing an action activates the associated perceptual event.
The idea of direct perception-action links originates in the work of the American psychologist William James and more recently, American neurophysiologist and Nobel prize winner Roger Sperry. Sperry argued that the perception–action cycle is the fundamental logic of the nervous system. Perception and action processes are functionally intertwined: perception is a means to action and action is a means to perception. Indeed, the vertebrate brain has evolved for governing motor activity with the basic function to transform sensory patterns into patterns of motor coordination.
Background
The classical approach to cognition is a 'sandwich' model which assumes three stages of information processing: perception, cognition and then action. In this model, perception and action do not interact directly, instead cognitive processing is needed to convert perceptual representations into action. For example, this might require creating arbitrary linkages (mapping between sensory and motor codes).
In contrast, the common coding account claims that perception and action are directly linked by a common computational code.
This theory, put forward by Wolfgang Prinz and his colleagues from the Max Planck Institute for Human Cognitive and Brain Sciences, claims parity between perception and action. Its core assumption is that actions are coded in terms of the perceivable effects (i.e., the distal perceptual events) they should generate. This theory also states that perception of an action should activate action representations to the deg
|
https://en.wikipedia.org/wiki/Ethosome
|
Ethosomes are phospholipid nanovesicles used for dermal and transdermal delivery of molecules. Ethosomes were developed by Touitou et al. in 1997 as novel lipid carriers composed of ethanol, phospholipids, and water. They are reported to improve the skin delivery of various drugs. Ethanol is an efficient permeation enhancer that is believed to act by affecting the intercellular region of the stratum corneum. Ethosomes are soft, malleable vesicles composed mainly of phospholipids, ethanol (in relatively high concentration), and water. These soft vesicles represent novel vesicular carriers for enhanced delivery through the skin. The size of ethosome vesicles can be modulated from tens of nanometers to microns.
Structure and composition
Ethosomes are mainly composed of multiple concentric layers of flexible phospholipid bilayers, with a relatively high concentration of ethanol (20–45%), glycols, and water. Their overall structure has been confirmed by 31P-NMR, electron microscopy, and DSC. They readily penetrate the horny layer of the skin, which enhances the permeation of encapsulated drugs. The mechanism of permeation enhancement is attributed to the overall properties of the system.
Applications
Because of their unique structure, ethosomes are able to efficiently encapsulate and deliver into the skin highly lipophilic molecules such as testosterone, cannabinoids and ibuprofen, as well as hydrophilic drugs such as clindamycin phosphate, buspirone hydrochloride. They have been studied for the transdermal and intradermal delivery of peptides, steroids, antibiotics, prostaglandins, antivirals and anti-pyretics. The components used to make ethosomes are already approved for pharmaceutical and cosmetic use and the formulated vesicles are stable when stored. They can be incorporated in various pharmaceutical formulations such as gels, creams, emulsions and sprays. They're consequently being developed for pharmaceutical and cosmeceutical products. Ethosomal systems compare
|
https://en.wikipedia.org/wiki/Cabal%20%28set%20theory%29
|
The Cabal was, or perhaps is, a set of set theorists in Southern California, particularly at UCLA and Caltech, but also at UC Irvine. Organization and procedures range from informal to nonexistent, so it is difficult to say whether it still exists or exactly who has been a member, but it has included such notable figures as Donald A. Martin, Yiannis N. Moschovakis, John R. Steel, and Alexander S. Kechris. Others who have published in the proceedings of the Cabal seminar include Robert M. Solovay, W. Hugh Woodin, Matthew Foreman, and Steve Jackson.
The work of the group is characterized by free use of large cardinal axioms, and research into the descriptive set theoretic behavior of sets of reals if such assumptions hold.
Some of the philosophical views of the Cabal seminar were described in the seminar's published proceedings.
Publications
|
https://en.wikipedia.org/wiki/Cubicity
|
In graph theory, cubicity is a graph invariant defined to be the smallest dimension such that a graph can be realized as an intersection graph of unit cubes in Euclidean space. Cubicity was introduced by Fred S. Roberts in 1969 along with a related invariant called boxicity that considers the smallest dimension needed to represent a graph as an intersection graph of axis-parallel rectangles in Euclidean space.
Definition
Let $G$ be a graph. Then the cubicity of $G$, denoted by $\operatorname{cub}(G)$, is the smallest integer $k$ such that $G$ can be realized as an intersection graph of axis-parallel unit cubes in $k$-dimensional Euclidean space.
The cubicity of a graph is closely related to the boxicity of a graph, denoted $\operatorname{box}(G)$. The definition of boxicity is essentially the same as cubicity, except in terms of using axis-parallel rectangles instead of cubes. Since a cube is a special case of a rectangle, the cubicity of a graph is always an upper bound for the boxicity of a graph. In the other direction, it can be shown that for any graph $G$ on $n$ vertices, $\operatorname{cub}(G) \le \lceil \log_2 n \rceil \operatorname{box}(G)$, where $\lceil \cdot \rceil$ is the ceiling function, i.e., the function giving the smallest integer greater than or equal to its argument.
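A small sketch of the underlying representation: given axis-parallel unit cubes by their lower-corner coordinates in some fixed dimension, two closed unit cubes intersect exactly when their corners differ by at most 1 in every coordinate, which yields the intersection graph (the corner placements below are hypothetical):

```python
from itertools import combinations

def intersection_graph(corners):
    """Intersection graph of axis-parallel unit cubes, each given by the
    coordinates of its lower corner (all cubes live in the same R^k).
    Closed unit cubes intersect iff corners differ by <= 1 on every axis."""
    edges = set()
    for (i, p), (j, q) in combinations(enumerate(corners), 2):
        if all(abs(a - b) <= 1 for a, b in zip(p, q)):
            edges.add((i, j))
    return edges

# Four unit cubes in R^2 (unit squares), hypothetical placement
corners = [(0, 0), (0.5, 0.5), (2, 0), (2.5, 0.5)]
print(intersection_graph(corners))  # {(0, 1), (2, 3)}: two disjoint edges
```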
|
https://en.wikipedia.org/wiki/Four-slide
|
A four-slide, also known as a multislide, multi-slide, or four-way, is a metalworking machine tool used in the high-volume manufacture of small stamped components from bar or wire stock. The press is most simply described as a horizontal stamping press that uses cams to control tools. The machine is used for progressive or transfer stamping operations.
Design
A four-slide is quite different from most other presses. The key feature of the machine is its four moving slides with tools attached, which strike the workpiece together or in sequence to form it. These slides are driven by four shafts that outline the machine. The shafts are connected by bevel gears so that one shaft is driven by an electric motor, and that shaft's motion then drives the other three shafts. Each shaft has cams, usually of a split type, which drive the slides. This shafting arrangement allows the workpiece to be worked from four sides, which makes this machine extremely versatile. A hole near the center of the machine is provided to expel the completed workpiece.
Advantages and disadvantages
The greatest advantage of the four-slide machine is its ability to complete all of the operations required to form the workpiece from start to finish. Moreover, it can handle certain parts that transfer or progressive dies cannot, because it can manipulate the workpiece from four axes. Due to this flexibility it reduces the cost of the finished part because it requires fewer machines, setups, and handling steps. Also, because only one machine is required, less space is needed for any given workpiece. Compared to standard stamping presses the tooling is usually inexpensive, due to the simplicity of the tools. A four-slide can usually produce 20,000 to 70,000 finished parts per 16-hour shift, depending on the number of operations per part; this speed usually results in a lower cost per part.
The biggest disadvantage is its size constraints. The largest machines can handle stock up to wide, long, and thick. For wires the li
|
https://en.wikipedia.org/wiki/Phytotoxin
|
Phytotoxins are substances that are poisonous or toxic to the growth of plants. Phytotoxic substances may result from human activity, as with herbicides, or they may be produced by plants, by microorganisms, or by naturally occurring chemical reactions.
The term is also used to describe toxic chemicals produced by plants themselves, which function as defensive agents against their predators. Most examples pertaining to this definition of phytotoxin are members of various classes of specialised or secondary metabolites, including alkaloids, terpenes, and especially phenolics, though not all such compounds are toxic or serve defensive purposes. Phytotoxins may also be toxic to humans.
Toxins produced by plants
Alkaloids
Alkaloids are derived from amino acids, and contain nitrogen. They are medically important by interfering with components of the nervous system affecting membrane transport, protein synthesis, and enzyme activities. They generally have a bitter taste. Alkaloids usually end in -ine (caffeine, nicotine, cocaine, morphine, ephedrine).
Terpenes
Terpenes are water-insoluble lipids synthesized from acetyl-CoA or basic intermediates of glycolysis. They often end in -ol (menthol) and comprise the majority of plant essential oils.
Monoterpenes are found in gymnosperms, where they collect in the resin ducts and may be released after an insect begins to feed, attracting the insect's natural enemies.
Sesquiterpenes are bitter tasting to humans and are found on glandular hairs or subdermal pigments.
Diterpenes are contained in resin and block and deter insect feeding. Taxol, an important anticancer drug is found in this group.
Triterpenes mimic the insect molting hormone ecdysone, disrupting molting and development, and are often lethal. Those found in citrus fruit include limonoids, bitter substances that deter insect feeding.
Glycosides are made of one or more sugars combined with a non-sugar like aglycone, which usually determines
|
https://en.wikipedia.org/wiki/Earth-centered%2C%20Earth-fixed%20coordinate%20system
|
The Earth-centered, Earth-fixed coordinate system (acronym ECEF), also known as the geocentric coordinate system, is a cartesian spatial reference system that represents locations in the vicinity of the Earth (including its surface, interior, atmosphere, and surrounding outer space) as X, Y, and Z measurements from its center of mass. Its most common use is in tracking the orbits of satellites and in satellite navigation systems for measuring locations on the surface of the Earth, but it is also used in applications such as tracking crustal motion.
The distance from a given point of interest to the center of Earth is called the geocentric distance, which is a generalization of the geocentric radius in that it is not restricted to points on the reference ellipsoid surface.
The geocentric altitude is a type of altitude defined as the difference between the two aforementioned quantities; it is not to be confused with the geodetic altitude.
Conversions between ECEF and geodetic coordinates (latitude and longitude) are discussed at geographic coordinate conversion.
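The forward (geodetic to ECEF) direction is closed-form; here is a minimal sketch using the WGS 84 ellipsoid constants (the test points are chosen arbitrarily):

```python
from math import sin, cos, radians, sqrt

# WGS 84 ellipsoid constants
A = 6378137.0              # semi-major axis, m
F = 1 / 298.257223563      # flattening
E2 = F * (2 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic latitude/longitude (degrees) and ellipsoidal
    height (m) to ECEF X, Y, Z (m)."""
    lat, lon = radians(lat_deg), radians(lon_deg)
    n = A / sqrt(1 - E2 * sin(lat) ** 2)  # prime vertical radius of curvature
    x = (n + h) * cos(lat) * cos(lon)
    y = (n + h) * cos(lat) * sin(lon)
    z = (n * (1 - E2) + h) * sin(lat)
    return x, y, z

print(geodetic_to_ecef(0.0, 0.0, 0.0))   # (6378137.0, 0.0, 0.0): equator
print(geodetic_to_ecef(90.0, 0.0, 0.0))  # ~(0, 0, 6356752.3): the pole
```

The reverse (ECEF to geodetic) conversion has no simple closed form on an ellipsoid and is usually done iteratively or with published approximations, which is why the dedicated conversion articles exist.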
Structure
As with any spatial reference system, ECEF consists of an abstract coordinate system (in this case, a conventional three-dimensional right-handed system), and a geodetic datum that binds the coordinate system to actual locations on the Earth. The ECEF that is used for the Global Positioning System (GPS) is the geocentric WGS 84, which currently includes its own ellipsoid definition. Other local datums such as NAD 83 may also be used. Due to differences between datums, the ECEF coordinates for a location will be different for different datums, although the differences between most modern datums are relatively small, within a few meters.
The ECEF coordinate system has the following parameters:
The origin at the center of the chosen ellipsoid. In WGS 84, this is the center of mass of the Earth.
The Z axis is the line between the North and South Poles, with positive values increasing northward. In WGS 84, this
|
https://en.wikipedia.org/wiki/162%20%28number%29
|
162 (one hundred [and] sixty-two) is the natural number between 161 and 163.
In mathematics
Having only 2 and 3 as its prime divisors, 162 is a 3-smooth number. 162 is also an abundant number, since the sum of its proper divisors (201) is greater than 162 itself. As the product of numbers three units apart from each other (162 = 9 × 6 × 3), it is a triple factorial number.
There are 162 ways of partitioning seven items into subsets with at least two items per subset. $162^{64} + 1$ is a prime number.
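The divisor, triple-factorial, and partition claims are quick to verify computationally (a sketch; the partition count uses the standard recurrence for set partitions with no singleton blocks):

```python
from math import comb

# Abundance: the proper divisors of 162 sum to more than 162
print(sum(d for d in range(1, 162) if 162 % d == 0))  # 201 > 162

# Triple factorial: product of numbers three apart
print(9 * 6 * 3)  # 162

# Partitions of a 7-element set with every block of size >= 2:
# fix one element, choose its block (size k), recurse on the rest.
def no_singleton_partitions(n):
    if n == 0:
        return 1
    if n == 1:
        return 0
    return sum(comb(n - 1, k - 1) * no_singleton_partitions(n - k)
               for k in range(2, n + 1))

print(no_singleton_partitions(7))  # 162
```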
In religion
Jared was 162 when he became the father of Enoch.
In sports
162 is the total number of baseball games each team plays during a regular season in Major League Baseball.
|
https://en.wikipedia.org/wiki/Mir-42%20microRNA%20precursor%20family
|
In molecular biology, mir-42 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms.
See also
MicroRNA
|
https://en.wikipedia.org/wiki/Rempeyek
|
Rempeyek or peyek is a deep-fried savoury Indonesian-Javanese cracker made from flour (usually rice flour) with other ingredients, bound or coated by crispy flour batter. The most common type of rempeyek is peyek kacang ("peanut peyek"); however, other ingredients can be used instead, such as teri (dried anchovies), rebon (small shrimp), or ebi (dried shrimp). Today, rempeyek is commonly found in Indonesia and Malaysia, as well as in countries with considerable Indonesian migrant populations, such as The Netherlands and Suriname.
Coconut milk, salt, and spices such as ground candlenut and coriander are often mixed within the flour batter. Some recipes also add a chopped citrus leaf. The spiced batter, mixed or sprinkled with the granule ingredients, is deep fried in hot coconut oil. The flour batter acts as a binding agent for the granules (peanuts, anchovy, shrimp, etc.). It hardens upon frying and turns into a golden brown and crispy cracker.
In Indonesia, rempeyek making is traditionally a small-scale home industry, yet today some rempeyek producers have reached a larger production scale and distribute widely with a rempeyek-brand trading value reaching 25 million Rupiah (around US$2,100) monthly. In Malaysia, rempeyek now is widely made using machines.
Etymology and origin
Rempeyek is derived from the Javanese onomatopoeia peyek, depicting the sound of a crisp cracker breaking.
Rempeyek is often associated with Javanese cuisine, served to accompany pecel (vegetables in peanut sauce) or other meals, or as a stand-alone snack. Today, it is common throughout Indonesia, and is also popular in Malaysia following the migration of Javanese immigrants in the early 19th century.
Variants
The most common and widely distributed type of rempeyek is rempeyek kacang (peanut rempeyek); however, anchovy, small shrimp, dried shrimp, spinach (rempeyek bayam), and beans such as mung beans and soybeans are also common types. Rempeyek kacang is especially common in the Banyu
|
https://en.wikipedia.org/wiki/Pelagicoccus%20mobilis
|
Pelagicoccus mobilis is a Gram-negative and chemoheterotrophic bacterium from the genus Pelagicoccus which has been isolated from seawater in Japan.
|
https://en.wikipedia.org/wiki/Mini-DVI
|
The Mini-DVI connector is used on certain Apple computers as a digital alternative to the Mini-VGA connector. Its size is between the full-sized DVI and the tiny Micro-DVI. It is found on the 12-inch PowerBook G4 (except the original 12-inch 867 MHz PowerBook G4, which used Mini-VGA), the Intel-based iMac, the Intel-based MacBook, the Intel-based Xserve, the 2009 Mac mini, and some late-model eMacs.
In October 2008, Apple announced the company was phasing Mini-DVI out in favor of Mini DisplayPort.
Mini-DVI connectors on Apple hardware are capable of carrying DVI, VGA, or TV signals through the use of adapters, detected with EDID (Extended display identification data) via DDC. This connector is often used in place of a DVI connector in order to save physical space on devices. Mini-DVI does not support dual-link connections and hence cannot support resolutions higher than 1920×1200 @60 Hz.
There are various types of Mini-DVI adapter:
Apple Mini-DVI to VGA Adapter, Apple part number M9320G/A (discontinued)
Apple Mini-DVI to Video Adapter, Apple part number M9319G/A, provided both S-Video and Composite video connectors (discontinued)
Apple Mini-DVI to DVI Adapter (DVI-D), Apple part number M9321G/B (discontinued)
Non-OEM Mini-DVI to HDMI adapters are also available at online stores such as eBay and Amazon, and from some retail stores, but were not sold by Apple.
The physical connector is similar to Mini-VGA, but is differentiated by having four rows of pins arranged in two vertically stacked slots rather than the two rows of pins in the Mini-VGA.
Connecting to a DVI-I connector requires a Mini-DVI to DVI-D cable plus a DVI-D to DVI-I adapter.
Criticisms
Apple's Mini-DVI to DVI-D cable does not carry the analog signal coming from the mini-DVI port on the Apple computer. This means that it is not possible to use this cable with an inexpensive DVI-to-VGA adapter for VGA output; Apple's mini-DVI to VGA cable must be used instead. This could be avoided if Apple pro
|
https://en.wikipedia.org/wiki/Supercomputer%20operating%20system
|
A supercomputer operating system is an operating system intended for supercomputers. Since the end of the 20th century, supercomputer operating systems have undergone major transformations, as fundamental changes have occurred in supercomputer architecture. While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been moving away from in-house operating systems and toward some form of Linux, which ran all of the supercomputers on the TOP500 list in November 2017. As of 2021, the top 10 supercomputers ran, for instance, Red Hat Enterprise Linux (RHEL) or some variant of it, or another Linux distribution such as Ubuntu.
Given that modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g., using a small and efficient lightweight kernel such as Compute Node Kernel (CNK) or Compute Node Linux (CNL) on compute nodes, but a larger system such as a Linux-derivative on server and input/output (I/O) nodes.
While in a traditional multi-user computer system job scheduling is in effect a tasking problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully dealing with inevitable hardware failures when tens of thousands of processors are present.
Although most modern supercomputers use the Linux operating system, each manufacturer has made its own specific changes to the Linux-derivative they use, and no industry standard exists, partly because the differences in hardware architectures require changes to optimize the operating system to each hardware design.
Context and overview
In the early days of supercomputing, the basic architectural concepts were evolving rapidly, and system software had to follow hardware innovations that usually took rapid turns. In the ea
|
https://en.wikipedia.org/wiki/Barents%20Sea%20Opening
|
The Barents Sea Opening (BSO) is an oceanographic term for the Western Barents Sea, the sea area between Bear Island in the south of Svalbard and the northern extremity of Norway through which a water mass of Atlantic origin flows into the Arctic Ocean. The inflow of relatively warm water into the Arctic Ocean occurs not only through the Barents Sea Opening, but also through the Fram Strait, which is much deeper. The internal energy entering the colder waters has an influence on the atmosphere and the sea ice above, and therefore possibly on the global climate.
Oceanographic measurements
The Norwegian Polar Institute has performed about six hydrographic surveys per year across the Barents Sea Opening from to since 1977.
A set of oceanographic current meters has captured the seasonal cycle of the inflow since 1997. Roughly every 30 nm, two instruments are deployed: one at 50 m depth, and another 15 m above the sea floor.
|
https://en.wikipedia.org/wiki/Uterosacral%20ligament
|
The uterosacral ligaments (or rectouterine ligaments) are major ligaments of the uterus that extend posterior-ward from the cervix to attach onto the (anterior aspect of the) sacrum.
Anatomy
Microanatomy/histology
The uterosacral ligaments consist of fibrous connective tissue, and smooth muscle tissue.
Relations
The uterosacral ligaments pass inferior to the peritoneum. They embrace the rectouterine pouch, and rectum. The pelvic splanchnic nerves run on top of the ligament.
Function
The uterosacral ligaments pull the cervix posterior-ward, counteracting the anterior-ward pull exerted by the round ligament of uterus upon the fundus of the uterus, thus maintaining anteversion of the body of the uterus.
Clinical significance
The uterosacral ligaments may be palpated during a rectal examination, but not during pelvic examination.
|
https://en.wikipedia.org/wiki/Alois%20Riedler
|
Alois Riedler (May 15, 1850 - October 25, 1936) was a noted Austrian mechanical engineer, and, as professor in Germany, a vigorous proponent of practically-oriented engineering education.
Riedler was born in Graz, Austria, and studied mechanical engineering at the Technische Hochschule (TH) Graz from 1866-1871. After graduation he took on a succession of academic appointments. He first became an assistant at the TH Brünn (1871-1873); then in 1873 moved to the TH Vienna, first as an assistant, then from 1875 onwards as a designer of machines. From 1880 to 1883, Riedler worked as associate professor at the TH Munich. In 1883 he became full professor at the TH Aachen.
In 1888 he joined the TH Berlin as Professor for Mechanical Engineering, where he remained until retirement in 1920. From 1899 to 1900, he was appointed the school's principal (rector) and led discussions on how to celebrate its 100th anniversary. As a result, Riedler and Adolf Slaby (1849–1913) convinced Kaiser Wilhelm II (1859–1941) to allow Prussian technical universities to award doctorates. Although the government did not immediately consent, this effort led eventually to the school's reconstitution as today's Technical University of Berlin.
Riedler first received international recognition for his reports on the Philadelphia Centennial Exposition (1876) and Paris Exposition Universelle (1878). He was later widely known for his efficient, high-speed pumps widely adopted in waterworks and in draining mines. Riedler was also known for his 1896 book "Das Maschinen-Zeichnen", (Machine Drawing) which introduced modern technical drawing.
Riedler was actively involved in the early development of internal combustion engines, both for gasoline and diesel fuel. In 1903 he established the Laboratory for Internal Combustion Engines at the TH Berlin, expanded in 1907 to include investigations of motor vehicles. As laboratory director, Riedler designed a pioneering roller test stand. He also received what was
|
https://en.wikipedia.org/wiki/Biofilm%20prevention
|
Biofilm formation occurs when free floating microorganisms attach themselves to a surface. Although there are some beneficial uses of biofilms, they are generally considered undesirable, and means of biofilm prevention have been developed. Biofilms secrete extracellular polymeric substance that provides a structural matrix and facilitates adhesion for the microorganisms; the means of prevention have thus concentrated largely on two areas: killing the microbes that form the film, or preventing the adhesion of the microbes to a surface. Because biofilms protect the bacteria, they are often more resistant to traditional antimicrobial treatments, making them a serious health risk. For example, there are more than one million cases of catheter-associated urinary tract infections (CAUTI) reported each year, many of which can be attributed to bacterial biofilms. There is much research into the prevention of biofilms.
Methods
Biofilm prevention methods fall into two categories:
prevention of microbe growth; and
prevention of microbe surface attachment.
Prevention of microbe growth
Antimicrobial coatings
Chemical modifications are the main strategy for biofilm prevention on indwelling medical devices. Antibiotics, biocides, and ion coatings are commonly used chemical methods of biofilm prevention. They prevent biofilm formation by interfering with the attachment and expansion of immature biofilms. Typically, these coatings are effective only for a short time period (about 1 week), after which leaching of the antimicrobial agent reduces the effectiveness of the coating.
The medical uses of silver and silver ions have been known for some time; its use can be traced to the Phoenicians, who would store their water, wine, and vinegar in silver bottles to keep them from spoiling. There has been renewed interest in silver coatings for antimicrobial purposes. The antimicrobial property of silver is known as an oligodynamic effect, a process in which metal ions interfere with
|
https://en.wikipedia.org/wiki/Bypass%20ratio
|
The bypass ratio (BPR) of a turbofan engine is the ratio of the mass flow rate of the bypass stream to the mass flow rate entering the core. A 10:1 bypass ratio, for example, means that 10 kg of air passes through the bypass duct for every 1 kg of air passing through the core.
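As a minimal numeric sketch of the definition (the mass flow values below are invented for illustration):

```python
# Toy illustration of the definition; the flow values are made up.
m_dot_bypass = 1000.0  # mass flow through the bypass duct, kg/s
m_dot_core = 100.0     # mass flow through the engine core, kg/s

bpr = m_dot_bypass / m_dot_core
print(f"bypass ratio = {bpr:.0f}:1")  # -> bypass ratio = 10:1
```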
Turbofan engines are usually described in terms of BPR, which together with engine pressure ratio, turbine inlet temperature and fan pressure ratio are important design parameters. In addition, BPR is quoted for turboprop and unducted fan installations because their high propulsive efficiency gives them the overall efficiency characteristics of very high bypass turbofans. This allows them to be shown together with turbofans on plots which show trends of reducing specific fuel consumption (SFC) with increasing BPR. BPR is also quoted for lift fan installations where the fan airflow is remote from the engine and doesn't physically touch the engine core.
Bypass provides a lower fuel consumption for the same thrust, measured as thrust specific fuel consumption (grams/second fuel per unit of thrust in kN using SI units). Lower fuel consumption that comes with high bypass ratios applies to turboprops, using a propeller rather than a ducted fan. High bypass designs are the dominant type for commercial passenger aircraft and both civilian and military jet transports.
Business jets use medium BPR engines.
Combat aircraft use engines with low bypass ratios to compromise between fuel economy and the requirements of combat: high power-to-weight ratios, supersonic performance, and the ability to use afterburners.
Principles
If all the gas power from a gas turbine is converted to kinetic energy in a propelling nozzle, the aircraft is best suited to high supersonic speeds. If it is all transferred to a separate big mass of air with low kinetic energy, the aircraft is best suited to zero speed (hovering). For speeds in between, the gas power is shared between a separate airstream and the gas turbine
|
https://en.wikipedia.org/wiki/Mouse%20Genome%20Informatics
|
Mouse Genome Informatics (MGI) is a free, online database and bioinformatics resource hosted by The Jackson Laboratory, with funding by the National Human Genome Research Institute (NHGRI), the National Cancer Institute (NCI), and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD). MGI provides access to data on the genetics, genomics and biology of the laboratory mouse to facilitate the study of human health and disease. The database integrates multiple projects, with the two largest contributions coming from the Mouse Genome Database and Mouse Gene Expression Database (GXD). MGI contains data curated from over 230,000 publications.
The MGI resource was first published online in 1994 and is a collection of data, tools, and analyses created and tailored for use in the laboratory mouse, a widely used model organism. It is "the authoritative source of official names for mouse genes, alleles, and strains", which follow the guidelines established by the International Committee on Standardized Genetic Nomenclature for Mice. The history and focus of Jackson Laboratory research and production facilities generate tremendous knowledge and depth, which researchers can mine to advance their research. A dedicated community of mouse researchers worldwide enhances and contributes to this knowledge as well. It is an indispensable tool for any researcher using the mouse as a model organism, and for researchers interested in genes that share homology with mouse genes. Various mouse research support resources, including animal collections and free colony management software, are also available at the MGI site.
Mouse Genome Database
The Mouse Genome Database collects and curates comprehensive phenotype and functional annotations for mouse genes and alleles. This is an NHGRI-funded project which contributes to the Mouse Genome Informatics database.
Mouse gene expression database
The Gene Expression Database is a comm
|
https://en.wikipedia.org/wiki/Bird%20ichnology
|
Bird ichnology is the study of avian life traces in ornithology and paleontology. Such life traces can include footprints, nests, feces and coproliths. Scientists gain insight about the behavior and diversity of birds by studying such evidence.
Ichnofossils (or ichnites) are especially important for clarifying the evolution and prehistoric diversity of taxa. These cannot usually be associated with a particular genus, let alone species, of bird, as they are hardly ever associated with fossil bones. But it is possible to group them into ichnotaxa based on their morphology (form). In practice, the details of shape that reveal the birds' behavior or biologic affinity are generally given more weight in ichnologic classification.
Bird ichnofossils
These fossil traces of birds are sometimes hard to interpret correctly, especially when they are from the Mesozoic, when the birds' dinosaurian relatives were still in existence. Nests, at least those of Neornithes, are usually quite easy to identify as such due to the unique structures of their eggshells; there is some uncertainty as regards the origin of certain Mesozoic eggshells, which makes nests of this age problematic.
Mesozoic fossil footprints are hardest to attribute. "Proto-bird" and related theropod feet were very much alike; non-avian theropod tracks such as the ichnogenus Grallator were initially attributed to ratites because in the early 19th century when these were described, the knowledge about dinosaurian diversity was marginal compared to today, whereas ratites were well-known. Also, under the creationist dogma, scientists would believe that e.g. rheas had been around for all eternity. In the Jurassic and Early Cretaceous, juvenile non-avian theropods left very birdlike footprints. Towards the end of the Cretaceous, the tracks of aquatic birds are usually recognizable due to the presence of webbing between the toes; indeed, most avian ichnotaxa fall into this group. However, giant flightless birds also existed by th
|
https://en.wikipedia.org/wiki/Project%20Shield
|
Project Shield is an anti-distributed-denial-of-service (anti-DDoS) service that is offered by Jigsaw, a subsidiary of Google, to websites that have "media, elections, and human rights related content." The main goal of the project is to serve "small, under-resourced news sites that are vulnerable to the web's growing epidemic of DDOS attacks", according to team lead George Conard.
Google initially announced Project Shield at its Ideas Conference on October 21, 2013. The service was at first offered only to trusted testers, but on February 25, 2016, Google opened it up to any qualifying website. The service routes traffic through a Google-owned reverse proxy that identifies and filters malicious traffic. In May 2018, Jigsaw announced that it would start offering free protection from distributed denial-of-service attacks to US political campaigns, candidates, and political action committees.
|
https://en.wikipedia.org/wiki/Leaf%20plastochron%20index
|
Leaf plastochron index is a measure of plant leaf age based on morphological development (the plastochron). It is useful in studying plant development that requires destructive measurement on multiple individuals. By measuring a metric against morphological age instead of chronological time, one can reduce variation between individuals, allowing greater focus on variation due to development.
What is the leaf plastochron index?
The leaf plastochron index, also referred to simply as the plastochron index (PI) from which it is derived, is a demography formula used to determine the developmental age and growth rate of a leaf or other growing plant organ. This formula was useful when first introduced, as it allowed scientists to track the progression and growth of a plant. According to the American Journal of Botany, the typical variation of the formula for the plastochron index is as follows:
$$PI = n + \frac{\log L_n - \log R}{\log L_n - \log L_{n+1}}$$
However, another common variation as according to Hans Burström's book Growth and Growth Substances/Wachstum und Wuchsstoffe, the leaf plastochron index for a certain leaf on a plant is as follows;
This is derived from Burström's variation of the plastochron index which is as follows;
Other data, not including morphological features, that can be used to determine the leaf's developmental age include uptake of oxygen, weight, chlorophyll, etc.
How to use the leaf plastochron index
To use the plastochron index, it is important to understand the terms of the formula. The following is the key to the typical variation of the formula; a worked numeric sketch follows the list.
“n” is the leaf's sequential index (or serial) number. This number increases with each successive leaf, but if the seedling's leaves are themselves the organs under study, n = 0.
“R” is the reference length of the leaf.
“$L_n$” is the length of leaf $n$, which is longer than, or equal to, the reference length.
“$L_{n+1}$” is the length of leaf $n+1$, which is shorter than the reference length.
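As a worked numeric sketch of the formula reconstructed above (natural logarithms are used; the ratio of log differences is base-independent; the function name and leaf lengths are illustrative, not from the source):

```python
import math

def plastochron_index(n, R, L_n, L_n1):
    """PI = n + (log L_n - log R) / (log L_n - log L_{n+1}).

    n    -- serial number of the youngest leaf at least as long as R
    R    -- reference length, e.g. 10 mm
    L_n  -- length of leaf n (>= R)
    L_n1 -- length of leaf n + 1 (< R)
    """
    return n + (math.log(L_n) - math.log(R)) / (math.log(L_n) - math.log(L_n1))

# Hypothetical plant: leaf 5 measures 14 mm, leaf 6 measures 8 mm, R = 10 mm.
pi = plastochron_index(n=5, R=10.0, L_n=14.0, L_n1=8.0)
print(round(pi, 2))      # 5.6 -- the plant is "5.6 plastochrons old"
print(round(pi - 3, 2))  # 2.6 -- leaf plastochron index of leaf 3 (LPI = PI - a)
```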
However, when using the formula supplied by the American Journal of Botany there
|
https://en.wikipedia.org/wiki/Salt%20fingering
|
Salt fingering is a mixing process, an example of double-diffusive instability, that occurs when relatively warm, salty water overlies relatively colder, fresher water. It is driven by the fact that heat diffuses through water more readily than salt does. A small parcel of warm, salty water sinking downwards into a colder, fresher region will lose its heat before losing its salt, making the parcel increasingly denser than the water around it and causing it to sink further. Likewise, a small parcel of colder, fresher water displaced upwards will gain heat by diffusion from surrounding water, making it lighter than the surrounding waters and causing it to rise further. Paradoxically, the fact that salinity diffuses less readily than temperature means that salinity mixes more efficiently than temperature, owing to the turbulence caused by salt fingers.
Salt fingering was first described mathematically by Professor Melvin Stern of Florida State University in 1960 and important field measurements of the process have been made by Raymond Schmitt of the Woods Hole Oceanographic Institution and Mike Gregg and Eric Kunze of the University of Washington, Seattle. A particularly interesting area for salt fingering is found in the Caribbean Sea, where it is responsible for producing a "staircase" of well-mixed layers a few metres in thickness that extend for hundreds of kilometres.
Pre-dating the work of Stern, a paper by the American oceanographer Henry Stommel discussed the creation of a large-scale salt finger in which a column of water would be surrounded by a membrane that would allow diffusion of temperature but not salinity. Once primed by the upward movement of the colder and fresher intermediate water, the resultant "perpetual salt fountain" would be able to draw energy (heat) from the local ocean water stratification.
|
https://en.wikipedia.org/wiki/Comparison%20microscope
|
A comparison microscope is a device used to analyze side-by-side specimens. It consists of two microscopes connected by an optical bridge, which results in a split view window enabling two separate objects to be viewed simultaneously. This avoids the observer having to rely on memory when comparing two objects under a conventional microscope.
History
One of the first prototypes of a comparison microscope was developed in 1913 in Germany.
In 1929, using a comparison microscope adapted for forensic ballistics, Calvin Goddard and his partner Phillip Gravelle were able to absolve the Chicago Police Department of participation in the St. Valentine's Day Massacre.
Col. Calvin H. Goddard
Philip O. Gravelle, a chemist, developed a comparison microscope for use in the identification of fired bullets and cartridge cases with the support and guidance of forensic ballistics pioneer Calvin Goddard. It was a significant advance in the science of firearms identification in forensic science. The firearm from which a bullet or cartridge case has been fired is identified by the comparison of the unique striae left on the bullet or cartridge case from the worn, machined metal of the barrel, breach block, extractor, or firing pin in the gun. It was Gravelle who mistrusted his memory. "As long as he could inspect only one bullet at a time with his microscope, and had to keep the picture of it in his memory until he placed the comparison bullet under the microscope, scientific precision could not be attained. He therefore developed the comparison microscope and Goddard made it work." Calvin Goddard perfected the comparison microscope and subsequently popularized its use. Sir Sydney Smith also appreciated the idea, emphasizing its importance in forensic science and firearms identification. He took the comparison microscope to Scotland and introduced it to the European scientists for firearms identification and other forensic science needs.
Modern comparison microscope
The modern inst
|
https://en.wikipedia.org/wiki/Defective%20matrix
|
In linear algebra, a defective matrix is a square matrix that does not have a complete basis of eigenvectors, and is therefore not diagonalizable. In particular, an n × n matrix is defective if and only if it does not have n linearly independent eigenvectors. A complete basis is formed by augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving defective systems of ordinary differential equations and other problems.
An n × n defective matrix always has fewer than n distinct eigenvalues, since distinct eigenvalues always have linearly independent eigenvectors. In particular, a defective matrix has one or more eigenvalues λ with algebraic multiplicity m > 1 (that is, they are multiple roots of the characteristic polynomial), but fewer than m linearly independent eigenvectors associated with λ. If the algebraic multiplicity of λ exceeds its geometric multiplicity (that is, the number of linearly independent eigenvectors associated with λ), then λ is said to be a defective eigenvalue. However, every eigenvalue with algebraic multiplicity m always has m linearly independent generalized eigenvectors.
A Hermitian matrix (or the special case of a real symmetric matrix) or a unitary matrix is never defective; more generally, a normal matrix (which includes Hermitian and unitary as special cases) is never defective.
Jordan block
Any nontrivial Jordan block of size $2 \times 2$ or larger (that is, not completely diagonal) is defective. (A diagonal matrix is a special case of the Jordan normal form, with all trivial Jordan blocks of size $1 \times 1$, and is not defective.) For example, the $n \times n$ Jordan block
$$J = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$$
has an eigenvalue, $\lambda$, with algebraic multiplicity $n$ (or greater if there are other Jordan blocks with the same eigenvalue), but only one distinct eigenvector, $e_1 = (1, 0, \ldots, 0)^T$, where $J e_1 = \lambda e_1$. The other canonical basis vectors $e_2, \ldots, e_n$ form a chain of generalized eigenvectors such that $(J - \lambda I) e_k = e_{k-1}$ for $k = 2, \ldots, n$.
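A quick computational check of this behavior (a sketch using sympy; the $2 \times 2$ block with eigenvalue 3 is an arbitrary choice):

```python
import sympy as sp

# 2x2 Jordan block with eigenvalue 3: algebraic multiplicity 2,
# geometric multiplicity 1, hence defective.
A = sp.Matrix([[3, 1],
               [0, 3]])

# (eigenvalue, algebraic multiplicity, [eigenvectors]) triples:
print(A.eigenvects())         # [(3, 2, [Matrix([[1], [0]])])] -- one eigenvector only
print(A.is_diagonalizable())  # False

# The Jordan normal form is as close to diagonal as a defective matrix gets:
P, J = A.jordan_form()
print(J)                      # Matrix([[3, 1], [0, 3]])
```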
Any defective matrix has a nontrivial Jordan normal form, which is as close as one can come to diagon
|
https://en.wikipedia.org/wiki/Intel%20i860
|
The Intel i860 (also known as 80860) is a RISC microprocessor design introduced by Intel in 1989. It is one of Intel's first attempts at an entirely new, high-end instruction set architecture since the failed Intel iAPX 432 from the beginning of the 1980s. It was the world's first million-transistor chip. It was released with considerable fanfare, slightly obscuring the earlier Intel i960, which was successful in some niches of embedded systems. The i860 never achieved commercial success and the project was terminated in the mid-1990s.
Implementations
The first implementation of the i860 architecture is the i860 XR microprocessor (code-named N10), which ran at 25, 33, or 40 MHz. The second-generation i860 XP microprocessor (code-named N11) added 4 Mbyte pages, larger on-chip caches, second-level cache support, faster buses, and hardware support for bus snooping, for cache consistency in multiprocessor systems. A process shrink for the XP (from 1 μm to 0.8 μm CHMOS V) increased the clock to 40 and 50 MHz. Both microprocessors supported the same instruction set for application programs.
Technical features
The i860 combined a number of features that were unique at the time, most notably its very long instruction word (VLIW) architecture and powerful support for high-speed floating-point operations. The design uses two classes of instructions: "core" instructions which use a 32-bit ALU, and "floating-point or graphics" instructions which operate on a floating-point adder, a floating-point multiplier, or a 64-bit integer graphics unit. The system had separate pipelines for the ALU, floating-point adder, floating-point multiplier, and graphics unit. It can fetch and decode one "core" instruction and one "floating-point or graphics" instruction per clock. When using dual-operation floating-point instructions (which transfer values between subsequent dual-operation instructions), it is able to execute up to three operations (one ALU, one floating-point multiply, and one fl
|
https://en.wikipedia.org/wiki/Smart%20battery
|
A smart battery or a smart battery pack is a rechargeable battery pack with a built-in battery management system (BMS), usually designed for use in a portable computer such as a laptop. In addition to the usual positive and negative terminals, a smart battery has two or more terminals to connect to the BMS; typically the negative terminal is also used as BMS "ground". BMS interface examples are: SMBus, PMBus, EIA-232, EIA-485, and Local Interconnect Network.
Internally, a smart battery can measure voltage and current, and deduce charge level and SoH (State of Health) parameters, indicating the state of the cells. Externally, a smart battery can communicate with a smart battery charger and a "smart energy user" via the bus interface. A smart battery can demand that the charging stop, request charging, or demand that the smart energy user stop using power from this battery. There are standard specifications for smart batteries: Smart Battery System, MIPI BIF and many ad-hoc specifications.
Charging
A smart battery charger is mainly a switch mode power supply (also known as high frequency charger) that has the ability to communicate with a smart battery pack's battery management system (BMS) in order to control and monitor the charging process. This communication may be by a standard bus such as CAN bus in automobiles or System Management Bus (SMBus) in computers. The charge process is controlled by the BMS and not by the charger, thus increasing security in the system. Not all chargers have this type of communication, which is commonly used for lithium batteries.
Besides the usual plus (positive) and minus (negative) terminals, a smart battery charger also has multiple terminals to connect to the smart battery pack's BMS. The Smart Battery System standard is commonly used to define this connection, which includes the data bus and the communications protocol between the charger and battery. There are other ad-hoc specifications also used.
Hardware
Smart battery c
|
https://en.wikipedia.org/wiki/Quasi-set%20theory
|
Quasi-set theory is a formal mathematical theory for dealing with collections of objects, some of which may be indistinguishable from one another. Quasi-set theory is mainly motivated by the assumption that certain objects treated in quantum physics are indistinguishable and don't have individuality.
Motivation
The American Mathematical Society sponsored a 1974 meeting to evaluate the resolution and consequences of the 23 problems Hilbert proposed in 1900. An outcome of that meeting was a new list of mathematical problems, the first of which, due to Manin (1976, p. 36), questioned whether classical set theory was an adequate paradigm for treating collections of indistinguishable elementary particles in quantum mechanics. He suggested that such collections cannot be sets in the usual sense, and that the study of such collections required a "new language".
The use of the term quasi-set follows a suggestion in da Costa's 1980 monograph Ensaio sobre os Fundamentos da Lógica (see da Costa and Krause 1994), in which he explored possible semantics for what he called "Schrödinger Logics". In these logics, the concept of identity is restricted to some objects of the domain, and has motivation in Schrödinger's claim that the concept of identity does not make sense for elementary particles (Schrödinger 1952). Thus in order to provide a semantics that fits the logic, da Costa submitted that "a theory of quasi-sets should be developed", encompassing "standard sets" as particular cases, yet da Costa did not develop this theory in any concrete way. To the same end and independently of da Costa, Dalla Chiara and di Francia (1993) proposed a theory of quasets to enable a semantic treatment of the language of microphysics. The first quasi-set theory was proposed by D. Krause in his PhD thesis, in 1990 (see Krause 1992). A related physics theory, based on the logic of adding fundamental indistinguishability to equality and inequality, was developed and elaborated independently in t
|
https://en.wikipedia.org/wiki/Flash%20memory%20emulator
|
A flash emulator or flash memory emulator is a tool that is used to temporarily replace flash memory or ROM chips in an embedded device for the purpose of debugging embedded software. Such tools contain dual-ported RAM, one port of which is connected to the target system (i.e., the system being debugged), while the second is connected to a host (i.e., the PC running the debugger). This allows the programmer to change executable code while it is running, set breakpoints, and use other advanced debugging techniques on an embedded system, where such operations would not be possible otherwise.
This type of tool appeared in the 1980s and 1990s, when most embedded systems used a discrete ROM (or, later, flash memory) chip containing executable code. This made it easy to replace the ROM/flash chip with an emulator. Together with the excellent productivity the tool offered, this drove almost universal use among embedded developers. Later, when most embedded systems started to include both the processor and flash on a single chip for cost and IP-protection reasons, making an external flash emulator impossible, the search for a replacement tool began. And, as often happens when a direct replacement is sought, many replacement techniques carry the words "flash emulation" in their names, for example, TI's "Flash Emulation Tool" debugging interface (FET) for its MSP430 chips, or more generic in-circuit emulators, even though neither of the two has anything to do with flash memory or emulation as such.
A flash emulator can also be retrofitted to an embedded system to facilitate reverse engineering. For example, it was the main hardware instrument in reverse engineering the Wii gaming console's bootloader.
See also
In-circuit emulator
|
https://en.wikipedia.org/wiki/Bond%20length
|
In molecular geometry, bond length or bond distance is defined as the average distance between nuclei of two bonded atoms in a molecule. It is a transferable property of a bond between atoms of fixed types, relatively independent of the rest of the molecule.
Explanation
Bond length is related to bond order: when more electrons participate in bond formation the bond is shorter. Bond length is also inversely related to bond strength and the bond dissociation energy: all other factors being equal, a stronger bond will be shorter. In a bond between two identical atoms, half the bond distance is equal to the covalent radius.
Bond lengths are measured in the solid phase by means of X-ray diffraction, or approximated in the gas phase by microwave spectroscopy. The length of a bond between a given pair of atoms may vary between different molecules. For example, the carbon to hydrogen bonds in methane are different from those in methyl chloride. It is however possible to make generalizations when the general structure is the same.
Bond lengths of carbon with other elements
A table with experimental single bonds for carbon to other elements is given below. Bond lengths are given in picometers. By approximation the bond distance between two different atoms is the sum of the individual covalent radii (these are given in the chemical element articles for each element). As a general trend, bond distances decrease across the row in the periodic table and increase down a group. This trend is identical to that of the atomic radius.
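A minimal sketch of that additivity approximation (the covalent radii below are representative literature values in picometers, not taken from the table in this article):

```python
# Approximate bond length as the sum of covalent radii (values in pm, representative).
covalent_radius_pm = {"H": 32, "C": 77, "N": 71, "Cl": 99}

def approx_bond_length(a, b):
    return covalent_radius_pm[a] + covalent_radius_pm[b]

print(approx_bond_length("C", "H"))   # ~109 pm; measured C-H in methane is ~109 pm
print(approx_bond_length("C", "Cl"))  # ~176 pm; measured C-Cl is ~177 pm
```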
Bond lengths in organic compounds
The bond length between two atoms in a molecule depends not only on the atoms but also on such factors as the orbital hybridization and the electronic and steric nature of the substituents. The carbon–carbon (C–C) bond length in diamond is 154 pm. It is generally considered the average length for a carbon–carbon single bond, but is also the largest bond length that exists for ordinary carbon covalent bonds. Since one atomic unit
|
https://en.wikipedia.org/wiki/Applications%20of%20sensitivity%20analysis%20to%20business
|
Sensitivity analysis can be usefully applied to business problems, allowing the identification of the variables that may influence a business decision, such as an investment.
In a decision problem, the analyst may want to identify cost drivers as well as other quantities for which better knowledge is needed in order to make an informed decision. On the other hand, some quantities have no influence on the predictions, so that resources can be saved, at no loss in accuracy, by relaxing some of the conditions. See Corporate finance: Quantifying uncertainty.
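As a toy illustration of this screening idea, a one-at-a-time sensitivity sweep over an invented profit model (all figures and variable names are hypothetical):

```python
# One-at-a-time sensitivity sweep over a toy profit model (all figures invented).
def profit(volume, price, unit_cost):
    return volume * (price - unit_cost)

base = {"volume": 1000, "price": 10.0, "unit_cost": 6.0}

for var in base:
    for delta in (-0.10, +0.10):  # pessimistic / optimistic +-10%
        scenario = {**base, var: base[var] * (1 + delta)}
        change = profit(**scenario) - profit(**base)
        print(f"{var:9s} {delta:+.0%}: profit change {change:+,.0f}")
```

In this toy model the sweep immediately flags selling price as the dominant driver, which is exactly the kind of screening described above.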
Additionally to the general motivations listed above, sensitivity analysis can help in a variety of other circumstances specific to business:
To identify critical assumptions or compare alternative model structures
To guide future data collections
To optimize the tolerance of manufactured parts in terms of the uncertainty in the parameters
To optimize resources allocation
However, there are also some problems associated with sensitivity analysis in the business context:
Variables are often interdependent (correlated), which makes examining each variable individually unrealistic; e.g., changing one factor, such as sales volume, will most likely affect other factors, such as the selling price.
Often the assumptions upon which the analysis is based are made by using past experience/data which may not hold in the future.
Assigning a maximum and minimum (or optimistic and pessimistic) value is open to subjective interpretation. For instance, one person's 'optimistic' forecast may be more conservative than that of another person performing a different part of the analysis. This sort of subjectivity can adversely affect the accuracy and overall objectivity of the analysis.
|
https://en.wikipedia.org/wiki/Growth%20function
|
The growth function, also called the shatter coefficient or the shattering number, measures the richness of a set family. It is especially used in the context of statistical learning theory, where it measures the complexity of a hypothesis class.
The term 'growth function' was coined by Vapnik and Chervonenkis in their 1968 paper, where they also proved many of its properties.
It is a basic concept in machine learning.
Definitions
Set-family definition
Let $H$ be a set family (a set of sets) and $C$ a set. Their intersection is defined as the following set-family:
$$H \cap C := \{h \cap C \mid h \in H\}.$$
The intersection-size (also called the index) of $H$ with respect to $C$ is $|H \cap C|$. If a set $C$ has $m$ elements then the index is at most $2^m$. If the index is exactly $2^m$, then the set $C$ is said to be shattered by $H$, because $H \cap C$ contains all the subsets of $C$, i.e.:
$$|H \cap C| = 2^{|C|}.$$
The growth function measures the size of $H \cap C$ as a function of $|C|$. Formally:
$$\operatorname{Growth}(H, m) := \max_{C : |C| = m} |H \cap C|.$$
Hypothesis-class definition
Equivalently, let $\mathcal{H}$ be a hypothesis-class (a set of binary functions) and $C = \{x_1, \ldots, x_m\}$ a set with $m$ elements. The restriction of $\mathcal{H}$ to $C$ is the set of binary functions on $C$ that can be derived from $\mathcal{H}$:
$$\mathcal{H}_C := \{(h(x_1), \ldots, h(x_m)) \mid h \in \mathcal{H}\}.$$
The growth function measures the size of $\mathcal{H}_C$ as a function of $m$:
$$\operatorname{Growth}(\mathcal{H}, m) := \max_{C : |C| = m} |\mathcal{H}_C|.$$
Examples
1. The domain is the real line $\mathbb{R}$.
The set-family $H$ contains all the half-lines (rays) from a given number to positive infinity, i.e., all sets of the form $\{x \in \mathbb{R} : x > x_0\}$ for some $x_0 \in \mathbb{R}$.
For any set $C$ of $m$ real numbers, the intersection $H \cap C$ contains $m + 1$ sets: the empty set, the set containing the largest element of $C$, the set containing the two largest elements of $C$, and so on. Therefore: $\operatorname{Growth}(H, m) = m + 1$. The same is true whether $H$ contains open half-lines, closed half-lines, or both.
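This example can be checked by brute force on a small discrete domain (a sketch; the threshold hypotheses and the domain values are illustrative):

```python
from itertools import combinations

def growth(hypotheses, domain, m):
    """Brute-force growth function: the maximum, over m-element subsets C of the
    domain, of the number of distinct labelings the hypotheses induce on C."""
    return max(
        len({tuple(h(x) for x in C) for h in hypotheses})
        for C in combinations(domain, m)
    )

# Half-line hypotheses h_t(x) = [x >= t] on a small discrete domain:
domain = [1, 2, 3, 4, 5]
hypotheses = [lambda x, t=t: x >= t for t in (0.5, 1.5, 2.5, 3.5, 4.5, 5.5)]

for m in range(1, 5):
    print(m, growth(hypotheses, domain, m))  # prints m+1 labelings: 2, 3, 4, 5
```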
2. The domain is the segment $[0, 1]$.
The set-family $H$ contains all the open sets.
For any finite set $C$ of $m$ real numbers, the intersection $H \cap C$ contains all possible subsets of $C$. There are $2^m$ such subsets, so $\operatorname{Growth}(H, m) = 2^m$.
3. The domain is the Euclidean space $\mathbb{R}^n$.
The set-family $H$ contains all the half-spaces of the form $\{x : x \cdot \phi \geq 1\}$, where $\phi$ is a fixed vector.
Then ,
where Comp is the number of components in a partitioning
|
https://en.wikipedia.org/wiki/Bugzilla
|
Bugzilla is a web-based general-purpose bug tracking system and testing tool originally developed and used by the Mozilla project, and licensed under the Mozilla Public License.
Released as open-source software by Netscape Communications in 1998, it has been adopted by a variety of organizations for use as a bug tracking system for both free and open-source software and proprietary projects and products. Bugzilla is used, among others, by the Mozilla Foundation, WebKit, Linux kernel, FreeBSD, KDE, Apache, Eclipse and LibreOffice. Red Hat uses it, but is gradually migrating its product to use Jira. It is also self-hosting.
History
Bugzilla was originally devised by Terry Weissman in 1998 for the nascent Mozilla.org project, as an open source application to replace the in-house system then in use at Netscape Communications for tracking defects in the Netscape Communicator suite. Bugzilla was originally written in Tcl, but Weissman decided to port it to Perl before its release as part of Netscape's early open-source code drops, in the hope that more people would be able to contribute to it, given that Perl seemed to be a more popular language at the time.
Bugzilla 2.0 was the result of that port to Perl, and the first version was released to the public via anonymous CVS. In April 2000, Weissman handed over control of the Bugzilla project to Tara Hernandez. Under her leadership, some of the regular contributors were coerced into taking more responsibility, and Bugzilla development became more community-driven. In July 2001, facing distraction from her other responsibilities at Netscape, Hernandez handed control to Dave Miller, who was still in charge.
Bugzilla 3.0 was released on May 10, 2007 and brought a refreshed UI, an XML-RPC interface, custom fields and resolutions, mod_perl support, shared saved searches, and improved UTF-8 support, along with other changes.
Bugzilla 4.0 was released on February 15, 2011 and Bugzilla 5.0 was released in July 2015.
Timeli
|
https://en.wikipedia.org/wiki/Look-ahead%20%28backtracking%29
|
In backtracking algorithms, look ahead is the generic term for a subprocedure that attempts to foresee the effects of choosing a branching variable to evaluate one of its values. The two main aims of look-ahead are to choose a variable to evaluate next and to choose the order of values to assign to it.
Constraint satisfaction
In a general constraint satisfaction problem, every variable can take a value in a domain. A backtracking algorithm therefore iteratively chooses a variable and tests each of its possible values; for each value the algorithm is recursively run. Look ahead is used to check the effects of choosing a given variable to evaluate or to decide the order of values to give to it.
Look ahead techniques
The simpler technique for evaluating the effect of a specific assignment to a variable is called forward checking. Given the current partial solution and a candidate assignment to evaluate, it checks whether the other unassigned variables can still take consistent values. In other words, it first extends the current partial solution with the tentative value for the considered variable; it then considers every other variable that is still unassigned, and checks whether there exists an evaluation of it that is consistent with the extended partial solution. More generally, forward checking determines, for each unassigned variable, the values that are consistent with the extended assignment.
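A minimal sketch of forward checking under an assumed representation (domains as sets of candidate values, binary constraints as predicates keyed by ordered variable pairs; none of these names come from the source):

```python
def forward_check(domains, constraints, var, value):
    """Tentatively assign var = value, then prune every other unassigned
    variable's domain to the values consistent with that assignment.
    Returns the pruned domains, or None if some domain becomes empty."""
    pruned = {v: set(vals) for v, vals in domains.items()}
    pruned[var] = {value}
    for other in pruned:
        if other == var:
            continue
        check = constraints.get((var, other))
        if check is None:
            continue  # no constraint between var and other
        consistent = {w for w in pruned[other] if check(value, w)}
        if not consistent:
            return None  # dead end detected without recursing further
        pruned[other] = consistent
    return pruned

# Toy example: X and Y must differ (graph-coloring style constraint).
domains = {"X": {1, 2}, "Y": {1, 2}}
constraints = {("X", "Y"): lambda a, b: a != b}
print(forward_check(domains, constraints, "X", 1))  # {'X': {1}, 'Y': {2}}
```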
A look-ahead technique that may be more time-consuming but may produce better results is based on arc consistency. Namely, given a partial solution extended with a value for a new variable, it enforces arc consistency for all unassigned variables. In other words, for any unassigned variables, the values that cannot consistently be extended to another variable are removed. The difference between forward checking and arc consistency is that the former only checks a single unassigned variable at a time for consistency, while the second also checks pairs of unassigned variables for mutual consistency. The most common wa
|
https://en.wikipedia.org/wiki/Artifact%20%28error%29
|
In natural science and signal processing, an artifact or artefact is any error in the perception or representation of any information introduced by the involved equipment or technique(s).
Computer science
In computer science, digital artifacts are anomalies introduced into digital signals as a result of digital signal processing.
Microscopy
In microscopy, visual artifacts are sometimes introduced during the processing of samples into slide form.
Econometrics
In econometrics, which trades on computing relationships between related variables, an artifact is a spurious finding, such as one based on either a faulty choice of variables or an over-extension of the computed relationship. Such an artifact may be called a statistical artifact. For instance, imagine a hypothetical finding that presidential approval rating is approximately equal to twice the percentage of citizens making more than $50,000 annually; if 60% of citizens make more than $50,000 annually, this would predict that the approval rating will be 120%. This prediction is a statistical artifact, since it is spurious to use the model when the percentage of citizens making over $50,000 is so high, and gross error to predict an approval rating greater than 100%.
Remote sensing
Medical imaging
In medical imaging, artifacts are misrepresentations of tissue structures produced by imaging techniques such as ultrasound, X-ray, CT scan, and magnetic resonance imaging (MRI). These artifacts may be caused by a variety of phenomena such as the underlying physics of the energy-tissue interaction as between ultrasound and air, susceptibility artifacts, data acquisition errors (such as patient motion), or a reconstruction algorithm's inability to represent the anatomy. Physicians typically learn to recognize some of these artifacts to avoid mistaking them for actual pathology.
In ultrasound imaging, several assumptions are made from the computer system to interpret the returning echoes. These are: echoes origina
|
https://en.wikipedia.org/wiki/Fish%20fillet%20processor
|
A fish fillet processor processes fish into a fillet. Fish processing starts from the time the fish is caught. Popular species processed include cod, hake, haddock, tuna, herring, mackerel, salmon and pollock.
Commercial fish processing is a global practice. Processing varies regionally in productivity, type of operation, yield and regulation. Approximately 90% of processed fish are oceanic fish. The remaining 10% are from freshwater operations and aquacultural production. Most fish processing industries are near commercial fishing zones. In certain regions, fish are transported or exported for processing.
Major fish processing countries
The largest fish processing countries in order are:
These countries produce over half the world's fish products. The Pacific Northwest region of the United States provides the greatest volume.
Uses of processed fish
Seventy-five percent of fish processed is for human consumption. Fish oil and fish meal comprise the remaining 25% of fish processing, with fish meal predominantly used in livestock feed and aquaculture.
Fresh fish accounts for 30% of production. Most processed fish is sold frozen as fillets or whole fish, canned fish and as other fish protein products (e.g. surimi). The consumption of frozen fish products as ready-to-eat meals, fillets, and whole fish is increasing globally.
Processing procedures
Processing can start either on the fishing vessel or at the plants. For example, the fish are sometimes beheaded and gutted on board the fishing vessel itself.
The process involved in filleting whitefish differs moderately from that used for oily fish.
Whitefish
In certain fish processing industries, filleting is done manually.
The fish is beheaded, gutted, de-iced and de-scaled. It is then graded and filleted by hand. After the processing phase, the fish fillet is trimmed of blood, bones, fins, black membrane, fleas and loose fish scales, and sorted. It is then packed and frozen i
|
https://en.wikipedia.org/wiki/Axenic
|
In biology, axenic (, ) describes the state of a culture in which only a single species, variety, or strain of organism is present and entirely free of all other contaminating organisms. The earliest axenic cultures were of bacteria or unicellular eukaryotes, but axenic cultures of many multicellular organisms are also possible. Axenic culture is an important tool for the study of symbiotic and parasitic organisms in a controlled environment.
Preparation
Axenic cultures of microorganisms are typically prepared by subculture of an existing mixed culture. This may involve use of a dilution series, in which a culture is successively diluted to the point where subsamples of it contain only a few individual organisms, ideally only a single individual (in the case of an asexual species). These subcultures are allowed to grow until the identity of their constituent organisms can be ascertained. Selection of those cultures consisting solely of the desired organism produces the axenic culture. Subculture selection may also involve manually sampling the target organism from an uncontaminated growth front in an otherwise mixed culture, and using this as an inoculum source for the subculture.
Axenic cultures are usually checked routinely to ensure that they remain axenic. One standard approach with microorganisms is to spread a sample of the culture onto an agar plate, and to incubate this for a fixed period of time. The agar should be an enriched medium that will support the growth of common "contaminating" organisms. Such "contaminating" organisms will grow on the plate during this period, identifying cultures that are no longer axenic.
Experimental use
As axenic cultures are derived from very few organisms, or even a single individual, they are useful because the organisms present within them share a relatively narrow gene pool. In the case of an asexual species derived from a single individual, the resulting culture should consist of identical organisms (though pro
|
https://en.wikipedia.org/wiki/Quantum%20programming
|
Quantum programming is the process of designing or assembling sequences of instructions, called quantum circuits, using gates, switches, and operators to manipulate a quantum system for a desired outcome or results of a given experiment. Quantum circuit algorithms can be implemented on integrated circuits, conducted with instrumentation, or written in a programming language for use with a quantum computer or a quantum processor.
With quantum processor based systems, quantum programming languages help express quantum algorithms using high-level constructs. The field is deeply rooted in the open-source philosophy and as a result most of the quantum software discussed in this article is freely available as open-source software.
Quantum computers, such as those based on the KLM protocol, a linear optical quantum computing (LOQC) model, use quantum algorithms (circuits) implemented with electronics, integrated circuits, instrumentation, sensors, and/or by other physical means.
Other circuits designed for experimentation related to quantum systems can be instrumentation and sensor based.
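As a minimal, library-free sketch of what a small quantum circuit does, here is a statevector simulation of the standard Bell-state circuit (Hadamard on one qubit, then CNOT); the gate matrices are the textbook ones, and the basis ordering is an assumption of this sketch:

```python
import numpy as np

# Statevector demo: prepare a Bell state with H on qubit 0, then CNOT(0 -> 1).
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],   # basis order |00>, |01>, |10>, |11>
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                    # start in |00>
state = np.kron(H, I2) @ state    # Hadamard on the first qubit
state = CNOT @ state              # entangle the two qubits
print(state.round(3))             # [0.707 0. 0. 0.707] = (|00> + |11>)/sqrt(2)
```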
Quantum instruction sets
Quantum instruction sets are used to turn higher level algorithms into physical instructions that can be executed on quantum processors. Sometimes these instructions are specific to a given hardware platform, e.g. ion traps or superconducting qubits.
cQASM
cQASM, also known as common QASM, is a hardware-agnostic quantum assembly language which guarantees the interoperability between all the quantum compilation and simulation tools. It was introduced by the QCA Lab at TUDelft.
Quil
Quil is an instruction set architecture for quantum computing that first introduced a shared quantum/classical memory model. It was introduced by Robert Smith, Michael Curtis, and William Zeng in A Practical Quantum Instruction Set Architecture. Many quantum algorithms (including quantum teleportation, quantum error correction, simulation, and optimization algorithms) require
|
https://en.wikipedia.org/wiki/Zassenhaus%20algorithm
|
In mathematics, the Zassenhaus algorithm
is a method to calculate a basis for the intersection and sum of two subspaces of a vector space.
It is named after Hans Zassenhaus, but no publication of this algorithm by him is known. It is used in computer algebra systems.
Algorithm
Input
Let $V$ be a vector space and $U$, $W$ two finite-dimensional subspaces of $V$ with the following spanning sets:
$$U = \langle u_1, \ldots, u_n \rangle$$
and
$$W = \langle w_1, \ldots, w_k \rangle.$$
Finally, let $B_1, \ldots, B_m$ be linearly independent vectors so that $u_i$ and $w_i$ can be written as
$$u_i = \sum_{j=1}^{m} a_{i,j} B_j$$
and
$$w_i = \sum_{j=1}^{m} b_{i,j} B_j.$$
Output
The algorithm computes a basis of the sum $U + W$ and a basis of the intersection $U \cap W$.
Algorithm
The algorithm creates the following block matrix of size $(n+k) \times 2m$:
$$\begin{pmatrix} a_{1,1} & \cdots & a_{1,m} & a_{1,1} & \cdots & a_{1,m} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{n,1} & \cdots & a_{n,m} & a_{n,1} & \cdots & a_{n,m} \\ b_{1,1} & \cdots & b_{1,m} & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \\ b_{k,1} & \cdots & b_{k,m} & 0 & \cdots & 0 \end{pmatrix}$$
Using elementary row operations, this matrix is transformed to the row echelon form. Then, it has the following shape:
$$\begin{pmatrix} c_{1,1} & \cdots & c_{1,m} & \bullet & \cdots & \bullet \\ \vdots & & \vdots & \vdots & & \vdots \\ c_{q,1} & \cdots & c_{q,m} & \bullet & \cdots & \bullet \\ 0 & \cdots & 0 & d_{1,1} & \cdots & d_{1,m} \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \cdots & 0 & d_{l,1} & \cdots & d_{l,m} \\ 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \end{pmatrix}$$
Here, $\bullet$ stands for arbitrary numbers, and the vectors $c_i = (c_{i,1}, \ldots, c_{i,m})$ for every $i \leq q$ and $d_j = (d_{j,1}, \ldots, d_{j,m})$ for every $j \leq l$ are nonzero. Then $(c_1, \ldots, c_q)$
is a basis of $U + W$
and $(d_1, \ldots, d_l)$
is a basis of $U \cap W$.
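A small executable sketch of the algorithm (using sympy for exact row reduction; all vectors are given in coordinates with respect to the basis $B_1, \ldots, B_m$, and the function name is illustrative):

```python
import sympy as sp

def zassenhaus(U_gens, W_gens):
    """Bases for U + W and U ∩ W from spanning vectors given in coordinates.
    Builds the block matrix [[u | u], [w | 0]], row-reduces it, and reads off
    the two bases from the left and right halves of the nonzero rows."""
    m = len(U_gens[0])
    rows = [list(u) + list(u) for u in U_gens] + \
           [list(w) + [0] * m for w in W_gens]
    R, _ = sp.Matrix(rows).rref()  # reduced row echelon form
    basis_sum, basis_int = [], []
    for i in range(R.rows):
        left, right = R[i, :m], R[i, m:]
        if any(left):
            basis_sum.append(list(left))   # nonzero left half: basis of U + W
        elif any(right):
            basis_int.append(list(right))  # zero left, nonzero right: U ∩ W
    return basis_sum, basis_int

# U = span{(1,0,0), (0,1,0)} and W = span{(0,1,0), (0,0,1)} in Q^3:
S, I = zassenhaus([[1, 0, 0], [0, 1, 0]], [[0, 1, 0], [0, 0, 1]])
print(S)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]] -- U + W is all of Q^3
print(I)  # [[0, 1, 0]] -- U ∩ W is the line spanned by (0, 1, 0)
```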
Proof of correctness
First, we define $\pi_1 \colon V \times V \to V,\ (a, b) \mapsto a$ to be the projection to the first component.
Let
$$H := \{(u, u) \mid u \in U\} + \{(w, 0) \mid w \in W\} \subseteq V \times V.$$
Then $\pi_1(H) = U + W$ and
$$H \cap (0 \times V) = 0 \times (U \cap W).$$
Also, $H \cap (0 \times V)$ is the kernel of $\pi_1|_H$, the projection restricted to $H$.
Therefore, $\dim(H) = \dim(U + W) + \dim(U \cap W)$.
The Zassenhaus algorithm calculates a basis of $H$. In the first $m$ columns of this matrix, there is a basis of $U + W$.
The rows of the form $(0, d_j)$ (with $d_j \neq 0$) are obviously in $H \cap (0 \times V)$. Because the matrix is in row echelon form, they are also linearly independent.
All rows which are different from zero ($(c_i, \bullet)$ and $(0, d_j)$) are a basis of $H$, so there are $\dim(U \cap W)$ such $d_j$s. Therefore, the $d_j$s form a basis of $U \cap W$.
Example
Consider the two subspaces and of the vector space .
Using the standard basis, we create the following matrix of dimension :
Using elementary row operations, we transform this matrix into the following matrix:
(Some entries have been replaced by "" because they are irrelevant to the result.)
Therefore
is a basis of , and
is a basis of .
See also
Gröbner basis
|
https://en.wikipedia.org/wiki/Salted%20squid
|
Salted squid is squid or cuttlefish cured with dry salt and thus preserved for later consumption. Drying or salting, either with dry salt or with brine, is a widely available method of seafood preservation. Salted squid is often mistaken for dried shredded squid, which is specifically shredded and seasoned dried squid. The salted squid production method is similar to that of salted fish, and salted squid is often considered a specific variant of salted fish. Salted squid is commonly found in coastal Asian countries, especially Indonesia, Malaysia, Thailand, Vietnam, Taiwan, Hong Kong, Southern China, South Korea and Japan.
Method
The squid meat is washed with dilute brine or seawater to remove contaminants on the surface. Draining is followed by salting. The salting process can be done by a wet method, soaking the squid in a brine solution, or by dry salting, sprinkling salt over the squid. The process is followed by sun drying. In East Asian countries, such as Japan and China, dried salted squid are usually gutted and flattened prior to sun drying. In Indonesia, however, dried salted squid is usually not gutted and remains in its cylindrical form.
In cuisine
In Indonesia, dried salted squid is one of the popular processed seafoods available in traditional markets. Usually, salted dried squid is washed and fried, either deep-fried or stir-fried, and consumed as a side dish with steamed rice. Stir-fried cuttlefish might be cooked in green sambal chili paste.
See also
Brining
Cantonese salted fish
Cured fish
Ojingeo-jeot
Squid as food
Notes
Food preservation
Salted foods
Squid dishes
|
https://en.wikipedia.org/wiki/Life%20on%20Venus
|
The possibility of life on Venus is a subject of interest in astrobiology due to Venus's proximity and similarities to Earth. To date, no definitive evidence has been found of past or present life there. In the early 1960s, studies conducted via spacecraft demonstrated that the current Venusian environment is extreme compared to Earth's. Studies continue to question whether life could have existed on the planet's surface before a runaway greenhouse effect took hold, and whether a relict biosphere could persist high in the modern Venusian atmosphere.
With extreme surface temperatures reaching nearly 735 K (462 °C) and an atmospheric pressure 92 times that of Earth, the conditions on Venus make water-based life as we know it unlikely on the surface of the planet. However, a few scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the temperate, acidic upper layers of the Venusian atmosphere. In September 2020, research was published that reported the presence of phosphine in the planet's atmosphere, a potential biosignature. However, doubts have been cast on these observations.
As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. On 2 June 2021, NASA announced two new related missions to Venus: DAVINCI+ and VERITAS.
Surface conditions
Because Venus is completely covered in clouds, human knowledge of surface conditions was largely speculative until the space probe era. Until the mid-20th century, the surface environment of Venus was believed to be similar to Earth, hence it was widely believed that Venus could harbor life. In 1870, the British astronomer Richard A. Proctor said the existence of life on Venus was impossible near its equator, but possible near its poles. Science fiction writers were free to imagine what Venus might be like until the 1960s. Among the speculations were that it had a jungle-like environment or that it h
|
https://en.wikipedia.org/wiki/Peripheral%20Sensor%20Interface%205
|
Peripheral Sensor Interface (PSI5) is a digital interface for sensors.
PSI5 is a two-wire interface, used to connect peripheral sensors to electronic control units in automotive electronics. Both point-to-point and bus configurations with asynchronous and synchronous data transmission are supported.
Functional description
PSI5 is a current interface with modulation of the sending current for the transmission of data on the power supply lines. The relatively high sending current and the use of a Manchester code for bit encoding result in high immunity against interference from radiated emissions. The use of an inexpensive twisted-pair cable is thus sufficient for most applications; in automotive applications, however, more expensive cabling is often employed.
Data words consist of two start bits, 8 to 24 data bits and a single parity bit or optional three bit CRC (cyclic redundancy check). The bitrate is 125 kbit/s or optionally 189 kbit/s.
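To make the bit encoding concrete, here is a toy Manchester encoder (one of the two possible level conventions, chosen arbitrarily; PSI5 actually modulates the supply current, so the 0/1 levels below simply stand for the two current levels):

```python
def manchester_encode(bits):
    """Toy Manchester encoding: each bit becomes one level transition
    (convention chosen here: 0 -> low/high, 1 -> high/low)."""
    signal = []
    for b in bits:
        signal.extend((1, 0) if b else (0, 1))
    return signal

# A PSI5-style data word starts with two start bits, then the data payload:
word = [0, 0] + [1, 0, 1, 1, 0, 1, 0, 0]  # 8 data bits (example values)
print(manchester_encode(word))
```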
Standardization
Following the goal of an open interface standard, PSI5 was developed on the basis of existing proprietary implementations of the companies Autoliv, Bosch and Continental. A first common publication was released at the "Sensor" trade fair in Nuremberg in May 2005. Since summer 2006, Siemens VDO has also supported the standardization of PSI5. Siemens VDO is now a part of the
Continental group of companies.
Work on an IEC EMC standard for Peripheral Sensor Interface 5 (PSI5), IEC 62228-6 "Integrated circuit – EMC evaluation of transceivers – Part 6: PSI5 transceivers", is ongoing.
See also
List of network buses
External links
psi5.org - PSI5
psi5-forum.com - PSI5-Forum
Serial buses
|
https://en.wikipedia.org/wiki/Japanese%20postal%20mark
|
The Japanese postal mark (〒) is the service mark of Japan Post and its successor, Japan Post Holdings, the postal operator in Japan. It has also been used as the Japanese postal code mark since the introduction of postal codes in 1968. Historically, it was used by the Ministry of Communications, which operated the postal service. The mark is a stylized katakana syllable te (テ), from the word teishin ("communications"). The mark was introduced on February 8, 1887 (Meiji 20.2.8).
Usage
To indicate a postal code, the mark is written first, and the postal code is written after. For example, one area of Meguro, Tokyo, would have 〒153-0061 written on any mail, in order to direct mail to that location. This usage has resulted in the inclusion of the mark into the Japanese character sets for computers, and thus eventually their inclusion into Unicode, where it can also be found on the Japanese Post Office emoji. In most keyboard-based Japanese input systems, it can be created by typing "yuubin" and then doing a kanji conversion.
Of the versions shown to the right, the one on the far right (〒) is the standard mark used in addressing. A circled yūbin mark is often used on maps to denote post offices. Other variants have been used as conformity marks inherited from the Ministry of Communications: for example, a similar circled mark was used for electrical certification of Category B appliances, contrasted with a triangle-enclosed postal mark (⮗) for Category A appliances, under a precursor to the Act on Product Safety of Electrical Appliances and Materials. The Unicode code chart, as of version 13.0, labels the "Circled Postal Mark" character (〶, U+3036) as "symbol for type B electronics". An enclosed version incorporating a sawtooth wave shape is used as a conformity mark for Ministry of Internal Affairs and Communications regulations on radio and other electromagnetic wave equipment.
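The mark and two of its enclosed variants can be produced directly from their Unicode code points (a small sketch; U+3012, U+3020 and U+3036 are the standard Unicode assignments):

```python
# POSTAL MARK, POSTAL MARK FACE, and CIRCLED POSTAL MARK by code point:
for cp, name in [(0x3012, "POSTAL MARK"),
                 (0x3020, "POSTAL MARK FACE"),
                 (0x3036, "CIRCLED POSTAL MARK")]:
    print(f"U+{cp:04X} {chr(cp)} {name}")
```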
Encoding
The postal mark appears in the following encoded characters. Before the introduction of Unicode, the simple postal mark was encoded for Japanese use in JIS X 0208 (inc
|
https://en.wikipedia.org/wiki/Include%20directive
|
Many programming languages and other computer files have a directive, often called include, import, or copy, that causes the contents of the specified file to be inserted into the original file. These included files are called header files or copybooks. They are often used to define the physical layout of program data, pieces of procedural code, and/or forward declarations while promoting encapsulation and the reuse of code or data.
Header files
In computer programming, a header file is a file that allows programmers to separate certain elements of a program's source code into reusable files. Header files commonly contain forward declarations of classes, subroutines, variables, and other identifiers. Programmers who wish to declare standardized identifiers in more than one source file can place such identifiers in a single header file, which other code can then include whenever the header contents are required. This is to keep the interface in the header separate from the implementation.
The C standard library and the C++ standard library traditionally declare their standard functions in header files.
Some recently created compiled languages (such as Java and C#) do not use forward declarations; identifiers are recognized automatically from source files and read directly from dynamic library symbols. This means header files are not needed.
Purpose
The include directive allows libraries of code to be developed which help to:
ensure that everyone uses the same version of a data layout definition or procedural code throughout a program,
easily cross-reference where components are used in a system,
easily change programs when needed (only one file must be edited), and
save time by reusing data layouts (a toy sketch of the inclusion mechanism follows this list).
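A toy sketch of the textual mechanism itself, written as a miniature preprocessor (the function and the quoted-include syntax it recognizes are illustrative, not any particular compiler's implementation):

```python
import re
from pathlib import Path

INCLUDE = re.compile(r'\s*#include\s+"([^"]+)"')

def expand_includes(path):
    """Replace each line of the form  #include "file"  with the contents
    of that file, recursively, mimicking textual inclusion."""
    out = []
    for line in Path(path).read_text().splitlines():
        match = INCLUDE.match(line)
        out.append(expand_includes(match.group(1)) if match else line)
    return "\n".join(out)

# Given a hypothetical main.c containing '#include "defs.h"', this prints
# main.c with the contents of defs.h spliced in at that line:
# print(expand_includes("main.c"))
```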
Example
An example situation which benefits from the use of an include directive is when referring to functions in a different file. Suppose there is some C source file containing a function add, which is referred to in a second file by first declaring its external
|
https://en.wikipedia.org/wiki/Drazin%20inverse
|
In mathematics, the Drazin inverse, named after Michael P. Drazin, is a kind of generalized inverse of a matrix.
Let $A$ be a square matrix. The index of $A$ is the least nonnegative integer $k$ such that $\operatorname{rank}(A^{k+1}) = \operatorname{rank}(A^k)$. The Drazin inverse of $A$ is the unique matrix $A^D$ that satisfies
$$A^{k+1} A^D = A^k, \qquad A^D A A^D = A^D, \qquad A A^D = A^D A.$$
It is not a generalized inverse in the classical sense, since $A A^D A \neq A$ in general.
If $A$ is invertible with inverse $A^{-1}$, then $A^D = A^{-1}$.
If $A$ is a block diagonal matrix
$$A = \begin{pmatrix} B & 0 \\ 0 & N \end{pmatrix}$$
where $B$ is invertible with inverse $B^{-1}$ and $N$ is a nilpotent matrix, then
$$A^D = \begin{pmatrix} B^{-1} & 0 \\ 0 & 0 \end{pmatrix}.$$
Drazin inversion is invariant under conjugation. If $A^D$ is the Drazin inverse of $A$, then $P A^D P^{-1}$ is the Drazin inverse of $P A P^{-1}$.
The Drazin inverse of a matrix of index 0 or 1 is called the group inverse or {1,2,5}-inverse and denoted A#. The group inverse can be defined, equivalently, by the properties AA#A = A, A#AA# = A#, and AA# = A#A.
A projection matrix P, defined as a matrix such that P2 = P, has index 1 (or 0) and has Drazin inverse PD = P.
If $A$ is a nilpotent matrix (for example a shift matrix), then $A^D = 0.$
The hyper-power sequence is
for convergence notice that
For or any regular with chosen such that the sequence tends to its Drazin inverse,
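A small numerical sketch of this iteration (a toy check, not a robust implementation; l and alpha must satisfy the conditions above):

    import numpy as np

    def drazin_hyperpower(A, l, alpha, iters=60):
        # Hyper-power iteration: A_{i+1} = A_i + A_i (I - A A_i).
        # The residual E_i = I - A A_i squares at every step, so
        # convergence is quadratic once ||E_0|| < 1.
        n = A.shape[0]
        I = np.eye(n)
        Ai = alpha * np.linalg.matrix_power(A, l)
        for _ in range(iters):
            Ai = Ai + Ai @ (I - A @ Ai)
        return Ai

    # For an invertible matrix the Drazin inverse is the ordinary inverse:
    A = np.array([[1.0, 0.0], [0.0, 2.0]])
    print(drazin_hyperpower(A, l=1, alpha=0.4))  # approx. [[1, 0], [0, 0.5]]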
Jordan normal form and Jordan-Chevalley decomposition
As the definition of the Drazin inverse is invariant under matrix conjugations, writing A = P J P^(-1), where J is in Jordan normal form, implies that A^D = P J^D P^(-1). The Drazin inverse is then the operation that maps invertible Jordan blocks to their inverses, and nilpotent Jordan blocks to zero.
More generally, we may define the Drazin inverse over any perfect field, by using the Jordan–Chevalley decomposition A = A_s + A_n, where A_s is semisimple, A_n is nilpotent, and both operators commute. The two terms can be block diagonalized with blocks corresponding to the kernel and cokernel of A_s. The Drazin inverse in the same basis is then defined to be zero on the kernel of A_s, and equal to the inverse of A on the cokernel of A_s.
See also
Constrained generalized inverse
Inverse element
Moore–Penrose inverse
Jordan normal form
|
https://en.wikipedia.org/wiki/Transversion
|
Transversion, in molecular biology, refers to a point mutation in DNA in which a single (two ring) purine (A or G) is changed for a (one ring) pyrimidine (T or C), or vice versa. A transversion can be spontaneous, or it can be caused by ionizing radiation or alkylating agents. It can only be reversed by a spontaneous reversion.
Ratio of transitions to transversions
Although each base has two possible transversions but only one possible transition, transition mutations are more likely than transversions because substituting a single-ring structure for another single-ring structure is more likely than substituting a double ring for a single ring. Also, transitions are less likely to result in amino acid substitutions (due to wobble base pairing), and are therefore more likely to persist as "silent substitutions" in populations as single nucleotide polymorphisms (SNPs). A transversion usually has a more pronounced effect than a transition because the third nucleotide codon position of the DNA, which to a large extent is responsible for the degeneracy of the code, is more tolerant of a transition than of a transversion: that is, a transition is more likely to encode the same amino acid.
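A minimal sketch of how single-base substitutions are classified when computing a transition/transversion ratio (function and variable names are illustrative):

    PURINES = {"A", "G"}        # two-ring bases
    PYRIMIDINES = {"C", "T"}    # one-ring bases

    def classify(ref, alt):
        # Within the same ring class: transition; across classes: transversion.
        if ref == alt:
            raise ValueError("not a substitution")
        same_class = ({ref, alt} <= PURINES) or ({ref, alt} <= PYRIMIDINES)
        return "transition" if same_class else "transversion"

    print(classify("A", "G"))  # transition (purine to purine)
    print(classify("A", "T"))  # transversion (purine to pyrimidine)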
Spontaneous germline transversion
8-oxo-2'-deoxyguanosine (8-oxodG) is an oxidized derivative of deoxyguanosine, and is one of the major products of DNA oxidation. During DNA replication in the germ line of mice, the oxidized base 8-oxoguanine (8-oxoG) causes spontaneous and heritable G to T transversion mutations. These mutations occur in different stages of the germ cell lineage and are distributed throughout the chromosomes.
Consequences of transversion mutations
The location of a transversion mutation on a gene coding for a protein correlates with the extent of the mutation. If the mutation occurs at a site that is not involved with the shape of a protein or the structure of an enzyme or its active site, the mutation will not have a significant effect on the cell or the enzy
|
https://en.wikipedia.org/wiki/Van%20Vleck%20paramagnetism
|
In condensed matter and atomic physics, Van Vleck paramagnetism refers to a positive and temperature-independent contribution to the magnetic susceptibility of a material, derived from second-order corrections to the Zeeman interaction. The quantum mechanical theory was developed by John Hasbrouck Van Vleck between the 1920s and the 1930s to explain the magnetic response of gaseous nitric oxide (NO) and of rare-earth salts. Alongside other magnetic effects like Paul Langevin's formulas for paramagnetism (Curie's law) and diamagnetism, Van Vleck discovered an additional paramagnetic contribution of the same order as Langevin's diamagnetism. The Van Vleck contribution is usually important for systems with one electron short of a half-filled shell, and it vanishes for elements with closed shells.
Description
The magnetization of a material under a small external magnetic field H is approximately described by

M = χH,

where χ is the magnetic susceptibility. When a magnetic field is applied to a paramagnetic material, its magnetization is parallel to the magnetic field and χ > 0. For a diamagnetic material, the magnetization opposes the field, and χ < 0.
Experimental measurements show that most non-magnetic materials have a susceptibility that behaves in the following way:

χ = C/T + χ0,

where T is the absolute temperature, and C and χ0 are constants with C ≥ 0, while χ0 can be positive, negative or null. Van Vleck paramagnetism often refers to systems where C ≈ 0 and χ0 > 0.
Derivation
The Hamiltonian for an electron in a static homogeneous magnetic field B in an atom is usually composed of three terms

H = H_0 + (μ_B/ħ)(L + g S)·B + (e^2 B^2 / 8m_e) r_⊥^2

where μ0 is the vacuum permeability (relating B to the applied field H through B = μ0H), μ_B is the Bohr magneton, g is the g-factor, e is the elementary charge, m_e is the electron mass, L is the orbital angular momentum operator, S the spin, and r_⊥ is the component of the position operator orthogonal to the magnetic field. The Hamiltonian has three terms: the first one, H_0, is the unperturbed Hamiltonian without the magnetic field, the second one is proportional to B, and the third one is proportional to B².
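Treating the term linear in B in second-order perturbation theory produces the Van Vleck contribution. A standard textbook form of the resulting susceptibility, quoted here as a sketch (prefactors depend on unit conventions; N is the number of ions per unit volume and |0⟩, |n⟩ the unperturbed eigenstates):

    \chi_{\mathrm{VV}} = 2 N \mu_0 \mu_B^2 \sum_{n \neq 0}
        \frac{\left| \langle n | L_z + g S_z | 0 \rangle \right|^2}{E_n - E_0}

Each term is non-negative because E_n > E_0 for the ground state, so this contribution is paramagnetic, and it is temperature-independent as long as only the ground state is thermally populated.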
|
https://en.wikipedia.org/wiki/BNU%20%28software%29
|
BNU is a high-performance communications device driver designed to provide enhanced support for serial port communications. The BNU serial port driver was specifically targeted for use with early (late 1980s - 1990s) DOS-based BBS software. The reason for BNU and other similar enhanced serial port drivers was to provide better support for serial communications software than what was offered by the machine's BIOS and/or DOS being used on the machine. Having serial port support as provided by BNU and other similar drivers allowed the communications software programmers to spend more time on the actual applications instead of the depths and details of how to talk to the serial ports and the modems connected to them. Sending communications data across a modem link was a lot more involved than sending data to a serial printer, which was essentially all that the original serial port software support could do.
BNU was written by David Nugent as an experimental driver for serial communications following the FOSSIL specification. David released BNU to the public in 1989 and its use in the BBS world spread rapidly. BNU was one of only two or three available FOSSIL drivers for the IBM PC compatible hardware and MS-DOS/PC DOS operating system. Because of this, BNU has been one of the most widely used MS-DOS FOSSIL communications drivers.
BNU was mainly used with DOS-based Bulletin Board System (BBS) software written in the late 1980s to mid-1990s. It is not used by Windows-based BBS software, but BNU can be used under Windows NTVDM to run DOS-based BBS software under Windows. BNU and other similar drivers were not limited solely to being used in the BBS world. The enhanced capabilities they offered were also used to easily communicate with other serially connected devices for the same reasons that the FOSSIL specification and FOSSIL drivers were originally created. That reason, as noted above, was to separate the details of serial port communica
|
https://en.wikipedia.org/wiki/Iris%20folding
|
Iris folding is a paper craft technique that involves folding strips of colored paper in such a way to form a design. The center of the design forms an iris—a shape reminiscent of the iris diaphragm of a camera lens.
History
Iris folding originated in 20th-century Holland, where early craft people made their designs using patterned paper cut from the inside of envelopes.
Techniques
Iris folding is done with a pattern. The crafter uses the finished product to decorate the front of a greeting card or as a scrapbook embellishment. Supplies include a pattern, strips of colored paper, permanent transparent tape, cutting tools and a temporary tape such as painter's tape. The temporary tape is used to hold the pattern in place while the craftsperson creates the design.
Iris folding patterns are available from booksellers or as downloadable files made available on Internet web sites. Other craft persons doing iris folding create their own patterns.
External sources
Iris Folding @ Circle of Crafters Explanation of iris folding, free patterns, techniques, pictures and a member forum.
Video Demonstration of Iris Folding Discussion of iris folding and video demonstration showing how to do this paper craft technique
|
https://en.wikipedia.org/wiki/F%28R%29%20gravity
|
f(R) is a type of modified gravity theory which generalizes Einstein's general relativity. f(R) gravity is actually a family of theories, each one defined by a different function, f, of the Ricci scalar, R. The simplest case is just the function being equal to the scalar; this is general relativity. As a consequence of introducing an arbitrary function, there may be freedom to explain the accelerated expansion and structure formation of the Universe without adding unknown forms of dark energy or dark matter. Some functional forms may be inspired by corrections arising from a quantum theory of gravity. f(R) gravity was first proposed in 1970 by Hans Adolph Buchdahl (although φ was used rather than f for the name of the arbitrary function). It has become an active field of research following work by Starobinsky on cosmic inflation. A wide range of phenomena can be produced from this theory by adopting different functions; however, many functional forms can now be ruled out on observational grounds, or because of pathological theoretical problems.
Introduction
In f(R) gravity, one seeks to generalize the Lagrangian of the Einstein–Hilbert action:

S[g] = ∫ (1/(2κ)) R √(-g) d^4x

to

S[g] = ∫ (1/(2κ)) f(R) √(-g) d^4x

where κ = 8πGc^(-4), g is the determinant of the metric tensor, and f(R) is some function of the Ricci scalar.
There are two ways to track the effect of changing R to f(R), i.e., to obtain the theory's field equations. The first is to use the metric formalism and the second is to use the Palatini formalism. While the two formalisms lead to the same field equations for General Relativity, i.e., when f(R) = R, the field equations may differ when f''(R) ≠ 0.
Metric f(R) gravity
Derivation of field equations
In metric f(R) gravity, one arrives at the field equations by varying the action with respect to the metric and not treating the connection independently. For completeness we will now briefly mention the basic steps of the variation of the action. The main steps are the same as in the case of the variation of the Einstein–Hilbert action (see the arti
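Carrying the variation through yields the standard metric f(R) field equations, quoted here as a sketch (F(R) denotes df/dR, \Box the d'Alembertian, and T_{\mu\nu} the matter stress-energy tensor):

    F(R) R_{\mu\nu} - \frac{1}{2} f(R) g_{\mu\nu}
        + \left[ g_{\mu\nu} \Box - \nabla_\mu \nabla_\nu \right] F(R)
        = \kappa T_{\mu\nu}

Setting f(R) = R gives F(R) = 1, the bracketed term vanishes, and the equations reduce to Einstein's field equations.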
|
https://en.wikipedia.org/wiki/Polyproline%20helix
|
A polyproline helix is a type of protein secondary structure which occurs in proteins comprising repeating proline residues. A left-handed polyproline II helix (PPII, poly-Pro II, κ-helix) is formed when sequential residues all adopt (φ,ψ) backbone dihedral angles of roughly (-75°, 150°) and have trans isomers of their peptide bonds. This PPII conformation is also common in proteins and polypeptides with other amino acids apart from proline. Similarly, a more compact right-handed polyproline I helix (PPI, poly-Pro I) is formed when sequential residues all adopt (φ,ψ) backbone dihedral angles of roughly (-75°, 160°) and have cis isomers of their peptide bonds. Of the twenty common naturally occurring amino acids, only proline is likely to adopt the cis isomer of the peptide bond, specifically the X-Pro peptide bond; steric and electronic factors heavily favor the trans isomer in most other peptide bonds. However, peptide bonds that replace proline with another N-substituted amino acid (such as sarcosine) are also likely to adopt the cis isomer.
Polyproline II helix
The PPII helix is defined by (φ,ψ) backbone dihedral angles of roughly (-75°, 150°) and trans isomers of the peptide bonds. The rotation angle Ω per residue of any polypeptide helix with trans isomers is given by the equation

3 cos Ω = 1 - 4 cos^2[(φ + ψ)/2]
Substitution of the poly-Pro II (φ,ψ) dihedral angles into this equation yields almost exactly Ω = -120°, i.e., the PPII helix is a left-handed helix (since Ω is negative) with three residues per turn (360°/120° = 3). The rise per residue is approximately 3.1 Å. This structure is somewhat similar to that adopted in the fibrous protein collagen, which is composed mainly of proline, hydroxyproline, and glycine. PPII helices are specifically bound by SH3 domains; this binding is important for many protein-protein interactions and even for interactions between the domains of a single protein.
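A quick numerical check of that substitution (a toy script; the sign of Ω, i.e. the handedness, is assigned by convention):

    import math

    def rotation_per_residue(phi_deg, psi_deg):
        # Solve 3 cos(Omega) = 1 - 4 cos^2((phi + psi) / 2) for Omega.
        half_sum = math.radians((phi_deg + psi_deg) / 2.0)
        cos_omega = (1.0 - 4.0 * math.cos(half_sum) ** 2) / 3.0
        return math.degrees(math.acos(cos_omega))

    # PPII dihedrals (-75°, 150°) give Omega of about 120°; the helix is
    # left-handed, so Omega is taken as -120°, i.e. three residues per turn.
    print(rotation_per_residue(-75.0, 150.0))  # about 120.4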
The PPII helix is relatively open and has no internal hydrogen bonding, as opposed to the
|
https://en.wikipedia.org/wiki/Freezing%20tolerance
|
Freezing tolerance describes the ability of plants to withstand subzero temperatures through the formation of ice crystals in the xylem and intercellular space, or apoplast, of their cells. Freezing tolerance is enhanced as a gradual adaptation to low temperature through a process known as cold acclimation, which initiates the transition to prepare for subzero temperatures through alterations in the rate of metabolism, hormone levels and sugars. Freezing tolerance is rapidly enhanced during the first days of the cold acclimation process when the temperature drops. Depending on the plant species, maximum freezing tolerance can be reached after only two weeks of exposure to low temperatures. The ability to control intercellular ice formation during freezing is critical to the survival of freeze-tolerant plants. If intracellular ice forms, it can be lethal to the plant when adhesion between cellular membranes and walls occurs. The process of freezing tolerance through cold acclimation is a two-stage mechanism:
The first stage occurs at relatively high subzero temperatures as the water present in plant tissues freezes outside the cell.
The second stage occurs at lower temperatures as intercellular ice continues to form.
Within the apoplast, antifreeze proteins localize the growth of ice crystals by ice nucleators in order to prevent physical damage to tissues and to promote supercooling within freezing-sensitive tissues and cells. Osmotic stress, including dehydration, high salinity, as well as treatment with abscisic acid, can also enhance freezing tolerance.
Freezing tolerance can be assessed by performing a simple plant survival assay or with the more time consuming but quantitative electrolyte leakage assay.
Plants are not the only organisms capable of withstanding subzero temperatures. Wood frogs, juvenile painted turtles, goldenrod gall fly larvae, and intertidal periwinkle snails have all been shown to be capable of the same. They convert up to 70% of their tota
|
https://en.wikipedia.org/wiki/Spider%20toxin
|
Spider toxins are a family of proteins produced by spiders which function as neurotoxins. The mechanism of many spider toxins is through blockage of calcium channels.
A remotely related group of atracotoxins operate by opening sodium channels. Delta atracotoxin from the venom of the Sydney funnel-web spider produces potentially fatal neurotoxic symptoms in primates by slowing the inactivation of voltage-gated sodium channels. The structure of atracotoxin comprises a core beta region containing a triple-stranded beta sheet, a thumb-like extension protruding from the beta region, and a C-terminal helix. The beta region contains a cystine knot motif, a feature seen in other neurotoxic polypeptides and other spider toxins of the CSTX family.
Spider potassium channel inhibitory toxins is another group of spider toxins. A representative of this group is hanatoxin, a 35 amino acid peptide toxin which was isolated from Chilean rose tarantula (Grammostola rosea, syn. G. spatulata) venom. It inhibits the drk1 voltage-gated potassium channel by altering the energetics of gating. See also Huwentoxin-1.
See also
Raventoxin
|
https://en.wikipedia.org/wiki/1%3A700%20scale
|
1:700 scale is a widely popular scale mainly used by Japanese ship model kit manufacturers, such as Aoshima, Tamiya, Hasegawa, Fujimi and Pit-Road.
History
Manufacturers such as Airfix, Renwal, and Heller were producing ship models in various scales, ranging from 1/400 to 1/600 scale. Airfix began producing constant scale 20th century warship subjects to 1/600 scale in 1959. In 1967, Revell began to produce ship kits in a unified 1/720 scale, and Italeri followed Revell ten years after. In 1971, Japanese manufacturers started to produce a series of 1/700 scale water line ship kits. In this scale, 1 inch equals approximately 60 scale feet (700 inches is about 58 feet). This series steadily expanded over the years. At the beginning, only ships of the Japanese Navy were available in the series, but later American, British and German navy subjects were also included. Between 1977 and 1979, Matchbox released a small number of British, German and US waterline ship kits; they were designed to be made of different colors of plastic so that painting was not required.
Due to the large range of water line kits available in this scale, it became popular and is now widely considered a 'standardized scale' in ship modelling. Today there are many companies outside Japan producing 1/700 scale ships as well, such as the Chinese companies Trumpeter and Dragon Models. Various aftermarket photo-etched detailing parts are also widely available for adding fine details to ship models.
There are also small-run manufacturers of 1/700 scale warships, particularly with respect to ships that were designed but never built (Imperial Hobbies) and the leading resin kit manufacturer Kombrig which deals with many vessels of the predreadnought, dreadnought and WW2 eras which would not otherwise see manufacturing.
Water Line Series
The Water Line Series was created by the Shizuoka Plastic Model Manufacturers Association in May 1971. It is a collaborative effort by three manufacturers to produce constant scale models of most of
|
https://en.wikipedia.org/wiki/K%C5%91nig%27s%20lemma
|
Kőnig's lemma or Kőnig's infinity lemma is a theorem in graph theory due to the Hungarian mathematician Dénes Kőnig, who published it in 1927. It gives a sufficient condition for an infinite graph to have an infinitely long path. The computability aspects of this theorem have been thoroughly investigated by researchers in mathematical logic, especially in computability theory. This theorem also has important roles in constructive mathematics and proof theory.
Statement of the lemma
Let be a connected, locally finite, infinite graph. This means that every two vertices can be connected by a finite path, each vertex is adjacent to only finitely many other vertices, and the graph has infinitely many vertices. Then contains a ray: a simple path (a path with no repeated vertices) that starts at one vertex and continues from it through infinitely many vertices.
A useful special case of the lemma is that every infinite tree contains either a vertex of infinite degree or an infinite simple path. If it is locally finite, it meets the conditions of the lemma and has a ray, and if it is not locally finite then it has an infinite-degree vertex.
Construction
The construction of a ray, in a graph that meets the conditions of the lemma, can be performed step by step, maintaining at each step a finite path that can be extended to reach infinitely many vertices (not necessarily all along the same path as each other). To begin this process, start with any single vertex . This vertex can be thought of as a path of length zero, consisting of one vertex and no edges. By the assumptions of the lemma, each of the infinitely many vertices of can be reached by a simple path that starts from .
Next, as long as the current path ends at some vertex , consider the infinitely many vertices that can be reached by simple paths that extend the current path, and for each of these vertices construct a simple path to it that extends the current path. There are infinitely many of these extended
|
https://en.wikipedia.org/wiki/Child%20Exploitation%20Tracking%20System
|
Child Exploitation Tracking System (CETS) is a Microsoft software-based solution that assists in managing and linking worldwide cases related to child protection. CETS was developed in collaboration with law enforcement in Canada. Administered by the loose partnership of Microsoft and law enforcement agencies, CETS offers tools to gather and share evidence and information so that agencies can identify, prevent and punish those who commit crimes against children.
About the CETS partnership
In 2003, Detective Sergeant Paul Gillespie, Officer in Charge of the Child Exploitation Section of the Toronto Police Service's Sex Crimes Unit, made a request directly to Bill Gates, then Microsoft's chairman and chief software architect, for assistance with these types of crimes. Agencies experienced in tracking and apprehending those who perpetrate such crimes were involved in the design, implementation, and policy. The solution needed to assist law enforcement agencies from the initial point of detection, through the investigative phase, to arrest, prosecution, and conviction of the criminal. In addition, it was imperative that the solution adhered to existing rights and civil liberties of the citizens of the various countries. This included remaining independent of Internet traffic and any individual user's computer. Finally, such a solution needed to be global in nature and enable collaboration among nations and agencies.
In order to increase the effectiveness of investigators worldwide, such a system would allow law enforcement entities to:
Collect evidence of online child exploitation gathered by multiple law enforcement agencies.
Organize and store the information safely and securely.
Search the database of information.
Securely share the information with other agencies, across jurisdictions.
Analyze the information and provide pertinent matches.
Adhere to global software industry standards.
Law enforcement partnerships worldwide
A number of law enforcement agencies use or are
|
https://en.wikipedia.org/wiki/Opitutus
|
Opitutus is a genus of bacteria from the family of Opitutaceae with one known species (Opitutus terrae).
|
https://en.wikipedia.org/wiki/Cloche%20%28agriculture%29
|
In agriculture and gardening, a cloche (from the French cloche, meaning "bell") is a covering for protecting plants from cold temperatures. The original form of a cloche is a bell-shaped glass cover that is placed over an individual plant; modern cloches are usually made from plastic. The use of cloches is traced back to market gardens in 19th-century France, where entire fields of plants would be protected with cloches. In commercial growing, cloches have largely been replaced by row cover, and nowadays they are mainly found in smaller gardens.
History
Parisian market gardens in the 1800s used 18-inch diameter bell-shaped glass jars (cloches) to protect plants in cold weather. They were used to protect everything from young seedlings to mature plants. Notched wooden sticks were used to prop up and vent the jars on sunny days, and were placed back down on the soil before nightfall.
"Chase barn cloches", introduced in the early twentieth century by Major L.H. Chase, are constructed with flat panes of glass and held together by wires. They can be connected together to make a long row. They were vulnerable to falling shrapnel in World War II England. This style is still in use today where the wire assembly pieces are purchased as a kit and you use generic glass pieces.
See also
Season extension
|
https://en.wikipedia.org/wiki/International%20gateway
|
An International Gateway is a telephone number through which calls are routed to get cheaper rates on international long-distance charges, or to make calls internationally through voice over IP (VoIP) networks. It can also make an international call into the US appear to originate from a local number rather than from its real location.
Although there are numerous legitimate uses, they are also frequently used by scammers and con artists of all sorts, ranging from international fraudsters to lottery fraud as well as fake money order overpayment fraud. On some occasions the caller ID will display the call as INTL GATEWAY; at other times, anonymous or unknown. Frequently when calling the number back it will appear as if it is a disconnected number. Unknown phone numbers may be researched through many sites on the Internet.
|
https://en.wikipedia.org/wiki/509th%20Composite%20Group
|
The 509th Composite Group (509 CG) was a unit of the United States Army Air Forces created during World War II and tasked with the operational deployment of nuclear weapons. It conducted the atomic bombings of Hiroshima and Nagasaki, Japan, in August 1945.
The group was activated on 17 December 1944 at Wendover Army Air Field, Utah. It was commanded by Lieutenant Colonel Paul W. Tibbets. Because it contained flying squadrons equipped with Boeing B-29 Superfortress bombers, C-47 Skytrain, and C-54 Skymaster transport aircraft, the group was designated as a "composite", rather than a "bombardment" formation. It operated Silverplate B-29s, which were specially configured to enable them to carry nuclear weapons.
The 509th Composite Group began deploying to North Field on Tinian, Northern Mariana Islands, in May 1945. In addition to the two nuclear bombing raids, it carried out 15 practice missions against Japanese-held islands, and 12 combat missions against targets in Japan dropping high-explosive pumpkin bombs.
In the postwar era, the 509th Composite Group was one of the original ten bombardment groups assigned to Strategic Air Command on 21 March 1946 and the only one equipped with Silverplate B-29 Superfortress aircraft capable of delivering atomic bombs. It was standardized as a bombardment group and redesignated the 509th Bombardment Group, Very Heavy, on 10 July 1946.
History
Organization, training, and security
The 509th Composite Group was constituted on 9 December 1944, and activated on 17 December 1944, at Wendover Army Air Field, Utah. It was commanded by Lieutenant Colonel Paul W. Tibbets, who received promotion to full colonel in January 1945. It was initially assumed that the group would divide in two, with half going to Europe and half to the Pacific. In the first week of September Tibbets was assigned to organize a combat group to develop the means of delivering an atomic weapon by airplane against targets in Germany and Japan, then command it in
|
https://en.wikipedia.org/wiki/Apical%20ligament%20of%20dens
|
The ligament of apex dentis (or apical odontoid ligament) is a ligament that spans between the second cervical vertebra in the neck and the skull.
It lies as a fibrous cord in the triangular interval between the alar ligaments, extending from the tip of the odontoid process of the axis to the anterior margin of the foramen magnum, and is intimately blended with the deep portion of the anterior atlantooccipital membrane and the superior crus of the transverse ligament of the atlas.
It is regarded as a rudimentary intervertebral fibrocartilage, and in it traces of the notochord may persist.
|
https://en.wikipedia.org/wiki/Location%20transparency
|
In computer networks, location transparency is the use of names to identify network resources, rather than their actual location. For example, files are accessed by a unique file name, but the actual data is stored in physical sectors scattered around a disk in either the local computer or in a network. In a location transparency system, the actual location where the file is stored doesn't matter to the user. A distributed system will need to employ a networked scheme for naming resources.
The main benefit of location transparency is that it no longer matters where the resource is located. Depending on how the network is set up, the user may be able to obtain files that reside on another computer connected to the particular network. This means that the location of a resource doesn't matter to either the software developers or the end-users. This creates the illusion that the entire system is located in a single computer, which greatly simplifies software development.
An additional benefit is the flexibility it provides. System resources can be moved to a different computer at any time without disrupting any software systems running on them. By simply updating the location that goes with the named resource, every program using that resource will be able to find it. Location transparency effectively makes resources easier to use: data can be accessed by anyone who can connect to the network, knows the right resource names, and has the proper security credentials.
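A minimal sketch of the idea (names and addresses hypothetical): clients look resources up by a stable logical name, so relocating a resource only means updating the registry entry.

    # Hypothetical registry mapping logical names to physical locations.
    REGISTRY = {
        "users-db": "10.0.0.5:5432",
        "reports": "10.0.0.9:8080",
    }

    def resolve(name):
        # Callers use stable names; only the registry knows locations.
        return REGISTRY[name]

    def connect(name):
        print(f"connecting to {name} at {resolve(name)}")

    connect("reports")                      # 10.0.0.9:8080
    REGISTRY["reports"] = "10.0.0.12:8080"  # the resource has moved
    connect("reports")                      # callers remain unchanged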
See also
Transparency (computing)
|
https://en.wikipedia.org/wiki/Zax%20%28Duke%20Power%29
|
Zax is an animated mascot character featured in 1980s public service announcements for Charlotte, North Carolina electric power company Duke Power. The character, introduced in 1984, was designed to appeal to children, and educate them about the dangers of electricity, and how to use energy more efficiently. Zax's voice was provided by Charlotte weatherman Larry Sprinkle.
The character appeared in animated PSAs on television, in a Sunday newspaper comic strip, and in live appearances by a costumed actor at libraries and elementary schools.
History
In 1984, Duke Power released a series of public service announcements to educate children on how to be safe and use electricity efficiently. These cartoon PSAs featured a small, eager-to-learn computer-generated program named Zax, and a group of children who tried to keep him from getting injured or killed by electricity.
Zax appeared in a number of cartoon PSAs throughout the 1980s. In the first 60-second spot, a young boy, Billy, creates Zax on his computer. When Billy goes out to play with his friends, he says that he wishes Zax could come outside and play with them. Zax leaps from the computer screen to join the children, and immediately decides to climb a transmission tower. The children explain that he's not being safe — kids should never try to climb electric poles or towers, or fly kites and planes close to power lines. Zax thanks them, and admits, "Zax has much to learn. Will you teach me?" Billy replies, "This could be the beginning of a beautiful friendship," as the group walks into the sunset.
Other spots feature the children stopping Zax from taking a bath with electric devices nearby, going near downed power lines after a storm and other "misadventures". The PSAs concluded with the line, "This message brought to you by your friends at Duke Power."
Duke Power also produced Zax teaching kits for classroom use, including posters, light switch faceplates, a film-strip and an audio cassette that featured Zax
|
https://en.wikipedia.org/wiki/Sediment%20%28wine%29
|
Sediment is the solid material that settles to the bottom of any wine container, such as a bottle, vat, tank, cask, or barrel. Sediment is a highly heterogeneous mixture which at the start of wine-making consists primarily of dead yeast cells (lees), the insoluble fragments of grape pulp and skin, and the seeds that settle out of new wine. At subsequent stages, it consists of tartrates and, in red wines, phenolic polymers, as well as any insoluble material added to assist clarification.
Sediments in bottled wines are relatively rare and usually signal a fine wine that has already spent some years in the bottle. So unaccustomed to sediment have modern consumers become that many (erroneously) view it as a fault. Many winemakers therefore take great pains to ensure that the great majority of wines made today (especially those designed to be drunk within their first few years) will remain free of sediment for this time. Wines designed for long bottle aging, on the other hand, frequently deposit crystals of tartrates and, in addition, red wines deposit some pigmented tannins. Winemakers deliberately leave more tartrates and phenolics in wines designed for long aging in the bottle so that they are able to develop the aromatic compounds that constitute a bouquet.
See also
Decanter
|
https://en.wikipedia.org/wiki/Isolation%20booth
|
An isolation booth is a cabinet used to prevent a person or people from seeing or hearing certain events, usually for television programs or for blind testing of products.
Its most visual use is on game shows, where an isolation booth (either portable or built into the show's set) is used to prevent a contestant from hearing their competitor's answers, or in the case of Family Feud, their fellow family member/friend's response to the "Fast Money" survey questions. Examples of the former include Twenty-One, Win Ben Stein's Money, 50 Grand Slam, Raise the Roof, The $64,000 Challenge, Whew!, Solitary and Double Dare (the 1976 version entitled as such, unrelated to the children's game show). Another use is to prevent the audience from shouting the answer to the contestant, as seen on The $64,000 Question, The $1,000,000 Chance of a Lifetime, and Name That Tune.
Further measures may be taken to prevent the occupant from seeing/hearing anything that occurs outside the booth, such as a blindfold or sleep mask, or headphones that play music or are equipped with noise-cancelling technology.
The isolation booth concept has been used for comic effect at times. One example is the "Cone of Silence" used as a running gag on the comedy series Get Smart. This was a clear plastic device that fitted over the heads of Maxwell Smart and the Chief, intended to let them discuss sensitive issues without being overheard. However, it invariably malfunctioned to the point that the two could not hear each other at all without shouting. Another variation appeared on the game show Idiot Savants, as the "Cylinder of Shush," a plastic tube lowered over the contestant's head that muffled the host's questions somewhat.
Isolation booths are also frequently used in audio recordings, with non-reflective walls, lined with acoustic foam that eliminate potential reverberations.
Use as punishment
Some schools in the United Kingdom use "isolation booths" as a place of detention, being a small room in which
|
https://en.wikipedia.org/wiki/Bond%20softening
|
Bond softening is an effect of reducing the strength of a chemical bond by strong laser fields. To make this effect significant, the strength of the electric field in the laser light has to be comparable with the electric field the bonding electron "feels" from the nuclei of the molecule. Such fields are typically in the range of 1–10 V/Å, which corresponds to laser intensities of 10^13–10^15 W/cm^2. Nowadays, these intensities are routinely achievable from table-top Ti:Sapphire lasers.
Theory
Theoretical description of bond softening can be traced back to early work on dissociation of diatomic molecules in intense laser fields. While the quantitative description of this process requires quantum mechanics, it can be understood qualitatively using quite simple models.
Low-intensity description
Consider the simplest diatomic molecule, the H2+ ion. The ground state of this molecule is bonding and the first excited state is antibonding. This means that when we plot the potential energy of the molecule (i.e. the average electrostatic energy of the two protons and the electron plus the kinetic energy of the latter) as the function of proton-proton separation, the ground state has a minimum but the excited state is repulsive (see Fig. 1a). Normally, the molecule is in the ground state, in one of the lowest vibrational levels (marked by horizontal lines).
In the presence of light, the molecule may absorb a photon (violet arrow), provided its frequency matches the energy difference between the ground and the excited states. The excited state is unstable and the molecule dissociates within femtoseconds into hydrogen atom and a proton releasing kinetic energy (red arrow). This is the usual description of photon absorption, which works well at low intensity. At high intensity, however, the interaction of the light with the molecule is so strong that the potential energy curves become distorted. To take this distortion into account requires "dressing" the molecule in photons.
Dres
|
https://en.wikipedia.org/wiki/Vignetting
|
In photography and optics, vignetting (; ) is a reduction of an image's brightness or saturation toward the periphery compared to the image center. The word vignette, from the same root as vine, originally referred to a decorative border in a book. Later, the word came to be used for a photographic portrait that is clear at the center and fades off toward the edges. A similar effect is visible in photographs of projected images or videos off a projection screen, resulting in a so-called "hotspot" effect.
Vignetting is often an unintended and undesired effect caused by camera settings or lens limitations. However, it is sometimes deliberately introduced for creative effect, such as to draw attention to the center of the frame. A photographer may deliberately choose a lens that is known to produce vignetting to obtain the effect, or it may be introduced with the use of special filters or post-processing procedures.
When using zoom lenses, vignetting may occur all along the zoom range, depending on the aperture and the focal length. However, it may not always be visible except at the widest end (the shortest focal length). In these cases, vignetting may cause an exposure value difference of up to 3 EV.
Causes
There are several causes of vignetting. Sidney F. Ray distinguishes the following types:
Mechanical vignetting
Optical vignetting
Natural vignetting
A fourth cause is unique to digital imaging:
Pixel vignetting
A fifth cause is unique to analog imaging:
Photographic film vignetting
Mechanical vignetting
Mechanical vignetting occurs when light beams emanating from object points located off-axis (laterally or vertically off from the optical axis of an optical system under consideration) are partially blocked by external objects of the optical system such as thick or stacked filters, secondary lenses, and improper lens hoods. This has the effect of changing the entrance pupil shape as a function of angle (resulting in the path of light being partial
|
https://en.wikipedia.org/wiki/Oostvaardersplassen
|
The Oostvaardersplassen is a nature reserve in the Netherlands, managed by the Staatsbosbeheer (state forestry service). Lying in the province of Flevoland, it is an experiment in rewilding. It is in a polder created in 1968; by 1989, its ecological interest had resulted in its being declared a Ramsar wetland. It became part of Nieuw Land National Park when that was established in 2018.
Geography
The Oostvaardersplassen is located in the municipality of Lelystad, between the towns of Lelystad and Almere, in the province of Flevoland in the Netherlands. The area is situated on the shore of the Markermeer in the center of the Flevopolder. The Oostvaardersplassen can be divided into a wet area in the northwest and a dry area in the southeast.
Wet and dry areas
In the wet area along the Markermeer, there are large reedbeds on clay, where moulting geese often feed. This area is also home to great cormorant, common spoonbill, great egret, white-tailed eagle and Eurasian bittern, among many other animals. Oostvaardersplassen is a Special Protection Area for birdlife.
Before the establishment of the reserve, the dry area was a nursery for willow trees, and in the first year hundreds of seedlings could be found on each square metre. This led to concern that a dense woodland would develop, significantly reducing the value of the habitat for water birds. To avoid this, the park's managers brought in a number of large herbivores to keep the area more open, including Konik ponies, red deer and Heck cattle. These large grazing animals are kept out in the open all year round without supplemental feeding for the winter and early spring, and are allowed to behave as wild animals (without, for example, for now, castrating males). The ecosystem developing under their influence is thought to resemble those that would have existed on European river banks and deltas before human disturbance. However, there is some controversy about how natural the ecosystem is, as
|
https://en.wikipedia.org/wiki/Teleportation%20in%20fiction
|
Teleportation is the theoretical transfer of matter and/or energy from one point to another without traversing the physical space between them. It is a common subject in science fiction and fantasy literature, film, video games, and television. In some situations, teleporting is presented as time traveling across space.
The use of matter transmitters in science fiction originated at least as early as the 19th century. An early example of scientific teleportation (as opposed to magical or spiritual teleportation) is found in the 1897 novel To Venus in Five Seconds by Fred T. Jane. Jane's protagonist is transported from a strange-machinery-containing gazebo on Earth to planet Venus.
A common fictional device for teleportation is a "wormhole". In video games, the instant teleportation of a player character may be referred to as a warp.
List of fiction containing teleportation
Teleportation illusions in live performance
Teleportation illusions have featured in live performances throughout history, often under the fiction of miracles, psychic phenomenon, or magic. The cups and balls trick has been performed since 3 BC and can involve balls vanishing, reappearing, teleporting and transposing (objects in two locations interchanging places). A common trick of close-up magic is the apparent teleportation of a small object, such as a marked playing card, which can involve sleight-of-hand, misdirection, and pickpocketing. Magic shows were popular entertainments at fairs in the 18th century and moved into permanent theatres in the mid-19th century. Theatres provided greater control of the environment and viewing angles for more elaborate illusions, and teleportation tricks grew in scale and ambition. To increase audience excitement, the teleportation illusion could be conducted under the theme of a predicament escape. Magic shows achieved widespread success during the Golden Age of Magic in the late 19th and early 20th centuries.
Written fiction
William Shakespeare in
|
https://en.wikipedia.org/wiki/Social%20Reader
|
"Social Reader" may refer to the Washington Post Social Reader (formerly available at socialreader.com), or may be used to describe a more general category of social news reading applications.
List of popular Social Readers
Google Reader
While not advertised as a "social reader," Google Reader was an RSS reader with social features, including sharing articles with friends along with personal commentary.
Washington Post Social Reader
The Washington Post's WaPo Labs team launched Washington Post Social Reader for Facebook on September 22, 2011. On January 22, 2014, Social Reader was rebranded as Trove and launched a new app for iPhone and iPad.
Guardian Social Reader
The Guardian was the next major news publishing portal to join the race. Guardian Social Reader was announced on Friday 23 September 2011. As the ground was already set by Washington Post Social Reader, it took The Guardian very little time to reach its audience on Facebook. Within six months of the application's launch, monthly active users for Guardian Social Reader had reached 5.9 million, with more than 2.9 million joining in the first two months of 2012. The Guardian also launched the same reader application for smartphones, including Android, iPhone and iPad.
Falling popularity
In May 2012, it was observed that the popularity of Social Reader apps had fallen considerably since they were introduced. The Washington Post Social Reader, having once had 17 million monthly users, had fallen below 10 million, and The Guardian, having once had almost 600,000 users a day, had fallen below 100,000. This is thought to have been caused by a change in how Facebook displayed social reader information. "The initial drop was largely the result of a change in the way Facebook displayed social reader stories, which collapsed stories into a smaller, cycling module (the first social readers spit long lists of stories onto users' walls, which was both clumsy and widely reviled)."
|
https://en.wikipedia.org/wiki/Screenless%20video
|
Screenless video is any system for transmitting visual information from a video source without the use of a screen. Screenless computing systems can be divided into three groups: Visual Image, Retinal Direct, and Synaptic Interface.
Visual image
Visual Image screenless display includes any image that the eye can perceive. The most common example of Visual Image screenless display is a hologram. In these cases, light is reflected off some intermediate object (hologram, LCD panel, or cockpit window) before it reaches the retina. In the case of LCD panels the light is refracted from the back of the panel, but is nonetheless a reflected source. Google has proposed a similar system to replace the screens of tablet computers and smartphones.
Retinal display
Virtual retinal display systems are a class of screenless displays in which images are projected directly onto the retina. They are distinguished from visual image systems because light is not reflected from some intermediate object onto the retina, it is instead projected directly onto the retina. Retinal Direct systems, once marketed, hold out the promise of extreme privacy when computing work is done in public places because most snooping relies on viewing the same light as the person who is legitimately viewing the screen, and retinal direct systems send light only into the pupils of their intended viewer.
Synaptic interface
Synaptic Interface screenless video does not use light at all. Visual information completely bypasses the eye and is transmitted directly to the brain. While such systems have only been implemented in humans in rudimentary form - for example, displaying single Braille characters to blind people – success has been achieved in sampling usable video signals from the biological eyes of a living horseshoe crab through their optic nerves, and in sending video signals from electronic cameras into the creatures' brains using the same method.
See also
Volumetric display
Fog display
Augment
|
https://en.wikipedia.org/wiki/Spore%20Origins
|
Spore Origins (also known as Spore Mobile) is the mobile device spin-off of Spore, and focuses on a single phase of the larger game's gameplay - the cell phase.
Gameplay
The simplified game allows players to try to survive as a multicellular organism in a tide pool, with the ability to upgrade their creature as in the main game. The basic gameplay is similar to Flow. Flow designer Jenova Chen cited Will Wright's first demo of Spore as an inspiration.
Unlike the full version of Spore, the main game is roughly an hour long, and divided into 18 separate sections, or 30 sections in the iPhone and iPod touch version, with the player attacking and eating other organisms while avoiding being eaten by superior ones.
On some devices, movement is achieved by pressing the phone keys in ordinal directions. Other devices also support touching the screen to move the creature. Certain iPod devices use the click wheel as an input method, and users of the iPhone, iPod Touch, and iPod Nano may use the accelerometer. Creatures are eaten by attacking with the mouth (if the creature has one); group-eating combos can be achieved with the OK button or center button on the wheel. A section is completed after the player eats a certain amount of DNA material from other life forms.
Every three levels is followed by the creature editor, in which the player may add an upgrade to their organism in four categories: perception, attack, defense, and movement. The 3rd upgrade in each category is a "superpart". The player also unlocks a mode called "Survival", in which the player is on a single screen collecting pellets while dodging creatures.
Issues
The iPod classic had lockup issues which took place on the game's initial loading screen on 1.0.x and 1.1.x software. The bugs were fixed and the game was re-released on August 31, 2008.
Reception
Spore Origins iPhone/iPod Touch Review
Spore Origins Mobile Review
See also
flOw
|
https://en.wikipedia.org/wiki/Hexaxial%20reference%20system
|
The hexaxial reference system, better known as the Cabrera system, is a convention to present the extremity leads of the 12 lead electrocardiogram, that provides an illustrative logical sequence that helps interpretation of the ECG, especially to determine the heart's electrical axis in the frontal plane.
The most practical way of using this is by arranging the extremity leads according to the Cabrera system, reversing the polarity of lead aVR, and presenting the ECG complexes in the order (aVL, I, -aVR, II, aVF, III). Then determine the direction in which the maximal ECG vector is "pointing", i.e. the lead with the most positive amplitude: this direction is the electrical axis (see diagram).
Example: If lead I has the highest amplitude (higher than aVL or -aVR), the axis is approximately 0°.
Conversely, if lead III has the most negative amplitude it means the vector is pointing away from this lead, i.e. towards -60°.
An alternative use is to locate the most isoelectric (or equiphasic) lead (I, II, III, aVR, aVL, or aVF) on a diagnostic quality ECG with proper lead placement. Then find the corresponding spoke on the hexaxial reference system. The perpendicular spoke will point to the heart's electrical axis. To determine which numerical value should be used, observe the polarity of the perpendicular lead on the ECG.
For example, if the most isoelectric (or equiphasic) lead is aVL, the perpendicular lead on the hexaxial reference system is lead II. If lead II is positively deflected on the ECG, the heart's electrical axis in the frontal plane will be approximately +60°.
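A toy numeric version of the hexaxial idea (a sketch, not a clinical tool, and a vector-sum variant rather than the isoelectric-lead procedure described above): represent each limb lead by its standard frontal-plane angle, weight it by a hypothetical net QRS amplitude, and take the direction of the vector sum.

    import math

    # Frontal-plane angles of the limb leads in the Cabrera arrangement
    # (angles as on the hexaxial diagram, positive toward aVF).
    LEAD_ANGLES = {"aVL": -30, "I": 0, "-aVR": 30, "II": 60, "aVF": 90, "III": 120}

    def electrical_axis(net_amplitudes):
        # Sum each lead's unit vector scaled by its net QRS deflection.
        x = sum(a * math.cos(math.radians(LEAD_ANGLES[lead]))
                for lead, a in net_amplitudes.items())
        y = sum(a * math.sin(math.radians(LEAD_ANGLES[lead]))
                for lead, a in net_amplitudes.items())
        return math.degrees(math.atan2(y, x))

    # A dominant positive deflection in lead II pulls the estimate toward +60°.
    print(round(electrical_axis({"I": 0.5, "II": 1.0, "aVF": 0.5})))  # about 54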
Normal axis: -30° to +90°
Left axis deviation: -30° to -90°
Right axis deviation: +90° to +180°
Extreme axis deviation: -90° to -180°
Additional images
See also
Electrocardiogram
|
https://en.wikipedia.org/wiki/COCOA%20%28digital%20humanities%29
|
COCOA (an acronym derived from COunt and COncordance Generation on Atlas) was an early text file utility and associated file format for digital humanities, then known as humanities computing. It was approximately 4000 punched cards of FORTRAN and created in the late 1960s and early 1970s at University College London and the Atlas Computer Laboratory in Harwell, Oxfordshire. Functionality included word-counting and concordance building.
Oxford Concordance Program
The Oxford Concordance Program format was a direct descendant of COCOA developed at Oxford University Computing Services. The Oxford Text Archive holds items in this format.
Later developments
The COCOA file format bears at least a passing similarity to later markup languages such as SGML and XML. A noticeable difference from its successors is that COCOA tags are flat and not tree-structured. In that format, every information type and value encoded by a tag should be considered true until the same tag changes its value. Members of the Text Encoding Initiative community maintain legacy support for COCOA, although most in-demand texts and corpora have already been migrated to more widely understood formats such as TEI XML.
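A hypothetical fragment in the COCOA style (the category letters here are invented for illustration; real corpora defined their own). Each tag sets a value that remains in force until the same tag reappears:

    <A SHAKESPEARE>
    <T HAMLET>
    <S HAMLET>
    To be, or not to be, that is the question
    <S OPHELIA>
    Good my lord, how does your honour for this many a day?

Every line after <S HAMLET> is attributed to that speaker until <S OPHELIA> resets the value; the tags are flat state changes rather than nested elements as in SGML or XML.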
|
https://en.wikipedia.org/wiki/Lapped%20transform
|
In signal processing, a lapped transform is a type of linear discrete block transformation where the basis functions of the transformation overlap the block boundaries, yet the number of coefficients overall resulting from a series of overlapping block transforms remains the same as if a non-overlapping block transform had been used.
Lapped transforms substantially reduce the blocking artifacts that otherwise occur with block transform coding techniques, in particular those using the discrete cosine transform. The best known example is the modified discrete cosine transform used in the MP3, Vorbis, AAC, and Opus audio codecs.
Although the best-known application of lapped transforms has been for audio coding, they have also been used for video and image coding and various other applications. They are used in video coding for coding I-frames in VC-1 and for image coding in the JPEG XR format. More recently, a form of lapped transform has also been used in the development of the Daala video coding format.
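As a sketch of the "lapped" property, here is a direct (slow) implementation of the forward modified discrete cosine transform named above: each window of 2N input samples overlaps its neighbour by N samples, yet yields only N coefficients, so a stream of half-overlapping windows produces exactly as many coefficients as non-overlapping N-point blocks would.

    import math

    def mdct(window):
        # Forward MDCT: 2N samples in, N coefficients out.
        two_n = len(window)
        n = two_n // 2
        return [
            sum(
                window[k] * math.cos(math.pi / n * (k + 0.5 + n / 2.0) * (m + 0.5))
                for k in range(two_n)
            )
            for m in range(n)
        ]

    print(mdct([0.0, 1.0, 2.0, 3.0]))  # 4 samples -> 2 coefficients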
|
https://en.wikipedia.org/wiki/ATI%20Avivo
|
ATI Avivo is a set of hardware and low level software features present on the ATI Radeon R520 family of GPUs and all later ATI Radeon products. ATI Avivo was designed to offload video decoding, encoding, and post-processing from a computer's CPU to a compatible GPU. ATI Avivo compatible GPUs have lower CPU usage when a player and decoder software that support ATI Avivo is used. ATI Avivo has been long superseded by Unified Video Decoder (UVD) and Video Coding Engine (VCE).
Background
The GPU wars between ATI and NVIDIA resulted in GPUs with ever-increasing processing power from the early 2000s onward. To parallel this increase in speed and power, both GPU makers needed to improve video quality as well; in 3D graphics applications, the focus of quality improvements had mainly fallen on anti-aliasing and anisotropic filtering. However, it dawned upon both companies that video quality on the PC needed improvement too, and the existing APIs provided by both companies had not seen many improvements over a few generations of GPUs. Therefore, ATI decided to revamp its GPUs' video processing capability with ATI Avivo, in order to compete with NVIDIA's PureVideo API.
Around the release of the Radeon HD series, a successor, ATI Avivo HD, was announced; it was present on the Radeon HD 2600 and 2400 video cards that became available in July 2007, after NVIDIA announced a similar hardware acceleration solution, PureVideo HD.
In 2011 Avivo was renamed the AMD Media Codec Package, an optional component of the AMD Catalyst software. The last version was released in August 2012. As of 2013, the package is no longer offered by AMD.
Features
ATI Avivo
During capturing, ATI Avivo amplifies the source and automatically adjusts its brightness and contrast. ATI Avivo implements a 12-bit transform to reduce data loss during conversion; it also utilizes a motion adaptive 3D comb filter, automatic color control, automatic gain control, hardware noise reduction and edge enhancement
|
https://en.wikipedia.org/wiki/Electrostatic%20discharge
|
Electrostatic discharge (ESD) is a sudden and momentary flow of electric current between two differently-charged objects when brought close together or when the dielectric between them breaks down, often creating a visible spark associated with the static electricity between the objects.
ESD can create spectacular electric sparks (lightning, with the accompanying sound of thunder, is an example of a large-scale ESD event), but also less dramatic forms which may be neither seen nor heard, yet still be large enough to cause damage to sensitive electronic devices. Electric sparks require a field strength above approximately 4 × 10^6 V/m in air, as notably occurs in lightning strikes. Other forms of ESD include corona discharge from sharp electrodes, brush discharge from blunt electrodes, etc.
ESD can cause harmful effects of importance in industry, including explosions in gas, fuel vapor and coal dust, as well as failure of solid state electronics components such as integrated circuits. These can suffer permanent damage when subjected to high voltages. Electronics manufacturers therefore establish electrostatic protective areas free of static, using measures to prevent charging, such as avoiding highly charging materials and measures to remove static such as grounding human workers, providing antistatic devices, and controlling humidity.
ESD simulators may be used to test electronic devices, for example with a human body model or a charged device model.
Causes
One of the causes of ESD events is static electricity. Static electricity is often generated through tribocharging, the separation of electric charges that occurs when two materials are brought into contact and then separated. Examples of tribocharging include walking on a rug, rubbing a plastic comb against dry hair, rubbing a balloon against a sweater, ascending from a fabric car seat, or removing some types of plastic packaging. In all these cases, the breaking of contact between two materials resul
|
https://en.wikipedia.org/wiki/Holistic%20management%20%28agriculture%29
|
Holistic Management (from holos, a Greek word meaning all, whole, entire, total) in agriculture is an approach to managing resources that was originally developed by Allan Savory for grazing management. Holistic Management has been likened to "a permaculture approach to rangeland management". Holistic Management is a registered trademark of Holistic Management International (no longer associated with Allan Savory).
Definition
Holistic management describes a systems thinking approach to managing resources. Originally developed by Allan Savory, it is now being adapted for use in managing other systems with complex social, ecological and economic factors.
Holistic planned grazing is similar to rotational grazing but differs in that it more explicitly recognizes and provides a framework for adapting to the four basic ecosystem processes: the water cycle, the mineral cycle (including the carbon cycle), energy flow, and community dynamics (the relationship between organisms in an ecosystem), giving equal importance to livestock production and social welfare.
Framework
The Holistic Management decision-making framework uses six key steps to guide the management of resources:
Define in its entirety what you are managing. No area should be treated as a single-product system. By defining the whole, people are better able to manage. This includes identifying the available resources, including money, that the manager has at his disposal.
Define what you want now and for the future. Set the objectives, goals and actions needed to produce the quality of life sought, and what the life-nurturing environment must be like to sustain that quality of life far into the future.
Watch for the earliest indicators of ecosystem health. Identify the ecosystem services that have deep impacts for people in both urban and rural environments, and find a way to easily monitor them. One of the best example
|
https://en.wikipedia.org/wiki/Mastoid%20lymph%20nodes
|
The mastoid lymph nodes (retroauricular lymph nodes or posterior auricular glands) are a small group of lymph nodes, usually two in number, located just beneath the ear, on the mastoid insertion of the sternocleidomastoideus muscle, beneath the posterior auricular muscle.
The mastoid lymph nodes receive lymph from the posterior part of the temporoparietal region, the upper part of the cranial surface of the visible ear and the back of the ear canal. The lymph then passes to the superior deep cervical glands.
Etymology
The word mastoid comes from a Greek word meaning "mouth, jaws, that with which one chews".
|