| source | text |
|---|---|
https://en.wikipedia.org/wiki/Recursive%20least%20squares%20filter | Recursive least squares (RLS) is an adaptive filter algorithm that recursively finds the coefficients that minimize a weighted linear least squares cost function relating to the input signals. This approach is in contrast to other algorithms such as the least mean squares (LMS) that aim to reduce the mean square error. In the derivation of the RLS, the input signals are considered deterministic, while for the LMS and similar algorithms they are considered stochastic. Compared to most of its competitors, the RLS exhibits extremely fast convergence. However, this benefit comes at the cost of high computational complexity.
Motivation
RLS was discovered by Gauss but lay unused or ignored until 1950 when Plackett rediscovered the original work of Gauss from 1821. In general, the RLS can be used to solve any problem that can be solved by adaptive filters. For example, suppose that a signal is transmitted over an echoey, noisy channel that causes it to be received as
where represents additive noise. The intent of the RLS filter is to recover the desired signal by use of a -tap FIR filter, :
where is the column vector containing the most recent samples of . The estimate of the recovered desired signal is
The goal is to estimate the parameters of the filter , and at each time we refer to the current estimate as and the adapted least-squares estimate by . is also a column vector, as shown below, and the transpose, , is a row vector. The matrix product (which is the dot product of and ) is , a scalar. The estimate is "good" if is small in magnitude in some least squares sense.
As time evolves, it is desired to avoid completely redoing the least squares algorithm to find the new estimate for , in terms of .
The benefit of the RLS algorithm is that there is no need to invert matrices, thereby saving computational cost. Another advantage is that it provides intuition behind such results as the Kalman filter.
Discussion
The idea behind RLS filters is to |
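Although the excerpt breaks off above, the standard RLS recursion it introduces can be sketched as follows. The forgetting factor `lam`, the initialization constant `delta`, and the variable names follow the usual textbook presentation rather than this excerpt; the key point is the rank-one update of the inverse correlation matrix `P`, which is what lets RLS avoid explicit matrix inversion.

```python
import numpy as np

def rls_filter(x, d, num_taps=4, lam=0.99, delta=100.0):
    """Sketch of the standard RLS recursion: adapt FIR weights w so that
    w . u(n) tracks the desired signal d(n), without inverting a matrix."""
    w = np.zeros(num_taps)
    P = delta * np.eye(num_taps)              # inverse correlation matrix estimate
    y = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # most recent samples, newest first
        k = P @ u / (lam + u @ P @ u)         # gain vector
        y[n] = w @ u                          # a priori output
        e = d[n] - y[n]                       # a priori error
        w = w + k * e                         # weight update
        P = (P - np.outer(k, u @ P)) / lam    # rank-one update, no inversion
    return w, y
```

Run on a noiseless signal passed through a known 4-tap filter, the weights converge rapidly to that filter's coefficients, illustrating the fast convergence noted above.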
https://en.wikipedia.org/wiki/List%20of%20U.S.%20state%20beverages | This is a list of state beverages as designated by the various states of the United States. The first known designation of an official "state beverage" in the US came in 1965, when Ohio named tomato juice its official beverage. The most popular choice for a state beverage is milk (or a flavored milk, in the case of Rhode Island). In total, 21 of the 33 entities with official beverages (32 states and the District of Columbia) have selected milk.
Table
Notes |
https://en.wikipedia.org/wiki/Common%20Indexing%20Protocol | The Common Indexing Protocol (CIP) was an attempt in the IETF working group FIND during the mid-1990s to define a protocol for exchanging index information between directory services.
In the X.500 Directory model, searches scoped near the root of the tree (e.g. at a particular country) were problematic to implement, as potentially hundreds or thousands of directory servers would need to be contacted
in order to handle that query.
The indexes contained summaries or subsets of information about individuals and organizations represented in a white pages schema. By merging subsets of information from multiple sources, it was hoped that an index server holding that subset could be able to process a query more efficiently by chaining it only to some of the sources: those sources which did not hold information would not be contacted. For example, if a server holding the base entry for a particular country were provided with a list of names of all the people in all the entries in that country subtree, then that server would be able to process a query searching for a person with
a particular name by only chaining it to those servers which held data about such a person.
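The chaining idea described above can be illustrated with a toy index; the server names and white-pages entries here are hypothetical, and the real protocol exchanged summarized index objects rather than full name lists:

```python
# Per-source name index held by a hypothetical index server: the query is
# chained only to the sources whose index contains the target name.
index = {
    "server-a": {"Alice Martin", "Bob Stone"},
    "server-b": {"Carol Diaz"},
    "server-c": {"Alice Martin", "Dave Kim"},
}

def chain_query(name):
    """Return the sources a query for `name` must be chained to;
    sources whose index lacks the name are never contacted."""
    return sorted(src for src, names in index.items() if name in names)
```

A query for a name held by two of three sources is forwarded to exactly those two, which is the efficiency gain the index server was meant to provide.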
The protocol evolved from earlier work developing WHOIS++, and was intended to be capable of interconnecting
services from both the evolving WHOIS and LDAP activities.
This protocol has not seen much recent deployment, as WHOIS and LDAP environments have followed separate evolution paths. WHOIS deployments are typically in domain name registrars, and its data management issues have been addressed through specifications for domain name registry interconnection such as CRISP. In contrast, enterprises that manage employee, customer or student identity data in an LDAP directory have looked to federation protocols for interconnection between organizations.
RFCs
The Architecture of the Common Indexing Protocol (CIP)
MIME Object Definitions for the Common Indexing Protocol (CIP)
CIP Transport Pro |
https://en.wikipedia.org/wiki/Set%20packing | Set packing is a classical NP-complete problem in computational complexity theory and combinatorics, and was one of Karp's 21 NP-complete problems. Suppose one has a finite set S and a list of subsets of S. Then, the set packing problem asks if some k subsets in the list are pairwise disjoint (in other words, no two of them share an element).
More formally, given a universe and a family of subsets of , a packing is a subfamily of sets such that all sets in are pairwise disjoint. The size of the packing is . In the set packing decision problem, the input is a pair and an integer ; the question is whether
there is a set packing of size or more. In the set packing optimization problem, the input is a pair , and the task is to find a set packing that uses the most sets.
The problem is clearly in NP since, given subsets, we can easily verify that they are pairwise disjoint in polynomial time.
The optimization version of the problem, maximum set packing, asks for the maximum number of pairwise disjoint sets in the list. It is a maximization problem that can be formulated naturally as an integer linear program, belonging to the class of packing problems.
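The decision and optimization versions above can be illustrated with a brute-force search. The disjointness test is the easy polynomial-time part; the search over subfamilies is exponential and is shown for illustration only:

```python
from itertools import combinations

def max_set_packing(sets):
    """Brute-force maximum set packing: try subfamilies from largest to
    smallest and return the first pairwise-disjoint one found.
    Exponential time; for illustration only."""
    sets = [frozenset(s) for s in sets]
    for k in range(len(sets), 0, -1):
        for sub in combinations(sets, k):
            # pairwise disjoint iff the union is as large as the size sum
            if len(frozenset().union(*sub)) == sum(len(s) for s in sub):
                return list(sub)
    return []
```

For the family {1,2}, {2,3}, {3,4}, {5} the maximum packing has three sets, since {1,2}, {3,4}, {5} are pairwise disjoint but no four sets are.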
Integer linear program formulation
The maximum set packing problem can be formulated as the following integer linear program.
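The program itself is not reproduced in this excerpt; in its standard form (notation assumed from the usual presentation, with a 0-1 variable \(x_S\) per set in the family \(\mathcal{S}\) over universe \(\mathcal{U}\)) it reads:

```latex
\begin{aligned}
\text{maximize}   \quad & \sum_{S \in \mathcal{S}} x_S \\
\text{subject to} \quad & \sum_{S \ni e} x_S \le 1 \quad \text{for all } e \in \mathcal{U}, \\
                        & x_S \in \{0, 1\}      \quad \text{for all } S \in \mathcal{S}.
\end{aligned}
```

Each constraint says that every element of the universe is covered by at most one chosen set, which is exactly the pairwise-disjointness condition.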
Complexity
The set packing problem is not only NP-complete, but its optimization version (the general maximum set packing problem) has been proven as difficult to approximate as the maximum clique problem; in particular, it cannot be approximated within any constant factor. The best known algorithm approximates it within a factor of . The weighted variant can be approximated equally well.
Packing sets with a bounded size
The problem does have a variant which is more tractable. Given any positive integer k≥3, the k-set packing problem is a variant of set packing in which each set contains at most k elements.
When k=1, the problem is trivial. When k=2, the problem is equivalent to finding |
https://en.wikipedia.org/wiki/Poly1305 | Poly1305 is a universal hash family designed by Daniel J. Bernstein for use in cryptography.
As with any universal hash family, Poly1305 can be used as a one-time message authentication code to authenticate a single message using a secret key shared between sender and recipient,
similar to the way that a one-time pad can be used to conceal the content of a single message using a secret key shared between sender and recipient.
Originally Poly1305 was proposed as part of Poly1305-AES,
a Carter–Wegman authenticator
that combines the Poly1305 hash with AES-128 to authenticate many messages using a single short key and distinct message numbers.
Poly1305 was later applied with a single-use key generated for each message using XSalsa20 in the NaCl crypto_secretbox_xsalsa20poly1305 authenticated cipher,
and then using ChaCha in the ChaCha20-Poly1305 authenticated cipher
deployed in TLS on the internet.
Description
Definition of Poly1305
Poly1305 takes a 16-byte secret key and an -byte message and returns a 16-byte hash .
To do this, Poly1305:
Interprets as a little-endian 16-byte integer.
Breaks the message into consecutive 16-byte chunks.
Interprets the 16-byte chunks as 17-byte little-endian integers by appending a 1 byte to every 16-byte chunk, to be used as coefficients of a polynomial.
Evaluates the polynomial at the point modulo the prime .
Reduces the result modulo encoded in little-endian to return a 16-byte hash.
The coefficients of the polynomial , where , are:
with the exception that, if , then:
The secret key is restricted to have the bytes , i.e., to have their top four bits clear; and to have the bytes , i.e., to have their bottom two bits clear.
Thus there are distinct possible values of .
Use as a one-time authenticator
If is a secret 16-byte string interpreted as a little-endian integer, then
is called the authenticator for the message .
If a sender and recipient share the 32-byte secret key in advance, chosen uniformly at random, th |
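The steps above can be sketched directly. This follows the RFC 8439 presentation of the one-time authenticator, so it takes the clamped hash key r together with the pad s; the variable names are assumptions of that presentation rather than of this excerpt:

```python
def poly1305_mac(msg: bytes, r_bytes: bytes, s_bytes: bytes) -> bytes:
    """One-time Poly1305 authenticator, RFC 8439 style: clamp r, evaluate
    the polynomial at r modulo 2**130 - 5 over 17-byte chunk coefficients,
    then add the pad s modulo 2**128."""
    p = (1 << 130) - 5
    # clamping clears the top four bits of bytes 3,7,11,15 and the
    # bottom two bits of bytes 4,8,12 of r
    r = int.from_bytes(r_bytes, "little") & 0x0ffffffc0ffffffc0ffffffc0fffffff
    s = int.from_bytes(s_bytes, "little")
    acc = 0
    for i in range(0, len(msg), 16):
        # append a 1 byte so each 16-byte chunk becomes a 17-byte integer
        c = int.from_bytes(msg[i:i + 16] + b"\x01", "little")
        acc = (acc + c) * r % p   # Horner evaluation of the polynomial at r
    return ((acc + s) % (1 << 128)).to_bytes(16, "little")
```

Note that a shorter final chunk still gets the appended 1 byte, as the description requires; an empty message yields the pad s itself as the tag.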
https://en.wikipedia.org/wiki/Topographic%20profile | A topographic profile, topographic cut or elevation profile is a representation of the relief of the terrain obtained by cutting transversely across the lines of a topographic map. Each contour line can be defined as a closed line joining relief points at equal height above sea level. A profile is usually drawn on the same horizontal scale as the map, but an exaggerated vertical scale is advisable to underline the elements of the relief. The exaggeration can vary according to the slope and amplitude of the terrestrial relief, but is usually three to five times the horizontal scale.
A series of parallel profiles, taken at regular intervals on a map, can be combined to provide a more complete three-dimensional view of the area that appears on the topographic map. Thanks to computing, still more sophisticated three-dimensional models of the landscape can be made from digital terrain data.
The line of the plane defined by the points that limit the profile is called the guideline, and the horizontal line of comparison on which the profile is constructed is called the base.
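As a toy illustration of the vertical-exaggeration guidance above (the function and parameter names are hypothetical): given elevations sampled where a cut line crosses successive contour lines, the profile points can be generated as:

```python
def profile(elevations, spacing, vertical_exaggeration=3.0):
    """Build (distance, exaggerated height) pairs for a topographic profile.
    `elevations` are sampled along the cut line, `spacing` is the constant
    horizontal distance between samples, in the same units."""
    return [(i * spacing, h * vertical_exaggeration)
            for i, h in enumerate(elevations)]
```

With the default factor of 3 the relief is stretched to three times the horizontal scale, the low end of the three-to-five-times range mentioned above.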
Applications
One of the most important applications of the topographic profiles is in the construction of works of great length and small width, for example roads, sewers or pipelines.
Sometimes topographical profiles appear in printed maps, such as those designed for navigation routes, excavations and especially for geological maps, where they are used to show the internal structure of the rocks that populate a territory.
People who study natural resources such as geologists, geomorphologists, soil scientists and vegetation scholars, among others, build profiles to observe the relationship of natural resources to changes in topography and analyze numerous problems.
See also
Fall line (topography) |
https://en.wikipedia.org/wiki/Diagnosis%20of%20exclusion | A diagnosis of exclusion or by exclusion (per exclusionem) is a diagnosis of a medical condition reached by a process of elimination, which may be necessary if presence cannot be established with complete confidence from history, examination or testing. Such elimination of other reasonable possibilities is a major component in performing a differential diagnosis.
Diagnosis by exclusion tends to occur where scientific knowledge is scarce, specifically where the means to verify a diagnosis by an objective method is absent. As a specific diagnosis cannot be confirmed, a fallback position is to exclude the group of known causes that may produce a similar clinical presentation.
The largest category of diagnosis by exclusion is seen among psychiatric disorders where the presence of physical or organic disease must be excluded as a prerequisite for making a functional diagnosis.
Examples
An example of such a diagnosis is "fever of unknown origin": to explain the cause of elevated temperature the most common causes of unexplained fever (infection, neoplasm, or collagen vascular disease) must be ruled out.
Other examples include:
Adult-onset Still's disease
Behçet's disease
Bell's palsy
Burning mouth syndrome
Chronic recurrent multifocal osteomyelitis
Long COVID
Psychogenic polydipsia
Schizophrenia
Somatic symptom disorder
Sudden infant death syndrome
Tolosa–Hunt syndrome
See also
Idiopathic |
https://en.wikipedia.org/wiki/Conway%27s%20Soldiers | Conway's Soldiers or the checker-jumping problem is a one-person mathematical game or puzzle devised and analyzed by mathematician John Horton Conway in 1961. A variant of peg solitaire, it takes place on an infinite checkerboard. The board is divided by a horizontal line that extends indefinitely. Above the line are empty cells and below the line are an arbitrary number of game pieces, or "soldiers". As in peg solitaire, a move consists of one soldier jumping over an adjacent soldier into an empty cell, vertically or horizontally (but not diagonally), and removing the soldier which was jumped over. The goal of the puzzle is to place a soldier as far above the horizontal line as possible.
Conway proved that, regardless of the strategy used, there is no finite sequence of moves that will allow a soldier to advance more than four rows above the horizontal line. His argument uses a carefully chosen weighting of cells (involving the golden ratio), and he proved that the total weight can only decrease or remain constant. This argument has been reproduced in a number of popular math books.
Simon Tatham and Gareth Taylor have shown that the fifth row can be reached via an infinite series of moves. If diagonal jumps are allowed, the eighth row can be reached, but not the ninth. In the n-dimensional version of the game, the highest row that can be reached is ; Conway's weighting argument demonstrates that row cannot be reached.
Conway's proof that the fifth row is inaccessible
Notation and definitions
Define . (In other words, here denotes the reciprocal of the golden ratio.) Observe that .
Let the target square be labeled with the value , and all other squares be labeled with the value , where is the Manhattan distance to the target square. Then we can compute the "score" of a configuration of soldiers by summing the values of the soldiers' squares. For example, a configuration of only two soldiers placed so as to reach the target square on the next jump would have s |
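The weighting can be checked numerically: with σ the reciprocal of the golden ratio, σ² + σ = 1, so a jump straight toward the target leaves the total score unchanged, while other jumps can only decrease it. A small sketch, with distances measured as in the text:

```python
import math

sigma = (math.sqrt(5) - 1) / 2   # reciprocal of the golden ratio
# The weighting works because sigma satisfies sigma**2 + sigma == 1.

def jump_score_change(d):
    """Score change when a soldier at Manhattan distance d + 2 from the
    target jumps over a neighbour at distance d + 1 and lands at distance d:
    the landing square gains sigma**d, the two vacated squares lose theirs."""
    return sigma ** d - sigma ** (d + 1) - sigma ** (d + 2)
```

Numerically this is zero for every d, because it factors as σ^d(1 − σ − σ²); this invariance under the best possible jump is the heart of Conway's monotonicity argument.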
https://en.wikipedia.org/wiki/Join%20selection%20factor | Within computing, author O'Connell defines join selection factor as "[t]he percentage (or fraction) of records in one file that will be joined with records of another file". This can be calculated when two database tables are to be joined. It is primarily concerned with query optimization. |
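O'Connell's definition can be computed directly from two tables' join keys; a minimal sketch (the table contents here are hypothetical):

```python
def join_selection_factor(left_keys, right_keys):
    """Fraction of records in the left table that join with at least one
    record of the right table, per the definition quoted above."""
    right = set(right_keys)
    matched = sum(1 for k in left_keys if k in right)
    return matched / len(left_keys) if left_keys else 0.0
```

For example, if three of five order records reference a customer id present in the customer table, the join selection factor of orders against customers is 0.6; a query optimizer can use such estimates to size intermediate join results.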
https://en.wikipedia.org/wiki/Amacrine%20cell | Amacrine cells are interneurons in the retina. They are named from the Greek roots a– ("non"), makr– ("long") and in– ("fiber"), because of their short neuronal processes. Amacrine cells are inhibitory neurons that project their dendritic arbors onto the inner plexiform layer (IPL), where they interact with retinal ganglion cells, bipolar cells, or both.
Structure
Amacrine cells operate in the inner plexiform layer (IPL), the second synaptic retinal layer, where bipolar cells and retinal ganglion cells form synapses. There are at least 33 different subtypes of amacrine cells based just on their dendrite morphology and stratification. Like horizontal cells, amacrine cells work laterally, but whereas horizontal cells are connected to the output of rod and cone cells, amacrine cells affect the output from bipolar cells, and are often more specialized. Each type of amacrine cell releases one or several neurotransmitters where it connects with other cells.
They are often classified by the width of their field of connection, which layer(s) of the stratum in the IPL they are in, and by neurotransmitter type. Most are inhibitory using either gamma-Aminobutyric acid or glycine as neurotransmitters.
Types
As mentioned above, there are several different ways to divide the many different types of amacrine cells into subtypes.
GABAergic, glycinergic, or neither:
Amacrine cells can be either GABAergic, glycinergic or neither depending on what inhibitory neurotransmitter they express (GABA, glycine, or neither). GABAergic amacrine cells are usually wide field amacrine cells and are found in the ganglion cell layer (GCL) and the inner nuclear layer (INL). One type of GABAergic amacrine cell that is fairly well studied is the starburst amacrine cell. These amacrine cells are usually characterised by their expression of choline acetyltransferase, or ChAT, and are known to play a role in direction selectivity and detection of directional motion. Acetylcholine is also relea |
https://en.wikipedia.org/wiki/Gabibbo | Gabibbo is an Italian mascot for the Mediaset-controlled channel Canale 5, created in 1990 by Antonio Ricci. Gabibbo's main role has been in the programs Paperissima and Striscia la notizia, but he has appeared in several other Canale 5 programs. He is normally a jovial character known for his ability to make wisecracks and his overall humble demeanor. He speaks Italian with a Genoese accent, occasionally using Italianized Genoese words. In fact, the word Gabibbo or Gabibbu belongs to the Genoese language, where it is used to indicate, in an ironic, deprecative way, an immigrant from southern or even central Italy (it may originate from the Arabic epithet habib).
Western Kentucky Hilltoppers lawsuit
Gabibbo has also earned negative press surrounding a lawsuit brought by Western Kentucky University, which claims that Gabibbo is an exact copy of its mascot Big Red. Western Kentucky claims it has a case because, in an interview with Novella 2000, Ricci stated that he created Gabibbo after seeing Western Kentucky's mascot, adding that Gabibbo is in fact "(an imported) Big Red".
Ricci claims he was joking in that interview, saying that he made the remark after Novella pointed out the similarities between Big Red and Gabibbo. He goes on to say that there are "100 different mascots who look like Gabibbo", not just Big Red, adding that the Sesame Street characters Cookie Monster and Elmo also look like Gabibbo. However, Gabibbo's head is almost an exact copy of Big Red's (a fact repeated by The New York Times' business section, which quips that Gabibbo is simply a "better dressed" version of Big Red), and Big Red debuted in 1979, almost eleven years before Gabibbo.
On 12 December 2007, Gabibbo and Antonio Ricci were found not liable for infringement by the court of appeals in Milan. On 7 June 2018, the decision was overruled and remanded by the Italian Supreme Court. |
https://en.wikipedia.org/wiki/Chandrabindu | Chandrabindu (IAST: , in Sanskrit) is a diacritic sign with the form of a dot inside the lower half of a circle. It is used in the Devanagari (ँ), Bengali-Assamese (), Gujarati (ઁ), Odia (ଁ), Telugu (ఁ), Javanese ( ꦀ) and other scripts.
It usually means that the previous vowel is nasalized.
In Hindi, it is replaced in writing by anusvara when it is written above a consonant that carries a vowel symbol that extends above the top line.
In Classical Sanskrit, it seems to occur only over a lla conjunct consonant, to show that it is pronounced as a nasalized double l, which occurs if -nl- have become assimilated in sandhi.
In Vedic Sanskrit, it is used instead of anusvara to represent the sound anunasika when the next word starts with a vowel. It usually occurs where in earlier times a word ended in -ans.
Unicode
Unicode encodes chandrabindu and chandrabindu-like characters for a variety of scripts:
The COMBINING CANDRABINDU (U+0310) is a general-purpose combining diacritical mark intended for use with Latin letters in transliteration of Indic languages.
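A few of these codepoints can be checked with Python's unicodedata module; the subset of scripts shown is an illustrative assumption, not the full list encoded by Unicode:

```python
import unicodedata

# Chandrabindu codepoints in a few of the scripts mentioned above.
candrabindus = {
    "\u0901": "DEVANAGARI SIGN CANDRABINDU",
    "\u0981": "BENGALI SIGN CANDRABINDU",
    "\u0310": "COMBINING CANDRABINDU",   # the general-purpose combining mark
}
for ch, expected in candrabindus.items():
    assert unicodedata.name(ch) == expected
    assert unicodedata.category(ch) == "Mn"   # all are nonspacing marks
```

All of these carry general category Mn (nonspacing mark), so they render above or below the preceding base character rather than occupying their own cell.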
See also
Anusvara
Fermata |
https://en.wikipedia.org/wiki/Bistorta%20vivipara | Bistorta vivipara (synonym Persicaria vivipara) is a perennial herbaceous flowering plant in the knotweed and buckwheat family Polygonaceae, commonly known as alpine bistort. Scientific synonyms include Persicaria vivipara and Polygonum viviparum. It is common all over the high Arctic through Europe and North America, including Greenland, and in temperate and tropical Asia. Its range stretches further south in high mountainous areas such as the Alps, Carpathians, Pyrenees, Caucasus, Alaska and the Tibetan Plateau.
Taxonomy
Molecular phylogenetic work has demonstrated that the genus Bistorta represents a distinct lineage within the family Polygonaceae. The genus Bistorta contains at least 42 accepted species.
Description
Alpine bistort is a perennial herb that grows to tall. It has a thick rhizomatous rootstock and an erect, unbranched, hairless stem. The leaves are hairless on the upper surfaces, but hairy and greyish-green below. The basal ones are longish-elliptical with long stalks and rounded bases; the upper ones are few and are linear and stalkless. The tiny flowers are white or pink in the upper part of the spike with five perianth segments, eight stamens with purple anthers and three fused carpels. The lower ones are replaced by bulbils. Flowers rarely produce viable seeds and reproduction is normally by the bulbils, which are small bulb-like structures that develop in the axils of the leaves and may develop into new plants. Very often, a small leaf develops when the bulbil is still attached to the mother plant. The bulbils are rich in starch and are a preferred food for rock ptarmigans (Lagopus mutus) and reindeer; they are also occasionally used by Arctic peoples. Alpine bistort flowers in June and July.
Habitat
Alpine bistort grows in many different plant communities, very often in abundance. Typical habitats include moist short grassland, yards, the edges of tracks, and nutrient-rich fens.
As with many other alpine plants, Alpine bistort is slow-growing and |
https://en.wikipedia.org/wiki/APX | In computational complexity theory, the class APX (an abbreviation of "approximable") is the set of NP optimization problems that allow polynomial-time approximation algorithms with approximation ratio bounded by a constant (or constant-factor approximation algorithms for short). In simple terms, problems in this class have efficient algorithms that can find an answer within some fixed multiplicative factor of the optimal answer.
An approximation algorithm is called an -approximation algorithm for input size if it can be proven that the solution that the algorithm finds is at most a multiplicative factor of times worse than the optimal solution. Here, is called the approximation ratio. Problems in APX are those with algorithms for which the approximation ratio is a constant . The approximation ratio is conventionally stated as greater than 1. In the case of minimization problems, is the found solution's score divided by the optimum solution's score, while for maximization problems the reverse is the case. For maximization problems, where an inferior solution has a smaller score, is sometimes stated as less than 1; in such cases, the reciprocal of is the ratio of the score of the found solution to the score of the optimum solution.
A problem is said to have a polynomial-time approximation scheme (PTAS) if for every multiplicative factor of the optimum worse than 1 there is a polynomial-time algorithm to solve the problem to within that factor. Unless P = NP there exist problems that are in APX but without a PTAS, so the class of problems with a PTAS is strictly contained in APX. One such problem is the bin packing problem.
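The bin packing problem mentioned above also illustrates membership in APX itself: simple greedy heuristics achieve a constant approximation factor. A sketch of first-fit follows; the factor-2 bound stated in the comment is the standard textbook result, not a claim of this excerpt:

```python
def first_fit(items, capacity=1.0):
    """First-fit bin packing: place each item into the first open bin with
    room, opening a new bin when none fits. This uses at most twice the
    optimal number of bins, a constant-factor (APX-style) guarantee."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:                     # no existing bin had room
            bins.append([size])
    return bins
```

The factor-2 argument is short: at most one bin can be less than half full (otherwise two such bins would have been merged by first-fit), so the number of bins is less than twice the total item size, which lower-bounds the optimum.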
APX-hardness and APX-completeness
A problem is said to be APX-hard if there is a PTAS reduction from every problem in APX to that problem, and to be APX-complete if the problem is APX-hard and also in APX. As a consequence of P ≠ NP ⇒ PTAS ≠ APX, if P ≠ NP is assumed, no APX-hard problem has a PTAS. In practice, reducing one problem to ano |
https://en.wikipedia.org/wiki/Spacetime%20symmetries | Spacetime symmetries are features of spacetime that can be described as exhibiting some form of symmetry. The role of symmetry in physics is important in simplifying solutions to many problems. Spacetime symmetries are used in the study of exact solutions of Einstein's field equations of general relativity. Spacetime symmetries are distinguished from internal symmetries.
Physical motivation
Physical problems are often investigated and solved by noticing features which have some form of symmetry. For example, spherical symmetry is important in deriving the Schwarzschild solution and in deducing the physical consequences of this symmetry (such as the nonexistence of gravitational radiation in a spherically pulsating star). In cosmological problems, symmetry plays a role in the cosmological principle, which restricts the type of universes that are consistent with large-scale observations (e.g. the Friedmann–Lemaître–Robertson–Walker (FLRW) metric). Symmetries usually require some form of preserving property, the most important of which in general relativity include the following:
preserving geodesics of the spacetime
preserving the metric tensor
preserving the curvature tensor
These and other symmetries will be discussed below in more detail. This preservation property which symmetries usually possess (alluded to above) can be used to motivate a useful definition of these symmetries themselves.
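As a concrete instance of the metric-preserving case listed above, a vector field \(X\) whose local flow preserves the metric tensor satisfies Killing's equation (standard notation, assumed here rather than taken from this excerpt):

```latex
\mathcal{L}_X g_{ab} = \nabla_a X_b + \nabla_b X_a = 0
```

Here \(\mathcal{L}_X\) is the Lie derivative along \(X\) and \(\nabla\) the Levi-Civita connection; solutions \(X\) are the Killing vector fields of the spacetime.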
Mathematical definition
A rigorous definition of symmetries in general relativity has been given by Hall (2004). In this approach, the idea is to use (smooth) vector fields whose local flow diffeomorphisms preserve some property of the spacetime. (Note that one should emphasize in one's thinking this is a diffeomorphism—a transformation on a differential element. The implication is that the behavior of objects with extent may not be as manifestly symmetric.) This preserving property of the diffeomorphisms is made precise as follows. A |
https://en.wikipedia.org/wiki/White%20meat | In culinary terms, white meat is meat which is pale in color before and after cooking. In traditional gastronomy, white meat also includes rabbit, the flesh of milk-fed young mammals (in particular veal and lamb), and sometimes pork. In ecotrophology and nutritional studies, white meat includes poultry and fish, but excludes all mammal flesh, which is considered red meat.
Various factors have resulted in debate centering on the definition of white and red meat. Dark meat is used to describe darker-colored flesh. A common example is the lighter-colored meat of poultry (white meat), coming from the breast, as contrasted with darker-colored meat from the legs (dark meat). Certain types of poultry that are sometimes grouped as white meat are red when raw, such as duck and goose. Some types of fish, such as tuna, sometimes are red when raw and turn white when cooked.
Terminology
The terms white, red, light and dark applied to meat have varied and inconsistent meanings in different contexts. The term white meat in particular has caused confusion from oversimplification in scientific publications, misuse of the term in the popular press, and evolution of the term over decades. Some writers suggest avoiding the terms "red" and "white" altogether, instead classifying meat by objective characteristics such as myoglobin or heme iron content, lipid profile, fatty acid composition, cholesterol content, etc.
In nutritional studies, white meat may also include amphibians like frogs and land snails. Mammal flesh (e.g. beef, pork, goat, lamb, doe, rabbit) is excluded and considered to be red meat. Some researchers treat lean cuts of rabbit as an outlier and place them in the "white meat" category because they share certain nutritional similarities with poultry. Otherwise, nutritional and social studies popularly define "red meat" as coming from any mammal, "seafood" as coming from fish and shellfish, and "white meat" as coming from birds and other animals. |
https://en.wikipedia.org/wiki/Saxifraga%20oppositifolia | Saxifraga oppositifolia, the purple saxifrage or purple mountain saxifrage, is a species of plant that is very common in the high Arctic and also some high mountainous areas further south, including northern Britain, the Alps and the Rocky Mountains.
Saxifraga oppositifolia grows at a latitude of 83°40'N on Kaffeklubben Island, making it one of the northernmost plants in the world.
Description
Saxifraga oppositifolia is a low-growing, densely or loosely matted plant growing up to high, with somewhat woody branches of creeping or trailing habit close to the surface. The leaves are small, rounded, scale-like, opposite in four rows with ciliated margins. The flowers are solitary on short stalks, petals purple or lilac, much longer than the calyx lobes. It is one of the first spring flowers, continuing to flower during the whole summer in localities where the snow melts later. The flowers grow to about in diameter.
Ecology
Habitat
Saxifraga oppositifolia grows in all kinds of cold temperate to Arctic habitats, usually found from sea level up to , in many places colouring the landscape. Its native habitats include tundra, arctic coastal bluffs, alpine scree, and rock crevices.
Swiss botanist Christian Körner found the plant growing at an elevation of in the Swiss Alps, making it the highest-elevation angiosperm in Europe. It is even known to grow on Kaffeklubben Island in north Greenland, at , the most northerly plant locality in the world.
Species interactions
The flowers of Saxifraga oppositifolia may be consumed by certain animal species, such as the caterpillars of the cold-adapted Gynaephora groenlandica, the Arctic woolly-bear caterpillar.
Uses
Saxifraga oppositifolia is a popular plant in alpine gardens, though difficult to grow in warm climates.
The edible flower petals are eaten, particularly in parts of Nunavut without abundant berries. They are bitter at first but, after about one second, they become sweet. (They are also slightly sticky.) It is kno |
https://en.wikipedia.org/wiki/Pulmonary%20surfactant | Pulmonary surfactant is a surface-active complex of phospholipids and proteins formed by type II alveolar cells. The proteins and lipids that make up the surfactant have both hydrophilic and hydrophobic regions. By adsorbing to the air-water interface of alveoli, with hydrophilic head groups in the water and the hydrophobic tails facing towards the air, the main lipid component of surfactant, dipalmitoylphosphatidylcholine (DPPC), reduces surface tension.
As a medication, pulmonary surfactant is on the WHO Model List of Essential Medicines, the most important medications needed in a basic health system.
Function
To increase pulmonary compliance.
To prevent atelectasis (collapse of the alveoli or atriums) at the end of expiration.
To facilitate recruitment of collapsed airways.
Alveoli can be compared to gas bubbles in water, as the alveoli are wet and surround a central air space. The surface tension acts at the air-water interface and tends to make the bubble smaller (by decreasing the surface area of the interface). The gas pressure (P) needed to keep an equilibrium between the collapsing force of surface tension (γ) and the expanding force of gas in an alveolus of radius r is expressed by the Young–Laplace equation:
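The equation referred to above (its symbols did not survive extraction) takes, for a spherical interface, the standard form:

```latex
P = \frac{2\gamma}{r}
```

As an illustrative calculation (the radius is an assumed round number, not from this excerpt): with a water-like surface tension γ = 70 mN/m and r = 100 µm, P = 2 × 0.070 / 10⁻⁴ ≈ 1.4 kPa. Because P grows as r shrinks, small alveoli would need higher pressure to stay open; surfactant lowers γ and thus reduces this pressure.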
Compliance
Compliance is the ability of lungs and thorax to expand.
Lung compliance is defined as the volume change per unit of pressure change across the lung. Measurements of lung volume obtained during the controlled inflation/deflation of a normal lung show that the volumes obtained during deflation exceed those during inflation, at a given pressure. This difference in inflation and deflation volumes at a given pressure is called hysteresis and is due to the air-water surface tension that occurs at the beginning of inflation. However, surfactant decreases the alveolar surface tension, as seen in cases of premature infants with infant respiratory distress syndrome. The normal surface tension for water is 70 dyn/cm (70 mN/m) and in the lungs, it |
https://en.wikipedia.org/wiki/Tibialis%20anterior%20muscle | The tibialis anterior muscle is a muscle of the anterior compartment of the lower leg. It originates from the upper portion of the tibia; it inserts into the medial cuneiform and first metatarsal bones of the foot. It acts to dorsiflex and invert the foot. This muscle is mostly located near the shin.
It is situated on the lateral side of the tibia; it is thick and fleshy above, tendinous below. The tibialis anterior overlaps the anterior tibial vessels and deep peroneal nerve in the upper part of the leg.
Structure
The tibialis anterior muscle is the most medial muscle of the anterior compartment of the leg.
The muscle ends in a tendon which is apparent on the anteromedial dorsal aspect of the foot close to the ankle. Its tendon is ensheathed in a synovial sheath. The tendon passes through the medial compartment of the superior and inferior extensor retinacula of the foot.
Origin
The tibialis anterior muscle arises from the upper 2/3 of the lateral surface of the tibia and the adjoining part of the interosseous membrane and deep fascia overlying it, and the intermuscular septum between this muscle and the extensor digitorum longus.
Insertion
It is inserted into the medial and inferior surface of the medial cuneiform bone, and the adjacent portion of the first metatarsal bone.
Nerve supply
The tibialis anterior muscle is innervated by the deep fibular nerve, and recurrent genicular nerve (L4).
Variation
A deep portion of the muscle is rarely inserted into the talus, or a tendinous slip may pass to the head of the first metatarsal bone or the base of the first phalanx of the great toe.
The tibiofascialis anterior is a small muscle running from the lower part of the tibia to the transverse or cruciate crural ligaments or to the deep fascia.
Actions/movements
The muscle acts to dorsiflex and invert the foot. It is the largest dorsiflexor of the foot. The muscle also contributes to deceleration.
Function
The muscle helps maintain the medial longitudinal arch of the foot. It draws u |
https://en.wikipedia.org/wiki/G%20protein-gated%20ion%20channel | G protein-gated ion channels are a family of transmembrane ion channels in neurons and atrial myocytes that are directly gated by G proteins.
Overview of mechanisms and function
Generally, G protein-gated ion channels are specific ion channels located in the plasma membrane of cells that are directly activated by a family of associated proteins. Ion channels allow for the selective movement of certain ions across the plasma membrane in cells. More specifically, in nerve cells, along with ion transporters, they are responsible for maintaining the electrochemical gradient across the cell.
G proteins are a family of intracellular proteins capable of mediating signal transduction pathways. Each G protein is a heterotrimer of three subunits: α-, β-, and γ-subunits. The α-subunit (Gα) typically binds the G protein to a transmembrane receptor protein known as a G protein-coupled receptor, or GPCR. This receptor protein has a large, extracellular binding domain which will bind its respective ligands (e.g. neurotransmitters and hormones). Once the ligand binds, the receptor undergoes a conformational change that activates the G protein and allows Gα to bind GTP. This leads to yet another conformational change in the G protein, resulting in the separation of the βγ-complex (Gβγ) from Gα. At this point, both Gα and Gβγ are active and able to continue the signal transduction pathway. Different classes of G protein-coupled receptors have many known functions, including the cAMP and phosphatidylinositol signal transduction pathways. A class known as metabotropic glutamate receptors plays a large role in indirect ion channel activation by G proteins. These pathways are activated by second messengers which initiate signal cascades involving various proteins which are important to the cell's response.
G protein-gated ion channels are associated with a specific type of G protein-coupled receptor. These ion channels are transmembrane ion channels with se |
https://en.wikipedia.org/wiki/Adaptive%20control | Adaptive control is the control method used by a controller which must adapt to a controlled system with parameters which vary, or are initially uncertain.<ref name=CMP-AC-T-01>{{cite journal|author=Chengyu Cao, Lili Ma, Yunjun Xu|title=Adaptive Control Theory and Applications|journal=Journal of Control Science and Engineering|volume=2012|issue=1|year=2012|doi=10.1155/2012/827353|pages=1,2|doi-access=free}}</ref> For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control is different from robust control in that it does not need a priori information about the bounds on these uncertain or time-varying parameters; robust control guarantees that if the changes are within given bounds the control law need not be changed, while adaptive control is concerned with the control law changing itself.
Parameter estimation
The foundation of adaptive control is parameter estimation, which is a branch of system identification. Common methods of estimation include recursive least squares and gradient descent. Both of these methods provide update laws that are used to modify estimates in real time (i.e., as the system operates). Lyapunov stability is used to derive these update laws and show convergence criteria (typically persistent excitation; relaxations of this condition are studied in concurrent learning adaptive control). Projection and normalization are commonly used to improve the robustness of estimation algorithms.
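As an illustration of the recursive least squares update mentioned above, here is a minimal single-parameter sketch; the scalar model y = θx, the forgetting factor λ, and the initial value p0 are illustrative choices, not from the text:

```python
def rls_scalar(samples, lam=0.98, p0=1000.0):
    """Recursive least squares for a single unknown gain theta in y = theta*x.

    lam is the exponential forgetting factor (lam = 1 gives ordinary
    growing-window least squares); p0 is the initial inverse correlation,
    expressing how little the initial estimate theta = 0 is trusted.
    """
    theta, p = 0.0, p0
    for x, y in samples:
        k = p * x / (lam + x * p * x)   # gain for this sample
        theta += k * (y - x * theta)    # correct estimate by prediction error
        p = (p - k * x * p) / lam       # update inverse correlation
    return theta
```

With noise-free data generated by a fixed gain, the estimate converges to that gain within a handful of samples, and no matrix inversion appears anywhere, which is the usual selling point of the recursive form.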
Classification of adaptive control techniques
In general, one should distinguish between:
Feedforward adaptive control
Feedback adaptive control
as well as between
Direct methods
Indirect methods
Hybrid methods
Direct methods are ones wherein the estimated parameters are those directly used in the adaptive controller. In contrast, indirect methods are those in which the estimated parameters are used to calculate required controller par |
https://en.wikipedia.org/wiki/Tin%28II%29%20chloride | Tin(II) chloride, also known as stannous chloride, is a white crystalline solid with the formula SnCl2. It forms a stable dihydrate, but aqueous solutions tend to undergo hydrolysis, particularly if hot. SnCl2 is widely used as a reducing agent (in acid solution), and in electrolytic baths for tin-plating. Tin(II) chloride should not be confused with the other chloride of tin; tin(IV) chloride or stannic chloride (SnCl4).
Chemical structure
SnCl2 has a lone pair of electrons, such that the molecule in the gas phase is bent. In the solid state, crystalline SnCl2 forms chains linked via chloride bridges. The dihydrate is also three-coordinate, with one water coordinated to the tin and a second water coordinated to the first. The main part of the molecule stacks into double layers in the crystal lattice, with the "second" water sandwiched between the layers.
Chemical properties
Tin(II) chloride can dissolve in less than its own mass of water without apparent decomposition, but as the solution is diluted, hydrolysis occurs to form an insoluble basic salt:
SnCl2 (aq) + H2O (l) ⇌ Sn(OH)Cl (s) + HCl (aq)
Therefore, if clear solutions of tin(II) chloride are to be used, it must be dissolved in hydrochloric acid (typically of the same or greater molarity as the stannous chloride) to maintain the equilibrium towards the left-hand side (using Le Chatelier's principle). Solutions of SnCl2 are also unstable towards oxidation by the air:
6 SnCl2 (aq) + O2 (g) + 2 H2O (l) → 2 SnCl4 (aq) + 4 Sn(OH)Cl (s)
This can be prevented by storing the solution over lumps of tin metal.
There are many such cases where tin(II) chloride acts as a reducing agent, reducing silver and gold salts to the metal, and iron(III) salts to iron(II), for example:
SnCl2 (aq) + 2 FeCl3 (aq) → SnCl4 (aq) + 2 FeCl2 (aq)
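The stoichiometry of the two redox equations above can be checked mechanically by totalling atoms on each side; the dictionaries below are just the formulas from the text written out element by element:

```python
from collections import Counter

def atom_totals(side):
    """Total atom counts for one side of a reaction,
    given as (coefficient, composition-dict) pairs."""
    totals = Counter()
    for coeff, comp in side:
        for element, n in comp.items():
            totals[element] += coeff * n
    return totals

SnCl2 = {"Sn": 1, "Cl": 2}
SnCl4 = {"Sn": 1, "Cl": 4}
H2O = {"H": 2, "O": 1}
O2 = {"O": 2}
SnOHCl = {"Sn": 1, "O": 1, "H": 1, "Cl": 1}  # the basic salt Sn(OH)Cl
FeCl3 = {"Fe": 1, "Cl": 3}
FeCl2 = {"Fe": 1, "Cl": 2}

# 6 SnCl2 + O2 + 2 H2O -> 2 SnCl4 + 4 Sn(OH)Cl
oxidation_balanced = (atom_totals([(6, SnCl2), (1, O2), (2, H2O)])
                      == atom_totals([(2, SnCl4), (4, SnOHCl)]))

# SnCl2 + 2 FeCl3 -> SnCl4 + 2 FeCl2
reduction_balanced = (atom_totals([(1, SnCl2), (2, FeCl3)])
                      == atom_totals([(1, SnCl4), (2, FeCl2)]))
```

Both checks come out balanced, matching the equations as printed.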
It also reduces copper(II) to copper(I).
Solutions of tin(II) chloride can also serve simply as a source of Sn2+ ions, which can form other tin(II) compounds via precipitation reactions. For example, rea |
https://en.wikipedia.org/wiki/Biological%20Innovation%20for%20Open%20Society | BiOS (Biological Open Source/Biological Innovation for Open Society) is an international initiative to foster innovation and freedom to operate in the biological sciences. BiOS was officially launched on 10 February 2005 by Cambia, an independent, international non-profit organization dedicated to democratizing innovation. Its intention is to initiate new norms and practices for creating tools for biological innovation, using binding covenants to protect and preserve their usefulness, while allowing diverse business models for the application of these tools.
As described by Richard Anthony Jefferson, CEO of Cambia, the organization's Deputy CEO, Dr Marie Connett, worked extensively with small companies, university offices of technology transfer, attorneys, and multinational corporations to create a platform to share productive and sustainable technology. The parties developed the BiOS Material Transfer Agreement (MTA) and the BiOS license as legal instruments to facilitate these goals.
Biological Open Source
Traditionally, the term 'open source' describes a paradigm for software development associated with a set of collaborative innovation practices, which ensure access to the end product's source materials - typically, source code. The BiOS Initiative has sought to extend this concept to the biological sciences, and agricultural biotechnology in particular. BiOS is founded on the concept of sharing scientific tools and platforms so that innovation can occur at the 'application layer.' Jefferson observes that, 'Freeing up the tools that make new discoveries possible will spur a new wave of innovation that has real value.' He notes further that, 'Open source is an enormously powerful tool for driving efficiency.'
Through BiOS instruments, licensees cannot appropriate the fundamental kernel of a technology and improvements exclusively for themselves. The base technology remains the property of whichever entity developed it, but improvements can be shared with others that |
https://en.wikipedia.org/wiki/Sid%20the%20Slug | Sid the Slug is an advertising character created by the Food Standards Agency (FSA) in the United Kingdom in 2004 as the mascot of the "Salt - Watch it" campaign to warn the public of the risks of excessive salt consumption.
The multimedia campaign, including advertising hoardings, television commercials and Internet coverage, was based on the premise that salt kills slugs, and can harm humans too. The Salt Manufacturers' Association filed a complaint to the Advertising Standards Authority, their complaint being that the information presented was misleading. The Advertising Standards Authority did not uphold the SMA complaint in its adjudication.
The ASA had to deal with another complaint from a member of the public, that the use of the name "Sid" was offensive; this was also rejected, with the ASA instead arguing that most people would find it "humorous".
A member of the public complained to the FSA that the Welsh subtitles in the "Sid the Slug" TV advertisements meant the FSA was not treating English and Welsh equally, as is required by the FSA Welsh Language Scheme. The FSA replied that the animation could not have been dubbed into Welsh successfully, hence the subtitles. However, the FSA accepted that it had not complied with advertising conduct, as set by the Welsh Language Board. |
https://en.wikipedia.org/wiki/Relativistic%20mechanics | In physics, relativistic mechanics refers to mechanics compatible with special relativity (SR) and general relativity (GR). It provides a non-quantum mechanical description of a system of particles, or of a fluid, in cases where the velocities of moving objects are comparable to the speed of light c. As a result, classical mechanics is extended correctly to particles traveling at high velocities and energies, and provides a consistent inclusion of electromagnetism with the mechanics of particles. This was not possible in Galilean relativity, where it would be permitted for particles and light to travel at any speed, including faster than light. The foundations of relativistic mechanics are the postulates of special relativity and general relativity. The unification of SR with quantum mechanics is relativistic quantum mechanics, while attempts for that of GR is quantum gravity, an unsolved problem in physics.
As with classical mechanics, the subject can be divided into "kinematics", the description of motion by specifying positions, velocities and accelerations, and "dynamics", a full description by considering energies, momenta, and angular momenta and their conservation laws, and forces acting on particles or exerted by particles. There is, however, a subtlety: what appears to be "moving" and what is "at rest"—the subject of "statics" in classical mechanics—depends on the relative motion of observers who measure in frames of reference.
Although some definitions and concepts from classical mechanics do carry over to SR, such as force as the time derivative of momentum (Newton's second law), the work done by a particle as the line integral of force exerted on the particle along a path, and power as the time derivative of work done, there are a number of significant modifications to the remaining definitions and formulae. SR states that motion is relative and the laws of physics are the same for all experimenters irrespective of their inertial reference frames. In |
https://en.wikipedia.org/wiki/Altruism%20%28biology%29 | In biology, altruism refers to behaviour by an individual that increases the fitness of another individual while decreasing the fitness of themselves. Altruism in this sense is different from the philosophical concept of altruism, in which an action would only be called "altruistic" if it was done with the conscious intention of helping another. In the behavioural sense, there is no such requirement. As such, it is not evaluated in moral terms—it is the consequences of an action for reproductive fitness that determine whether the action is considered altruistic, not the intentions, if any, with which the action is performed.
The term altruism was coined by the French philosopher Auguste Comte in French, as altruisme, for an antonym of egoism. He derived it from the Italian altrui, which in turn was derived from Latin alteri, meaning "other people" or "somebody else".
Altruistic behaviours appear most obviously in kin relationships, such as in parenting, but may also be evident among wider social groups, such as in social insects. They allow an individual to increase the success of its genes by helping relatives that share those genes. Obligate altruism is the permanent loss of direct fitness (with potential for indirect fitness gain). For example, honey bee workers may forage for the colony. Facultative altruism is temporary loss of direct fitness (with potential for indirect fitness gain followed by personal reproduction). For example, a Florida scrub jay may help at the nest, then gain parental territory.
Overview
In ethology (the study of behavior), and more generally in the study of social evolution, on occasion, some animals do behave in ways that reduce their individual fitness but increase the fitness of other individuals in the population; this is a functional definition of altruism. Research in evolutionary theory has been applied to social behaviour, including altruism. Cases of animals helping individuals to whom they are closely related can be explai |
https://en.wikipedia.org/wiki/Planar%20chirality | Planar chirality, also known as 2D chirality, is the special case of chirality for two dimensions.
Most fundamentally, planar chirality is a mathematical term, finding use in chemistry, physics and related physical sciences, for example, in astronomy, optics and metamaterials. Recent occurrences in the latter two fields are dominated by microwave and terahertz applications as well as micro- and nanostructured planar interfaces for infrared and visible light.
In chemistry
This term is used in chemistry contexts, e.g., for a chiral molecule lacking an asymmetric carbon atom, but possessing two non-coplanar rings that are each dissymmetric and which cannot easily rotate about the chemical bond connecting them: 2,2'-dimethylbiphenyl is perhaps the simplest example of this case. Planar chirality is also exhibited by molecules like (E)-cyclooctene, some di- or poly-substituted metallocenes, and certain monosubstituted paracyclophanes. Nature rarely provides planar chiral molecules, cavicularin being an exception.
Assigning the configuration of planar chiral molecules
To assign the configuration of a planar chiral molecule, begin by selecting the pilot atom, which is the highest-priority atom that is not in the plane but is directly attached to an atom in the plane. Next, assign the priority of the three adjacent in-plane atoms, starting with the atom attached to the pilot atom as priority 1, and preferentially assigning in order of highest priority if there is a choice. Then view the molecule with the pilot atom in front of the three atoms in question. If the three atoms trace a clockwise direction when followed in order of priority, the molecule is assigned as R; when counterclockwise, it is assigned as S.
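The viewing-direction test described above can be sketched numerically: the sign of the scalar triple product of the vectors from the pilot atom to the three prioritized in-plane atoms tells whether they turn clockwise as seen from the pilot. The sign-to-descriptor mapping below is an assumed convention for illustration only, not an authoritative CIP implementation:

```python
def planar_chirality_label(pilot, a1, a2, a3):
    """Toy planar-chirality descriptor from 3D coordinates.

    pilot is the out-of-plane pilot atom; a1..a3 are the three in-plane
    atoms in decreasing priority. The sign of the scalar triple product
    of the pilot-to-atom vectors encodes the sense of rotation seen from
    the pilot; the mapping of that sign to "R"/"S" is an assumption here.
    """
    def sub(u, v):
        return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

    v1, v2, v3 = sub(a1, pilot), sub(a2, pilot), sub(a3, pilot)
    det = (v1[0] * (v2[1] * v3[2] - v2[2] * v3[1])
           - v1[1] * (v2[0] * v3[2] - v2[2] * v3[0])
           + v1[2] * (v2[0] * v3[1] - v2[1] * v3[0]))
    return "R" if det > 0 else "S"
```

Reflecting the three in-plane atoms through a mirror plane flips the sign of the triple product, so the two mirror images receive opposite descriptors, as required.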
In optics and metamaterials
Chiral diffraction
Papakostas et al. observed in 2003 that planar chirality affects the polarization of light diffracted by arrays of planar chiral microstructures, where large polarization changes of opposite sign were detected in light dif |
https://en.wikipedia.org/wiki/Computerized%20physician%20order%20entry | Computerized physician order entry (CPOE), sometimes referred to as computerized provider order entry or computerized provider order management (CPOM), is a process of electronic entry of medical practitioner instructions for the treatment of patients (particularly hospitalized patients) under his or her care.
The entered orders are communicated over a computer network to the medical staff or to the departments (pharmacy, laboratory, or radiology) responsible for fulfilling the order. CPOE reduces the time it takes to distribute and complete orders, increases efficiency by reducing transcription errors and preventing duplicate order entry, and simplifies inventory management and billing.
CPOE is a form of patient management software.
Required data
In a graphical representation of an order sequence, specific data should be presented to CPOE system staff in cleartext, including:
identity of the patient
role of required member of staff
resources, materials and medication applied
procedures to be performed
operational sequence to be obeyed
feedback to be noted
case specific documentation to build
Some textual data can be reduced to simple graphics.
CPOE related terminology
CPOE systems use terminology familiar to medical and nursing staff, but there are different terms used to classify and concatenate orders. The following items are examples of additional terminology that a CPOE system programmer might need to know:
Filler
The application responding to, i.e., performing, a request for services (orders) or producing an observation. The filler can also originate requests for services (new orders), add additional services to existing orders, replace existing orders, put an order on hold, discontinue an order, release a held order, or cancel existing orders.
Order
A request for a service from one application to a second application. In some cases an application is allowed to place orders with itself.
Order detail segment
One of several seg |
https://en.wikipedia.org/wiki/Axial%20chirality | In chemistry, axial chirality is a special case of chirality in which a molecule contains two pairs of chemical groups in a non-planar arrangement about an axis of chirality so that the molecule is not superposable on its mirror image. The axis of chirality (or chiral axis) is usually determined by a chemical bond that is constrained against free rotation either by steric hindrance of the groups, as in substituted biaryl compounds such as BINAP, or by torsional stiffness of the bonds, as in the C=C double bonds in allenes such as glutinic acid. Axial chirality is most commonly observed in substituted biaryl compounds wherein the rotation about the aryl–aryl bond is restricted so it results in chiral atropisomers, as in various ortho-substituted biphenyls, and in binaphthyls such as BINAP.
Axial chirality differs from central chirality (point chirality) in that axial chirality does not require a chiral center such as an asymmetric carbon atom, the most common form of chirality in organic compounds. Bonding to asymmetric carbon has the form Cabcd where a, b, c, and d must be distinct groups. Allenes have the form and the groups need not all be distinct as long as groups in each pair are distinct: abC=C=Cab is sufficient for the compound to be chiral, as in penta-2,3-dienedioic acid. Similarly, chiral atropisomers of the form may have some identical groups (), as in BINAP.
Nomenclature
The enantiomers of axially chiral compounds are usually given the stereochemical labels (Ra) and (Sa), sometimes abbreviated (R) and (S). The designations are based on the same Cahn–Ingold–Prelog priority rules used for tetrahedral stereocenters. The chiral axis is viewed end-on and the two "near" and two "far" substituents on the axial unit are ranked, but with the additional rule that the two near substituents have higher priority than the far ones.
Helical chirality
The chirality of a molecule that has a helical, propeller, or screw-shaped geometry is called helicity or helical |
https://en.wikipedia.org/wiki/Microstrip | Microstrip is a type of electrical transmission line which can be fabricated with any technology where a conductor is separated from a ground plane by a dielectric layer known as "substrate". Microstrip lines are used to convey microwave-frequency signals.
Typical realisation technologies are printed circuit board (PCB), alumina coated with a dielectric layer or sometimes silicon or some other similar technologies. Microwave components such as antennas, couplers, filters, power dividers etc. can be formed from microstrip, with the entire device existing as the pattern of metallization on the substrate. Microstrip is thus much less expensive than traditional waveguide technology, as well as being far lighter and more compact. Microstrip was developed by ITT laboratories as a competitor to stripline (first published by Grieg and Engelmann in the December 1952 IRE proceedings).
The disadvantages of microstrip compared with waveguide are the generally lower power handling capacity, and higher losses. Also, unlike waveguide, microstrip is typically not enclosed, and is therefore susceptible to cross-talk and unintentional radiation.
For lowest cost, microstrip devices may be built on an ordinary FR-4 (standard PCB) substrate. However, it is often found that the dielectric losses in FR-4 are too high at microwave frequencies, and that the dielectric constant is not sufficiently tightly controlled. For these reasons, an alumina substrate is commonly used. From a monolithic integration perspective, microstrips fabricated with integrated circuit/monolithic microwave integrated circuit technologies might be feasible; however, their performance might be limited by the dielectric layer(s) and conductor thickness available.
Microstrip lines are also used in high-speed digital PCB designs, where signals need to be routed from one part of the assembly to another with minimal distortion, and avoiding high cross-talk and radiation.
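As a rough illustration of how substrate permittivity and trace geometry set a microstrip line's behaviour, the widely quoted Hammerstad closed-form approximation for characteristic impedance can be sketched as follows. This is a few-percent-accurate sketch under common simplifying assumptions (zero conductor thickness), not a design-grade tool:

```python
import math

def microstrip_z0(w_over_h, eps_r):
    """Approximate characteristic impedance (ohms) of a microstrip line,
    per the commonly cited Hammerstad formulas. w_over_h is trace width
    over substrate height; eps_r is the substrate relative permittivity."""
    u = w_over_h
    # Effective permittivity of the mixed air/substrate field region.
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 / u)
    if u <= 1:
        return 60 / math.sqrt(eps_eff) * math.log(8 / u + u / 4)
    return (120 * math.pi
            / (math.sqrt(eps_eff) * (u + 1.393 + 0.667 * math.log(u + 1.444))))
```

For FR-4 (eps_r around 4.4), a trace about twice as wide as the substrate is thick comes out near 50 ohms, which is why that geometry is so common on low-cost boards; narrower traces give higher impedance.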
Microstrip is one of many forms of planar transmission line, |
https://en.wikipedia.org/wiki/Tight%20junction | Tight junctions, also known as occluding junctions or zonulae occludentes (singular, zonula occludens), are multiprotein junctional complexes whose canonical function is to prevent leakage of solutes and water and to seal the space between epithelial cells. They also play a critical role in maintaining the structure and permeability of endothelial cells. Tight junctions may also serve as leaky pathways by forming selective channels for small cations, anions, or water. The corresponding junctions that occur in invertebrates are septate junctions.
Structure
Tight junctions are composed of a branching network of sealing strands, each strand acting independently from the others. Therefore, the efficiency of the junction in preventing ion passage increases exponentially with the number of strands.
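The independence argument above can be made concrete with a toy model: if each sealing strand independently lets a given ion through with some probability, then n strands in series let it through with that probability raised to the n-th power, so blocking efficiency grows exponentially with strand count. The per-strand probability below is purely illustrative:

```python
def junction_pass_probability(p_strand, n_strands):
    """Chance an ion crosses every sealing strand, assuming each strand
    independently lets it through with probability p_strand."""
    return p_strand ** n_strands
```

Going from 2 to 4 strands takes the leak probability from p^2 to p^4: with p = 0.1, that is a drop from 1% to 0.01%.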
Each strand is formed from a row of transmembrane proteins embedded in both plasma membranes, with extracellular domains joining one another directly. There are at least 40 different proteins composing the tight junctions. These proteins consist of both transmembrane and cytoplasmic proteins. The three major transmembrane proteins are occludin, claudins, and junction adhesion molecule (JAM) proteins. These associate with different peripheral membrane proteins such as ZO-1 located on the intracellular side of plasma membrane, which anchor the strands to the actin component of the cytoskeleton. Thus, tight junctions join together the cytoskeletons of adjacent cells.
Transmembrane proteins:
Occludin was the first integral membrane protein to be identified. It has a molecular weight of ~60kDa. It consists of four transmembrane domains and both the N-terminus and the C-terminus of the protein are intracellular. It forms two extracellular loops and one intracellular loop. These loops help regulate paracellular permeability. Occludin also plays a key role in cellular structure and barrier function.
Claudins were discovered after occludin and are a family of over 27 different members in |
https://en.wikipedia.org/wiki/Brian%20Martin%20%28social%20scientist%29 | Brian Martin (born 1947) is a social scientist in the School of Humanities and Social Inquiry, Faculty of Arts, Social Sciences and Humanities, at the University of Wollongong (UOW) in NSW, Australia. He was appointed a professor at the university in 2007, and in 2017 was appointed emeritus professor. His work is in the fields of peace research, scientific controversies, science and technology studies, sociology, political science, media studies, law, journalism, freedom of speech, education and corrupted institutions, as well as research on whistleblowing and dissent in the context of science. Martin was president of Whistleblowers Australia from 1996 to 1999 and remains their International Director. He has been criticized by medical professionals and public health advocates for promoting the disproven oral polio vaccine AIDS hypothesis and supporting vaccine hesitancy in the context of his work.
Martin has spoken at a British Science Association Festival of Science, and testified at the Australian Federal Senate's Inquiry into Academic Freedom. The crustacean Polycheles martini was named after him.
Research and academia
Martin was born in the United States in 1947 and raised in Tulsa, Oklahoma. He earned a BA in physics at Rice University in Texas in 1969, and, seeking to avoid conscription into the Vietnam War, emigrated to Australia, where he earned a PhD in physics at the University of Sydney in 1976.
Martin's original academic field was theoretical physics, and he worked in both stratospheric modelling and numerical methods during his career. He has published extensively about the social dynamics and politicisation of controversial scientific topics. His topics of inquiry have included the globalization of polarised science such as the origin of HIV/AIDS. He argues that there are situations in which scientific research that threatens vested interests can be suppressed. He describes a number of direct and indirect mechanisms through which he argues that this |
https://en.wikipedia.org/wiki/DYSEAC | DYSEAC was the second Standards Electronic Automatic Computer. (See SEAC.)
DYSEAC was a first-generation computer built by the National Bureau of Standards for the U.S. Army Signal Corps. It was housed in a truck, making it one of the first movable computers (perhaps the first). It went into operation in April 1954.
DYSEAC used 900 vacuum tubes and 24,500 crystal diodes. It had a memory of 512 words of 45 bits each (plus 1 parity bit), using mercury delay-line memory. Memory access time was 48–384 microseconds. The addition time was 48 microseconds, and the multiplication/division time was 2112 microseconds. These times exclude the memory-access time, which could add up to approximately 1500 microseconds to those times.
DYSEAC weighed about .
See also
SEAC
List of vacuum-tube computers |
https://en.wikipedia.org/wiki/FR-2 | FR-2 (Flame Resistant 2) is a NEMA designation for synthetic resin bonded paper, a composite material made of paper impregnated with a plasticized phenol formaldehyde resin, used in the manufacture of printed circuit boards. Its main properties are similar to NEMA grade XXXP (MIL-P-3115) material, and can be substituted for the latter in many applications.
Applications
FR-2 sheet with copper foil lamination on one or both sides is widely used to build low-end consumer electronic equipment. While its electrical and mechanical properties are inferior to those of epoxy-bonded fiberglass, FR-4, it is significantly cheaper. It is not suitable for devices installed in vehicles, as continuous vibration can make cracks propagate, causing hairline fractures in copper circuit traces. Without copper foil lamination, FR-2 is sometimes used for simple structural shapes and electrical insulation.
Properties
Fabrication
FR-2 can be machined by drilling, sawing, milling and hot punching. Cold punching and shearing are not recommended, as they leave a ragged edge and tend to cause cracking. Tools made of high-speed steel can be used, although tungsten carbide tooling is preferred for high volume production.
Adequate ventilation or respiratory protection is mandatory during high-speed machining, as the material gives off toxic vapors.
Trade names and synonyms
Carta
Haefelyt
Lamitex
Paxolin, Paxoline
Pertinax, taken over by Lamitec and Dr. Dietrich Müller GmbH in 2014
Getinax (in the Ex-USSR)
Phenolic paper
Preßzell
Repelit
Synthetic resin bonded paper (SRBP)
Turbonit
Veroboard
Wahnerit
See also
Formica (plastic)
Micarta |
https://en.wikipedia.org/wiki/MANIAC%20II | The MANIAC II (Mathematical Analyzer Numerical Integrator and Automatic Computer Model II) was a first-generation electronic computer, built in 1957 for use at Los Alamos Scientific Laboratory.
MANIAC II was built by the University of California and the Los Alamos Scientific Laboratory, completed in 1957 as a successor to MANIAC I. It used 2,850 vacuum tubes and 1,040 semiconductor diodes in the arithmetic unit. Overall it used 5,190 vacuum tubes, 3,050 semiconductor diodes, and 1,160 transistors.
It had 4,096 words of memory in Magnetic-core memory (with 2.4 microsecond access time), supplemented by 12,288 words of memory using Williams tubes (with 15 microsecond access time). The word size was 48 bits. Its average multiplication time was 180 microseconds and the average division time was 300 microseconds.
By the time of its decommissioning, the computer was all solid-state, using a combination of RTL, DTL and TTL. It had an array multiplier, 15 index registers, 16K of 6-microsecond cycle time core memory, and 64K of 2-microsecond cycle time core memory. A NOP instruction took about 2.5 microseconds. A multiplication took 8 microseconds and a division 25 microseconds. It had a paging unit using 1K word pages with an associative 16-deep lookup memory. A 1-megaword CDC drum was hooked up as a paging device. It also had several ADDS Special-Order Direct-View Storage-Tube terminals. These terminals used an extended character set which covered nearly all the mathematical symbols, and allowed for half-line spacing for math formulas.
For I/O, it had two IBM 360 series nine-track and two seven-track 1/2" tape drives. It had an eight-bit paper-tape reader and punch, and a 500 line-per-minute printer (1500 line-per-minute using the hexadecimal character set). Storage was three IBM 7000 series 1301 disk drives, each having two modules of 21.6 million characters apiece.
One of the data products of MANIAC II was the table of numbers appearing in the book The 3-j and 6-j S |
https://en.wikipedia.org/wiki/Multi-channel%20memory%20architecture | In the fields of digital electronics and computer hardware, multi-channel memory architecture is a technology that increases the data transfer rate between the DRAM memory and the memory controller by adding more channels of communication between them. Theoretically, this multiplies the data rate by exactly the number of channels present. Dual-channel memory employs two channels. The technique goes back as far as the 1960s having been used in IBM System/360 Model 91 and in CDC 6600.
Modern high-end desktop and workstation processors such as the AMD Ryzen Threadripper series and the Intel Core i9 Extreme Edition lineup support quad-channel memory. Server processors from the AMD Epyc series and the Intel Xeon platforms support memory layouts ranging from quad-channel up to octa-channel. In March 2010, AMD released Socket G34 and Magny-Cours Opteron 6100 series processors with support for quad-channel memory. In 2006, Intel released chipsets that support quad-channel memory for its LGA771 platform and later in 2011 for its LGA2011 platform. Microcomputer chipsets with even more channels were designed; for example, the chipset in the AlphaStation 600 (1995) supports eight-channel memory, but the backplane of the machine limited operation to four channels.
Dual-channel architecture
Dual-channel-enabled memory controllers in a PC system architecture use two 64-bit data channels. Dual-channel should not be confused with double data rate (DDR), in which data exchange happens twice per DRAM clock. The two technologies are independent of each other, and many motherboards use both by using DDR memory in a dual-channel configuration.
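The theoretical scaling described above is easy to make concrete: peak bandwidth is transfers per second times bytes per transfer times the number of channels. A minimal sketch follows; the DDR4-3200 figures are illustrative assumptions, and measured throughput is always below these peaks.

```python
# Theoretical peak bandwidth of a multi-channel memory configuration:
# transfers/s x bytes per transfer x number of channels.
# DDR4-3200 numbers here are illustrative; real throughput is lower.

def peak_bandwidth_gb_s(transfers_per_sec: float, bus_width_bits: int,
                        channels: int) -> float:
    """Peak transfer rate in GB/s for the given channel count."""
    bytes_per_transfer = bus_width_bits // 8
    return transfers_per_sec * bytes_per_transfer * channels / 1e9

single = peak_bandwidth_gb_s(3200e6, 64, 1)  # 25.6 GB/s per channel
dual = peak_bandwidth_gb_s(3200e6, 64, 2)    # 51.2 GB/s dual-channel
print(single, dual)
```

Doubling the channel count exactly doubles the theoretical figure, which is the "multiplies the data rate by exactly the number of channels" claim; in practice interleaving efficiency and access patterns keep measured bandwidth lower.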
Operation
Dual-channel architecture requires a dual-channel-capable motherboard and two or more DDR memory modules. The memory modules are installed into matching banks, each of which belongs to a different channel. The motherboard's manual will provide an explanation of how to install memory for that partic |
https://en.wikipedia.org/wiki/The%20Smith%27s%20Snackfood%20Company | The Smith's Snackfood Company is a British-Australian snack food brand owned by the American multinational food, snack, and beverage corporation PepsiCo. It is best known for its brand of potato crisps. The company was founded by Frank Smith and Jim Viney in the United Kingdom in 1920 as Smiths Potato Crisps Ltd, originally packaging a twist of salt with its crisps in greaseproof paper bags which were sold around London. It was the dominant brand in the UK until the 1960s, when Golden Wonder took the lead with its Cheese & Onion flavour; Smith's countered by creating a Salt & Vinegar flavour (first tested by its north-east England subsidiary Tudor), which was launched nationally in 1967.
After establishing the product in the UK, Smith set up the company in Australia in 1932. Both versions of Smiths have had various owners, but were reunited under PepsiCo ownership, with the UK business being purchased in 1989 and the Australian business in 1998. Smith's Snackvend Stand is the branch of the company that operates vending machines. The Smith's brand in the United Kingdom is now a sub-brand of the main Walkers brand, while in Australia, Smiths is the main brand.
United Kingdom
Smith's Potato Crisps was formed by Frank Smith and Jim Viney in the United Kingdom after World War I. Smith had been a manager for a Smithfield wholesale grocery business which sold potato crisps from 1913. Deciding to make his own, Smith converted garages in Cricklewood, London into a crisp factory, selling to local businesses. By 1920 he had 12 full-time employees. Smith conceived the idea of selling unseasoned potato crisps with a small blue sachet of salt that could be sprinkled over them. In 1927, after buying Jim Viney's share of the business, the company expanded into a factory in Brentford, London. In 1929, Smiths had seven factories in the UK and the following year it was incorporated as a private limited company. By 1934, 200 million packets of crisps were sold in Britain each year, 95 percent of which w |
https://en.wikipedia.org/wiki/Teltron%20tube | A teltron tube (named for Teltron Inc., which is now owned by 3B Scientific Ltd.) is a type of cathode ray tube used to demonstrate the properties of electrons. There were several different types made by Teltron including a diode, a triode, a Maltese Cross tube, a simple deflection tube with a fluorescent screen, and one which could be used to measure the charge-to-mass ratio of an electron. The latter two contained an electron gun with deflecting plates. The beams can be bent by applying voltages to various electrodes in the tube or by holding a magnet close by. The electron beams are visible as fine bluish lines. This is accomplished by filling the tube with low-pressure helium (He) or hydrogen (H2) gas. A few of the electrons in the beam collide with the gas atoms, causing them to fluoresce and emit light.
They are usually used to teach electromagnetic effects, because they show how an electron beam is affected by electric and magnetic fields, as described by the Lorentz force.
Motions in fields
Charged particles in a uniform electric field follow a parabolic trajectory, since the electric field term (of the Lorentz force which acts on the particle) is the product of the particle's charge and the magnitude of the electric field (oriented in the direction of the electric field). In a uniform magnetic field, however, charged particles follow a circular trajectory due to the cross product in the magnetic field term of the Lorentz force. (That is, the force from the magnetic field acts on the particle in a direction perpendicular to the particle's direction of motion. See: Lorentz force for more details.)
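As a rough numerical companion to the circular case, the radius of the orbit (the cyclotron radius) is r = mv / (|q|B). The sketch below evaluates it for an assumed electron speed and field strength chosen purely for illustration.

```python
# Cyclotron radius of an electron in a uniform magnetic field:
# r = m*v / (|q|*B). The speed and field values below are assumed,
# teltron-scale numbers chosen for illustration only.

M_E = 9.109e-31   # electron mass, kg
Q_E = 1.602e-19   # elementary charge, C

def cyclotron_radius(speed: float, b_field: float) -> float:
    """Radius (m) of the circular orbit for speed (m/s) and B (T)."""
    return M_E * speed / (Q_E * b_field)

r = cyclotron_radius(1e7, 1e-3)   # 1e7 m/s electron in a 1 mT field
print(f"{r * 100:.1f} cm")        # a few centimetres: visible in a tube
```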
Apparatus
The 'teltron' apparatus consists of a Teltron-type electron deflection tube, a Teltron stand, and a variable EHT power supply.
Experimental setup
The evacuated glass bulb is filled with some hydrogen gas (H2), so that the tube contains a hydrogen atmosphere at low pressure. The pressure is such that the electrons are decelerated by col |
https://en.wikipedia.org/wiki/Mathematical%20formulation%20of%20the%20Standard%20Model | This article describes the mathematics of the Standard Model of particle physics, a gauge quantum field theory containing the internal symmetries of the unitary product group . The theory is commonly viewed as describing the fundamental set of particles – the leptons, quarks, gauge bosons and the Higgs boson.
The Standard Model is renormalizable and mathematically self-consistent; however, despite its huge and continued success in providing experimental predictions, it does leave some phenomena unexplained. In particular, although the physics of special relativity is incorporated, general relativity is not, and the Standard Model will fail at energies or distances where the graviton is expected to emerge. Therefore, in a modern field theory context, it is seen as an effective field theory.
Quantum field theory
The Standard Model is a quantum field theory, meaning its fundamental objects are quantum fields, which are defined at all points in spacetime. QFT treats particles as excited states (also called quanta) of their underlying quantum fields, which are more fundamental than the particles. These fields are
the fermion fields, , which account for "matter particles";
the electroweak boson fields , and ;
the gluon field, ; and
the Higgs field, .
That these are quantum rather than classical fields has the mathematical consequence that they are operator-valued. In particular, values of the fields generally do not commute. As operators, they act upon a quantum state (ket vector).
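The non-commutativity mentioned above can be made precise. For a real scalar field (the Higgs field is the Standard Model's example) with conjugate momentum density, the standard equal-time canonical commutation relations read:

```latex
% Equal-time canonical commutation relations for a real scalar field
% \hat\phi with conjugate momentum density \hat\pi:
[\hat\phi(t,\mathbf{x}),\,\hat\pi(t,\mathbf{y})] = i\hbar\,\delta^{3}(\mathbf{x}-\mathbf{y}),
\qquad
[\hat\phi(t,\mathbf{x}),\,\hat\phi(t,\mathbf{y})] = [\hat\pi(t,\mathbf{x}),\,\hat\pi(t,\mathbf{y})] = 0.
```

The nonzero right-hand side is what distinguishes operator-valued quantum fields from classical fields, whose values always commute.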
Alternative presentations of the fields
As is common in quantum theory, there is more than one way to look at things. At first the basic fields given above may not seem to correspond well with the "fundamental particles" in the chart above, but there are several alternative presentations which, in particular contexts, may be more appropriate than those that are given above.
Fermions
Rather than having one fermion field , it can be split up into separate components for each type of pa |
https://en.wikipedia.org/wiki/Nanodomain | A nanodomain is a nanometer-sized cluster of proteins found in a cell membrane. They are associated with the signal which occurs when a single calcium ion channel opens on a cell membrane, allowing an influx of calcium ions (Ca) which extend in a plume a few tens of nanometres from the channel pore. In a nanodomain, the coupling distance, that is, the distance between the calcium-binding proteins which sense the calcium, and the calcium channel, is very small, less than , which allows rapid signalling. The formation of a nanodomain signal is virtually instantaneous following the opening of the calcium channel, as calcium ions move rapidly into the cell along a steep concentration gradient. The nanodomain signal collapses just as quickly when the calcium channel closes, as the ions rapidly diffuse away from the pore. Formation of a nanodomain signal requires the influx of only approximately 1000 calcium ions.
Coupling distances greater than , mediated by a larger number of channels, are referred to as microdomains.
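An order-of-magnitude check makes the "virtually instantaneous" claim plausible: the time for a calcium ion to diffuse across a nanodomain-scale coupling distance is far below a microsecond. The diffusion coefficient below is an assumed, literature-scale value used only for illustration.

```python
# Order-of-magnitude check of nanodomain signalling speed: the time
# for Ca2+ to diffuse the coupling distance. D is an assumed
# literature-scale value for Ca2+ in cytosol, not a measured figure.

D_CA = 220e-12  # assumed Ca2+ diffusion coefficient, m^2/s (~220 um^2/s)

def diffusion_time(distance_m: float) -> float:
    """Characteristic 3-D diffusion time t = x**2 / (6 * D)."""
    return distance_m ** 2 / (6 * D_CA)

t = diffusion_time(20e-9)             # a 20 nm coupling distance
print(f"{t * 1e6:.2f} microseconds")  # well under a microsecond
```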
Properties
Nanodomain signals are thought to improve the temporal precision of fast exocytosis of vesicles due to two specific properties:
The peak concentration of calcium ions is reached extremely quickly (within a microsecond) and is maintained for as long as the channel is open.
Closure of the channel leads to a rapid collapse of the domain due to lateral diffusion away from the pore (the site of entry). The lateral diffusion of microdomains additionally depends on the action of fast endogenous buffers (which remove the calcium and transport it away from the active zone).
Single channels are able to cause vesicular release; however, the cooperativity of different calcium channels is synapse-specific. Release driven by a single calcium ion channel minimizes the total calcium ion influx, while overlapping domains can provide greater reliability and temporal fidelity. |
https://en.wikipedia.org/wiki/Flora%20Europaea | The Flora Europaea is a 5-volume encyclopedia of plants, published between 1964 and 1993 by Cambridge University Press. The aim was to describe all the national Floras of Europe in a single, authoritative publication to help readers identify any wild or widely cultivated plant in Europe to the subspecies level. It also provides information on geographical distribution, habitat preference, and chromosome number, where known.
The Flora was released in CD form in 2001, and the Royal Botanic Garden Edinburgh have made an index to the plant names available online.
History
The idea of a pan-European Flora was first mooted at the 8th International Congress of Botany in Paris in 1954. In 1957, Britain's Science and Engineering Research Council provided grants to fund a secretariat of three people, and Volume 1 was published in 1964. More volumes were issued in the following years, culminating in 1980 with the monocots of Volume 5. The royalties were put into a trust fund administered by the Linnean Society, which allowed funding for Dr John Akeroyd to continue work on the project. A revised Volume 1 was launched at the Linnean Society on 11 March 1993.
Volumes
Volume 1 : Lycopodiaceae to Platanaceae
Published: 1964
Volume 2: Rosaceae to Umbelliferae
Published: 1 Dec 1968 (486 pages)
Volume 3: Diapensiaceae to Myoporaceae
Published: 28 Dec 1972 (399 pages)
Volume 4: Plantaginaceae to Compositae (and Rubiaceae)
Published: 5 Aug 1976 (534 pages)
Volume 5: Alismataceae to Orchidaceae
Published: 3 April 1980 (476 pages)
Volume 1 Revised: Lycopodiaceae to Platanaceae
Published: 22 April 1993 (629 pages)
5 Volume Set and CD-ROM Pack
Published: 6 Dec 2001 (2392 pages)
Editors
The editors named on every edition are:
Tom Tutin (1908–1987) – Professor of Botany at University of Leicester
Vernon Heywood (b. 1927) – Chief Scientist, Plant Conservation, IUCN and professor emeritus at University of Reading
Alan Burges (1911–2002) – Professor of Botany at University o |
https://en.wikipedia.org/wiki/Zernike%20polynomials | In mathematics, the Zernike polynomials are a sequence of polynomials that are orthogonal on the unit disk. Named after optical physicist Frits Zernike, laureate of the 1953 Nobel Prize in Physics and the inventor of phase-contrast microscopy, they play important roles in various optics branches such as beam optics and imaging.
Definitions
There are even and odd Zernike polynomials. The even Zernike polynomials are defined as
(even function over the azimuthal angle ), and the odd Zernike polynomials are defined as
(odd function over the azimuthal angle ) where m and n are nonnegative integers with n ≥ m ≥ 0 (m = 0 for spherical Zernike polynomials), is the azimuthal angle, ρ is the radial distance , and are the radial polynomials defined below. Zernike polynomials have the property of being limited to a range of −1 to +1, i.e. . The radial polynomials are defined as
for even n − m, while it is 0 for odd n − m. A special value is
Other representations
Rewriting the ratios of factorials in the radial part as products of binomials shows that the coefficients are integer numbers:
.
A notation as terminating Gaussian hypergeometric functions is useful to reveal recurrences, to demonstrate that they are special cases of Jacobi polynomials, to write down the differential equations, etc.:
for n − m even.
The factor in the radial polynomial may be expanded in a Bernstein basis of for even or times a function of for odd in the range . The radial polynomial may therefore be expressed by a finite number of Bernstein Polynomials with rational coefficients:
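The radial polynomials can be evaluated directly from the factorial sum given earlier. A minimal sketch of that definition, R_n^m(ρ) = Σ_k (−1)^k (n−k)! / [k! ((n+m)/2−k)! ((n−m)/2−k)!] ρ^(n−2k) for even n − m:

```python
# Direct evaluation of the Zernike radial polynomial from its
# factorial sum; returns 0 when n - m is odd, per the definition.
from math import factorial

def zernike_radial(n: int, m: int, rho: float) -> float:
    """Radial polynomial R_n^m(rho) on the unit disk."""
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k)
           * factorial((n + m) // 2 - k)
           * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

print(zernike_radial(4, 0, 1.0))  # R_n^m(1) = 1 for all valid n, m
print(zernike_radial(2, 2, 0.5))  # R_2^2(rho) = rho^2, so 0.25
```

The check R_n^m(1) = 1 is a standard normalization property of the radial polynomials and makes a convenient unit test.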
Noll's sequential indices
Applications often involve linear algebra, where an integral over a product of Zernike polynomials and some other factor builds the matrix elements.
To enumerate the rows and columns of these matrices by a single index, a conventional mapping of the two indices n and l to a single index j has been introduced by Noll. The table of this association starts as |
https://en.wikipedia.org/wiki/Entropic%20force | In physics, an entropic force acting in a system is an emergent phenomenon resulting from the entire system's statistical tendency to increase its entropy, rather than from a particular underlying force on the atomic scale.
Mathematical formulation
In the canonical ensemble, the entropic force associated with a macrostate partition is given by
where is the temperature, is the entropy associated with the macrostate , and is the present macrostate.
Examples
Pressure of an ideal gas
The internal energy of an ideal gas depends only on its temperature, and not on the volume of its containing box, so it is not an energy effect that tends to increase the volume of the box as gas pressure does. This implies that the pressure of an ideal gas has an entropic origin.
What is the origin of such an entropic force? The most general answer is that the effect of thermal fluctuations tends to bring a thermodynamic system toward a macroscopic state that corresponds to a maximum in the number of microscopic states (or micro-states) that are compatible with this macroscopic state. In other words, thermal fluctuations tend to bring a system toward its macroscopic state of maximum entropy.
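This maximum-entropy argument can be made quantitative for the ideal gas. A sketch using the Sackur-Tetrode entropy of a monatomic ideal gas (with λ the thermal de Broglie wavelength) recovers the pressure from the volume derivative of the entropy alone:

```latex
% Sackur--Tetrode entropy of a monatomic ideal gas:
S = N k_B \left[ \ln\frac{V}{N\lambda^{3}} + \frac{5}{2} \right]
% Entropic force per unit area (the pressure) at fixed energy and N:
P = T \left( \frac{\partial S}{\partial V} \right)_{U,N} = \frac{N k_B T}{V}
% i.e. the ideal-gas law PV = N k_B T, with no microscopic force law.
```

No interparticle force appears anywhere in this derivation, which is the sense in which the pressure of an ideal gas has an entropic origin.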
Brownian motion
The entropic approach to Brownian movement was initially proposed by R. M. Neumann. Neumann derived the entropic force for a particle undergoing three-dimensional Brownian motion using the Boltzmann equation, denoting this force as a diffusional driving force or radial force. In the paper, three example systems are shown to exhibit such a force:
electrostatic system of molten salt,
surface tension, and
elasticity of rubber.
Polymers
A standard example of an entropic force is the elasticity of a freely jointed polymer molecule. For an ideal chain, maximizing its entropy means reducing the distance between its two free ends. Consequently, a force that tends to collapse the chain is exerted by the ideal chain between its two free ends. This entropic force is proporti |
https://en.wikipedia.org/wiki/Myology | Myology is the study of the muscular system, including the study of the structure, function and diseases of muscle. The muscular system consists of skeletal muscle, which contracts to move or position parts of the body (e.g., the bones that articulate at joints), and of smooth and cardiac muscle, which propels, expels or controls the flow of fluids and contained substances.
See also
Myotomy
Oral myology |
https://en.wikipedia.org/wiki/Shell%20theorem | In classical mechanics, the shell theorem gives gravitational simplifications that can be applied to objects inside or outside a spherically symmetrical body. This theorem has particular application to astronomy.
Isaac Newton proved the shell theorem and stated that:
A spherically symmetric body affects external objects gravitationally as though all of its mass were concentrated at a point at its center.
If the body is a spherically symmetric shell (i.e., a hollow ball), no net gravitational force is exerted by the shell on any object inside, regardless of the object's location within the shell.
A corollary is that inside a solid sphere of constant density, the gravitational force within the object varies linearly with distance from the center, becoming zero by symmetry at the center of mass. This can be seen as follows: take a point within such a sphere, at a distance r from the center of the sphere. Then you can ignore all of the shells of greater radius, according to the shell theorem (2). But the point can be considered to be external to the remaining sphere of radius r, and according to (1) all of the mass of this sphere can be considered to be concentrated at its centre. The remaining mass m is proportional to r^3 (because it is based on volume). The gravitational force exerted on a body at radius r will be proportional to m/r^2 (the inverse square law), so the overall gravitational effect is proportional to r^3/r^2 = r, so it is linear in r.
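This piecewise behaviour is easy to check numerically. The sketch below assumes a uniform density; the Earth-like mass and radius are purely illustrative (the real Earth's density is not uniform).

```python
# Field of a uniform-density sphere: grows linearly with r inside,
# falls off as 1/r^2 outside. Earth-like numbers assume uniform
# density, which the real Earth does not have.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity(r: float, mass: float, radius: float) -> float:
    """Gravitational acceleration at distance r from the centre."""
    if r < radius:
        # Shells outside r cancel; the inner sphere (mass ~ r^3)
        # acts as a point mass, giving G*M*r/R^3.
        return G * mass * r / radius ** 3
    return G * mass / r ** 2  # all mass concentrated at the centre

M, R = 5.97e24, 6.371e6  # assumed Earth-like mass (kg) and radius (m)
print(round(gravity(R / 2, M, R) / gravity(R, M, R), 6))  # 0.5
```

Halving the radius halves the field inside the sphere, exactly the linear dependence the corollary predicts.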
These results were important to Newton's analysis of planetary motion; they are not immediately obvious, but they can be proven with calculus. (Gauss's law for gravity offers an alternative way to state the theorem.)
In addition to gravity, the shell theorem can also be used to describe the electric field generated by a static spherically symmetric charge density, or similarly for any other phenomenon that follows an inverse square law. The derivations below focus on gravity, but the results can easily be generalized to the electrostatic forc |
https://en.wikipedia.org/wiki/Complete%20mixing | In evolutionary game theory, complete mixing refers to an assumption about the type of interactions that occur between individual organisms. Interactions between individuals in a population attains complete mixing if and only if the probably individual x interacts with individual y is equal for all y.
This assumption is implicit in the replicator equation, a system of differential equations that represents one model in evolutionary game theory. This assumption usually does not hold for most organismic populations, since interactions usually occur in some spatial setting where individuals are more likely to interact with those around them. Although the assumption is empirically violated, it represents a certain sort of scientific idealization which may or may not be harmful to the conclusions reached by that model. This question has led researchers to investigate a series of other models without complete mixing (e.g. cellular automata models).
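Complete mixing is exactly what justifies the replicator equation's use of expected payoff against the population average. A minimal sketch follows, with dx_i/dt = x_i(f_i − φ), where f = Ax and φ = x·f; the Hawk-Dove payoff values (V = 2, C = 4) are illustrative assumptions.

```python
# Minimal replicator-dynamics sketch: dx_i/dt = x_i * (f_i - phi).
# Under complete mixing, f = A x is each type's expected payoff and
# phi = x . f is the population average. Hawk-Dove payoffs below
# (V = 2, C = 4) are illustrative assumptions.

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator equation."""
    f = [sum(payoff[i][j] * x[j] for j in range(len(x)))
         for i in range(len(x))]
    phi = sum(xi * fi for xi, fi in zip(x, f))
    return [xi + dt * xi * (fi - phi) for xi, fi in zip(x, f)]

A = [[-1.0, 2.0],   # Hawk vs Hawk: (V - C)/2; Hawk vs Dove: V
     [0.0, 1.0]]    # Dove vs Hawk: 0;         Dove vs Dove: V/2
x = [0.2, 0.8]      # initial shares of Hawks and Doves
for _ in range(5000):
    x = replicator_step(x, A)
print(round(x[0], 3))  # Hawk share converges to V/C = 0.5
```

Spatial models (such as the cellular automata mentioned above) replace the population-average payoff with payoffs from local neighbours, which is precisely where this code's `phi` term stops being valid.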
Game theory
Population genetics |
https://en.wikipedia.org/wiki/Northern%20California%20coastal%20forests | The Northern California coastal forests are a temperate coniferous forests ecoregion of coastal Northern California and southwestern Oregon.
Setting
The ecoregion covers , extending from just north of the California-Oregon border south to southern Monterey County. The ecoregion rarely extends more than 65 km inland from the coast, and is narrower in its southernmost parts.
The ecoregion is a sub-ecoregion of the Pacific temperate rain forests ecoregion, which extends up the Pacific Coast to Kodiak Island in Alaska. The ecoregion lies close to the Pacific Ocean, and is kept moist by Pacific Ocean storms during the winter months, and by coastal fogs in the summer months. These factors keep the ecoregion cooler in the summer and warmer in the winter, as compared to ecoregions further inland. The ecoregion is also defined by the distribution of the coast redwood (Sequoia sempervirens), with isolated groves located in protected canyons as far south as Redwood Gulch, in southern Monterey County. The greatest concentrations of remaining old-growth forest are in the northernmost portion of the ecoregion, primarily within Humboldt and Del Norte counties.
Major urban centers located within this ecoregion include the montane portions of various cities of the San Francisco Peninsula, Fort Bragg, Eureka, and Brookings.
Habitats
Redwood forests are interspersed with several other plant communities throughout this ecoregion.
Coastal redwood forests
The dominant forest type in this ecoregion is the coastal redwood forest. These are the tallest forests on Earth, with individual redwood (Sequoia sempervirens) trees reaching heights of . These forests are generally found in areas exposed to coastal fog. In the north, they occur on upland slopes, in riparian zones, and on riverine terraces. In the south, where annual precipitation is lower, they are constrained to coves and ravines. Coast Douglas-firs (Pseudotsuga menziesii var. menziesii) are nearly always associated w |
https://en.wikipedia.org/wiki/Jupiter%20project | The Jupiter project was to be a new high-end model of Digital Equipment Corporation (DEC)'s PDP-10 mainframe computers. The project was cancelled in 1983, as the PDP-10 was increasingly eclipsed by the VAX supermini machines (descendants of the PDP-11): DEC recognized that the PDP-10 and VAX product lines were competing with each other and decided to concentrate its software development effort on the more profitable VAX. Following the failure of the Jupiter project to produce a viable new model, the PDP-10 was dropped from DEC's line in 1983. |
https://en.wikipedia.org/wiki/Happy%20Human | The Happy Human is an icon that has been adopted as an international symbol of secular humanism.
Created by Dennis Barrington, the figure was the winning design in a competition arranged by Humanists UK (formerly the British Humanist Association) in 1965. Various forms of it are now used across the world by humanist organisations of all sizes including Humanists UK, Humanists International and the American Humanist Association (AHA).
The trademark is still held by Humanists UK, which freely licenses use of the symbol by bona fide Humanist organisations worldwide.
Origins
The Happy Human was created in response to a Humanists UK competition in 1965, after many years of discussion as to what a logo should look like. After some time without progress, radio presenter Margaret Knight backed a popular movement among Humanists UK's membership to commission such a logo, triggering publicity officer Tom Vernon to announce the competition. Of the several hundred designs from a number of countries that were considered, Dennis Barrington's simple design was favoured for its perceived universalism. Within the space of a few years, the logo had emerged as a symbol of Humanism, not just of Humanists UK and was adopted by humanist organisations around the world.
Since the 1990s, humanist groups have taken on looser, more figurative versions of the Happy Human logo, such as the logos used by Humanisterna (Sweden), Humanistischer Verband Deutschlands (Germany), Union of Rationalist Atheists and Agnostics (Italy), and the European Humanist Federation. In 2017, the British Humanist Association, which originated the Happy Human, debuted a new, single line-drawing style Happy Human when it renamed as Humanists UK.
Variations of the Happy Human symbol
Organisations using the Happy Human
American Humanist Association (US)
Humanists UK (England, Wales and Northern Ireland)
Council of Australian Humanist Societies (CAHS) (Australia)
European Humanist Federation
Romanian Secular-Humanis |
https://en.wikipedia.org/wiki/Infrared%20sauna | An infrared sauna uses infrared heaters to emit infrared light experienced as radiant heat which is absorbed by the surface of the skin. Infrared saunas are popular in alternative therapies, where they are claimed to help with a number of medical issues including autism, cancer, and COVID-19, but these claims are entirely pseudoscientific. Traditional saunas differ from infrared saunas in that they heat the body primarily by conduction and convection from the heated air and by radiation of the heated surfaces in the sauna room whereas infrared saunas primarily use just radiation.
Infrared saunas are also used in Infrared Therapy and Waon Therapy; while there is a small amount of preliminary evidence that these therapies correlate with a number of benefits, including reduced blood pressure, increased heart rate and increased left ventricular function, there are several problems with linking this evidence to alleged health benefits.
History
John Harvey Kellogg invented the use of radiant heat saunas with his incandescent electric light bath in 1891. He claimed that it stimulated healing in the body and in 1893 displayed his invention at the Chicago World's Fair. In 1896 the Radiant Heat Bath was patented by Kellogg and described in the patent as not depending on the heat in the air to heat the body but able to more quickly produce a sweat than traditional Turkish or Russian baths at a lower ambient temperature. The idea became popular, particularly in Germany where "Light Institutes" were set up. King Edward VII of England and Kaiser Wilhelm II of Germany both had radiant heat baths set up in their various palaces. The modern concept of the infrared sauna was revived in the 1970s in Japan as Waon (Japanese: "soothing warmth") therapy; neonatal beds for newborns likewise use infrared elements to keep the baby warm without being stifled.
Description
Infrared saunas can be designed to look like traditional saunas but cheaper models can be in the form of a tent with an in |
https://en.wikipedia.org/wiki/Prognathism | Prognathism, also called Habsburg chin, Habsburg's chin, Habsburg jaw or Habsburg's jaw primarily in the context of its prevalence amongst members of the House of Habsburg, is a positional relationship of the mandible or maxilla to the skeletal base where either of the jaws protrudes beyond a predetermined imaginary line in the coronal plane of the skull. In general dentistry, oral and maxillofacial surgery, and orthodontics, this is assessed clinically or radiographically (cephalometrics). The word prognathism derives from Greek πρό (pro, meaning 'forward') and γνάθος (gnáthos, 'jaw'). One or more types of prognathism can result in the common condition of malocclusion, in which an individual's top teeth and lower teeth do not align properly.
Presentation
Prognathism in humans, particularly alveolar prognathism, can occur due to normal variation among phenotypes. In human populations where prognathism is not the norm, it may be a malformation, the result of injury, a disease state, a hereditary condition, or, if not pathological, may simply be a minority trait within a population.
Prognathism is considered a disorder only if it affects chewing, speech or social function (the last as a byproduct of severely affected facial aesthetics).
Clinical determinants include soft tissue analysis where the clinician assesses nasolabial angle, the relationship of the soft tissue portion of the chin to the nose, and the relationship between the upper and lower lips; also used is dental arch relationship assessment such as Angle's classification.
Cephalometric analysis is the most accurate way of determining all types of prognathism, as it includes assessments of skeletal base, occlusal plane angulation, facial height, soft tissue assessment and anterior dental angulation. Various calculations and assessments of the information in a cephalometric radiograph allow the clinician to objectively determine dental and skeletal relationships and determine a treatment plan.
Prognathism sh |
https://en.wikipedia.org/wiki/Glabella | The glabella, in humans, is the area of skin between the eyebrows and above the nose. The term also refers to the underlying bone that is slightly depressed, and joins the two brow ridges. It is a cephalometric landmark that is just superior to the nasion.
Etymology
The term for the area is derived from the Latin , meaning 'smooth, hairless'.
In medical science
The skin of the glabella may be used to measure skin turgor in suspected cases of dehydration by gently pinching and lifting it. When released, the glabella of a dehydrated patient tends to remain extended ("tented"), rather than returning to its normal shape.
See also
Glabellar reflex |
https://en.wikipedia.org/wiki/Fas%20ligand | Fas ligand (FasL or CD95L or CD178) is a type-II transmembrane protein expressed on cytotoxic T lymphocytes and natural killer (NK) cells. Its binding with Fas receptor (FasR) induces programmed cell death in the FasR-carrying target cell. Fas ligand/receptor interactions play an important role in the regulation of the immune system and the progression of cancer.
Structure
Fas ligand or FasL is a homotrimeric type II transmembrane protein that belongs to the tumor necrosis factor (TNF) family. It signals through trimerization of FasR, which spans the membrane of the target cell. This trimerization usually leads to apoptosis, or programmed cell death.
Soluble Fas ligand is generated by cleaving membrane-bound FasL at a conserved cleavage site by the external matrix metalloproteinase MMP-7.
Receptors
FasR: The Fas receptor (FasR), or CD95, is the most intensely studied member of the death receptor family. The gene is situated on chromosome 10 in humans and 19 in mice. Previous reports have identified as many as eight splice variants, which are translated into seven isoforms of the protein. Many of these isoforms are rare haplotypes that are usually associated with a state of disease. Apoptosis-inducing Fas receptor is dubbed isoform 1 and is a type 1 transmembrane protein. It consists of three cysteine-rich pseudorepeats, a transmembrane domain, and an intracellular death domain.
DcR3: Decoy receptor 3 (DcR3) is a recently discovered decoy receptor of the tumor necrosis factor superfamily that binds to FasL, LIGHT, and TL1A. DcR3 is a soluble receptor that has no signal transduction capabilities (hence a "decoy") and functions to prevent FasR-FasL interactions by competitively binding to membrane-bound Fas ligand and rendering them inactive.
Cell signaling
Fas forms the death-inducing signaling complex (DISC) upon ligand binding. Membrane-anchored Fas ligand trimer on the surface of an adjacent cell causes trimerization of Fas receptor. This event is also m |
https://en.wikipedia.org/wiki/Substrate-level%20phosphorylation | Substrate-level phosphorylation is a metabolic reaction that results in the production of ATP or GTP, supported by the energy released from another high-energy bond, which leads to phosphorylation of ADP or GDP to ATP or GTP (note that the reaction catalyzed by creatine kinase is not considered "substrate-level phosphorylation"). This process uses some of the released chemical energy, the Gibbs free energy, to transfer a phosphoryl (PO3) group to ADP or GDP. It occurs in glycolysis and in the citric acid cycle.
Unlike oxidative phosphorylation, oxidation and phosphorylation are not coupled in substrate-level phosphorylation; the reactive intermediates are most often formed during oxidation steps in catabolism. Most ATP is generated by oxidative phosphorylation in aerobic or anaerobic respiration, while substrate-level phosphorylation provides a quicker, less efficient source of ATP that is independent of external electron acceptors. This is the case in human erythrocytes, which have no mitochondria, and in oxygen-depleted muscle.
Overview
Adenosine triphosphate (ATP) is a major "energy currency" of the cell. The high energy bonds between the phosphate groups can be broken to power a variety of reactions used in all aspects of cell function.
Substrate-level phosphorylation occurs in the cytoplasm of cells during glycolysis and in mitochondria either during the Krebs cycle or by MTHFD1L (EC 6.3.4.3), an enzyme interconverting ADP + phosphate + 10-formyltetrahydrofolate to ATP + formate + tetrahydrofolate (reversibly), under both aerobic and anaerobic conditions. In the pay-off phase of glycolysis, a net of 2 ATP are produced by substrate-level phosphorylation.
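The ATP bookkeeping behind that net figure can be tallied explicitly (a sketch using the standard stoichiometry per glucose; the enzyme names are the usual ones and are not taken from this passage):

```python
# ATP change per glucose at each ATP-coupled step of glycolysis.
atp_steps = {
    "hexokinase": -1,                # investment phase
    "phosphofructokinase-1": -1,     # investment phase
    "phosphoglycerate kinase": +2,   # substrate-level phosphorylation (twice: two triose units)
    "pyruvate kinase": +2,           # substrate-level phosphorylation (twice)
}

made_by_slp = sum(v for v in atp_steps.values() if v > 0)
net_atp = sum(atp_steps.values())
print(made_by_slp, net_atp)  # 4 2 -> a net of 2 ATP by substrate-level phosphorylation
```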
Glycolysis
The first substrate-level phosphorylation occurs after the conversion of 3-phosphoglyceraldehyde and Pi and NAD+ to 1,3-bisphosphoglycerate via glyceraldehyde 3-phosphate dehydrogenase. 1,3-bisphosphoglycerate is then dephosphorylated via phosphoglycerate kinase, producing 3-ph |
https://en.wikipedia.org/wiki/Oval | An oval () is a closed curve in a plane which resembles the outline of an egg. The term is not very specific, but in some areas (projective geometry, technical drawing, etc.) it is given a more precise definition, which may include either one or two axes of symmetry of an ellipse. In common English, the term is used in a broader sense: any shape which reminds one of an egg. The three-dimensional version of an oval is called an ovoid.
Oval in geometry
The term oval when used to describe curves in geometry is not well-defined, except in the context of projective geometry. Many distinct curves are commonly called ovals or are said to have an "oval shape". Generally, to be called an oval, a plane curve should resemble the outline of an egg or an ellipse. In particular, these are common traits of ovals:
they are differentiable (smooth-looking), simple (not self-intersecting), convex, closed, plane curves;
their shape does not depart much from that of an ellipse, and
an oval would generally have an axis of symmetry, but this is not required.
Here are examples of ovals described elsewhere:
Cassini ovals
portions of some elliptic curves
Moss's egg
superellipse
Cartesian oval
stadium
An ovoid is the surface in 3-dimensional space generated by rotating an oval curve about one of its axes of symmetry.
The adjectives ovoidal and ovate mean having the characteristic of being an ovoid, and are often used as synonyms for "egg-shaped".
Projective geometry
In a projective plane a set Ω of points is called an oval, if:
Any line g meets Ω in at most two points, and
For any point P ∈ Ω there exists exactly one tangent line t through P, i.e., t ∩ Ω = {P}.
For finite planes (i.e. the set of points is finite) there is a more convenient characterization:
For a finite projective plane of order n (i.e. any line contains n + 1 points) a set Ω of points is an oval if and only if |Ω| = n + 1 and no three points are collinear (on a common line).
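The finite-plane characterization can be checked directly in the smallest case, the Fano plane of order n = 2 (a sketch; the line set below is the standard difference-set construction and is an assumption, not taken from this article):

```python
from itertools import combinations

# Fano plane PG(2,2): points 0..6, lines generated from the difference set {0,1,3} mod 7.
LINES = [frozenset((i + k) % 7 for k in (0, 1, 3)) for i in range(7)]

def is_oval(points, lines, n):
    """|points| == n + 1 and no line meets the set in more than two points."""
    pts = frozenset(points)
    return len(pts) == n + 1 and all(len(pts & line) <= 2 for line in lines)

print(is_oval({0, 1, 2}, LINES, 2))  # True: no three of these points are collinear
print(is_oval({0, 1, 3}, LINES, 2))  # False: {0, 1, 3} is a line

# Every oval in PG(2,2) is a triangle: C(7,3) triples minus 7 lines = 28 ovals.
n_ovals = sum(1 for s in combinations(range(7), 3) if is_oval(s, LINES, 2))
print(n_ovals)  # 28
```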
An ovoid in a projective space is a set of points such that:
Any |
https://en.wikipedia.org/wiki/Booji%20Boy | Booji Boy is a character created in the early 1970s by the American new wave band Devo. The name is pronounced "Boogie Boy"—the strange spelling "Booji" resulted when the band was using Letraset to produce captions for a film, and ran out of the letter "g". When the "i" was added in place of the missing letters, Devo's lead singer Mark Mothersbaugh reportedly remarked that the odd spelling "looked right."
Booji Boy has traits of a simian child and typically wears an orange nuclear protection suit. He is portrayed by Mothersbaugh in a mask and is the son of another fictitious Devo character, General Boy. The intent of the figure is to satirize infantile regression in Western culture, a quality Devo enjoyed elucidating. This character was officially introduced in the 1976 short film The Truth About De-Evolution.
According to the book We're All Devo!, the roots of the character come from discovering a baby mask in an Akron area novelty store. Mothersbaugh developed the character's distinctive high pitched falsetto almost instantly. He had kept a supply of Booji Boy masks for several years, but due to improper storage, many of them ended up ruined from dry rot. A similar, half-head mask was used in concerts during 2004 and 2005, and a new mask based on the original was created and used beginning in 2007. In 2012, SikRik Masks in Devo's hometown of Akron, Ohio made a new mask that more closely resembled the original. The company made 100 copies of the new mask, which were sold through Club Devo.
Booji Boy was incorporated into Devo's 1996 PC CD-ROM video game Devo Presents Adventures of the Smart Patrol. His name was simplified to "Boogie Boy" and the game claims his "real name" was "Craig Allen Rothwell." Not coincidentally, this is also supposedly the real name of the dancer known as "Spazz Attack" who appeared in some of Devo's videos and played Booji Boy on a Devo tour.
The game's booklet contained more information about the character's back story:
Booji Boy has been feat |
https://en.wikipedia.org/wiki/Palsy | Palsy is a medical term which refers to various types of paralysis (Dan Agin, More Than Genes: What Science Can Tell Us About Toxic Chemicals, Development, and the Risk to Our Children, 2009, p. 172) or paresis, often accompanied by weakness and the loss of feeling and uncontrolled body movements such as shaking. The word originates from the Anglo-Norman paralisie, parleisie et al., from the accusative form of Latin paralysis, from Ancient Greek παράλυσις (parálusis), from παραλύειν (paralúein, "to disable on one side"), from παρά (pará, "beside") + λύειν (lúein, "loosen"). The word is longstanding in the English language, having appeared in the play Grim the Collier of Croydon, reported to have been written as early as 1599:
In some editions, the Bible passage of Luke 5:18 is translated to refer to "a man which was taken with a palsy". More modern editions simply refer to a man who is paralysed. Although the term has historically been associated with paralysis generally, it "is now almost always used in connection to the word 'cerebral'—meaning the brain".
Specific kinds of palsy include:
Bell's palsy, partial facial paralysis
Bulbar palsy, impairment of cranial nerves
Cerebral palsy, a neural disorder caused by intracranial lesions
Conjugate gaze palsy, a disorder affecting the ability to move the eyes
Erb's palsy, also known as brachial palsy, involving paralysis of an arm
Spinal muscular atrophy, also known as wasting palsy
Progressive supranuclear palsy, a degenerative disease
Squatter's palsy, a common name for bilateral peroneal nerve palsy that may be triggered by sustained squatting
Third nerve palsy, involving cranial nerve III |
https://en.wikipedia.org/wiki/Pack%20hunter | A pack hunter or social predator is a predatory animal which hunts its prey by working together with other members of its species. Normally animals hunting in this way are closely related, and, with the exception of chimpanzees, where only males normally hunt, all individuals in a family group contribute to hunting. When hunting cooperation is across two or more species, the broader term cooperative hunting is commonly used.
A well known pack hunter is the gray wolf; humans too can be considered pack hunters. Other pack hunting mammals include chimpanzees, dolphins, lions, dwarf and banded mongooses, and spotted hyenas. Avian social predators include the Harris's hawk, butcherbirds, three of four kookaburra species and many helmetshrikes. Other pack hunters include army ants, the goldsaddle goatfish, and occasionally crocodiles.
Pack hunting is typically associated with cooperative breeding, and its concentration in the Afrotropical realm is a reflection of this. Most pack hunters are found in the southern African savannas; they are notably absent from tropical rainforests and, with the exception of the wolf and coyote, from higher latitudes. It is thought either that on the ancient, poor soils of the southern African savanna individual predators cannot find adequate food, or that the environment's inherent unpredictability, due to ENSO or IOD events, means that in very bad conditions it will not be possible to raise the young necessary to offset adult mortality and prevent population decline. It is also argued that Africa's large area of continuous flat and open country, which was even more extensive while rainforest contracted during glacial periods of the Quaternary, may have helped pack hunting become much more common than on any other continent.
Around 80–95% of carnivores are solitary and hunt alone; the others including lions, wild dogs, spotted hyenas, chimpanzees, and humans hunt cooperatively, at least some of the time. Cooperative |
https://en.wikipedia.org/wiki/Indole-3-acetic%20acid | Indole-3-acetic acid (IAA, 3-IAA) is the most common naturally occurring plant hormone of the auxin class. It is the best known of the auxins, and has been the subject of extensive studies by plant physiologists. IAA is a derivative of indole, containing a carboxymethyl substituent. It is a colorless solid that is soluble in polar organic solvents.
Biosynthesis
IAA is predominantly produced in cells of the apex (bud) and very young leaves of a plant. Plants can synthesize IAA by several independent biosynthetic pathways. Four of them start from tryptophan, but there is also a biosynthetic pathway independent of tryptophan. Plants mainly produce IAA from tryptophan through indole-3-pyruvic acid. IAA is also produced from tryptophan through indole-3-acetaldoxime in Arabidopsis thaliana.
In rats, IAA is a product of both endogenous and colonic microbial metabolism from dietary tryptophan along with tryptophol. This was first observed in rats infected by Trypanosoma brucei gambiense. A 2015 experiment showed that a high-tryptophan diet can decrease serum levels of IAA in mice, but that in humans, protein consumption has no reliably predictable effect on plasma IAA levels. Human cells have been known to produce IAA in vitro since the 1950s, and the critical biosynthesis gene IL4I1 has been identified.
Biological effects
Like all auxins, IAA has many different effects, such as inducing cell elongation and cell division, with all the consequences these have for plant growth and development. On a larger scale, IAA serves as a signaling molecule necessary for the development of plant organs and the coordination of growth.
Plant gene regulation
IAA enters the plant cell nucleus and binds to a protein complex composed of a ubiquitin-activating enzyme (E1), a ubiquitin-conjugating enzyme (E2), and a ubiquitin ligase (E3), resulting in ubiquitination of Aux/IAA proteins with increased speed. Aux/IAA proteins bind to auxin response factor (ARF) proteins, forming a heterodimer, suppressing ARF |
https://en.wikipedia.org/wiki/Computer-supported%20collaboration | Computer-supported collaboration research focuses on technology that affects groups, organizations, communities and societies, e.g., voice mail and text chat. It grew from the study of computer-supported cooperative work, which addressed the support of people's work activities and working relationships. As network technology increasingly supported a wide range of recreational and social activities, consumer markets expanded the user base, enabling more and more people to connect online and creating a broadened field of computer-supported cooperative work, which includes "all contexts in which technology is used to mediate human activities such as communication, coordination, cooperation, competition, entertainment, games, art, and music" (from CSCW 2023).
Scope of the field
Focused on output
The subfield computer-mediated communication deals specifically with how humans use "computers" (or digital media) to form, support and maintain relationships with others (social uses), regulate information flow (instructional uses), and make decisions (including major financial and political ones). It does not focus on common work products or other "collaboration" but rather on "meeting" itself, and on trust. By contrast, CSC is focused on the output from, rather than the character or emotional consequences of, meetings or relationships, reflecting the difference between "communication" and "collaboration".
Focused on contracts and rendezvous
Unlike communication research, which focuses on trust, or computer science, which focuses on truth and logic, CSC focuses on cooperation and collaboration and decision making theory, which are more concerned with rendezvous and contract. For instance, auctions and market systems, which rely on bid and ask relationships, are studied as part of CSC but not usually as part of communication.
The term CSC emerged in the 1990s to replace the following terms:
workgroup computing, which emphasizes technology over the work being supported and seems to restrict inquiry to small organi |
https://en.wikipedia.org/wiki/Hellmann%E2%80%93Feynman%20theorem | In quantum mechanics, the Hellmann–Feynman theorem relates the derivative of the total energy with respect to a parameter to the expectation value of the derivative of the Hamiltonian with respect to that same parameter. According to the theorem, once the spatial distribution of the electrons has been determined by solving the Schrödinger equation, all the forces in the system can be calculated using classical electrostatics.
The theorem has been proven independently by many authors, including Paul Güttinger (1932), Wolfgang Pauli (1933), Hans Hellmann (1937) and Richard Feynman (1939).
The theorem states that

    dE_λ/dλ = ⟨ψ_λ| ∂Ĥ_λ/∂λ |ψ_λ⟩,

where
Ĥ_λ is a Hermitian operator depending upon a continuous parameter λ,
|ψ_λ⟩ is an eigenstate (eigenfunction) of the Hamiltonian, depending implicitly upon λ,
E_λ is the energy (eigenvalue) of the state |ψ_λ⟩, i.e. Ĥ_λ|ψ_λ⟩ = E_λ|ψ_λ⟩.
Note that there is a breakdown of the Hellmann-Feynman theorem close to quantum critical points in the thermodynamic limit.
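The theorem is easy to verify numerically. The following sketch uses an arbitrary toy Hamiltonian (a 2×2 Hermitian matrix H(λ) = [[λ, 1], [1, −λ]], an assumption for illustration only) and compares a finite-difference derivative of the lowest eigenvalue with the expectation value of ∂H/∂λ:

```python
import math

def ground_state(lam):
    """Lowest eigenvalue and normalized eigenvector of H(lam) = [[lam, 1], [1, -lam]]."""
    e = -math.sqrt(1.0 + lam * lam)   # lowest eigenvalue of the 2x2 matrix
    v = (1.0, e - lam)                # from the eigenvector equation lam*v0 + v1 = e*v0
    n = math.hypot(*v)
    return e, (v[0] / n, v[1] / n)

lam = 0.7
e, psi = ground_state(lam)

# <psi| dH/dlam |psi> with dH/dlam = [[1, 0], [0, -1]]
expectation = psi[0] * psi[0] - psi[1] * psi[1]

# Central finite difference of the eigenvalue, dE/dlam
h = 1e-6
dE = (ground_state(lam + h)[0] - ground_state(lam - h)[0]) / (2 * h)

print(abs(dE - expectation) < 1e-6)  # True: dE/dlam agrees with <dH/dlam>
```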
Proof
This proof of the Hellmann–Feynman theorem requires that the wave function be an eigenfunction of the Hamiltonian under consideration; however, it is also possible to prove more generally that the theorem holds for non-eigenfunction wave functions which are stationary (partial derivative is zero) for all relevant variables (such as orbital rotations). The Hartree–Fock wavefunction is an important example of an approximate eigenfunction that still satisfies the Hellmann–Feynman theorem. A notable example where the Hellmann–Feynman theorem is not applicable is finite-order Møller–Plesset perturbation theory, which is not variational.
The proof also employs an identity of normalized wavefunctions – that derivatives of the overlap of a wave function with itself must be zero. Using Dirac's bra–ket notation these two conditions are written as

    ⟨ψ_λ|ψ_λ⟩ = 1,
    (d/dλ)⟨ψ_λ|ψ_λ⟩ = ⟨∂ψ_λ/∂λ|ψ_λ⟩ + ⟨ψ_λ|∂ψ_λ/∂λ⟩ = 0.

The proof then follows through an application of the derivative product rule to the expectation value of the Hamiltonian viewed as a function of λ:

    dE_λ/dλ = (d/dλ)⟨ψ_λ|Ĥ_λ|ψ_λ⟩
            = ⟨∂ψ_λ/∂λ|Ĥ_λ|ψ_λ⟩ + ⟨ψ_λ|Ĥ_λ|∂ψ_λ/∂λ⟩ + ⟨ψ_λ|∂Ĥ_λ/∂λ|ψ_λ⟩
            = E_λ (⟨∂ψ_λ/∂λ|ψ_λ⟩ + ⟨ψ_λ|∂ψ_λ/∂λ⟩) + ⟨ψ_λ|∂Ĥ_λ/∂λ|ψ_λ⟩
            = ⟨ψ_λ|∂Ĥ_λ/∂λ|ψ_λ⟩.
Alternate proof
The Hellmann–Feynman theo |
https://en.wikipedia.org/wiki/Mustard%20plant | The mustard plant is any one of several plant species in the genera Brassica and Sinapis in the family Brassicaceae (the mustard family). Mustard seed is used as a spice. Grinding and mixing the seeds with water, vinegar, or other liquids creates the yellow condiment known as prepared mustard. The seeds can also be pressed to make mustard oil, and the edible leaves can be eaten as mustard greens. Many vegetables are cultivated varieties of mustard plants; domestication may have begun 6,000 years ago.
History
Although some varieties of mustard plants were well-established crops in Hellenistic and Roman times, Zohary and Hopf note, "There are almost no archeological records available for any of these crops." Wild forms of mustard and its relatives, the radish and turnip, can be found over West Asia and Europe, suggesting their domestication took place somewhere in that area. However, Zohary and Hopf conclude: "Suggestions as to the origins of these plants are necessarily based on linguistic considerations." The Encyclopædia Britannica states that mustard was grown by the Indus Civilization of 2500–1700 BC. According to the Saskatchewan Mustard Development Commission, "Some of the earliest known documentation of mustard's use dates back to Sumerian and Sanskrit texts from 3000 BC".
A wide-ranging genetic study of B. rapa announced in 2021 concluded that the species may have been domesticated as long as 6,000 years ago in Central Asia, and that turnips or oilseeds may have been the first product. The results also suggested that a taxonomic re-evaluation of the species might be needed.
Species
White mustard (Sinapis alba) grows wild in North Africa, West Asia, and Mediterranean Europe, and has spread farther by long cultivation; brown mustard (Brassica juncea), originally from the foothills of the Himalayas, is grown commercially in India, Canada, the United Kingdom, Denmark, Bangladesh and the United States; black mustard (Brassica nigra) is grown in Argentina, C |
https://en.wikipedia.org/wiki/South%20Sandwich%20Trench | The South Sandwich Trench is a deep arcuate trench in the South Atlantic Ocean lying to the east of the South Sandwich Islands. It is the deepest trench of the Southern Atlantic Ocean, and the second deepest of the Atlantic Ocean after the Puerto Rico Trench. Since the trench extends south of the 60th parallel south, it also contains the deepest point in the Southern Ocean.
The deepest point in the entire trench is the Meteor Deep, whose location prior to February 2019 was identified as at a depth of . This sounding was made during the German Meteor expedition. The depth is named after the German survey ship Meteor, which first surveyed the area as part of its namesake expedition in 1926. The deepest point below the 60th parallel south, the deepest point in the Southern Ocean, is dubbed by Victor Vescovo as the Factorian Deep, a name that he hopes will become official. This point lies at a depth of , and is the only subzero Hadal zone in the world.
The trench is long and has a maximum depth of below sea level at , as measured by a Kongsberg EM124 multibeam sonar from February 2–7, 2019 during the Five Deeps Expedition. This measurement was made during the first complete sonar mapping of the trench which covered its entire length, with a measurement error of +/- . The deepest point of the South Sandwich Trench is only shallower than the deepest point in the Puerto Rico Trench, sometimes known as the Milwaukee or Brownson Deep.
The trench is produced by the subduction of the southernmost portion of the South American Plate beneath the small South Sandwich Plate. The South Sandwich Islands constitute a volcanic island arc which results from this active subduction. Mount Belinda on Montagu Island is an active volcano.
Meteor Deep 2019 survey
In February 2019, however, the Five Deeps Expedition, led by its survey ship , recommended that the location of the Meteor Deep be "relocated" to the newly discovered, truly deepest location at and ± in depth, given |
https://en.wikipedia.org/wiki/Chromoblastomycosis | Chromoblastomycosis is a long-term fungal infection of the skin and subcutaneous tissue (a chronic subcutaneous mycosis).
It can be caused by many different types of fungi which become implanted under the skin, often by thorns or splinters. Chromoblastomycosis spreads very slowly.
It is rarely fatal and usually has a good prognosis, but it can be very difficult to cure. The several treatment options include medication and surgery.
The infection occurs most commonly in tropical or subtropical climates, often in rural areas.
Symptoms and signs
The initial trauma causing the infection is often forgotten or not noticed. The infection builds at the site over a period of years, and a small red papule (skin elevation) appears. The lesion is usually not painful, with few, if any symptoms. Patients rarely seek medical care at this point.
Several complications may occur. Usually, the infection slowly spreads to the surrounding tissue while still remaining localized to the area around the original wound. However, sometimes the fungi may spread through the blood vessels or lymph vessels, producing metastatic lesions at distant sites. Another possibility is secondary infection with bacteria. This may lead to lymph stasis (obstruction of the lymph vessels) and elephantiasis. The nodules may become ulcerated, or multiple nodules may grow and coalesce, affecting a large area of a limb.
Cause
Chromoblastomycosis is believed to originate in minor trauma to the skin, usually from vegetative material such as thorns or splinters; this trauma implants fungi in the subcutaneous tissue. In many cases, the patient will not notice or remember the initial trauma, as symptoms often do not appear for years. The fungi most commonly observed to cause chromoblastomycosis are:
Fonsecaea pedrosoi
Cladophialophora bantiana causes both cutaneous chromoblastomycosis and systemic phaeohyphomycosis
Phialophora verrucosa
Cladophialophora carrionii
Fonsecaea compacta
Mechanism
Over months to |
https://en.wikipedia.org/wiki/Issue%20tracking%20system | An issue tracking system (also ITS, trouble ticket system, support ticket, request management or incident ticket system) is a computer software package that manages and maintains lists of issues. Issue tracking systems are generally used in collaborative settings, especially in large or distributed collaborations, but can also be employed by individuals as part of a time management or personal productivity regimen. These systems often encompass resource allocation, time accounting, priority management, and oversight workflow in addition to implementing a centralized issue registry.
Background
In the institutional setting, issue tracking systems are commonly used in an organization's customer support call center to create, update, and resolve reported customer issues, or even issues reported by that organization's other employees. A support ticket should include vital information for the account involved and the issue encountered. An issue tracking system often also contains a knowledge base containing information on each customer, resolutions to common problems, and other such data.
An issue tracking system is similar to a "bugtracker", and often a software company will sell both; some bugtrackers are capable of being used as an issue tracking system, and vice versa. Consistent use of an issue or bug tracking system is considered one of the "hallmarks of a good software team".
A ticket element, within an issue tracking system, is a running report on a particular problem, its status, and other relevant data. They are commonly created in a help desk or call center environment and almost always have a unique reference number, also known as a case, issue or call log number which is used to allow the user or help staff to quickly locate, add to or communicate the status of the user's issue or request.
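A ticket of the kind described, carrying a unique reference number and a running status report, might be modeled minimally as follows (an illustrative sketch; the field names are assumptions, not taken from any particular product):

```python
from dataclasses import dataclass, field
from itertools import count

_ref_numbers = count(1)  # unique case/issue/call-log numbers

@dataclass
class Ticket:
    """Running report on a particular problem: status plus other relevant data."""
    summary: str
    reporter: str
    status: str = "open"
    ref: int = field(default_factory=lambda: next(_ref_numbers))
    history: list = field(default_factory=list)

    def update(self, note, status=None):
        self.history.append(note)   # append to the running report
        if status is not None:
            self.status = status

ticket = Ticket("Printer offline", reporter="alice")
ticket.update("Rebooted print server", status="resolved")
print(ticket.ref, ticket.status)  # 1 resolved
```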
Tickets are so called because they originated as small cards within a traditional wall-mounted work-planning system when this kind of support started. Oper |
https://en.wikipedia.org/wiki/Common%20Locale%20Data%20Repository | The Common Locale Data Repository (CLDR) is a project of the Unicode Consortium to provide locale data in XML format for use in computer applications. CLDR contains locale-specific information that an operating system will typically provide to applications.
CLDR is written in the Locale Data Markup Language (LDML).
Details
Among the types of data that CLDR includes are the following:
Translations for language names
Translations for territory and country names
Translations for currency names, including singular/plural modifications
Translations for weekday, month, era, period of day, in full and abbreviated forms
Translations for time zones and example cities (or similar) for time zones
Translations for calendar fields
Patterns for formatting/parsing dates or times of day
Exemplar sets of characters used for writing the language
Patterns for formatting/parsing numbers
Rules for language-adapted collation
Rules for spelling out numbers as words
Rules for formatting numbers in traditional numeral systems (such as Roman and Armenian numerals)
Rules for transliteration between scripts, much of it based on BGN/PCGN romanization
The information is currently used in International Components for Unicode, Apple's macOS, LibreOffice, MediaWiki, and IBM's AIX, among other applications and operating systems.
CLDR overlaps somewhat with ISO/IEC 15897 (POSIX locales). POSIX locale information can be derived from CLDR by using some of CLDR's conversion tools.
CLDR is maintained by a technical committee which includes employees from IBM, Apple, Google, Microsoft, and some government-based organizations. The committee is chaired by John Emmons, of IBM; Mark Davis, of Google, is vice-chair.
The CLDR covers 400+ languages. |
https://en.wikipedia.org/wiki/Variational%20perturbation%20theory | In mathematics, variational perturbation theory (VPT) is a mathematical method to convert divergent power series in a small expansion parameter, say g, into a convergent series in powers of g^(−ω), where ω is a critical exponent (the so-called index of "approach to scaling" introduced by Franz Wegner). This is possible with the help of variational parameters, which are determined by optimization order by order in g. The partial sums are converted to convergent partial sums by a method developed in 1992.
Most perturbation expansions in quantum mechanics are divergent for any small coupling strength g. They can be made convergent by VPT (for details see the first textbook cited below). The convergence is exponentially fast.
After its success in quantum mechanics, VPT has been developed further to become an important mathematical tool in quantum field theory with its anomalous dimensions. Applications focus on the theory of critical phenomena. It has led to the most accurate predictions of critical exponents.
https://en.wikipedia.org/wiki/Electronic%20correlation | Electronic correlation is the interaction between electrons in the electronic structure of a quantum system. The correlation energy is a measure of how much the movement of one electron is influenced by the presence of all other electrons.
Atomic and molecular systems
Within the Hartree–Fock method of quantum chemistry, the antisymmetric wave function is approximated by a single Slater determinant. Exact wave functions, however, cannot generally be expressed as single determinants. The single-determinant approximation does not take into account Coulomb correlation, leading to a total electronic energy different from the exact solution of the non-relativistic Schrödinger equation within the Born–Oppenheimer approximation. Therefore, the Hartree–Fock limit is always above this exact energy. The difference is called the correlation energy, a term coined by Löwdin. The concept of the correlation energy was studied earlier by Wigner.
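In symbols, the definition above can be stated compactly (the sign convention follows Löwdin, so the correlation energy is non-positive; its magnitude is often what is quoted):

```latex
E_{\mathrm{corr}} = E_{\mathrm{exact}} - E_{\mathrm{HF}},
\qquad E_{\mathrm{HF}} \ge E_{\mathrm{exact}} \;\Longrightarrow\; E_{\mathrm{corr}} \le 0 .
```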
A certain amount of electron correlation is already considered within the HF approximation, found in the electron exchange term describing the correlation between electrons with parallel spin. This basic correlation prevents two parallel-spin electrons from being found at the same point in space and is often called Fermi correlation. Coulomb correlation, on the other hand, describes the correlation between the spatial position of electrons due to their Coulomb repulsion, and is responsible for chemically important effects such as London dispersion. There is also a correlation related to the overall symmetry or total spin of the considered system.
The term correlation energy has to be used with caution. First, it is usually defined as the energy difference of a correlated method relative to the Hartree–Fock energy; but this is not the full correlation energy, because some correlation is already included in HF. Second, the correlation energy is highly dependent on the basis set used. The "exact" energy is the energy with full correlation |
https://en.wikipedia.org/wiki/Critical%20exponent | Critical exponents describe the behavior of physical quantities near continuous phase transitions. It is believed, though not proven, that they are universal, i.e. they do not depend on the details of the physical system, but only on some of its general features. For instance, for ferromagnetic systems, the critical exponents depend only on:
the dimension of the system
the range of the interaction
the spin dimension
These properties of critical exponents are supported by experimental data. Analytical results can be theoretically achieved in mean field theory in high dimensions or when exact solutions are known such as the two-dimensional Ising model. The theoretical treatment in generic dimensions requires the renormalization group approach or the conformal bootstrap techniques.
Phase transitions and critical exponents appear in many physical systems such as water at the critical point, in magnetic systems, in superconductivity, in percolation and in turbulent fluids.
The critical dimension above which mean field exponents are valid varies with the systems and can even be infinite.
Definition
The control parameter that drives phase transitions is often temperature but can also be other macroscopic variables like pressure or an external magnetic field. For simplicity, the following discussion works in terms of temperature; the translation to another control parameter is straightforward. The temperature at which the transition occurs is called the critical temperature T_c. We want to describe the behavior of a physical quantity f in terms of a power law around the critical temperature; we introduce the reduced temperature

    τ := (T − T_c)/T_c,

which is zero at the phase transition, and define the critical exponent k:

    k := lim_{τ→0} log|f(τ)| / log|τ|.

This results in the power law we were looking for:

    f(τ) ∝ τ^k,  as τ → 0.

It is important to remember that this represents the asymptotic behavior of the function f(τ) as τ → 0.

More generally one might expect

    f(τ) = A τ^k (1 + b τ^{k₁} + …).
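Numerically, a critical exponent is the slope of log f versus log τ near the transition. A pure-stdlib sketch with synthetic data (the pure power-law form f(τ) = A·τ^k is an assumption used only to generate test data):

```python
import math

def estimate_exponent(f, taus):
    """Least-squares slope of log f(tau) versus log tau -- the critical exponent k."""
    xs = [math.log(t) for t in taus]
    ys = [math.log(f(t)) for t in taus]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Synthetic quantity with known exponent k = 0.5; the amplitude A = 3 does not affect k.
f = lambda tau: 3.0 * tau ** 0.5
taus = [10 ** (-e) for e in range(1, 6)]   # tau -> 0
print(round(estimate_exponent(f, taus), 6))  # 0.5
```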
The most important critical exponents
Let us assume that the system has two different phases c |
https://en.wikipedia.org/wiki/Minimum%20information%20about%20a%20microarray%20experiment | Minimum information about a microarray experiment (MIAME) is a standard created by the FGED Society for reporting microarray experiments.
MIAME is intended to specify all the information necessary to interpret the results of the experiment unambiguously and to potentially reproduce the experiment. While the standard defines the content required for compliant reports, it does not specify the format in which this data should be presented. MIAME describes the minimum information required to ensure that microarray data can be easily interpreted and that results derived from its analysis can be independently verified. There are a number of file formats used to represent this data, as well as both public and subscription-based repositories for such experiments. Additionally, software exists to aid the preparation of MIAME-compliant reports.
MIAME revolves around six key components: raw data, normalized data, sample annotations, experimental design, array annotations, and data protocols. |
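Since MIAME fixes required content rather than format, a compliance check over the six components can be sketched as a simple checklist (illustrative only; real repository validators are far more detailed than this assumption-laden sketch):

```python
# The six key MIAME components, used as a minimal presence checklist.
REQUIRED = ["raw data", "normalized data", "sample annotations",
            "experimental design", "array annotations", "data protocols"]

def is_miame_compliant(report):
    """A report (in any format) passes only if every component is present and non-empty."""
    return all(report.get(k) for k in REQUIRED)

print(is_miame_compliant({k: "..." for k in REQUIRED}))  # True
print(is_miame_compliant({"raw data": "..."}))           # False: five components missing
```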
https://en.wikipedia.org/wiki/Diplosome | In cell biology, a diplosome refers to the pair of centrioles which are arranged perpendicularly to one another located near the nucleus. The diplosome plays a role in many processes such as in primary cilium development, spermiogenesis of teleosts, and mitosis. The rigid arrangement of centrioles in a diplosome is generally established after the procentriole is formed during mitosis.
Role of Diplosome in Primary Cilia Development
Primary cilia develop from the diplosome. Although the mechanism is not fully defined, during prometaphase of mitosis the diplosome undergoes many changes that allow cilium resorption to occur.
Role of Diplosome in Spermiogenesis of teleosts
The type of spermiogenesis a teleost undergoes depends on the location of the diplosome on the nucleus, which ultimately determines where the flagellum will form. In type I spermiogenesis, the diplosome sits at a lateral position on the nucleus, producing a flagellum perpendicular to the nucleus. In type II spermiogenesis, the diplosome sits at the apical pole of the nucleus, producing a flagellum parallel to the nucleus. In both scenarios the diplosome reaches the nuclear fossa after nuclear rotation.
Diplosome in Mitosis
Diplosomes first appear during G2 phase of the cell cycle. In the early stages of mitosis the diplosome will split and begin to move in opposite directions until both reach edges of the nucleus. At this point one diplosome will return to the center of the nucleus while the other continues over the edge moving toward the center on the opposite side of the nucleus. The separated diplosome will stay in place until the nuclear envelope has broken. |
https://en.wikipedia.org/wiki/Courant%E2%80%93Friedrichs%E2%80%93Lewy%20condition | In mathematics, the convergence condition by Courant–Friedrichs–Lewy is a necessary condition for convergence while solving certain partial differential equations (usually hyperbolic PDEs) numerically. It arises in the numerical analysis of explicit time integration schemes, when these are used for the numerical solution. As a consequence, the time step must be less than a certain time in many explicit time-marching computer simulations, otherwise the simulation produces incorrect results. The condition is named after Richard Courant, Kurt Friedrichs, and Hans Lewy who described it in their 1928 paper.
Heuristic description
The principle behind the condition is that, for example, if a wave is moving across a discrete spatial grid and we want to compute its amplitude at discrete time steps of equal duration, then this duration must be less than the time for the wave to travel to adjacent grid points. As a corollary, when the grid point separation is reduced, the upper limit for the time step also decreases. In essence, the numerical domain of dependence of any point in space and time (as determined by initial conditions and the parameters of the approximation scheme) must include the analytical domain of dependence (wherein the initial conditions have an effect on the exact value of the solution at that point) to assure that the scheme can access the information required to form the solution.
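As a sketch of the corollary above (illustrative numbers, not from the article): for 1-D linear advection solved with an explicit upwind scheme, the Courant number C = u·Δt/Δx must satisfy C ≤ C_max, with C_max = 1 for this scheme, so the largest stable time step shrinks in proportion to the grid spacing.

```python
def max_stable_dt(wave_speed, dx, c_max=1.0):
    """Largest time step satisfying the CFL bound C = u*dt/dx <= c_max.

    c_max = 1.0 is the limit for an explicit first-order upwind scheme;
    other schemes have other limits.
    """
    return c_max * dx / wave_speed

dx = 0.01   # grid spacing (m), illustrative
u = 2.0     # wave speed (m/s), illustrative
dt = max_stable_dt(u, dx)
print(dt)   # 0.005 s; halving dx would also halve the allowed time step
```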
Statement
To make a reasonably formally precise statement of the condition, it is necessary to define the following quantities:
Spatial coordinate: one of the coordinates of the physical space in which the problem is posed
Spatial dimension of the problem: the number of spatial dimensions, i.e., the number of spatial coordinates of the physical space where the problem is posed. Typical values are 1, 2 and 3.
Time: the coordinate, acting as a parameter, which describes the evolution of the system, distinct from the spatial coordinates
The spatial coordinates and the time are disc |
https://en.wikipedia.org/wiki/Specific%20energy | Specific energy or massic energy is energy per unit mass. It is also sometimes called gravimetric energy density, which is not to be confused with energy density, which is defined as energy per unit volume. It is used to quantify, for example, stored heat and other thermodynamic properties of substances such as specific internal energy, specific enthalpy, specific Gibbs free energy, and specific Helmholtz free energy. It may also be used for the kinetic energy or potential energy of a body. Specific energy is an intensive property, whereas energy and mass are extensive properties.
The SI unit for specific energy is the joule per kilogram (J/kg). Other units still in use worldwide in some contexts are the kilocalorie per gram (Cal/g or kcal/g), mostly in food-related topics, and watt hours per kilogram in the field of batteries. In some countries the Imperial unit BTU per pound (Btu/lb) is used in some engineering and applied technical fields.
The concept of specific energy is related to but distinct from the notion of molar energy in chemistry, that is energy per mole of a substance, which uses units such as joules per mole, or the older but still widely used calories per mole.
Table of some non-SI conversions
The following table shows the factors for conversion to J/kg of some non-SI units:
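The conversion table itself did not survive extraction; the factors below are standard values (thermochemical kilocalorie, International Table Btu) given for illustration rather than reproduced from the article:

```python
# Standard conversion factors to the SI unit J/kg (illustrative, not the
# article's own table).
TO_J_PER_KG = {
    "kcal/kg": 4184.0,   # thermochemical kilocalorie
    "Wh/kg":   3600.0,
    "kWh/kg":  3.6e6,
    "Btu/lb":  2326.0,   # International Table Btu per pound (approx.)
}

def to_si(value, unit):
    """Convert a specific-energy value in one of the listed units to J/kg."""
    return value * TO_J_PER_KG[unit]

print(to_si(1.0, "Wh/kg"))  # 3600.0
```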
For a table giving the specific energy of many different fuels as well as batteries, see the article Energy density.
Ionising radiation
For ionising radiation, the gray is the SI unit of specific energy absorbed by matter known as absorbed dose, from which the SI unit the sievert is calculated for the stochastic health effect on tissues, known as dose equivalent. The International Committee for Weights and Measures states: "In order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H, the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed |
https://en.wikipedia.org/wiki/Yamaha%20V9958 | The Yamaha V9958 is a Video Display Processor used in the MSX2+ and MSX turbo R series of home computers, as the successor to the Yamaha V9938 used in the MSX2. The main new features are three graphical YJK modes with up to 19268 colors and horizontal scrolling registers. The V9958 was not as widely adopted as the V9938.
Specifications
Video RAM: 128 KB + 64 KB of expanded VRAM
Text modes: 80 x 24 and 32 x 24
Resolution: 512 x 212 (4 or 16 colors out of 512) and 256 x 212 (16, 256, 12499 or 19268 colors)
Sprites: 32, 16 colors, max 8 per horizontal line
Hardware acceleration for copy, line, fill, etc.
Interlacing to double vertical resolution
Horizontal and vertical scroll registers
Feature changes from the V9938
The following features were added to or removed from the Yamaha V9938 specifications:
Added horizontal scrolling registers
Added YJK graphics modes (similar to YUV):
G7 + YJK + YAE: 256 x 212, 12499 colors + 16 color palette
G7 + YJK: 256 x 212, 19268 colors
Added the ability to execute hardware accelerated commands in non-bitmap screen modes
Removed lightpen and mouse functions
Removed composite video output function
MSX-specific terminology
On MSX, the screen modes are often referred to by their assigned number in MSX BASIC. This mapping is as follows: |
https://en.wikipedia.org/wiki/Yamaha%20V9938 | The Yamaha V9938 is a video display processor (VDP) used on the MSX2 home computer, as well as on the Geneve 9640 enhanced TI-99/4A clone and the Tatung Einstein 256. It was also used in a few MSX1 computers, in a configuration with 16kB VRAM.
The Yamaha V9938, also known as MSX-Video or VDP (Video Display Processor), is the successor of the Texas Instruments TMS9918 used in the MSX1 and other systems. The V9938 was in turn succeeded by the Yamaha V9958.
Specifications
Video RAM: 16–192 KB
Text modes: 80 × 24, 40 × 24 and 32 × 24
Resolution: 512 × 212 (16 colors from 512), 256 × 212 (16 colors from 512) and 256 × 212 (256 colors)
Sprites: 32, 16 colors, max 8 per horizontal line
Hardware acceleration for copy, line, fill and logical operations available
Interlacing to double vertical resolution
Vertical scroll register
Detailed specifications
Video RAM: 4 possible configurations
16 KB (modes G4 up to G7 will not be available)
64 KB (modes G6 and G7 will not be available)
128 KB: most common configuration
192 KB, where 64 KB is extended-VRAM (only available as back-buffer for G4 and G5 modes)
Clock: 21 MHz
Video output frequency: 15 kHz
Sprites: 32, 16 colors (1 per line. 3, 7 or 15 colors/line by using the CC attribute), max 8 per horizontal line
Hardware acceleration, with copy, line, fill etc. With or without logical operations.
Vertical scroll register
Capable of superimposition and digitization
Support for connecting a lightpen and a mouse
Resolution:
Horizontal: 256 or 512
Vertical: 192p, 212p, 384i or 424i
Color modes:
Paletted RGB: 16 colors out of 512
Fixed RGB: 256 colors
Screen modes
Text modes:
T1: 40 × 24 with 2 colors (out of 512)
T2: 80 × 24 with 4 colors (out of 512)
All text modes can have 26.5 rows as well.
Pattern modes
G1: 256 × 192 with 16 paletted colors and 1 table of 8×8 patterns
G2: 256 × 192 with 16 paletted colors and 3 tables of 8×8 patterns
G3: 256 × 192 with 16 paletted colors and 3 tables of 8×8 patt |
https://en.wikipedia.org/wiki/Lymph%20node%20biopsy | Lymph node biopsy is a test in which a lymph node or a piece of a lymph node is removed for examination under a microscope (see: biopsy).
The lymphatic system is made up of several lymph nodes connected by lymph vessels. The nodes produce white blood cells (lymphocytes) that fight infections. When an infection is present, the lymph nodes swell, produce more white blood cells, and attempt to trap the organisms that are causing the infection. The lymph nodes also try to trap cancer cells.
Imaging studies include chest X-ray (CXR), CT scans of the abdomen, chest, pelvis and neck, and PET scans.
Laboratory studies include CBC, ESR, serum ferritin and bone marrow aspiration.
Indications
The test is used to help determine the cause of lymph node enlargement (swollen glands or lymphadenitis). It may also determine whether tumors in the lymph node are cancerous or noncancerous. Enlarged lymph nodes may be caused by a number of conditions, ranging from very mild infections to serious malignancies. Benign conditions can often be distinguished from cancerous and infectious processes by microscopic examination. The pathologist may also perform additional tests on the lymph node tissue to assist in making a diagnosis.
Some of the conditions where abnormal values are obtained are:
Hodgkin's lymphoma
Non-Hodgkin's lymphoma
Sarcoidosis
tuberculous cervical lymphadenitis (scrofula)
Lymph node biopsies may be performed to evaluate the spread of cancer. See Lymphadenectomy#With sentinel node biopsy.
However, sentinel lymph node biopsy for evaluating early, thin melanoma has not been shown to improve survival, and for this reason should not be performed. Patients with melanoma in situ, T1a melanoma or T1b melanoma ≤ 0.5 mm have a low risk of cancer spreading to lymph nodes and high 5-year survival rates, so this kind of biopsy is unnecessary.
Procedure
The test is done in an operating room in a hospital, or at an outpatient surgical facility. There are two ways the sample may be obtained:
Needle biopsy
Open (excisional) bio |
https://en.wikipedia.org/wiki/Hardware%20acceleration | Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both.
To perform computing tasks more efficiently, generally one can invest time and money in improving the software, improving the hardware, or both. There are various approaches with advantages and disadvantages in terms of decreased latency, increased throughput and reduced energy consumption. Typical advantages of focusing on software may include greater versatility, more rapid development, lower non-recurring engineering costs, heightened portability, and ease of updating features or patching bugs, at the cost of overhead to compute general operations. Advantages of focusing on hardware may include speedup, reduced power consumption, lower latency, increased parallelism and bandwidth, and better utilization of area and functional components available on an integrated circuit; at the cost of lower ability to update designs once etched onto silicon and higher costs of functional verification, times to market, and need for more parts. In the hierarchy of digital computing systems ranging from general-purpose processors to fully customized hardware, there is a tradeoff between flexibility and efficiency, with efficiency increasing by orders of magnitude when any given application is implemented higher up that hierarchy. This hierarchy includes general-purpose processors such as CPUs, more specialized processors such as programmable shaders in a GPU, fixed-function implemented on field-programmable gate arrays (FPGAs), and fixed-function implemented on application-specific integrated circuits (ASICs).
Hardware acceleration is advantageous for performance, and practical when the functions are fixed so updates are not as ne |
https://en.wikipedia.org/wiki/Fine-needle%20aspiration | Fine-needle aspiration (FNA) is a diagnostic procedure used to investigate lumps or masses. In this technique, a thin (23–25 gauge (0.52 to 0.64 mm outer diameter)), hollow needle is inserted into the mass for sampling of cells that, after being stained, are examined under a microscope (biopsy). The sampling and biopsy considered together are called fine-needle aspiration biopsy (FNAB) or fine-needle aspiration cytology (FNAC) (the latter to emphasize that any aspiration biopsy involves cytopathology, not histopathology). Fine-needle aspiration biopsies are very safe minor surgical procedures. Often, a major surgical (excisional or open) biopsy can be avoided by performing a needle aspiration biopsy instead, eliminating the need for hospitalization. In 1981, the first fine-needle aspiration biopsy in the United States was done at Maimonides Medical Center. Today, this procedure is widely used in the diagnosis of cancer and inflammatory conditions. Fine needle aspiration is generally considered a safe procedure. Complications are infrequent.
Aspiration is safer and far less traumatic than an open biopsy; complications beyond bruising and soreness are rare. However, the cells of interest may be too few for a conclusive diagnosis (an inconclusive result) or may be missed entirely (a false negative).
Medical uses
This type of sampling is performed for one of two reasons:
A biopsy is performed on a lump or a tissue mass when its nature is in question.
For known tumors, this biopsy is performed to assess the effect of treatment or to obtain tissue for special studies.
When the lump can be felt, the biopsy is usually performed by a cytopathologist or a surgeon. In this case, the procedure is usually short and simple. Otherwise, it may be performed by an interventional radiologist, a doctor with training in performing such biopsies under x-ray or ultrasound guidance. In this case, the procedure may require more extensive preparation and take more time to perform.
Also, fine-needle aspiration is the main me |
https://en.wikipedia.org/wiki/Schanuel%27s%20conjecture | In mathematics, specifically transcendental number theory, Schanuel's conjecture is a conjecture made by Stephen Schanuel in the 1960s concerning the transcendence degree of certain field extensions of the rational numbers.
Statement
The conjecture is as follows:
Given any n complex numbers z1, ..., zn that are linearly independent over the rational numbers Q, the field extension Q(z1, ..., zn, e^z1, ..., e^zn) has transcendence degree at least n over Q.
The conjecture can be found in Lang (1966).
Consequences
The conjecture, if proven, would generalize most known results in transcendental number theory. The special case where the numbers z1,...,zn are all algebraic is the Lindemann–Weierstrass theorem. If, on the other hand, the numbers are chosen so as to make exp(z1),...,exp(zn) all algebraic then one would prove that linearly independent logarithms of algebraic numbers are algebraically independent, a strengthening of Baker's theorem.
The Gelfond–Schneider theorem follows from this strengthened version of Baker's theorem, as does the currently unproven four exponentials conjecture.
Schanuel's conjecture, if proved, would also settle whether numbers such as e + π and e^e are algebraic or transcendental, and prove that e and π are algebraically independent simply by setting z1 = 1 and z2 = πi, and using Euler's identity.
Euler's identity states that e^(iπ) + 1 = 0. If Schanuel's conjecture is true then this is, in some precise sense involving exponential rings, the only relation between e, π, and i over the complex numbers.
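The algebraic-independence argument, written out (this is the usual textbook derivation, not quoted from the article): take z1 = 1 and z2 = πi, which are linearly independent over the rationals since πi is not real. Schanuel's conjecture then gives

```latex
\operatorname{trdeg}_{\mathbb{Q}} \mathbb{Q}\bigl(1,\; \pi i,\; e^{1},\; e^{\pi i}\bigr)
  = \operatorname{trdeg}_{\mathbb{Q}} \mathbb{Q}(\pi i,\; e,\; -1)
  = \operatorname{trdeg}_{\mathbb{Q}} \mathbb{Q}(\pi,\; e) \;\ge\; 2,
```

so e and π would be algebraically independent; here e^{πi} = −1 is Euler's identity, and 1 and −1 contribute nothing to the transcendence degree.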
Although ostensibly a problem in number theory, the conjecture has implications in model theory as well. Angus Macintyre and Alex Wilkie, for example, proved that the theory of the real field with exponentiation, exp, is decidable provided Schanuel's conjecture is true. In fact they only needed the real version of the conjecture, defined below, to prove this result, which would be a positive solution to Tarski's exponential function problem.
Related conje |
https://en.wikipedia.org/wiki/Footprinting | Footprinting (also known as reconnaissance) is the technique used for gathering information about computer systems and the entities they belong to. To get this information, a hacker might use various tools and technologies. This information is very useful to a hacker who is trying to crack a whole system.
When used in the computer security lexicon, "Footprinting" generally refers to one of the pre-attack phases; tasks performed before doing the actual attack. Some of the tools used for Footprinting are Sam Spade, nslookup, traceroute, Nmap and neotrace.
Techniques used
DNS queries
Network enumeration
Network queries
Operating system identification
Software used
Wireshark
Uses
It allows a hacker to gain information about the target system or network, which can then be used to carry out attacks on that system. This is why footprinting is often called a pre-attack phase: the gathered information is reviewed in order to plan a complete and successful attack. Footprinting is also used by ethical hackers and penetration testers to find security flaws and vulnerabilities within their own company's network before a malicious hacker does.
Types
There are two types of Footprinting that can be used: active Footprinting and passive Footprinting. Active Footprinting is the process of using tools and techniques, such as performing a ping sweep or using the traceroute command, to gather information on a target. Active Footprinting can trigger a target's Intrusion Detection System (IDS) and may be logged, and thus requires a degree of stealth to perform successfully. Passive Footprinting is the process of gathering information on a target by innocuous, or passive, means. Browsing the target's website, visiting social media profiles of employees, searching for the website on WHOIS, and performing a Google search of the target are all ways of passive Footprinting. Passive Footprinting is the stealthier method since it will not trigger a target's IDS or otherwise |
https://en.wikipedia.org/wiki/Affinity%20maturation | In immunology, affinity maturation is the process by which TFH cell-activated B cells produce antibodies with increased affinity for antigen during the course of an immune response. With repeated exposures to the same antigen, a host will produce antibodies of successively greater affinities. A secondary response can elicit antibodies with several fold greater affinity than in a primary response. Affinity maturation primarily occurs on membrane immunoglobulin of germinal center B cells and as a direct result of somatic hypermutation (SHM) and selection by TFH cells.
In vivo
The process is thought to involve two interrelated processes, occurring in the germinal centers of the secondary lymphoid organs:
Somatic hypermutation: Mutations in the variable, antigen-binding coding sequences (known as complementarity-determining regions (CDR)) of the immunoglobulin genes. The mutation rate is up to 1,000,000 times higher than in cell lines outside the lymphoid system. Although the exact mechanism of the SHM is still not known, a major role for the activation-induced (cytidine) deaminase has been discussed. The increased mutation rate results in 1-2 mutations per CDR and, hence, per cell generation. The mutations alter the binding specificity and binding affinities of the resultant antibodies.
Clonal selection: B cells that have undergone SHM must compete for limiting growth resources, including the availability of antigen and paracrine signals from TFH cells. The follicular dendritic cells (FDCs) of the germinal centers present antigen to the B cells, and the B cell progeny with the highest affinities for antigen, having gained a competitive advantage, are favored for positive selection leading to their survival. Positive selection is based on steady cross-talk between TFH cells and their cognate antigen presenting GC B cell. Because a limited number of TFH cells reside in the germinal center, only highly competitive B cells stably conjugate with TFH cells and thus r |
https://en.wikipedia.org/wiki/Isoetaceae | Isoetaceae is a family including living quillworts (Isoetes) and comparable extinct herbaceous lycopsids (Tomiostrobus). |
https://en.wikipedia.org/wiki/Isoetales | Isoetales, sometimes also written Isoëtales, is an order of plants in the class Lycopodiopsida.
There are about 140-150 living species, all of which are classified in the genus Isoetes (quillworts), with a cosmopolitan distribution, but often scarce to rare. Living species are mostly aquatic or semi-aquatic, and are found in clear ponds and slowly moving streams. Each leaf is slender and broadens downward to a swollen base up to 5 mm wide where the leaves attach in clusters to a bulb-like, underground corm characteristic of most quillworts. This swollen base also contains male and female sporangia, protected by a thin, transparent covering (velum), which is used diagnostically to help identify quillwort species. Quillwort species are very difficult to distinguish by general appearance. The best way to identify them is by examining the megaspores under a microscope.
Isoetes are the only living pteridophytes capable of secondary growth.
Fossils
Some authors include within Isoetales the tree-like "arborescent lycophytes", which formed forests during the Carboniferous period and are often assigned to their own order, Lepidodendrales.
Fossilised specimens of Isoetes beestonii have been found in rocks dating to the latest Permian-earliest Triassic. During the Early Triassic, Isoetales, such as the long-stemmed Pleuromeia were dominant over large areas of the globe. The oldest fossil closely resembling modern quillworts is Isoetites rolandii from the Late Jurassic of North America. |
https://en.wikipedia.org/wiki/Multibus | Multibus is a computer bus standard used in industrial systems. It was developed by Intel Corporation and was adopted as the IEEE 796 bus.
The Multibus specification was important because it was a robust industry standard with a relatively large form factor, allowing complex devices to be designed on it. Because it was well-defined and well-documented, it allowed a Multibus-compatible industry to grow around it, with many companies making card cages and enclosures for it. Many others made CPU, memory, and other peripheral boards. In 1982 there were over 100 Multibus board and systems manufacturers. This allowed complex systems to be built from commercial off-the-shelf hardware, and also allowed companies to innovate by designing a proprietary Multibus board, then integrate it with another vendor's hardware to create a complete system. A good example of this was Sun Microsystems with their Sun-1 and Sun-2 workstations. Sun built custom-designed CPU, memory, SCSI, and video display boards, and then added 3Com Ethernet networking boards, Xylogics SMD disk controllers, Ciprico Tapemaster 1/2 inch tape controllers, Sky Floating Point Processor, and Systech 16-port Terminal Interfaces in order to configure the system as a workstation or a file server. Other workstation vendors who used Multibus-based designs included HP/Apollo and Silicon Graphics.
The Intel Multibus I & II product line was purchased from Intel by RadiSys Corporation, which in 2002 was then purchased by U.S. Technologies, Inc.
Multibus architecture
Multibus was an asynchronous bus that accommodated devices with various transfer rates while maintaining a maximum throughput. It had 20 address lines, so it could address up to 1 MB of Multibus memory and 1 MB of I/O locations. Most Multibus I/O devices only decoded the first 64 KB of address space.
Multibus supported multi-master functionality that allowed it to share the Multibus with multiple processors and other DMA devices.
The standard Multibus for |
https://en.wikipedia.org/wiki/High-nutrient%2C%20low-chlorophyll%20regions | High-nutrient, low-chlorophyll (HNLC) regions are regions of the ocean where the abundance of phytoplankton is low and fairly constant despite the availability of macronutrients. Phytoplankton rely on a suite of nutrients for cellular function. Macronutrients (e.g., nitrate, phosphate, silicic acid) are generally available in higher quantities in surface ocean waters, and are the typical components of common garden fertilizers. Micronutrients (e.g., iron, zinc, cobalt) are generally available in lower quantities and include trace metals. Macronutrients are typically available in millimolar concentrations, while micronutrients are generally available in micro- to nanomolar concentrations. In general, nitrogen tends to be a limiting ocean nutrient, but in HNLC regions it is never significantly depleted. Instead, these regions tend to be limited by low concentrations of metabolizable iron. Iron is a critical phytoplankton micronutrient necessary for enzyme catalysis and electron transport.
Between the 1930s and 1980s, it was hypothesized that iron is a limiting ocean micronutrient, but there were no methods sufficiently reliable for detecting iron in seawater to confirm this hypothesis. In 1989, high concentrations of iron-rich sediments were detected in nearshore coastal waters off the Gulf of Alaska. However, offshore waters had lower iron concentrations and lower productivity despite macronutrient availability for phytoplankton growth. This pattern was observed in other oceanic regions and led to the naming of three major HNLC zones: the North Pacific Ocean, the Equatorial Pacific Ocean, and the Southern Ocean.
The discovery of HNLC regions has fostered scientific debate about the ethics and efficacy of iron fertilization experiments which attempt to draw down atmospheric carbon dioxide by stimulating surface-level photosynthesis. It has also led to the development of hypotheses such as grazing control which poses that HNLC regions are formed, in part, from the grazing |
https://en.wikipedia.org/wiki/Reed%E2%80%93Muller%20code | Reed–Muller codes are error-correcting codes that are used in wireless communications applications, particularly in deep-space communication. Moreover, the proposed 5G standard relies on the closely related polar codes for error correction in the control channel. Due to their favorable theoretical and mathematical properties, Reed–Muller codes have also been extensively studied in theoretical computer science.
Reed–Muller codes generalize the Reed–Solomon codes and the Walsh–Hadamard code. Reed–Muller codes are linear block codes that are locally testable, locally decodable, and list decodable. These properties make them particularly useful in the design of probabilistically checkable proofs.
Traditional Reed–Muller codes are binary codes, which means that messages and codewords are binary strings. When r and m are integers with 0 ≤ r ≤ m, the Reed–Muller code with parameters r and m is denoted as RM(r, m). When asked to encode a message consisting of k bits, where k = C(m,0) + C(m,1) + … + C(m,r) holds (the number of monomials of degree at most r in m variables), the RM(r, m) code produces a codeword consisting of 2^m bits.
Reed–Muller codes are named after David E. Muller, who discovered the codes in 1954, and Irving S. Reed, who proposed the first efficient decoding algorithm.
Description using low-degree polynomials
Reed–Muller codes can be described in several different (but ultimately equivalent) ways. The description that is based on low-degree polynomials is quite elegant and particularly suited for their application as locally testable codes and locally decodable codes.
Encoder
A block code can have one or more encoding functions that map messages to codewords. The Reed–Muller code RM(r, m) has message length k and block length 2^m. One way to define an encoding for this code is based on the evaluation of multilinear polynomials with m variables and total degree at most r. Every multilinear polynomial over the finite field with two elements can be written as follows:
The x_i are the variables of the polynomial, and the values c_S are the coefficients of the poly |
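A minimal sketch of this polynomial-evaluation encoder (helper name and coefficient ordering are arbitrary choices, not prescribed by the code's definition): each message bit is a coefficient of one monomial of degree at most r, and the codeword is the polynomial's value at every point of {0,1}^m.

```python
from itertools import combinations, product

def rm_encode(r, m, coeffs):
    """Encode a message as an RM(r, m) codeword by evaluating the
    multilinear polynomial with the given coefficients at all 2^m points."""
    # One monomial per subset S of {0,...,m-1} with |S| <= r;
    # the empty subset is the constant term.
    monomials = [s for d in range(r + 1) for s in combinations(range(m), d)]
    assert len(coeffs) == len(monomials)  # k = C(m,0) + ... + C(m,r)
    codeword = []
    for point in product([0, 1], repeat=m):
        # Evaluate: a monomial contributes its coefficient iff every
        # variable in it is 1 at this point; arithmetic is mod 2.
        value = sum(c for c, s in zip(coeffs, monomials)
                    if all(point[i] for i in s)) % 2
        codeword.append(value)
    return codeword

# RM(1, 3): k = 1 + 3 = 4 message bits, block length 2^3 = 8.
print(rm_encode(1, 3, [1, 0, 1, 1]))  # [1, 0, 0, 1, 1, 0, 0, 1]
```

For RM(1, 3) this reproduces the extended Hamming [8, 4, 4] code; the minimum-weight nonzero codewords have weight 2^(m−r) = 4.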
https://en.wikipedia.org/wiki/KASW | KASW (channel 61) is a television station in Phoenix, Arizona, United States, affiliated with The CW. It is owned by the E. W. Scripps Company alongside ABC affiliate KNXV-TV (channel 15). Both stations share studios on North 44th Street on the city's east side, while KASW's primary transmitter is located on South Mountain.
KASW went on the air in 1995 as the Phoenix affiliate of The WB. Its first owner contracted with KTVK (channel 3) for programming and support services, and KTVK bought the station in 1999. In addition to being an affiliate of The WB and later The CW, the station also broadcast several secondary local sports teams at various times. KASW was split from KTVK in 2014 as the result of KTVK's sale. Scripps acquired it in 2019 and has added local newscasts from KNXV. KASW is the high-power ATSC 3.0 (NextGen TV) station for the Phoenix area and provides the ATSC 3.0 broadcasts of six major Phoenix commercial stations.
History
Prior history of UHF channel 61 in Phoenix
Prior to KASW's sign-on, the UHF channel 61 frequency in the Phoenix market was originally occupied by low-power station K61CA; that station carried a locally programmed music video format known as "Music Channel" and operated from March 15, 1983, until November 12, 1984, closing due to mounting debts and lack of cash to continue operating.
The construction permit for K61CA remained active for several more years; by 1988, it was owned by Channel 61 Development Corporation and was planned as a satellite-fed relay of KSTS, a Telemundo affiliate in San Jose, California.
In November 1987, the FCC allocated channel 61 for full-power use in Phoenix. KUSK-TV applied alongside four other groups; the field was narrowed to three, and Brooks Broadcasting, owned by Chandler farmer Gregory R. Brooks, was granted the permit in February 1991 by the FCC review board.
WB affiliation
Little activity occurred on the permit, with the call sign KAIK; Brooks considered running home shopping on the station |
https://en.wikipedia.org/wiki/Antiprotonic%20helium | Antiprotonic helium is a three-body atom composed of an antiproton and an electron orbiting around a helium nucleus. It is thus made partly of matter, and partly of antimatter. The atom is electrically neutral, since both electrons and antiprotons each have a charge of −1, whereas helium nuclei have a charge of +2. It has the longest lifetime of any experimentally producible matter-antimatter bound state.
Production
These exotic atoms can be produced by simply mixing antiprotons with ordinary helium gas; the antiproton spontaneously removes one of the two electrons contained in a normal helium atom in a chemical reaction, and then begins to orbit the helium nucleus in the electron's place. This happens for approximately 3% of the antiprotons introduced into the helium gas. The antiproton's orbit, which has a large principal quantum number and angular momentum quantum number of around 38, lies far away from the surface of the helium nucleus. The antiproton can thus orbit the nucleus for tens of microseconds before finally falling to its surface and annihilating. This contrasts with other types of exotic atoms, which typically decay within picoseconds.
Laser spectroscopy
Antiprotonic helium atoms are under study by the ASACUSA experiment at CERN. In these experiments, the atoms are first produced by stopping a beam of antiprotons in helium gas. The atoms are then irradiated by powerful laser beams, which cause the antiprotons in them to resonate and jump from one atomic orbit to another.
As in spectroscopy of other bound states, Doppler broadening and other effects present challenges to precision. Researchers use a variety of techniques to obtain accurate results. One way to exceed Doppler-limited precision is two-photon spectroscopy. The ASACUSA Collaboration has studied antiprotonic ^3He^+ and ^4He^+ atoms, with the antiproton occupying a high Rydberg state with large principal and orbital quantum numbers of around 38, using two-photon spectroscopy.
Counterpropagating Ti:Sa |
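The scale of the Doppler-broadening challenge can be made concrete with a quick first-order estimate. The transition frequency, gas temperature, and atomic mass below are illustrative assumptions for a sketch, not measured ASACUSA values:

```python
import math

def doppler_fwhm(nu0_hz, temp_k, mass_kg):
    """First-order Doppler FWHM of a transition at nu0_hz for a thermal gas."""
    k_b = 1.380649e-23      # Boltzmann constant, J/K
    c = 2.99792458e8        # speed of light, m/s
    return nu0_hz * math.sqrt(8 * math.log(2) * k_b * temp_k / (mass_kg * c ** 2))

# Illustrative numbers: an optical transition near 420 nm (~7e14 Hz),
# cryogenic helium gas at 10 K, total atomic mass of roughly 5 u.
width = doppler_fwhm(7e14, 10.0, 5 * 1.66053907e-27)
```

With these assumed numbers the first-order width comes out near a gigahertz; in two-photon spectroscopy with counterpropagating beams this first-order term cancels, leaving much smaller residual broadening.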
https://en.wikipedia.org/wiki/Painlev%C3%A9%20transcendents | In mathematics, Painlevé transcendents are solutions to certain nonlinear second-order ordinary differential equations in the complex plane with the Painlevé property (the only movable singularities are poles), but which are not generally solvable in terms of elementary functions. They were discovered by Émile Picard, Paul Painlevé, Richard Fuchs, and Bertrand Gambier.
History
Painlevé transcendents have their origin in the study of special functions, which often arise as solutions of differential equations, as well as in the study of isomonodromic deformations of linear differential equations. One of the most useful classes of special functions is the elliptic functions. They are defined by second order ordinary differential equations whose singularities have the Painlevé property: the only movable singularities are poles. This property is rare in nonlinear equations. Poincaré and L. Fuchs showed that any first order equation with the Painlevé property can be transformed into the Weierstrass elliptic equation or the Riccati equation, which can all be solved explicitly in terms of integration and previously known special functions. Émile Picard pointed out that for orders greater than 1, movable essential singularities can occur, and found a special case of what was later called the Painlevé VI equation (see below).
(For orders greater than 2 the solutions can have moving natural boundaries.) Around 1900, Paul Painlevé studied second order differential equations with no movable singularities. He found that, up to certain transformations, every such equation of the form y″ = R(y′, y, x) (with R a rational function) can be put into one of fifty canonical forms.
Painlevé found that forty-four of the fifty equations are reducible in the sense that they can be solved in terms of previously known functions, leaving just six equations requiring the introduction of new special functions to solve them. There were some computational errors, and as a result he missed three of the equations, including the general form of Painlevé VI.
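For concreteness, the two simplest of the six irreducible equations can be written down explicitly; these standard forms are taken from common references rather than from the excerpt above:

```latex
% Painlevé I and II, the two simplest of the six irreducible equations
% (\alpha is a constant parameter):
y'' = 6y^2 + x                   \qquad \text{(Painlevé I)}
y'' = 2y^3 + xy + \alpha         \qquad \text{(Painlevé II)}
```

Both have the form y″ = R(y′, y, x) with R rational, and their solutions cannot be expressed through previously known special functions.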
|
https://en.wikipedia.org/wiki/Quantitative%20feedback%20theory | In control theory, quantitative feedback theory (QFT), developed by Isaac Horowitz (Horowitz, 1963; Horowitz and Sidi, 1972), is a frequency domain technique utilising the Nichols chart (NC) in order to achieve a desired robust design over a specified region of plant uncertainty. Desired time-domain responses are translated into frequency domain tolerances, which lead to bounds (or constraints) on the loop transmission function. The design process is highly transparent, allowing a designer to see what trade-offs are necessary to achieve a desired performance level.
Plant templates
Once a model of a system has been obtained, the system can usually be represented by its transfer function (the Laplace transform in the continuous-time domain).
As a result of experimental measurement, values of coefficients in the Transfer Function have a range of uncertainty. Therefore, in QFT every parameter of this function is included into an interval of possible values, and the system may be represented by a family of plants rather than by a standalone expression.
A frequency analysis is performed at a finite number of representative frequencies, and a set of templates is obtained in the NC diagram, each enclosing the behaviour of the open-loop system at one frequency.
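The template computation can be sketched numerically. The second-order plant family and the parameter ranges below are illustrative assumptions, not taken from the text:

```python
import cmath
import math

def plant_response(k, a, w):
    """Frequency response at w (rad/s) of an assumed plant P(s) = k / (s (s + a))."""
    s = 1j * w
    return k / (s * (s + a))

def template(w, k_values, a_values):
    """Nichols-chart template at frequency w: one (phase in degrees, gain in dB)
    point per plant in the uncertain family."""
    points = []
    for k in k_values:
        for a in a_values:
            p = plant_response(k, a, w)
            points.append((math.degrees(cmath.phase(p)),
                           20 * math.log10(abs(p))))
    return points

# Template at w = 1 rad/s for uncertain gains k and pole locations a.
pts = template(1.0, [1, 2, 5, 10], [1, 2, 5])
```

Plotting such point clouds at several frequencies on the Nichols chart gives the templates; their size and shape show how much the uncertainty spreads the open-loop response at each frequency.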
Frequency bounds
Usually system performance is described as robustness to instability (phase and gain margins), rejection of input and output noise disturbances, and reference tracking. In the QFT design methodology these requirements on the system are represented as frequency constraints: conditions that the compensated system loop (controller and plant) must not violate.
With these considerations and the selection of the same set of frequencies used for the templates, the frequency constraints for the behaviour of the system loop are computed and represented on the Nichols Chart (NC) as curves.
To achieve the problem requirements, a set of rules on the Open Loop Transfer Function, for the nominal plant may be found. That means the n |
https://en.wikipedia.org/wiki/Dry%20rot | Dry rot is wood decay caused by one of several species of fungi that digest parts of the wood which give the wood strength and stiffness. It was previously used to describe any decay of cured wood in ships and buildings by a fungus which resulted in a darkly colored deteriorated and cracked condition.
The life-cycle of dry rot can be broken down into four main stages. Dry rot begins as a microscopic spore which, in high enough concentrations, can resemble a fine orange dust. If the spores are subjected to sufficient moisture, they will germinate and begin to grow fine white strands known as hyphae. As the hyphae grow they will eventually form a large mass known as mycelium. The final stage is a fruiting body which pumps new spores out into the surrounding air.
In other fields, the term has been applied to the decay of crop plants by fungi. In health and safety, the term is used to describe the deterioration of rubber, for example the cracking of rubber hoses.
Discussion
Dry rot is the term given to brown rot decay caused by certain fungi that deteriorate timber in buildings and other wooden construction without an apparent source of moisture. The term is a misnomer because all wood decaying fungi need a minimum amount of moisture before decay begins. The decayed wood takes on a dark or browner, crumbly appearance, with cubical-like cracking or checking; it becomes brittle and can eventually be crushed into powder. Chemically, wood attacked by dry rot fungi is decayed by the same process as other brown rot fungi. An outbreak of dry rot within a building can be an extremely serious infestation that is hard to eradicate, requiring drastic remedies to correct. Significant decay can cause instability and may cause the structure to collapse.
The term dry rot, or true dry rot, refers to the decay of timbers from only certain species of fungi that are thought to provide their own source of moisture and nutrients to cause decay in otherwise relatively dry timber. Howev |
https://en.wikipedia.org/wiki/Amodal%20perception | Amodal perception is the perception of the whole of a physical structure when only parts of it affect the sensory receptors. For example, a table will be perceived as a complete volumetric structure even if only part of it—the facing surface—projects to the retina; it is perceived as possessing internal volume and hidden rear surfaces despite the fact that only the near surfaces are exposed to view. Similarly, the world around us is perceived as a surrounding plenum, even though only part of it is in view at any time. Another much quoted example is that of the "dog behind a picket fence" in which a long narrow object (the dog) is partially occluded by fence-posts in front of it, but is nevertheless perceived as a single continuous object. Albert Bregman noted an auditory analogue of this phenomenon: when a melody is interrupted by bursts of white noise, it is nonetheless heard as a single melody continuing "behind" the bursts of noise.
Formulation of the theory is credited to the Belgian psychologist Albert Michotte and Fabio Metelli, an Italian psychologist, with their work developed in recent years by E.S. Reed and the Gestaltists.
Modal completion is a similar phenomenon in which a shape is perceived to be occluding other shapes even when the shape itself is not drawn. Examples include the triangle that appears to be occluding three disks and an outlined triangle in the Kanizsa triangle and the circles and squares that appear in different versions of the Koffka cross.
See also
Developmental psychology
Illusory contours
Intermodal perception
Psychology |
https://en.wikipedia.org/wiki/Back-and-forth%20method | In mathematical logic, especially set theory and model theory, the back-and-forth method is a method for showing isomorphism between countably infinite structures satisfying specified conditions. In particular it can be used to prove that
any two countably infinite densely ordered sets (i.e., linearly ordered in such a way that between any two members there is another) without endpoints are isomorphic. An isomorphism between linear orders is simply a strictly increasing bijection. This result implies, for example, that there exists a strictly increasing bijection between the set of all rational numbers and the set of all real algebraic numbers.
any two countably infinite atomless Boolean algebras are isomorphic to each other.
any two equivalent countable atomic models of a theory are isomorphic.
the Erdős–Rényi model of random graphs, when applied to countably infinite graphs, almost surely produces a unique graph, the Rado graph.
any two many-one complete recursively enumerable sets are recursively isomorphic.
Application to densely ordered sets
As an example, the back-and-forth method can be used to prove Cantor's isomorphism theorem, although this was not Georg Cantor's original proof. This theorem states that two unbounded countable dense linear orders are isomorphic.
Suppose that
(A, ≤A) and (B, ≤B) are linearly ordered sets;
They are both unbounded, in other words neither A nor B has either a maximum or a minimum;
They are densely ordered, i.e. between any two members there is another;
They are countably infinite.
Fix enumerations (without repetition) of the underlying sets:
A = { a1, a2, a3, ... },
B = { b1, b2, b3, ... }.
Now we construct a one-to-one correspondence between A and B that is strictly increasing. Initially no member of A is paired with any member of B.
(1) Let i be the smallest index such that ai is not yet paired with any member of B. Let j be some index such that bj is not yet paired with any member of A and ai can be paired wi |
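The pairing procedure beginning at step (1) can be sketched as a runnable program. As an assumption for illustration, the rationals play the role of A and the dyadic rationals the role of B; both are countable, dense, and unbounded, so every "find a partner in the right interval" search below terminates:

```python
from fractions import Fraction
from itertools import count
from math import gcd

def rationals():
    """Enumerate every rational exactly once: 0, 1, -1, 1/2, -1/2, 2, -2, ..."""
    yield Fraction(0)
    for s in count(2):                      # s = numerator + denominator
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:
                yield Fraction(p, q)
                yield Fraction(-p, q)

def dyadics():
    """Enumerate the dyadic rationals k / 2**h (repeats are harmless below)."""
    for h in count(0):
        bound = (h + 1) * 2 ** h
        for k in range(-bound, bound + 1):
            yield Fraction(k, 2 ** h)

def back_and_forth(enum_a, enum_b, steps):
    """Alternately extend a finite order-preserving pairing between two
    countable dense unbounded orders, as in the steps above."""
    a_seen, b_seen = [], []
    gen_a, gen_b = enum_a(), enum_b()
    fwd, bwd = {}, {}                       # partial maps a -> b and b -> a

    def fetch(seen, gen, i):                # lazily materialise element i
        while len(seen) <= i:
            seen.append(next(gen))
        return seen[i]

    def match(x, source, target_seen, target_gen, used):
        """First unused target element inside the open interval forced by
        x's already-paired neighbours (density guarantees one exists)."""
        lower = [source[u] for u in source if u < x]
        upper = [source[u] for u in source if u > x]
        lo = max(lower) if lower else None
        hi = min(upper) if upper else None
        for j in count(0):
            y = fetch(target_seen, target_gen, j)
            if y in used:
                continue
            if (lo is None or y > lo) and (hi is None or y < hi):
                return y

    for step in range(steps):
        if step % 2 == 0:                   # "forth": next unpaired element of A
            i = 0
            while fetch(a_seen, gen_a, i) in fwd:
                i += 1
            a = a_seen[i]
            b = match(a, fwd, b_seen, gen_b, bwd)
        else:                               # "back": next unpaired element of B
            i = 0
            while fetch(b_seen, gen_b, i) in bwd:
                i += 1
            b = b_seen[i]
            a = match(b, bwd, a_seen, gen_a, fwd)
        fwd[a] = b
        bwd[b] = a
    return fwd
```

Running the alternation forever pairs every element of both enumerations, and the union of the finite stages is the desired order isomorphism; any finite prefix is already strictly increasing.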
https://en.wikipedia.org/wiki/Intel740 | The Intel740, or i740 (codenamed Auburn), is a 350 nm graphics processing unit using an AGP interface released by Intel on February 12, 1998. Intel was hoping to use the i740 to popularize the Accelerated Graphics Port, while most graphics vendors were still using PCI. Released to enormous fanfare, the i740 proved to have disappointing real-world performance, and sank from view after only a few months on the market. Some of its technology lived on in the form of Intel Extreme Graphics, and the concept of an Intel produced graphics processor lives on in the form of Intel HD Graphics and Intel Iris Pro.
History
The i740 has a long and storied history that starts at GE Aerospace as part of their flight simulation systems, notable for their construction of the Project Apollo "Visual Docking Simulator" that was used to train Apollo Astronauts to dock the Command Module and Lunar Module. GE sold their aerospace interests to Martin Marietta in 1992, as a part of Jack Welch's aggressive downsizing of GE. In 1995, Martin Marietta merged with Lockheed to form Lockheed Martin.
In January 1995, Lockheed Martin re-organized their divisions and formed Real3D in order to bring their 3D experience to the civilian market. Real3D had an early brush with success, providing chipsets and overall design to Sega, who used it in a number of arcade game boards, the Model 2 and Model 3. They also formed a joint project with Intel and Chips and Technologies (later purchased by Intel) to produce 3D accelerators for the PC market, under the code name "Auburn".
Auburn was designed specifically to take advantage of (and promote) the use of AGP interface, during the time when many competing 3D accelerators (notably, 3dfx Voodoo Graphics) still used the PCI connection. A unique characteristic, which set the AGP version of the card apart from other similar devices on the market, was the use of on-board memory exclusively for the display frame buffer, with all textures being kept in the computer s |
https://en.wikipedia.org/wiki/Covering%20set | In mathematics, a covering set for a sequence of integers refers to a set of prime numbers such that every term in the sequence is divisible by at least one member of the set. The term "covering set" is used only in conjunction with sequences possessing exponential growth.
Sierpinski and Riesel numbers
The use of the term "covering set" is related to Sierpinski and Riesel numbers. These are odd natural numbers k for which the formula k·2^n + 1 (Sierpinski number) or k·2^n − 1 (Riesel number) produces no prime numbers. Since 1960 it has been known that there exists an infinite number of both Sierpinski and Riesel numbers (as solutions to families of congruences based upon the set {3, 5, 7, 13, 17, 241}) but, because there are an infinitude of numbers of the form k·2^n + 1 or k·2^n − 1 for any k, one can only prove k to be a Sierpinski or Riesel number through showing that every term in the sequence k·2^n + 1 or k·2^n − 1 is divisible by one of the prime numbers of a covering set.
These covering sets form from prime numbers that in base 2 have short periods. To achieve a complete covering set, Wacław Sierpiński showed that a sequence can repeat no more frequently than every 24 numbers. A repeat every 24 numbers gives the covering set {3, 5, 7, 13, 17, 241}, while a repeat every 36 terms can give several different covering sets.
Riesel numbers have the same covering sets as Sierpinski numbers.
Other covering sets
Covering sets (and thus Sierpinski numbers and Riesel numbers) also exist for bases other than 2.
Covering sets are also used to prove the existence of composite generalized Fibonacci sequences with first two terms coprime (primefree sequence), such as the sequence starting with 20615674205555510 and 3794765361567513.
The concept of a covering set can easily be generalised to other sequences which turn out to be much simpler.
In the following examples + is used as it is in regular expressions to mean 1 or more. For example, 91+3 means the set {913, 9113, 91113, …}.
An example are the following eight sequences:
(29·10^n − 191) / 9 or 32+01
(37·10^n + 359) / 9 or 41+51
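The correspondence between the closed form of the first sequence and its digit pattern can be checked directly. The starting index n ≥ 3 is an assumption here (smaller n give values too short to contain a repeated 2):

```python
import re

def term(n):
    """n-th term of (29 * 10**n - 191) / 9, computed as an exact integer."""
    value = 29 * 10 ** n - 191
    assert value % 9 == 0
    return value // 9

# The decimal digits should read 3, then one or more 2s, then 01,
# i.e. the pattern 32+01, for every n >= 3.
assert all(re.fullmatch(r"32\+?2*01|32+01", str(term(n))) for n in range(3, 12))
```

For instance term(3) is 3201 and term(4) is 32201; each extra power of ten inserts one more 2 into the middle of the number.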
|
https://en.wikipedia.org/wiki/Agent%2047 | Agent 47 is a fictional character, the protagonist and the player character of the Hitman video game franchise, developed by IO Interactive. He has been featured in all games of the series, as well as various spin-off media, including two theatrically released films, a series of comics, and two novels. He has been voiced by actor David Bateson in every main entry in the series since its inception in 2000.
The player controls 47, a monotone contract killer without empathy, as he travels around the world to execute hits on various criminals that are assigned to him by Diana Burnwood, his handler within the fictional International Contract Agency (ICA). The character takes his name from being the 47th clone created by various wealthy criminals from around the world, in the hopes of creating an army of obedient soldiers to carry out their commands. As one of the last clones to be created, 47 is among the most skillful, and manages to escape his creators before finding employment with the ICA.
Agent 47 has been positively received by critics for his moral ambiguity and nuanced characterization. Alongside other gaming characters with similar traits, such as Lara Croft, Sam Fisher, and Solid Snake, he is considered one of the most popular and significant characters in video games.
Concept and creation
According to Jacob Andersen, lead designer of Hitman 2: Silent Assassin, Agent 47 went from being "a mean old hairy guy" to having "hi-tech glasses" before getting to his current design. More inspiration came from "comic books, Hong Kong movies" and other similar media. According to game director Rasmus Højengaard, the idea of a clone whose future is decided by the people that created him intrigued the Hitman team. He felt that the idea of creating the "ultimate assassin" by cloning evolved with the character before the first game was completed. The character of 47 is voiced in the video game series by David Bateson, whom the appearance of 47 is based on.
Appearance
Agent |
https://en.wikipedia.org/wiki/Periphyton | Periphyton is a complex mixture of algae, cyanobacteria, heterotrophic microbes, and detritus that is attached to submerged surfaces in most aquatic ecosystems. The related term Aufwuchs (German "surface growth" or "overgrowth") refers to the collection of small animals and plants that adhere to open surfaces in aquatic environments, such as parts of rooted plants.
Periphyton serves as an important food source for invertebrates, tadpoles, and some fish. It can also absorb contaminants, removing them from the water column and limiting their movement through the environment. The periphyton is also an important indicator of water quality; responses of this community to pollutants can be measured at a variety of scales representing physiological to community-level changes. Periphyton has often been used as an experimental system in, e.g., pollution-induced community tolerance studies.
Composition
In both marine and freshwater environments, algae – particularly green algae and diatoms – make up the dominant component of surface growth communities. Small crustaceans, rotifers, and protozoans are also commonly found in fresh water and the sea, but insect larvae, oligochaetes and tardigrades are peculiar to freshwater aufwuchs faunas.
Uses
Periphyton communities are used in aquaculture food production systems for the removal of solid and dissolved pollutants. Their performance in filtration is established and their application as aquacultural feed is being researched.
Periphyton serves as an indicator of water quality because:
It has a naturally high number of species.
It has a fast response to changes.
It is easy to sample.
It is known for tolerance/sensitivity to change.
Threats
A risk for periphyton stems from urbanization. Increased turbidity levels associated with urban sprawl can smother periphyton, causing its detachment from the rocks on which it lives. Periphyton can be important for clearing harmful chemicals from the water and for reducing turbidity.
Food source
Many aquatic |
https://en.wikipedia.org/wiki/Heptomino | A heptomino (or 7-omino) is a polyomino of order 7, that is, a polygon in the plane made of 7 equal-sized squares connected edge-to-edge. The name of this type of figure is formed with the prefix hept(a)-. When rotations and reflections are not considered to be distinct shapes, there are 108 different free heptominoes. When reflections are considered distinct, there are 196 one-sided heptominoes. When rotations are also considered distinct, there are 760 fixed heptominoes.
Symmetry
The figure shows all possible free heptominoes, coloured according to their symmetry groups:
84 heptominoes (coloured grey) have no symmetry. Their symmetry group consists only of the identity mapping.
9 heptominoes (coloured red) have an axis of reflection symmetry aligned with the gridlines. Their symmetry group has two elements, the identity and the reflection in a line parallel to the sides of the squares.
7 heptominoes (coloured green) have an axis of reflection symmetry at 45° to the gridlines. Their symmetry group has two elements, the identity and a diagonal reflection.
4 heptominoes (coloured blue) have point symmetry, also known as rotational symmetry of order 2. Their symmetry group has two elements, the identity and the 180° rotation.
3 heptominoes (coloured purple) have two axes of reflection symmetry, both aligned with the gridlines. Their symmetry group has four elements, the identity, two reflections and the 180° rotation. It is the dihedral group of order 2, also known as the Klein four-group.
1 heptomino (coloured orange) has two axes of reflection symmetry, both aligned with the diagonals. Its symmetry group also has four elements and is likewise the dihedral group of order 2, the Klein four-group.
If reflections of a heptomino are considered distinct, as they are with one-sided heptominoes, then the first and fourth categories above would each double in size, resulting in an extra 88 heptominoes for a total of 196. If rotations are also considered distin |
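The counts above are easy to cross-check. A standard fact, assumed here, is that each free polyomino whose symmetry group has order g corresponds to 8/g distinct fixed polyominoes:

```python
# Free heptominoes per symmetry class, with the order of each symmetry group
# (classes and counts as listed above).
classes = [
    (84, 1),  # no symmetry
    (9, 2),   # mirror axis aligned with the gridlines
    (7, 2),   # mirror axis along a diagonal
    (4, 2),   # point symmetry (180-degree rotation)
    (3, 4),   # two gridline-aligned mirror axes
    (1, 4),   # two diagonal mirror axes
]
free = sum(n for n, g in classes)

# One-sided: classes with no reflection symmetry (the 84 and the 4)
# occur in mirror-image pairs, so they double.
one_sided = 2 * 84 + 2 * 4 + 9 + 7 + 3 + 1

# Fixed: each free heptomino with symmetry group of order g yields
# 8 // g distinct orientations.
fixed = sum(n * (8 // g) for n, g in classes)
```

The three totals reproduce the figures in the opening paragraph: 108 free, 196 one-sided, and 760 fixed heptominoes.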
https://en.wikipedia.org/wiki/Nonomino | A nonomino (or enneomino or 9-omino) is a polyomino of order 9, that is, a polygon in the plane made of 9 equal-sized squares connected edge-to-edge. The name of this type of figure is formed with the prefix non(a)-. When rotations and reflections are not considered to be distinct shapes, there are 1,285 different free nonominoes. When reflections are considered distinct, there are 2,500 one-sided nonominoes. When rotations are also considered distinct, there are 9,910 fixed nonominoes.
Symmetry
The 1,285 free nonominoes can be classified according to their symmetry groups:
1,196 nonominoes have no symmetry. Their symmetry group consists only of the identity mapping.
38 nonominoes have an axis of reflection symmetry aligned with the gridlines. Their symmetry group has two elements, the identity and the reflection in a line parallel to the sides of the squares.
26 nonominoes have an axis of reflection symmetry at 45° to the gridlines. Their symmetry group has two elements, the identity and a diagonal reflection.
19 nonominoes have point symmetry, also known as rotational symmetry of order 2. Their symmetry group has two elements, the identity and the 180° rotation.
4 nonominoes have two axes of reflection symmetry, both aligned with the gridlines. Their symmetry group has four elements, the identity, two reflections and the 180° rotation. It is the dihedral group of order 2, also known as the Klein four-group.
2 nonominoes have four axes of reflection symmetry, aligned with the gridlines and the diagonals, and rotational symmetry of order 4. Their symmetry group, the dihedral group of order 4, has eight elements.
Unlike octominoes, there are no nonominoes with rotational symmetry of order 4 or with two axes of reflection symmetry aligned with the diagonals.
If reflections of a nonomino are considered distinct, as they are with one-sided nonominoes, then the first and fourth categories above double in size, resulting in an extra 1,215 nonominoes for |
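The same bookkeeping works for nonominoes, again assuming the standard 8/g relation between free and fixed counts:

```python
# Free nonominoes per symmetry class, with the order of each symmetry group.
classes = [
    (1196, 1),  # no symmetry
    (38, 2),    # gridline-aligned mirror axis
    (26, 2),    # diagonal mirror axis
    (19, 2),    # point symmetry (180-degree rotation)
    (4, 4),     # two gridline-aligned mirror axes
    (2, 8),     # four mirror axes plus order-4 rotational symmetry
]
free = sum(n for n, g in classes)
# One-sided: the reflection-asymmetric classes (1,196 and 19) double.
one_sided = free + 1196 + 19
# Fixed: 8 // g orientations per free nonomino with symmetry group of order g.
fixed = sum(n * (8 // g) for n, g in classes)
```

This reproduces the figures from the opening paragraph: 1,285 free, 2,500 one-sided (an extra 1,215), and 9,910 fixed nonominoes.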
https://en.wikipedia.org/wiki/154%20%28number%29 | 154 (one hundred [and] fifty-four) is the natural number following 153 and preceding 155.
In mathematics
154 is a nonagonal number. Its prime factorization, 2 × 7 × 11, makes 154 a sphenic number
There is no integer with exactly 154 coprimes below it, making 154 a nontotient; it is also a noncototient, since no integer n satisfies n − φ(n) = 154. Nor is there, in base 10, any integer that, added to the sum of its own digits, yields 154, making 154 a self number
154 is the sum of the first six factorials, if one starts with 0! and assumes that 0! = 1.
With just 17 cuts, a pancake can be cut up into 154 pieces (Lazy caterer's sequence).
The distinct prime factors of 154 add up to 20, and so do those of 153; hence the two form a Ruth–Aaron pair. 154! + 1 is a factorial prime.
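Several of the facts above are one-liners to verify:

```python
from math import factorial

# Sum of the first six factorials, starting from 0! = 1.
assert sum(factorial(k) for k in range(6)) == 154

# Lazy caterer's sequence: 17 straight cuts give at most 1 + 17*18/2 pieces.
assert 1 + 17 * 18 // 2 == 154

def distinct_prime_factors(n):
    """Set of distinct prime factors of n, by trial division."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

# Ruth-Aaron pair: 153 and 154 have equal sums of distinct prime factors.
assert sum(distinct_prime_factors(153)) == sum(distinct_prime_factors(154)) == 20
```

The factorial sum is 1 + 1 + 2 + 6 + 24 + 120, and the prime-factor sums are 3 + 17 for 153 and 2 + 7 + 11 for 154.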
In music
154 is an album by Wire, named for the number of live gigs Wire had performed at that time
In the military
was a United States Navy Trefoil-class concrete barge during World War II
was a United States Navy Admirable-class minesweeper during World War II
was a United States Navy Wickes-class destroyer during World War II
was a United States Navy General G. O. Squier-class transport during World War II
was a United States Navy Haskell-class attack transport during World War II
was a United States Navy Buckley-class destroyer escort ship during World War II
Strike Fighter Squadron 154 (VFA-154) is a United States Navy strike fighter squadron stationed at Naval Air Station Lemoore
Convoy ON-154 was a convoy of ships in December 1942 during World War II
In sports
Major League Baseball teams played 154 games a season prior to expansion in 1961
Golfer Jack Nicklaus played in a record 154 consecutive major championships from the 1957 U.S. Open to the 1998 U.S. Open
In transportation
Seattle Bus Route 154
The Maserati Tipo 154 racecar, also known as 151/4, was produced in 1965
In other fields
154 is also:
The year AD 154 or 154 BC
154 AH is a year in the Islamic calendar that corresponds to 770 – 771 AD
154 Bertha is a dark outer Main belt asteroid |