Dataset columns: id (int64, 580 to 79M), url (string, 31–175 chars), text (string, 9–245k chars), source (string, 1–109 chars), categories (string, 160 classes), token_count (int64, 3 to 51.8k)
53,036,072
https://en.wikipedia.org/wiki/Mu%20Librae
μ Librae (Latinised as Mu Librae) is the Bayer designation for a probable triple star system in the zodiac constellation of Libra. The components have a combined apparent visual magnitude of 5.32, which is bright enough to be faintly visible to the naked eye. With an annual parallax shift of 13.71 mas, the system is located at an estimated distance of around 240 light years. The inner pair consists of two A-type stars that, as of 2006, had an angular separation of 1.79 arc seconds along a position angle of 5.5°. They have an estimated physical separation of 139 AU. The primary, component A, is a visual magnitude 5.69 magnetic Ap star showing overabundances of the elements aluminum, strontium, chromium, and europium. Hence, it has a stellar classification of A1pSrEuCr. It is a photometric variable with two known periods. The surface magnetic field strength is 1,375 gauss. The secondary, component B, is an Am star with a stellar classification of A6m. It has a visual magnitude of 6.72. The tertiary member, component C, is a magnitude 14.70 star at an angular separation of 12.90 arc seconds along a position angle of 294°, as of 2000. References A-type main-sequence stars Am stars Libra (constellation) Libra, Mu BD-13 3986 130559 072489 5523 Ap stars
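The distance quoted above follows directly from the parallax; a quick consistency check (our own sketch using the standard conversion, not from the article):

```python
# d [pc] = 1 / p [arcsec]; 1 pc is approximately 3.2616 light years
parallax_mas = 13.71
d_pc = 1000.0 / parallax_mas   # about 72.9 parsecs
d_ly = d_pc * 3.2616           # about 238 ly, consistent with "around 240 light years"
print(f"{d_pc:.1f} pc  {d_ly:.0f} ly")
```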
Mu Librae
Astronomy
309
22,113,019
https://en.wikipedia.org/wiki/Redheffer%20matrix
In mathematics, a Redheffer matrix, often denoted $A_n$ and studied by Redheffer (1977), is a square (0,1) matrix whose entries $a_{ij}$ are 1 if $i$ divides $j$ or if $j = 1$; otherwise, $a_{ij} = 0$. It is useful in some contexts to express Dirichlet convolutions, or convolved divisor sums, in terms of matrix products involving the transpose of the Redheffer matrix.

Variants and definitions of component matrices

Since the invertibility of the Redheffer matrices is complicated by the initial column of ones in the matrix, it is often convenient to express $A_n = C_n + D_n$, where $C_n$ is defined to be the (0,1) matrix whose entries are one if and only if $j = 1$ and $i \neq 1$. The remaining one-valued entries in $A_n$ then correspond to the divisibility condition reflected by the matrix $D_n$, which, as can plainly be seen by an application of Möbius inversion, is always invertible, with inverse entries $(D_n^{-1})_{ij} = \mu(j/i)$ when $i \mid j$ and zero otherwise. We then have a characterization of the singularity of $A_n$ through the determinant identity $\det(A_n) = M(n)$ with the Mertens function, discussed below. If we define the indicator $c(i, j) = 1$ when $j \mid i$ or $i = 1$, and $c(i, j) = 0$ otherwise, then we can define the Redheffer transpose matrix to be the $n \times n$ square matrix $R_n = (c(i, j))_{1 \leq i, j \leq n}$ in usual matrix notation. We will continue to make use of this notation throughout the next sections.

Examples

The matrix below is the 12 × 12 Redheffer matrix $A_{12}$. In the split sum-of-matrices notation $A_{12} = C_{12} + D_{12}$, the entries contributed by the initial column of ones in $C_{12}$ are those in rows 2 through 12 of the first column:

A_12 =
1 1 1 1 1 1 1 1 1 1 1 1
1 1 0 1 0 1 0 1 0 1 0 1
1 0 1 0 0 1 0 0 1 0 0 1
1 0 0 1 0 0 0 1 0 0 0 1
1 0 0 0 1 0 0 0 0 1 0 0
1 0 0 0 0 1 0 0 0 0 0 1
1 0 0 0 0 0 1 0 0 0 0 0
1 0 0 0 0 0 0 1 0 0 0 0
1 0 0 0 0 0 0 0 1 0 0 0
1 0 0 0 0 0 0 0 0 1 0 0
1 0 0 0 0 0 0 0 0 0 1 0
1 0 0 0 0 0 0 0 0 0 0 1

A corresponding application of the Möbius inversion formula shows that the triangular divisibility component of the Redheffer transpose matrix is always invertible, with inverse entries given by $\mu(i/j)$ whenever $j \mid i$ (and zero otherwise), where $\mu$ denotes the Möbius function.

Key properties

Singularity and relations to the Mertens function and special series

Determinants

The determinant of the n × n square Redheffer matrix is given by the Mertens function M(n). In particular, the matrix $A_n$ is not invertible precisely when the Mertens function is zero (or is close to changing signs). As a corollary of the disproof of the Mertens conjecture, it follows that the Mertens function changes sign, and is therefore zero, infinitely many times, so the Redheffer matrix is singular at infinitely many natural numbers. The determinants of the Redheffer matrices are immediately tied to the Riemann Hypothesis through this relation with the Mertens function, since the Hypothesis is equivalent to showing that $M(x) = O(x^{1/2 + \varepsilon})$ for all (sufficiently small) $\varepsilon > 0$.

Factorizations of sums encoded by these matrices

In a somewhat unconventional construction which reinterprets the (0,1) matrix entries to denote inclusion in some increasing sequence of indexing sets, we can see that these matrices are also related to factorizations of Lambert series. This observation holds inasmuch as, for a fixed arithmetic function f, the coefficients of the next Lambert series expansion over f provide a so-called inclusion mask for the indices over which we sum f to arrive at the series coefficients of these expansions. Notably, observe that

$\sum_{n \geq 1} \frac{f(n) q^n}{1 - q^n} = \sum_{m \geq 1} \Big( \sum_{d \mid m} f(d) \Big) q^m.$

Now, since these divisor sums, as we can see from the above expansion, are codified by boolean (zero-one) valued inclusion in the sets of divisors of a natural number n, it is possible to re-interpret the Lambert series generating functions which enumerate these sums via yet another matrix-based construction.
Namely, Merca and Schmidt (2017–2018) proved invertible matrix factorizations expanding these generating functions in the form

$\sum_{n \geq 1} \frac{f(n) q^n}{1 - q^n} = \frac{1}{(q; q)_\infty} \sum_{n \geq 1} \Big( \sum_{k=1}^{n} s_{n,k} f(k) \Big) q^n,$

where $(q; q)_\infty$ denotes the infinite q-Pochhammer symbol and where the lower triangular matrix sequence $s_{n,k}$ is exactly generated as the coefficients of $(q; q)_\infty \cdot q^k / (1 - q^k)$, though these terms also have interpretations as differences of special even (odd) indexed partition functions. Merca and Schmidt (2017) also proved a simple inversion formula which allows the implicit function f to be expressed as a sum over the convolved coefficients of the original Lambert series generating function, in a formula where p(n) denotes the partition function, $\mu$ is the Möbius function, and the coefficients inherit a quadratic dependence on j through the pentagonal number theorem. This inversion formula is compared to the inverses (when they exist) of the Redheffer matrices for the sake of completeness here. Provided that the underlying so-termed mask matrix specifying the inclusion of indices in the divisor sums at hand is invertible, this type of construction can be used to expand other Redheffer-like matrices for other special number-theoretic sums, and it need not be limited to the forms classically studied here. For example, in 2018 Mousavi and Schmidt extended such matrix-based factorization lemmas to the cases of Anderson–Apostol divisor sums (of which Ramanujan sums are a notable special case) and sums indexed over the integers that are relatively prime to each n (for example, the tally classically denoted by the Euler phi function). More to the point, the examples considered in the applications section below suggest a study of the properties of what can be considered generalized Redheffer matrices representing other special number-theoretic sums.

Spectral radius and eigenspaces

If we denote the spectral radius of $A_n$ by $\rho_n$, i.e., the dominant maximum-modulus eigenvalue in the spectrum of $A_n$, then $\rho_n \sim \sqrt{n}$ as $n \to \infty$, which pins down the asymptotic behavior of the spectrum of $A_n$ when n is large. It can also be shown that $\rho_n > \sqrt{n}$, and, by a careful analysis (see the characteristic polynomial expansions below), that sharper asymptotics for $\rho_n$ hold. The matrix $A_n$ has eigenvalue 1 with algebraic multiplicity $n - \lfloor \log_2 n \rfloor - 1$. The dimension of the eigenspace corresponding to the eigenvalue 1 is known to be $n - \lfloor n/2 \rfloor - 1$. In particular, this implies that $A_n$ is not diagonalizable whenever $\lfloor n/2 \rfloor > \lfloor \log_2 n \rfloor$, i.e., for all $n \geq 6$. For all other eigenvalues of $A_n$, the dimensions of the corresponding eigenspaces are one.

Characterizing eigenvectors

We have that $v = (v_1, \ldots, v_n)$ is an eigenvector of $A_n$ corresponding to an eigenvalue $\lambda$ in the spectrum of $A_n$ if and only if the following two conditions hold: $\lambda v_1 = \sum_{j=1}^{n} v_j$, and $\lambda v_i = v_1 + \sum_{j \leq n,\, i \mid j} v_j$ for each $2 \leq i \leq n$. If we restrict ourselves to the so-called non-trivial cases where $\lambda \neq 1$, then given any initial eigenvector component $v_1$ we can recursively compute the remaining n-1 components according to the formula

$v_i = \frac{1}{\lambda - 1} \Big( v_1 + \sum_{\substack{j \leq n,\ i \mid j \\ j > i}} v_j \Big),$

working downward from $i = n$ to $i = 2$. With this in mind, for $\lambda \neq 1$ we can define the sequences $v_\lambda(i)$ produced by this recursion with the normalization $v_1 = 1$. There are a couple of curious implications related to the definitions of these sequences. First, we have that $\lambda$ belongs to the spectrum of $A_n$ if and only if $\sum_{j=1}^{n} v_\lambda(j) = \lambda$. Secondly, we have an established closed-form formula for the Dirichlet series, or Dirichlet generating function, over these sequences for fixed $\lambda$, which holds for all $s$ with sufficiently large real part and is expressed in terms of the Riemann zeta function $\zeta(s)$.

Bounds and properties of non-trivial eigenvalues

A graph-theoretic interpretation of evaluating the zeros of the characteristic polynomial of $A_n$ and of bounding its coefficients is given in Section 5.1 of the cited literature. Estimates of the sizes of the Jordan blocks of $A_n$ corresponding to the eigenvalue one are also given there.
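The multiplicity statement above is easy to check exactly for small n. The following sketch is our own illustration (helper names ours); it uses sympy's exact characteristic polynomial because $A_n$ is defective, so floating-point eigenvalues smear the cluster at 1:

```python
# Check: the algebraic multiplicity of eigenvalue 1 of A_n is n - floor(log2 n) - 1.
from sympy import Matrix, Symbol, quo

def redheffer(n):
    # a_ij = 1 if j == 1 or i divides j (1-indexed); the lambda below is 0-indexed
    return Matrix(n, n, lambda i, j: 1 if j == 0 or (j + 1) % (i + 1) == 0 else 0)

x = Symbol('x')
for n in (8, 12, 16):
    p = redheffer(n).charpoly(x).as_expr()
    mult = 0
    while p.subs(x, 1) == 0:
        p = quo(p, x - 1, x)   # exact polynomial division by (x - 1)
        mult += 1
    assert mult == n - (n.bit_length() - 1) - 1   # n - floor(log2 n) - 1
    print(n, mult)
```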
A brief overview of a modified approach to factorizing the characteristic polynomial $p_n(x) := \det(x I_n - A_n)$ of these matrices is given here, without the full scope of the somewhat technical proofs justifying the bounds from the references cited above. Namely, one introduces a convenient shorthand and defines a sequence of auxiliary polynomial expansions approximating $p_n(x)$. We then know that $p_n(x)$ has two dominant real roots, denoted $\lambda_\pm(n)$, which behave like $\pm\sqrt{n}$ up to lower-order corrections involving Euler's classical gamma constant $\gamma$, and that the remaining coefficients of these polynomials are suitably bounded. The much more size-constrained nature of the eigenvalues of $A_n$ which are not characterized by these two dominant zeros of the polynomial is remarkable, as evidenced by plots of the few remaining complex zeros reproduced in a freely available article cited above.

Applications and generalizations

We provide a few examples of the utility of the Redheffer matrices interpreted as a (0,1) matrix whose parity corresponds to inclusion in an increasing sequence of index sets. These examples should serve to freshen up the at times dated historical perspective of these matrices, which are more than footnote-worthy by virtue of an inherent, and deep, relation of their determinants to the Mertens function and to equivalent statements of the Riemann Hypothesis. This interpretation is a great deal more combinatorial in construction than typical treatments of the special Redheffer matrix determinants. Nonetheless, this combinatorial twist on enumerating special sequences of sums has been explored more recently in a number of papers and is a topic of active interest in pre-print archives. Before diving into the full construction of this spin on the Redheffer matrix variants defined above, observe that this type of expansion is in many ways essentially just another variation of the usage of a Toeplitz matrix to represent truncated power series expressions, where the matrix entries are coefficients of the formal variable in the series. Let's explore an application of this particular view of a (0,1) matrix as masking inclusion of summation indices in a finite sum over some fixed function. See the cited references for existing generalizations of the Redheffer matrices in the context of general arithmetic function cases; the inverse matrix terms are referred to there as a generalized Möbius function within the context of sums of this type.

Matrix products expanding Dirichlet convolutions and Dirichlet inverses

First, given any two non-identically-zero arithmetic functions f and g, we can provide explicit matrix representations which encode their Dirichlet convolution in rows indexed by the natural numbers $1 \leq m \leq x$: take the matrix with entries $T_{m,d} = f(m/d)\, g(d)$ when $d \mid m$ and zero otherwise. Then, letting $e$ denote the vector of all ones, it is easily seen that the $m$-th row of the matrix-vector product $T e$ gives the convolved Dirichlet sum $(f \ast g)(m) = \sum_{d \mid m} f(m/d) g(d)$ for all $1 \leq m \leq x$, where the upper index $x$ is arbitrary. One task that is particularly onerous given an arbitrary function f is to determine its Dirichlet inverse exactly without resorting to the standard recursive definition of this function via yet another convolved divisor sum involving the same function f with its under-specified inverse to be determined:

$f^{-1}(n) = \frac{-1}{f(1)} \sum_{\substack{d \mid n \\ d < n}} f(n/d)\, f^{-1}(d) \quad (n > 1), \qquad f^{-1}(1) = \frac{1}{f(1)}.$

It is clear that in general the Dirichlet inverse of f, i.e., the uniquely defined arithmetic function such that $f \ast f^{-1} = \varepsilon$ (where $\varepsilon(n) = \delta_{n,1}$ is the multiplicative identity), involves sums of nested divisor sums of depth from one to $\omega(n)$, where this upper bound is the prime omega function which counts the number of distinct prime factors of n.
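A small illustration in this spirit (our own sketch; sympy for exact arithmetic, helper names ours) computes Dirichlet inverses by matrix inversion and, along the way, checks the determinant identity $\det(A_n) = M(n)$:

```python
from sympy import Matrix, mobius

def redheffer(n):
    # a_ij = 1 if j == 1 or i divides j (1-indexed); the lambda below is 0-indexed
    return Matrix(n, n, lambda i, j: 1 if j == 0 or (j + 1) % (i + 1) == 0 else 0)

def mertens(n):
    return sum(mobius(k) for k in range(1, n + 1))

# det(A_n) = M(n) for small n
for n in range(1, 13):
    assert redheffer(n).det() == mertens(n)

# Dirichlet inverse by matrix inversion: T[d, m] = f(m/d) when d | m encodes
# Dirichlet convolution (T_f T_g = T_{f*g}), so the first row of T^(-1) is the
# Dirichlet inverse of f. With f = 1 we recover the Moebius function.
n = 12
T = Matrix(n, n, lambda i, j: 1 if (j + 1) % (i + 1) == 0 else 0)   # f = 1
assert [T.inv()[0, j] for j in range(n)] == [mobius(m) for m in range(1, n + 1)]
```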
As this example shows, we can formulate an alternate way to construct the Dirichlet inverse function values via matrix inversion with our variant Redheffer matrices.

Generalizations of the Redheffer matrix forms: GCD sums and other matrices whose entries denote inclusion in special sets

There are several often-cited articles from worthy journals that seek to establish expansions of number-theoretic divisor sums, convolutions, and Dirichlet series (to name a few) through matrix representations. Setting aside non-trivial estimates on the corresponding spectra and eigenspaces associated with truly notable and important applications of these representations, the underlying machinery in representing sums of these forms by matrix products is to effectively define a so-termed masking matrix whose zero-or-one-valued entries denote inclusion in an increasing sequence of sets of the natural numbers. To illustrate that the previous mouthful of jargon makes good sense in setting up a matrix-based system for representing a wide range of special summations, consider the following construction: let $\{\mathcal{A}_n\}_{n \geq 1}$ be a sequence of index sets, and for any fixed arithmetic function $f$ define the sums $S_{\mathcal{A}}(f; n) := \sum_{k \in \mathcal{A}_n} f(k)$. One of the classes of sums considered by Mousavi and Schmidt (2017) defines the relatively prime divisor sums by setting the index sets in the last definition to be $\mathcal{A}_n = \{ 1 \leq d \leq n : \gcd(d, n) = 1 \}$. This class of sums can be used to express important special arithmetic functions of number-theoretic interest, including Euler's phi function (where classically we define $\phi(n) = \#\{1 \leq k \leq n : \gcd(k, n) = 1\}$) and even the Möbius function through its representation as a discrete (finite) Fourier transform:

$\mu(n) = \sum_{\substack{1 \leq k \leq n \\ \gcd(k, n) = 1}} e^{2\pi i k / n}.$

Citations in the full paper provide other examples of this class of sums, including applications to cyclotomic polynomials (and their logarithms). The referenced article by Mousavi and Schmidt (2017) develops a factorization-theorem-like treatment of expanding these sums which is an analog of the Lambert series factorization results given in the previous section above. The associated matrices and their inverses for this definition of the index sets then allow us to perform the analog of Möbius inversion for divisor sums, which can be used to express the summand functions f as a quasi-convolved sum over the inverse matrix entries and the left-hand-side special functions, such as $\phi(n)$ or $\mu(n)$, pointed out in the last pair of examples. These inverse matrices have many curious properties (and a good reference pulling together a summary of all of them is currently lacking) which are best intimated and conveyed to new readers by inspection; small cases of the relevant mask matrices and their inverses are tabulated in the cited article. Examples of invertible matrices which define other special sums with non-standard but clear applications should be catalogued and listed in this generalizations section for completeness. An existing summary of inversion relations, and in particular of exact criteria under which sums of these forms can be inverted and related, is found in many references on orthogonal polynomials. Other good examples of this type of factorization treatment for inverting relations between sums over sufficiently invertible, or well-enough-behaved, triangular sets of weight coefficients include the Möbius inversion formula, the binomial transform, and the Stirling transform, among others.

See also Redheffer star product References External links and citations to related work Matrices Number theory
Redheffer matrix
Mathematics
2,740
74,947,518
https://en.wikipedia.org/wiki/Dendrobium%20alkaloids
Dendrobium alkaloids are natural products and so-called pseudoalkaloids.

Occurrence

Dendrobium alkaloids are found in the genus Dendrobium, particularly in species like Dendrobium nobile.

Representatives

Approximately 15 alkaloids belong to this group. Notable representatives include dendrobine, nobilonine, and dendroxine.

References Alkaloids
Dendrobium alkaloids
Chemistry
83
54,572,661
https://en.wikipedia.org/wiki/KOI8-B
KOI8-B is the informal name for an 8-bit Roman / Cyrillic character set constituting the common subset of the major KOI-8 variants (KOI8-R, KOI8-U, KOI8-RU, KOI8-E, KOI8-F). Accordingly, it is closely related to KOI8-R, but defines only the letter subset in the upper half. As such it was implemented by some font vendors for PC Unixes like Xenix in the late 1980s. Character set The following table shows the KOI8-B encoding. Each character is shown with its equivalent Unicode code point. See also KOI character encodings References External links http://czyborra.com/charsets/koi8-b.txt.gz http://czyborra.com/charsets/koi8-b.bdf.gz Character sets Computing in the Soviet Union
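As a quick illustration of the shared letter region (our sketch; it uses Python's built-in KOI8-R codec, since the common subset decodes identically under the major KOI-8 variants):

```python
# Cyrillic letters occupy the upper half of the KOI-8 encodings; these bytes
# decode the same way under KOI8-R and KOI8-U, which is what makes the common
# letter subset (KOI8-B) portable across the variants.
for b in (0xC1, 0xC2, 0xE1, 0xE2):
    print(hex(b), bytes([b]).decode("koi8_r"))
```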
KOI8-B
Technology
206
940,429
https://en.wikipedia.org/wiki/1024%20%28number%29
1024 is the natural number following 1023 and preceding 1025. 1024 is a power of two: 2^10 (2 to the tenth power). It is the nearest power of two to decimal 1000 and to senary 10000 (decimal 1296). It is the 64th quarter square. 1024 is the smallest number with exactly 11 divisors (though there are smaller numbers with more than 11 divisors; e.g., 60 has 12 divisors).

Enumeration of groups

The number of groups of order 1024 is 49,487,367,289, up to isomorphism. An earlier calculation gave this number as 49,487,365,422, but in 2021 this was shown to be in error. This count is more than 99% of all the isomorphism classes of groups of order less than 2000.

Approximation to 1000

The neat coincidence that 2^10 is nearly equal to 10^3 provides the basis of a technique for estimating larger powers of 2 in decimal notation. Using 2^(10a+b) ≈ 2^b × 10^(3a) (or, if a stands for the whole exponent, 2^a ≈ 2^(a mod 10) × 10^(3⌊a/10⌋)) is fairly accurate for exponents up to about 100. For exponents up to 300, 3a continues to be a good estimate of the number of digits. For example, 2^53 ≈ 8×10^15; the actual value is closer to 9×10^15. In the case of larger exponents, the relationship becomes increasingly inaccurate, with the error exceeding an order of magnitude for a ≥ 97. In measuring bytes, 1024 is often used in place of 1000 as the quotient between successive units: byte, kilobyte, megabyte, etc. In 1999, the IEC coined the term kibibyte for multiples of 1024, with kilobyte being reserved for multiples of 1000.

Special use in computers

In binary notation, 1024 is represented as 10000000000, making it a simple round number occurring frequently in computer applications. 1024 is the maximum number of computer memory addresses that can be referenced with ten binary switches. This is the origin of the organization of computer memory into 1024-byte chunks, or kibibytes. In the Rich Text Format (RTF), language code 1024 indicates that the text is not in any language and should be skipped over when proofing. Most language codes used in RTF are integers slightly over 1024. 1024×768 pixels and 1280×1024 pixels are common standards of display resolution. 1024 is the lowest non-system and non-reserved port number in TCP/IP networking. Ports above this number can usually be opened for listening by non-superusers.

See also Powers of 1024 References Integers
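The estimation rule above is easy to state in code; a minimal sketch (ours, matching the worked figures in the article):

```python
# 2**a is approximately 2**(a % 10) * 10**(3 * (a // 10)), since 2**10 ~ 10**3.
def estimate_pow2(a: int) -> int:
    return 2 ** (a % 10) * 10 ** (3 * (a // 10))

for a in (53, 100, 970):
    exact, approx = 2 ** a, estimate_pow2(a)
    # at a = 53: 8e15 vs ~9e15; at a = 970 (a//10 = 97) the error reaches ~10x
    print(a, f"{approx:.1e}", f"{exact:.1e}", f"error x{exact / approx:.2f}")
```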
1024 (number)
Mathematics
561
20,634,633
https://en.wikipedia.org/wiki/NGC%202552
NGC 2552 is a Magellanic spiral galaxy located some 22 million light years away, in the constellation of Lynx. Magellanic spirals are a type of unbarred dwarf galaxy, usually with a single spiral arm. NGC 2552 is inclined by 41° to the line of sight from the Earth, along a position angle of 229°. The measured velocity dispersion of the stars in NGC 2552 is relatively low: a mere 19 ± 2 km/s. This galaxy forms part of a loose triplet that includes NGC 2541 and NGC 2500, which together belong to the NGC 2841 group. References External links astronomerica.awardspace.com NGC2552 image Unbarred spiral galaxies Magellanic spiral galaxies 2552 Lynx (constellation) 023340 04325
NGC 2552
Astronomy
159
44,086,751
https://en.wikipedia.org/wiki/Orban%20%28audio%20processing%29
Orban is an international company making audio processors for radio, television and Internet broadcasters. It has been operating since founder Bob Orban sold his first product in 1967. The company was originally based in San Francisco, California.

History

The Orban company started in 1967 when Bob Orban built and sold his first product, a stereo synthesizer, to WOR-FM in New York City, a year before Orban earned his master's degree from Stanford University. He teamed with synthesizer pioneers Bernie Krause and Paul Beaver to promote his products. In 1970, Orban established manufacturing and design in San Francisco. Bob Orban partnered with John Delantoni to form Orban Associates in 1975. The company was bought by Harman International in 1989, and the firm moved to nearby San Leandro in 1991. In 2000, Orban was bought by Circuit Research Labs (CRL), which moved manufacturing to Tempe, Arizona, in 2005 while keeping the design team in the San Francisco Bay Area. Orban expanded into Germany in 2006 by purchasing Dialog4 System Engineering in Ludwigsburg. Orban USA, based in Arizona, acquired the company in 2009. The Orban company was acquired by DaySequerra in 2016, which moved manufacturing to New Jersey. In 2020, Orban Labs consolidated divisions and streamlined operations, with Orban Europe GmbH assuming responsibility for all Orban product sales worldwide. Over its years of trading, the Orban company has released many well-known audio-processing products, including the Orban Optimod 8000, which was the first audio processor to combine FM processing and a stereo generator in one package, an innovative idea at the time, as no other processor took into account the 75 μs pre-emphasis curve employed by FM, which otherwise leads to low average modulation and frequent peaks. This was followed by the Orban Optimod 8100, which went on to become the company's most successful product, and the Orban Optimod 8200, the first successful digital audio processor. The 8200 was entirely digital and featured a two-band AGC, followed by five-band or two-band processing, with phase cancellation of clipping distortion. Processors were also made for AM and digital radio, including the Orban Optimod 9200 and the Orban Optimod 6200, the first processor made exclusively for digital television, digital radio and Internet radio. During the 2000s, Orban followed up the 8200 by creating the Orban Optimod 8400 in 2000, the Orban Optimod 8500 in 2005, and the Orban Optimod 8600 in 2010.

Present day

The company's current product line includes its flagship audio processor, the Optimod-FM 5950. Other processors include the Orban Optimod-FM 5750, the Trio, the Optimod PCn1600 for digital, internet and mastering applications, and the XPN-AM/Optimod 9300 for AM radio.

References External links Electronics companies established in 1967 1967 establishments in California 1989 mergers and acquisitions 2000 mergers and acquisitions 2009 mergers and acquisitions 2016 mergers and acquisitions Manufacturing companies based in the San Francisco Bay Area Audio electronics Harman International Signal processing
Orban (audio processing)
Technology,Engineering
648
1,921,074
https://en.wikipedia.org/wiki/El%20Castillo%2C%20Chichen%20Itza
El Castillo (Spanish for 'the Castle'), also known as the Temple of Kukulcán, is a Mesoamerican step-pyramid that dominates the center of the Chichen Itza archaeological site in the Mexican state of Yucatán. The temple building is more formally designated by archaeologists as Chichen Itza Structure 5B18. Built by the pre-Columbian Maya civilization sometime between the 8th and 12th centuries AD, the building served as a temple to the deity Kukulcán, the Yucatec Maya Feathered Serpent deity closely related to Quetzalcoatl, a deity known to the Aztecs and other central Mexican cultures of the Postclassic period. It has a substructure that was likely constructed several centuries earlier for the same purpose. The temple consists of a series of square terraces with stairways up each of the four sides to the temple on top. Sculptures of plumed serpents run down the sides of the northern balustrade. Around the spring and autumn equinoxes, the late afternoon sun strikes off the northwest corner of the temple and casts a series of triangular shadows against the northwest balustrade, creating the illusion of the feathered serpent "crawling" down the temple. The event has been very popular with contemporary visitors and is witnessed by thousands at the spring equinox, but it is not known whether the phenomenon is the result of a purposeful design, since the light-and-shadow effect can be observed without major changes during several weeks near the equinoxes. Scientific research conducted since 1998 suggests that the temple mimics the chirping sound of the quetzal bird when humans clap their hands around it. The researchers argue that this phenomenon is not accidental, and that the builders of this temple felt divinely rewarded by the echoing effect of the structure. Technically, the clapping noise rings out and scatters against the temple's high and narrow limestone steps, producing a chirp-like tone that declines in frequency. All four sides of the temple have approximately 91 steps which, when added together and including the temple platform on top as the final "step", may produce a total of 365 steps (the steps on the south side of the temple are eroded). That number is equal to the number of days of the Haab' year and is likely significant in ritual. The structure is about 24 metres (79 ft) high, plus an additional 6 metres (20 ft) for the temple at the top. The square base measures 55.3 metres (181 ft) across.

Construction

The construction of the Temple of Kukulcán ("El Castillo"), like that of other Mesoamerican temples, likely reflected the common Maya practice of executing several phases of construction for their temples. The last construction probably took place between 900–1000 AD, while the substructure may have been constructed earlier, between 600–800 AD. Based on archaeological research, construction of the Temple of Kukulcán was based on the concept of the axis mundi. Anthropologists think that the site remained sacred regardless of how the structure was positioned on the location. When a temple structure was renewed, the former construction was ritually destroyed, resolving the space of spiritual forces so as to preserve its sacredness. It is estimated that this last construction dates to the eleventh century AD. The older, inner temple is referred to as the "substructure". During the 1930s restoration work, an entryway was cut into the balustrade of the northeastern exterior staircase to provide access for archaeologists, and later for tourists for the rest of the 20th century.
Interior

In 1566, the temple was described by Friar Diego de Landa in the manuscript known as Yucatán at the Time of the Spanish Encounter (Relación de las cosas de Yucatán). Almost three centuries later, John Lloyd Stephens described the architecture of the temple in even more detail in his book Incidents of Travel in Yucatán (Incidentes del viaje a Yucatán), published in 1843. At that time, the archaeological site of Chichén Itzá was located on an estate, also called Chichén Itzá, owned by Juan Sosa. Frederick Catherwood illustrated the book with lithographs depicting the temple covered in abundant vegetation on all sides. There are some photographs taken at the beginning of the twentieth century that also show the temple partially covered by said vegetation. In 1924, the Carnegie Institution for Science in Washington, D.C. requested permission from the Mexican government to carry out explorations and restoration efforts in and around the area of Chichen Itza. In 1927, with the assistance of Mexican archaeologists, they started the task. In April 1931, looking to confirm the hypothesis that the structure of the Temple of Kukulcán was built on top of a much older temple, the work of excavation and exploration began, in spite of generalized beliefs to the contrary. On June 7, 1932, a box with coral, obsidian, and turquoise-encrusted objects was found alongside human remains; these are exhibited in the National Anthropology Museum in Mexico City.

The Temple of Kukulcán is located above a cavity filled with water, labeled a sinkhole or cenote. Recent archaeological investigations suggest that an earlier construction phase is located closer to the southeastern cenote, rather than being centered. This specific proximity to the cenote suggests that the Maya may have been aware of the cenote's existence and purposefully constructed the temple there to facilitate their religious beliefs. After extensive excavation work, in April 1935, a Chac Mool statue, with its nails, teeth, and eyes inlaid with mother of pearl, was found inside the temple. The room where the discovery was made was nicknamed the "Hall of Offerings" or "North Chamber". After more than a year of excavation, in August 1936, a second room was found, only a few meters away from the first. Inside this room, dubbed the "Chamber of Sacrifices", archaeologists found two parallel rows of human bone set into the back wall, as well as a red jaguar statue. Both deposits of human remains were found oriented north-northeast. Researchers concluded that there must be an inner temple, shaped similarly to the outer temple, with nine steps, rising to the level at which the Chac Mool and the jaguar were found. What appears to be a throne (referred to as the "Red Jaguar") was discovered in the room described as the throne room. The jaguar throne was previously presumed to have been decorated with flint and green stone discs, but recent research has determined that the jaguar is composed of materials that were highly symbolic and valued for their ritual significance. X-ray fluorescence (XRF) analysis was used to determine that the sculpture is painted red with a pigment that includes cinnabar, or mercury sulfide (HgS). Cinnabar was not in accessible proximity to Chichén Itzá, so its transport through long-distance trade would have placed a high value on it. Additionally, the color red appears to have been significant in Maya cultural symbolism: it is associated with creating life as well as with death and sacrifice.
Studies suggest that objects in Maya culture were imbued with vital essence, so the choice of painting the jaguar red may be a reflection of these beliefs, marking the jaguar as an offering. The high status associated with the cinnabar pigment and its red tone suggests that the jaguar was linked to the ritual importance of closing a temple for renewal. The four fangs of the Red Jaguar have been identified as gastropod mollusk shells (Lobatus costatus), using a digital microscope and comparative analysis by malacology experts from the National Institute of Anthropology and History. The shells are also thought to be another valued resource material that may have been traded into Chichén Itzá. The green stones were analyzed and determined to be a form of jadeite. Jadeite was valuable economically and socially, and the acquisition and application of the material is indicative of the access Chichén Itzá had along its trade routes. Archaeological studies indicate that the Red Jaguar is similar to other depictions of thrones found in Maya murals (Temple of Chacmool); thus whoever was seated on this throne (possibly the high priest) could have been understood as occupying the axis mundi, the point essential to the relationship between the elements of the cosmological system. The symbolic use of materials related to the underworld and death also suggests that the throne acted as an offering for ritually closing the temple.

Alignment

The location of the temple within the site sits directly above a cenote, or water cave, and is aligned at the intersection between four other cenotes: the Sacred Cenote (north), Xtoloc (south), Kanjuyum (east), and Holtún (west). This alignment supports the position of the Temple of Kukulcán as an axis mundi. The western and eastern sides of the temple are angled to the zenith sunset and nadir sunrise, which may correspond with other calendar events such as the start of the traditional planting and harvesting seasons. An approximate correspondence with the Sun's positions on its zenith and nadir passages is likely coincidental, however, because very few Mesoamerican orientations match these events, and even for such cases different explanations are much more likely. Since the sunrise and sunset dates recorded by the solar orientations that prevail in Mesoamerican architecture tend to be separated by multiples of 13 and 20 days (i.e. of basic periods of the calendrical system), and given their clustering in certain seasons of the year, it has been argued that the orientations allowed the use of observational calendars intended to facilitate a proper scheduling of agricultural and related ritual activities. In agreement with this pattern, detected both in the Maya Lowlands and elsewhere in Mesoamerica, the north (and main) face of the Temple of Kukulcán at Chichén Itzá has an azimuth of 111.72°, corresponding to sunsets on May 20 and July 24, dates separated by 65 and 300 days (multiples of 13 and 20). Significantly, the same dates are recorded by a similar temple at Tulum.

Recent developments

Around 2006, the National Institute of Anthropology and History (INAH), which manages the archaeological site of Chichen Itza, started closing monuments to the public. While visitors may walk around them, they may no longer climb them or enter the chambers. This followed an incident in which a climber fell to her death. Researchers later discovered an enormous cenote (also known as a sinkhole) beneath the 1,000-year-old Temple of Kukulcán; the still-forming sinkhole beneath the temple is substantial in both area and depth.
The water filling the cavern is thought to run from north to south. The researchers also found a layer of limestone at the top of the cenote, upon which the temple sits. Recent archaeological investigations have used electrical resistivity tomography (ERT) to examine the construction sequence of Kukulcán. To preserve the site from potential damage, electrodes were placed non-traditionally, as flat-based detectors around the quadrangle of the temple bodies. After each body of the temple was tested, the data revealed two previous construction phases within Kukulcán, with a possible temple at the top of the second substructure. Dating these construction phases would indicate the periods during which Chichen Itza was significantly occupied.

Gallery

See also List of Mesoamerican pyramids Pyramid of the Magician at Uxmal Tikal Temple I Tikal Temple II Tikal Temple III Tikal Temple IV Tikal Temple V

Notes

References

Šprajc, Ivan, and Pedro Francisco Sánchez Nava (2013). Astronomía en la arquitectura de Chichén Itzá: una reevaluación. Estudios de Cultura Maya XLI: pp. 31–60.
Gray, Richard (2015). "Sacred sinkhole discovered under 1,000-year-old Mayan temple could eventually destroy the pyramid." Daily Mail, Science & Tech, August 17, 2015.
Justice, Adam (2015). "Scientists discover sacred sinkhole cave under Chichen Itza pyramid." International Business Times, August 14, 2015.
Juárez-Rodríguez, O., Argote-Espino, D., Santos-Ramírez, M., & López-García, P. (2017). Portable XRF analysis for the identification of raw materials of the Red Jaguar sculpture in Chichén Itzá, Mexico. Quaternary International.
Tejero-Andrade, A., Argote-Espino, D., Cifuentes-Nava, G., Hernández-Quintero, E., Chávez, R.E., & García-Serrano, A. (2018). 'Illuminating' the interior of Kukulkan's Pyramid, Chichén Itzá, Mexico, by means of a non-conventional ERT geophysical survey. Journal of Archaeological Science, 90, 1–11.
Wren, L., Kristan-Graham, C., Nygard, T., & Spencer, K. R. (2018). Landscapes of the Itza: Archaeology and art history at Chichen Itza and neighboring sites. Gainesville: University Press of Florida.

Buildings and structures completed in the 12th century Archaeoastronomy Chichen Itza Maya architecture Buildings and structures in Yucatán 12th-century establishments in the Maya civilization Pyramids in Mexico
El Castillo, Chichen Itza
Astronomy
2,761
15,561,485
https://en.wikipedia.org/wiki/15-Crown-5
15-Crown-5 is a crown ether with the formula (C2H4O)5. It is a cyclic pentamer of ethylene oxide that forms complexes with various cations, including sodium (Na+) and potassium (K+); however, its cavity is complementary to Na+, and it thus has a higher selectivity for Na+ ions.

Synthesis

15-Crown-5 can be synthesized using a modified Williamson ether synthesis: (CH2OCH2CH2Cl)2 + O(CH2CH2OH)2 + 2 NaOH → (CH2CH2O)5 + 2 NaCl + 2 H2O. It also forms from the cyclic oligomerization of ethylene oxide in the presence of gaseous boron trifluoride.

Properties

Analogous to 18-crown-6, 15-crown-5 binds to sodium ions. Thus, when treated with this complexing agent, sodium salts often become soluble in organic solvents. First-row transition metal dications fit snugly inside the cavity of 15-crown-5; they are too small to be included in 18-crown-6. The binding of transition metal cations results in multiple hydrogen-bonded interactions from both equatorial and axial aqua ligands, such that highly crystalline solid-state supramolecular polymers can be isolated. Metal salts isolated in this form include Co(ClO4)2, Ni(ClO4)2, Cu(ClO4)2, and Zn(ClO4)2. Seven-coordinate species are most common for transition metal complexes of 15-crown-5, with the crown ether occupying the equatorial plane along with two axial aqua ligands. 15-Crown-5 has also been used to isolate salts of oxonium ions; for example, from a solution of tetrachloroauric acid, an oxonium ion has been isolated as a crystalline salt. Neutron diffraction studies of it revealed a sandwich structure containing a chain of water molecules with a remarkably long O–H bond (1.12 Å) at the acidic proton, but a very short OH···O distance (1.32 Å). A derivative of 15-crown-5, benzo-15-crown-5, has been used to produce anionic complexes of carbido ligands as their salts.

See also Host–guest chemistry Phase transfer catalyst References Further reading External links ChemicalLand21.com www.ChemBlink.com Crown ethers
15-Crown-5
Chemistry
514
3,381,929
https://en.wikipedia.org/wiki/List%20of%20reptiles
Reptiles are tetrapod animals in the class Reptilia, comprising today's turtles, crocodilians, snakes, amphisbaenians, lizards, tuatara, and their extinct relatives. The study of these traditional reptile orders, historically combined with that of modern amphibians, is called herpetology. The following list covers the vertebrate class of reptiles by family, spanning two subclasses. Reptile here is taken in its traditional (paraphyletic) sense, and thus birds are not included (although birds are considered reptiles in the cladistic sense).

Subclass Anapsida
  Order Testudines – turtles
    Suborder Cryptodira
      Family Chelydridae – common snapping turtles and alligator snapping turtle
      Family Emydidae – pond turtles and box turtles
      Family Testudinidae – tortoises
      Family Geoemydidae – Asian river turtles and allies
      Family Carettochelyidae – pignose turtles
      Family Trionychidae – softshell turtles
      Family Dermatemydidae – river turtles
      Family Kinosternidae – mud turtles
      Family Cheloniidae – sea turtles
      Family Dermochelyidae – leatherback turtles
    Suborder Pleurodira
      Family Chelidae – Austro-American sideneck turtles
      Family Pelomedusidae – Afro-American sideneck turtles
      Family Podocnemididae – Madagascan big-headed turtles and American sideneck river turtles
Subclass Diapsida
  Superorder Lepidosauria
    Order Sphenodontia – tuatara
      Family Sphenodontidae
    Order Squamata – scaled reptiles
      Family Agamidae – agamas
      Family Chamaeleonidae – chameleons
      Family Iguanidae
        Subfamily Corytophaninae – casquehead lizards
        Subfamily Iguaninae – iguanas
        Subfamily Leiocephalinae
        Subfamily Leiosaurinae
        Subfamily Liolaeminae
        Subfamily Oplurinae – Madagascar iguanids
      Family Crotaphytidae – collared and leopard lizards
      Family Phrynosomatidae – horned lizards
      Family Polychrotidae – anoles
      Family Hoplocercidae – wood lizards
      Family Tropiduridae – Neotropical ground lizards
      Family Gekkonidae – geckos
      Family Pygopodidae – legless lizards
      Family Dibamidae – blind lizards
      Family Cordylidae – spinytail lizards
      Family Gerrhosauridae – plated lizards
      Family Gymnophthalmidae – spectacled lizards
      Family Teiidae – whiptails and tegus
      Family Lacertidae – lacertids
      Family Scincidae – skinks
      Family Xantusiidae – night lizards
      Family Anguidae – glass lizards
      Family Anniellidae – American legless lizards
      Family Xenosauridae – knob-scaled lizards
      Family Helodermatidae – Gila monsters
      Family Lanthanotidae – earless monitor lizards
      Family Varanidae – monitor lizards
      Suborder Amphisbaenia
        Family Amphisbaenidae – worm lizards
        Family Trogonophidae – shorthead worm lizards
        Family Bipedidae – two-legged worm lizards
      Suborder Serpentes – snakes
        Infraorder Alethinophidia
          Family Acrochordidae – wart snakes
          Family Aniliidae – false coral snakes
          Family Anomochilidae – dwarf pipe snakes
          Family Atractaspididae – African burrowing asps, stiletto snakes
          Family Boidae – Gray, 1825 – boas, anacondas
            Subfamily Boinae
            Subfamily Erycinae – Old World sand boas
          Family Bolyeriidae – Mauritius snakes
          Family Colubridae – colubrids, typical snakes
            Subfamily Xenodermatinae
            Subfamily Homalopsinae
            Subfamily Boodontinae
            Subfamily Pseudoxyrhophiinae
            Subfamily Colubrinae
            Subfamily Psammophiinae
            Subfamily Natricinae
            Subfamily Pseudoxenodontinae
            Subfamily Dipsadinae
            Subfamily Xenodontinae
          Family Cylindrophiidae – Asian pipe snakes
          Family Elapidae – cobras, coral snakes, mambas, sea snakes
          Family Loxocemidae – Mexican pythons
          Family Pythonidae – pythons
          Family Tropidophiidae – dwarf boas
          Family Uropeltidae – pipe snakes, shield-tailed snakes
          Family Viperidae – vipers, pitvipers
            Subfamily Azemiopinae – Fea's viper
            Subfamily Causinae – night adders
            Subfamily Crotalinae – pitvipers, rattlesnakes
            Subfamily Viperinae – true vipers
          Family Xenopeltidae – sunbeam snakes
        Infraorder Scolecophidia – blind snakes
          Family Anomalepididae – primitive blind snakes
          Family Leptotyphlopidae – slender blind snakes, thread snakes
          Family Typhlopidae – blind snakes, typical blind snakes
  Division Archosauria
    Superorder Crocodylomorpha
      Order Crocodylia – crocodilians
        Suborder Eusuchia
          Family Crocodylidae – crocodiles
          Family Alligatoridae – alligators
          Family Gavialidae – gharials

See also Reptile List of regional reptiles lists List of birds List of snakes Herping References External links Reptile Database Reptiles
List of reptiles
Biology
1,045
37,463,922
https://en.wikipedia.org/wiki/Delaware%20Valley%20Association%20of%20Structural%20Engineers
The Delaware Valley Association of Structural Engineers (DVASE) is a structural engineering association established in 1991. Its headquarters are in Fort Washington, Pennsylvania, and its member firms are located in Pennsylvania and New Jersey. It is officially the eastern chapter of the Structural Engineers Association of Pennsylvania. Initially a monthly discussion group for business and liability issues, DVASE has since grown and evolved to provide educational offerings. See also National Council of Structural Engineers Associations References External links American engineering organizations Organizations established in 1991 Structural engineering 1991 establishments in the United States
Delaware Valley Association of Structural Engineers
Engineering
108
5,654,032
https://en.wikipedia.org/wiki/HTC%20Typhoon
The HTC Typhoon is a smartphone that runs the Microsoft Windows Mobile operating system. The phone was manufactured by the Taiwanese HTC Corporation (HTC). At the time the Typhoon was made, HTC was not in the business of selling devices to end-users; instead, the company had many partners who would rebrand and distribute its devices. The phone is based on the ARM-architecture Texas Instruments OMAP 730 processor running at 200 MHz. It has 32 MB of internal RAM and 64 MB of flash ROM, and is expandable via a miniSD slot. It has a TFT display with 65,536 colours at a resolution of 176×220. It runs Microsoft Windows Mobile 2003 SE as its operating system; however, it is also capable of running Windows Mobile 5.0, after a version was leaked onto the internet. It supports Java applications. Additionally, hacked, or "cooked", versions of Windows Mobile 6, 6.1 and 6.5 have circulated on the internet.

Versions

"Typhoon" is the HTC codename for this device, which has been rebranded by several distributors and cell phone carriers under the following names:

Audiovox SMT5600
Dopod 565
i-mate SP3
Krome Intellekt iQ700
Orange SPV C500
Qtek 8010
Vitelcom/Movistar TSM520
O2 Xphone IIm

External links Review of the C500 Windows Mobile Standard devices Typhoon References
HTC Typhoon
Technology
295
58,255,782
https://en.wikipedia.org/wiki/Faouzia
Faouzia Ouihya (born 5 July 2000), known mononymously as Faouzia, is a Moroccan-Canadian singer-songwriter and musician. Born in Morocco, she moved with her family to Canada at a young age. During that time she learned how to play various instruments and began composing her first songs. She released several singles and collaborated with many musicians on vocals and songwriting prior to releasing her debut extended play (EP), Stripped, in August 2020. In 2023, she was named one of the recipients of the Top 25 Canadian Immigrant Awards.

Life and career

2000–2014: Early life

Faouzia Ouihya was born in Casablanca, Morocco, to Mohammed Ouihya and Bouchra Alaoui. She moved with her family at the age of one to Notre-Dame-de-Lourdes, Manitoba, in Canada, before settling in the rural town of Carman, Manitoba. She has two sisters: Samia (one of her managers) and Kenza (her photographer). She was raised Muslim and often traveled to her native country. Faouzia has said she feels "very connected to the country and the region [North Africa]. Even though I grew up in Canada, I grew up eating Moroccan food, [and] wearing Moroccan attire." In an interview she revealed she felt excluded as a child, saying "maybe not just fitting in is the biggest thing I've had to overcome". Her first composition was inspired by this feeling of exclusion, and in it she embraced people's differences. Her passion for music began at the age of four, when she watched her sister Samia playing the piano and wished she could learn to play it. Faouzia began writing songs and poems at the age of five and playing piano at the age of six. She later studied guitar and violin. She speaks fluent English, French, and Arabic, the last being the language she mostly uses with her family.

2015–2019: Career beginnings

At the age of fifteen, she won Song of the Year, the Audience Award, and the Grand Prix at the 2015 La Chicane Électrique. She began posting her songs and covers on YouTube, which led to a contract with Paradigm Talent Agency. Thanks to her early success, she released her debut single, "Knock on My Door", on 1 November 2015 through various platforms. In 2016, she won second place in the Canada's Walk of Fame Emerging Artist Mentorship Program. In 2017, she was the recipient of the Grand Prize at the Nashville Unsigned Only music competition. The same year, she collaborated with fellow Manitoban artist Matt Epp on their single "The Sound" and won the International Songwriting Competition, the largest songwriting competition in the world. The two are the first Canadians in the competition's 16-year history to win the top prize, beating 16,000 other entries from 137 countries. She performed with the Winnipeg Symphony Orchestra at The Forks, Winnipeg, celebrating the 150th anniversary of Canada. Faouzia is featured on the song "Battle" from David Guetta's studio album 7, announced on 24 August 2018. In a French-language interview with Le Matin, Guetta cited Faouzia's "great voice, powerful vibrato, and unique style" as reasons he chose her for his album. Faouzia recalled that she "was still in high school when I heard the news that there was a possibility of me working with him", and affirmed it was "one of my proudest career moments, so far." At that time, she enrolled at the University of Manitoba, majoring in computer engineering. She was also featured on the song "Money" from French rapper Ninho's studio album Destin; the song was certified gold on 9 July 2019.
2020–present: Stripped and Citizens

In early 2020, Faouzia was invited by Kelly Clarkson to translate her song "I Dare You" into Moroccan Arabic; the version was released on 16 April. About a month later, the Swedish EDM duo Galantis invited her to feature on their song "I Fly" for the soundtrack of the film Scoob! (2020). On 6 August, Faouzia released her first extended play, Stripped. It features six stripped-down songs, five of which were previously released; the sixth, "100 Bandaids", is a new track. To promote the EP, she performed the tracks live in a concert at the Burton Cummings Theatre on 20 August. On 5 November 2020, Faouzia released the single "Minefields" alongside American singer-songwriter John Legend. On 21 March 2021, Faouzia released "Don't Tell Me I'm Pretty" on YouTube. On 29 June, she released "Hero", accompanied by a video-game-themed music video. In July, Faouzia revealed that she had been working on her debut studio album for a few years. On 28 October, she released "Puppet". On 30 March 2022, she announced her second EP, Citizens, and released "RIP, Love" as a single from the project. Citizens was released on 19 May and features her previously released singles "Minefields", "Don't Tell Me I'm Pretty", and "Puppet". On 7 October, she released "Habibi (My Love)". On 14 April 2023, Faouzia released "I'm Blue", which had previously been released on YouTube on 30 August 2019. As part of a project titled Doll Summer, she released the singles "Don't Call Me" and "Plastic Therapy" on 9 June, followed by "La La La" on 4 August and "IL0V3Y0U" on 8 September. On 23 June, Faouzia and French DJ Martin Solveig released "Now or Never", which serves as a single from Solveig's upcoming sixth studio album. She has a writing credit on the track "Beg Forgiveness" from ¥$'s (Kanye West and Ty Dolla Sign) album Vultures 1, released on 10 February 2024. In 2024, she joined the ninth season of the Chinese singing competition Singer (Singer 2024), finishing fourth overall.

Artistry

Musical style and themes

Faouzia is a pop, R&B, synth-pop, and acoustic pop artist. She has described her music as "emotional" and "intense". Her early songwriting was heavily inspired by people she was close to; her later songs became more personal, since she "really wanted my heart in my story." Gloria Morey noted that her music has "the musical elements of upbeat pop songs which often contain quite shallow lyrics, but Faouzia's lyrics are very meaningful and, well, the opposite of shallow." Faouzia possesses a potential coloratura mezzo-soprano vocal range that spans from C♯3 to G5 in mixed voice, and up to A6 in whistle tones. Faouzia sings mostly in English, featuring Arabic tonalities in her vocals. She has also performed in Arabic and in French.

Influences

Faouzia cites her parents and sisters as her biggest influence in pursuing a music career. She grew up listening to pop musicians Rihanna, Lady Gaga, Ariana Grande, Beyoncé, Sia, Adele, Kelly Clarkson, and John Legend. Of Rihanna, she said the singer "has always been an inspiration of mine growing up and still to this day." Faouzia added that Rihanna, Beyoncé, Lady Gaga, and Sia are her major influences as a songwriter. She has said that "Say Something" by A Great Big World featuring Christina Aguilera and "Hello" by Adele are some of her favourite songs. At a young age she listened alongside her parents to Arab music acts such as Umm Kulthum and Fairuz; Faouzia has declared they "are two of my all-time favourite artists." She also listened to Assala Nasri and Khaled.
When she was learning music, she listened to the composers Chopin, Bach, and Mozart. Pop rock bands Fall Out Boy and Imagine Dragons have also served as influences for her, and she attended one of the latter's concerts. Discography Extended plays Singles As lead artist Notes As featured artist Other charted songs Guest appearances Videography References External links Canadian women singer-songwriters 2000 births 21st-century Canadian women singers 21st-century Moroccan women singers Franco-Manitoban people Living people Moroccan emigrants to Canada Singers from Manitoba Musicians from Casablanca 21st-century Canadian singer-songwriters Cancer (constellation)
Faouzia
Astronomy
1,741
13,816,659
https://en.wikipedia.org/wiki/HD%2016175
HD 16175 is a 7th-magnitude G-type star with a temperature of about 6,000 K, located in the constellation Andromeda. The star is only visible through binoculars or better equipment; it is 3.3 times more luminous than the Sun, 1.34 times more massive, and 1.66 times larger in radius. The star HD 16175 is named Buna. The name was selected in the NameExoWorlds campaign by Ethiopia, during the 100th anniversary of the IAU. Buna is the commonly used word for coffee in Ethiopia.

Planetary system

The discovery of the exoplanet HD 16175 b was published in the June 2009 issue of the Publications of the Astronomical Society of the Pacific. The planetary parameters were updated in 2016. In 2023, the inclination and true mass of HD 16175 b were determined via astrometry.

See also HD 96167 List of extrasolar planets References External links F-type subgiants 016175 012191 Andromeda (constellation) Planetary systems with one confirmed planet BD+41 0496 Buna
HD 16175
Astronomy
226
52,511,031
https://en.wikipedia.org/wiki/AsteroidOS
AsteroidOS is an open source operating system designed for smartwatches. It is available as a firmware replacement for some Android Wear devices. The motto of the AsteroidOS project is "Free your wrist." Wareable.com reviewed version 1.0 and gave it 3.5 out of 5 stars.

Software architecture

AsteroidOS is built like an embedded Linux distribution with OpenEmbedded. It works on top of the Linux kernel and the systemd service manager. AsteroidOS also includes various mobile Linux middleware components originally developed for Mer and Nemo Mobile, such as lipstick and MCE. The user interface is written entirely with the Qt 5 framework. Applications are coded in QML, with graphic components coming from Qt Quick and QML-Asteroid. An SDK with a cross-compilation toolchain integrated into Qt Creator can be generated from OpenEmbedded for easier development. Asteroid-launcher is a Wayland compositor and customizable home screen managing applications, watchfaces, notifications and quick settings; it runs on top of the libhybris compatibility layer to make use of Bionic GPU drivers. AsteroidOS offers Bluetooth Low Energy synchronization capabilities with the asteroid-btsyncd daemon running on top of BlueZ 5. A reference client named AsteroidOS Sync is available for Android users. There are also companion apps for Sailfish OS (Starship) and Ubuntu Touch (Telescope), though the latter has not yet been updated for the current release of Ubuntu Touch. An app for Linux-based smartphones such as the Librem 5 distributed by Purism is also in the making (Buran), but it cannot currently be used due to a still-unfixed bug in Qt 5.

Shipped applications

As of the 1.1 nightly release, the following applications are shipped and pre-installed by default in AsteroidOS:

Agenda: provides simple event-scheduling capabilities
Alarm Clock: makes the watch vibrate at a specific time of day
Calculator: allows basic calculations
Compass: a functional compass app (only preinstalled on devices with supported sensors)
Diamonds: a game inspired by 2048
Flashlight: a simple flashlight app in which the screen acts as a light source
Heart Rate: retrieves bpm readings from the heart-rate monitor
Music: controls a synchronized device's music player
Settings: configures time, date, language, Bluetooth, brightness, AOD (on supported devices), nightstand mode, wallpapers, custom launchers, watchfaces and USB modes (charging, ADB, SSH, MTP)
Stopwatch: measures an elapsed time
Timer: counts down a specified time interval
Weather: provides a weather forecast for five days

See also Wear OS Sailfish OS Ubuntu Touch OpenEmbedded Hybris (software) Qt (software) Linux (kernel) References Smartwatches Wearable computers Free software operating systems Mobile operating systems
AsteroidOS
Technology
613
53,576,321
https://en.wikipedia.org/wiki/Single-cell%20transcriptomics
Single-cell transcriptomics examines the gene expression level of individual cells in a given population by simultaneously measuring the RNA concentration (conventionally only messenger RNA (mRNA)) of hundreds to thousands of genes. Single-cell transcriptomics makes it possible to unravel heterogeneous cell populations, reconstruct cellular developmental pathways, and model transcriptional dynamics, all of which are masked in bulk RNA sequencing.

Background

The development of high-throughput RNA sequencing (RNA-seq) and microarrays has made gene expression analysis routine. RNA analysis was previously limited to tracing individual transcripts by Northern blots or quantitative PCR. Higher throughput and speed now allow researchers to routinely characterize the expression profiles of populations of thousands of cells. Data from bulk assays have led to the identification of genes differentially expressed in distinct cell populations, and to biomarker discovery. These studies are limited, however, as they provide measurements for whole tissues and, as a result, show an average expression profile for all the constituent cells. This has several drawbacks. Firstly, different cell types within the same tissue can have distinct roles in multicellular organisms; they often form subpopulations with unique transcriptional profiles, and correlations in the gene expression of the subpopulations can be missed due to the lack of subpopulation identification. Secondly, bulk assays fail to reveal whether a change in the expression profile is due to a change in regulation or in composition, for example if one cell type arises to dominate the population. Lastly, when the goal is to study cellular progression through differentiation, average expression profiles can only order cells by time rather than by developmental stage; consequently, they cannot show trends in gene expression levels specific to certain stages. Recent advances in biotechnology allow the measurement of gene expression in hundreds to thousands of individual cells simultaneously. While these breakthroughs in transcriptomics technologies have enabled the generation of single-cell transcriptomic data, they have also presented new computational and analytical challenges. Bioinformaticians can use techniques from bulk RNA-seq for single-cell data, but many new computational approaches have had to be designed for this data type to facilitate a complete and detailed study of single-cell expression profiles.

Experimental steps

There is so far no standardized technique for generating single-cell data: all methods must include cell isolation from the population, lysate formation, amplification through reverse transcription, and quantification of expression levels. Common techniques for measuring expression are quantitative PCR and RNA-seq.

Isolating single cells

There are several methods available to isolate and amplify cells for single-cell analysis. Low-throughput techniques can isolate hundreds of cells; they are slow but enable selection. These methods include:

Micropipetting
Cytoplasmic aspiration
Laser capture microdissection

High-throughput methods can quickly isolate hundreds to tens of thousands of cells. Common techniques include:

Fluorescence-activated cell sorting (FACS)
Microfluidic devices

Combining FACS with scRNA-seq has produced optimized protocols such as SORT-seq. Likewise, combining microfluidic devices with scRNA-seq has been optimized in 10x Genomics protocols.
Quantitative PCR (qPCR) To measure the level of expression of each transcript, qPCR can be applied. Gene-specific primers are used to amplify the corresponding gene, as in regular PCR, and as a result data are usually only obtained for sample sizes of fewer than 100 genes. The inclusion of housekeeping genes, whose expression should be constant under the experimental conditions, is used for normalisation. The most commonly used housekeeping genes include GAPDH and α-actin, although the reliability of normalisation through this process is questionable, as there is evidence that their level of expression can vary significantly. Fluorescent dyes are used as reporter molecules to detect the PCR product and monitor the progress of the amplification - the increase in fluorescence intensity is proportional to the amplicon concentration. A plot of fluorescence vs. cycle number is made, and a threshold fluorescence level is used to find the cycle number at which the plot reaches this value. The cycle number at this point is known as the threshold cycle (Ct) and is measured for each gene. Single-cell RNA-seq The single-cell RNA-seq technique converts a population of RNAs to a library of cDNA fragments. These fragments are sequenced by high-throughput next-generation sequencing techniques and the reads are mapped back to the reference genome, providing a count of the number of reads associated with each gene. Normalisation of RNA-seq data accounts for cell-to-cell variation in the efficiencies of cDNA library formation and sequencing. One method relies on the use of extrinsic RNA spike-ins (RNA molecules of known sequence and quantity) that are added in equal quantities to each cell lysate and used to normalise read counts by the number of reads mapped to spike-in mRNA. Another control uses unique molecular identifiers (UMIs), short DNA sequences (6–10 nt) that are added to each cDNA before amplification and act as a barcode for each cDNA molecule. Normalisation is achieved by using the count of unique UMIs associated with each gene to account for differences in amplification efficiency. Spike-ins, UMIs and other approaches have also been combined for more accurate normalisation. Considerations A problem associated with single-cell data is zero-inflated gene expression distributions, known as technical dropouts, which are common because the low mRNA concentrations of weakly expressed genes are often not captured in the reverse transcription process. The percentage of mRNA molecules in the cell lysate that are detected is often only 10-20%. When using RNA spike-ins for normalisation, the assumption is made that the amplification and sequencing efficiencies for the endogenous and spike-in RNA are the same. Evidence suggests that this is not the case, given fundamental differences in size and features, such as the lack of a polyadenylated tail in spike-ins and their shorter length. Additionally, normalisation using UMIs assumes the cDNA library is sequenced to saturation, which is not always the case. Data analysis Insights based on single-cell data analysis assume that the input is a matrix of normalised gene expression counts, generated by the approaches outlined above, and can provide opportunities that are not obtainable with bulk data.
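As a concrete sketch of the spike-in normalisation idea described above (the counts, matrix layout and variable names are hypothetical; real pipelines use dedicated tools such as scran or scanpy rather than this toy code), each cell's size factor is estimated from its total spike-in counts and the endogenous counts are rescaled by it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical UMI count matrix: 100 cells x 2000 features, where the
# last 50 columns are spike-in transcripts added in equal amounts to
# every cell lysate.
counts = rng.poisson(5, size=(100, 2000)).astype(float)
n_spike = 50

# Per-cell capture/sequencing efficiency is estimated from the total
# spike-in counts; dividing by this size factor makes the endogenous
# counts comparable across cells.
spike_totals = counts[:, -n_spike:].sum(axis=1)
size_factors = spike_totals / spike_totals.mean()
normalised = counts[:, :-n_spike] / size_factors[:, None]

print(normalised.shape)  # (100, 1950): spike-in-normalised matrix
```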
Three main types of insight are provided: Identification and characterization of cell types and their spatial organisation in time Inference of gene regulatory networks and their strength across individual cells Classification of the stochastic component of transcription The techniques outlined below have been designed to help visualise and explore patterns in the data in order to reveal these three features. Clustering Clustering allows for the formation of subgroups in the cell population. Cells can be clustered by their transcriptomic profile in order to analyse the sub-population structure and identify rare cell types or cell subtypes. Alternatively, genes can be clustered by their expression states in order to identify covarying genes. A combination of both clustering approaches, known as biclustering, has been used to simultaneously cluster by genes and cells to find genes that behave similarly within cell clusters. Clustering methods applied include k-means clustering, which forms disjoint groups, and hierarchical clustering, which forms nested partitions. Biclustering Biclustering provides several advantages by improving the resolution of clustering. Genes that are only informative to a subset of cells, and are hence only expressed there, can be identified through biclustering. Moreover, similarly behaving genes that differentiate one cell cluster from another can be identified using this method. Dimensionality reduction Dimensionality reduction algorithms such as principal component analysis (PCA) and t-SNE can be used to simplify data for visualisation and pattern detection by transforming cells from a high- to a lower-dimensional space. The result is a plot with each cell as a point in a 2-D or 3-D space. Dimensionality reduction is frequently used before clustering, as cells in high dimensions can wrongly appear to be close due to distance metrics behaving non-intuitively. Principal component analysis The most frequently used technique is PCA, which identifies the directions of largest variance (the principal components) and transforms the data so that the first principal component has the largest possible variance, and successive principal components in turn each have the highest variance possible while remaining orthogonal to the preceding components. The contribution each gene makes to each component is used to infer which genes contribute the most to variance in the population and are involved in differentiating subpopulations. Differential expression Detecting differences in gene expression level between two populations is used for both single-cell and bulk transcriptomic data. Specialised methods have been designed for single-cell data that consider single-cell features such as technical dropouts and the shape of the distribution, e.g. bimodal vs. unimodal. Gene ontology enrichment Gene ontology terms describe gene functions and the relationships between those functions in three classes: Molecular function Cellular component Biological process Gene Ontology (GO) term enrichment is a technique used to identify which GO terms are over-represented or under-represented in a given set of genes. In single-cell analysis, the input list of genes of interest can be selected based on differentially expressed genes or on groups of genes generated from biclustering. The number of genes annotated to a GO term in the input list is normalised against the number of genes annotated to that GO term in the background set of all genes in the genome to determine statistical significance.
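A minimal sketch of the reduce-then-cluster workflow discussed above, assuming scikit-learn and a synthetic normalised expression matrix invented for illustration (a real analysis would start from actual counts and tune the number of components and clusters):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Synthetic normalised expression matrix: 200 cells x 1000 genes,
# with two hypothetical cell types separated by 50 marker genes.
X = rng.normal(0.0, 1.0, size=(200, 1000))
X[:100, :50] += 3.0  # cell type A overexpresses the marker genes

# Reduce to a few principal components before clustering, since
# distances behave non-intuitively in 1000 dimensions.
pcs = PCA(n_components=10).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)

print(np.bincount(labels))  # two clusters of roughly 100 cells each
```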
Pseudotemporal ordering Pseudo-temporal ordering (or trajectory inference) is a technique that aims to infer gene expression dynamics from snapshot single-cell data. The method tries to order the cells in such a way that similar cells are closely positioned to each other. This trajectory of cells can be linear, but can also bifurcate or follow more complex graph structures. The trajectory therefore enables the inference of gene expression dynamics and the ordering of cells by their progression through differentiation or response to external stimuli. The method relies on the assumptions that the cells follow the same path through the process of interest and that their transcriptional state correlates with their progression. The algorithm can be applied to both mixed populations and temporal samples. More than 50 methods for pseudo-temporal ordering have been developed, and each has its own requirements for prior information (such as starting cells or time course data), detectable topologies, and methodology. An example is the Monocle algorithm, which carries out dimensionality reduction of the data, builds a minimum spanning tree using the transformed data, orders cells in pseudo-time by following the longest connected path of the tree, and consequently labels cells by type. Another example is the diffusion pseudotime (DPT) algorithm, which uses a diffusion map and diffusion process. Another class of methods, such as MARGARET, employs graph partitioning to capture complex trajectory topologies such as disconnected and multifurcating trajectories. Network inference Gene regulatory network inference is a technique that aims to construct a network, shown as a graph, in which the nodes represent the genes and the edges indicate co-regulatory interactions. The method relies on the assumption that a strong statistical relationship between the expression of genes is an indication of a potential functional relationship. The most commonly used measure of the strength of a statistical relationship is correlation. However, correlation fails to identify non-linear relationships, so mutual information is used as an alternative. Gene clusters linked in a network signify genes that undergo coordinated changes in expression. Integration The presence or strength of technical effects, and the types of cells observed, often differ between single-cell transcriptomics datasets generated using different experimental protocols and under different conditions. This difference results in strong batch effects that may bias the findings of statistical methods applied across batches, particularly in the presence of confounding. As a result of the aforementioned properties of single-cell transcriptomic data, batch correction methods developed for bulk sequencing data were observed to perform poorly. Consequently, researchers developed statistical methods that correct for batch effects while remaining robust to the properties of single-cell transcriptomic data, in order to integrate data from different sources or experimental batches. Laleh Haghverdi performed foundational work in formulating the use of mutual nearest neighbors between each batch to define batch-correction vectors. With these vectors, datasets that each include at least one shared cell type can be merged. An orthogonal approach involves the projection of each dataset onto a shared low-dimensional space using canonical correlation analysis.
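The mutual-nearest-neighbors idea behind this integration strategy can be sketched in a few lines. This is only a toy illustration: the published method additionally cosine-normalises the expression vectors and turns the pairs into smoothed batch-correction vectors, which is omitted here.

```python
import numpy as np

def mutual_nearest_neighbors(a, b, k=5):
    """Index pairs (i, j) where cell i of batch `a` and cell j of batch
    `b` are each among the other's k nearest neighbors (Euclidean)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    nn_ab = np.argsort(d, axis=1)[:, :k]    # k NNs in b for each cell of a
    nn_ba = np.argsort(d, axis=0)[:k, :].T  # k NNs in a for each cell of b
    return [(i, j) for i in range(len(a)) for j in nn_ab[i]
            if i in nn_ba[j]]

rng = np.random.default_rng(3)

# Two hypothetical batches in a shared 10-dimensional embedding; the
# second batch re-measures 30 of the same cells with a small shift.
batch1 = rng.normal(size=(50, 10))
batch2 = batch1[:30] + rng.normal(scale=0.1, size=(30, 10))

pairs = mutual_nearest_neighbors(batch1, batch2)
print(len(pairs), "mutual nearest neighbor pairs")
```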
Mutual nearest neighbors and canonical correlation analysis have also been combined to define integration "anchors" comprising reference cells in one dataset, to which query cells in another dataset are normalized. Another class of methods (e.g., scDREAMER) uses deep generative models such as variational autoencoders for learning batch-invariant latent cellular representations which can be used for downstream tasks such as cell type clustering, denoising of single-cell gene expression vectors and trajectory inference. See also RNA-Seq Single-cell analysis Single-cell sequencing Transcriptome Transcriptomics References External links Dissecting Tumor Heterogeneity with Single-Cell Transcriptomics The ultimate single-cell RNA sequencing guide by single-cell RNA sequencing service provider Single Cell Discoveries. DNA sequencing Molecular biology techniques Biotechnology
Single-cell transcriptomics
Chemistry,Biology
2,754
2,838,061
https://en.wikipedia.org/wiki/XMLVend
XMLVend is a South African-developed open interface standard which facilitates the sale of prepaid electricity credit between electricity utilities and clients. It is an application of web services to facilitate trade between various types of devices and a utility prepayment vending server. The standard is also being introduced and used in prepaid water vending. External links Eskom XMLVend implementation website Eskom Technical Documents Eskom Prepayment Johannesburg Water Web services
XMLVend
Technology
93
36,952,903
https://en.wikipedia.org/wiki/Tlalcuahuitl
Tlalcuahuitl or land rod, also known as a cuahuitl, was an Aztec unit for measuring distance that was approximately , to or long. The abbreviation used for tlalcuahuitl is (T) and the unit square of a tlalcuahuitl is (T²). Subdivisions of tlalcuahuitl Acolhua Congruence Arithmetic Using their knowledge of the tlalcuahuitl, Barbara J. Williams of the Department of Geography at the University of Wisconsin and María del Carmen Jorge y Jorge of the Research Institute for Applied Mathematics and Systems (IIMAS) and FENOMEC at the National Autonomous University of Mexico believe the Aztecs used a special type of arithmetic. The researchers called this arithmetic (tlapōhuallōtl) Acolhua congruence arithmetic, and it was used to calculate the area of Aztec landholdings. See also meter feet References Units of length Aztec mathematics
Tlalcuahuitl
Mathematics
191
301,429
https://en.wikipedia.org/wiki/Linear%20elasticity
Linear elasticity is a mathematical model of how solid objects deform and become internally stressed under prescribed loading conditions. It is a simplification of the more general nonlinear theory of elasticity and a branch of continuum mechanics. The fundamental "linearizing" assumptions of linear elasticity are: infinitesimal strains or "small" deformations (or strains) and linear relationships between the components of stress and strain. In addition, linear elasticity is valid only for stress states that do not produce yielding. These assumptions are reasonable for many engineering materials and engineering design scenarios. Linear elasticity is therefore used extensively in structural analysis and engineering design, often with the aid of finite element analysis. Mathematical formulation Equations governing a linear elastic boundary value problem are based on three tensor partial differential equations for the balance of linear momentum and six infinitesimal strain-displacement relations. The system of differential equations is completed by a set of linear algebraic constitutive relations. Direct tensor form In direct tensor form that is independent of the choice of coordinate system, these governing equations are: Cauchy momentum equation, which is an expression of Newton's second law. In convective form it is written as: $\nabla \cdot \boldsymbol{\sigma} + \mathbf{F} = \rho\,\ddot{\mathbf{u}}$ Strain-displacement equations: $\boldsymbol{\varepsilon} = \tfrac{1}{2}\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf{T}}\right]$ Constitutive equations. For elastic materials, Hooke's law represents the material behavior and relates the unknown stresses and strains. The general equation for Hooke's law is $\boldsymbol{\sigma} = \mathsf{C} : \boldsymbol{\varepsilon}$ where $\boldsymbol{\sigma}$ is the Cauchy stress tensor, $\boldsymbol{\varepsilon}$ is the infinitesimal strain tensor, $\mathbf{u}$ is the displacement vector, $\mathsf{C}$ is the fourth-order stiffness tensor, $\mathbf{F}$ is the body force per unit volume, $\rho$ is the mass density, $\nabla$ represents the nabla operator, $(\bullet)^{\mathsf{T}}$ represents a transpose, $\ddot{(\bullet)}$ represents the second material derivative with respect to time, and $:$ is the inner product of two second-order tensors (summation over repeated indices is implied). Cartesian coordinate form Expressed in terms of components with respect to a rectangular Cartesian coordinate system, the governing equations of linear elasticity are: Equation of motion: $\sigma_{ji,j} + F_i = \rho\,\partial_{tt} u_i$ where the subscript $(\bullet)_{,j}$ is a shorthand for $\partial(\bullet)/\partial x_j$ and $\partial_{tt}$ indicates $\partial^2/\partial t^2$, $\sigma_{ij} = \sigma_{ji}$ is the Cauchy stress tensor, $F_i$ is the body force density, $\rho$ is the mass density, and $u_i$ is the displacement. These are 3 independent equations with 6 independent unknowns (stresses). In engineering notation, they are: $\frac{\partial\sigma_x}{\partial x} + \frac{\partial\tau_{yx}}{\partial y} + \frac{\partial\tau_{zx}}{\partial z} + F_x = \rho\frac{\partial^2 u_x}{\partial t^2}$, $\frac{\partial\tau_{xy}}{\partial x} + \frac{\partial\sigma_y}{\partial y} + \frac{\partial\tau_{zy}}{\partial z} + F_y = \rho\frac{\partial^2 u_y}{\partial t^2}$, $\frac{\partial\tau_{xz}}{\partial x} + \frac{\partial\tau_{yz}}{\partial y} + \frac{\partial\sigma_z}{\partial z} + F_z = \rho\frac{\partial^2 u_z}{\partial t^2}$. Strain-displacement equations: $\varepsilon_{ij} = \tfrac{1}{2}\left(u_{i,j} + u_{j,i}\right)$ where $\varepsilon_{ij} = \varepsilon_{ji}$ is the strain. These are 6 independent equations relating strains and displacements with 9 independent unknowns (strains and displacements). In engineering notation, they are: $\varepsilon_x = \frac{\partial u_x}{\partial x}$, $\varepsilon_y = \frac{\partial u_y}{\partial y}$, $\varepsilon_z = \frac{\partial u_z}{\partial z}$, $\gamma_{xy} = \frac{\partial u_x}{\partial y} + \frac{\partial u_y}{\partial x}$, $\gamma_{yz} = \frac{\partial u_y}{\partial z} + \frac{\partial u_z}{\partial y}$, $\gamma_{zx} = \frac{\partial u_z}{\partial x} + \frac{\partial u_x}{\partial z}$. Constitutive equations. The equation for Hooke's law is: $\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl}$ where $C_{ijkl}$ is the stiffness tensor. These are 6 independent equations relating stresses and strains. The requirement of the symmetry of the stress and strain tensors leads to equality of many of the elastic constants, reducing the number of different elements to 21. An elastostatic boundary value problem for an isotropic-homogeneous medium is a system of 15 independent equations and an equal number of unknowns (3 equilibrium equations, 6 strain-displacement equations, and 6 constitutive equations). Specifying the boundary conditions, the boundary value problem is completely defined. To solve the system two approaches can be taken according to the boundary conditions of the boundary value problem: a displacement formulation, and a stress formulation.
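As a quick numerical companion to the constitutive equations above (a sketch only; the Lamé parameters are made-up, steel-like values and numpy is assumed), the following builds the isotropic stiffness tensor and checks that the contraction $C_{ijkl}\varepsilon_{kl}$ reproduces the closed-form isotropic Hooke's law:

```python
import numpy as np

# Made-up, steel-like Lame parameters in pascals (assumption for this sketch).
lam, mu = 1.15e11, 7.7e10
d = np.eye(3)  # Kronecker delta

# Isotropic stiffness tensor C_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk).
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# A small symmetric strain tensor.
eps = np.array([[1e-4, 2e-5, 0.0],
                [2e-5, -5e-5, 1e-5],
                [0.0, 1e-5, 3e-5]])

# Hooke's law sigma_ij = C_ijkl eps_kl versus its closed isotropic form.
sigma = np.einsum('ijkl,kl->ij', C, eps)
sigma_closed = lam * np.trace(eps) * d + 2.0 * mu * eps
assert np.allclose(sigma, sigma_closed)
print(sigma)
```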
Cylindrical coordinate form In cylindrical coordinates ($r, \theta, z$) the equations of motion are $\frac{\partial\sigma_{rr}}{\partial r} + \frac{1}{r}\frac{\partial\sigma_{r\theta}}{\partial\theta} + \frac{\partial\sigma_{rz}}{\partial z} + \frac{\sigma_{rr}-\sigma_{\theta\theta}}{r} + F_r = \rho\,\ddot{u}_r$, $\frac{\partial\sigma_{r\theta}}{\partial r} + \frac{1}{r}\frac{\partial\sigma_{\theta\theta}}{\partial\theta} + \frac{\partial\sigma_{\theta z}}{\partial z} + \frac{2\sigma_{r\theta}}{r} + F_\theta = \rho\,\ddot{u}_\theta$, $\frac{\partial\sigma_{rz}}{\partial r} + \frac{1}{r}\frac{\partial\sigma_{\theta z}}{\partial\theta} + \frac{\partial\sigma_{zz}}{\partial z} + \frac{\sigma_{rz}}{r} + F_z = \rho\,\ddot{u}_z$. The strain-displacement relations are $\varepsilon_{rr} = \frac{\partial u_r}{\partial r}$, $\varepsilon_{\theta\theta} = \frac{1}{r}\left(\frac{\partial u_\theta}{\partial\theta} + u_r\right)$, $\varepsilon_{zz} = \frac{\partial u_z}{\partial z}$, $\varepsilon_{r\theta} = \frac{1}{2}\left(\frac{1}{r}\frac{\partial u_r}{\partial\theta} + \frac{\partial u_\theta}{\partial r} - \frac{u_\theta}{r}\right)$, $\varepsilon_{\theta z} = \frac{1}{2}\left(\frac{\partial u_\theta}{\partial z} + \frac{1}{r}\frac{\partial u_z}{\partial\theta}\right)$, $\varepsilon_{rz} = \frac{1}{2}\left(\frac{\partial u_z}{\partial r} + \frac{\partial u_r}{\partial z}\right)$, and the constitutive relations are the same as in Cartesian coordinates, except that the indices 1, 2, 3 now stand for $r$, $\theta$, $z$, respectively. Spherical coordinate form In spherical coordinates ($r, \theta, \phi$) the equations of motion are $\frac{\partial\sigma_{rr}}{\partial r} + \frac{1}{r}\frac{\partial\sigma_{r\theta}}{\partial\theta} + \frac{1}{r\sin\theta}\frac{\partial\sigma_{r\phi}}{\partial\phi} + \frac{1}{r}\left(2\sigma_{rr} - \sigma_{\theta\theta} - \sigma_{\phi\phi} + \sigma_{r\theta}\cot\theta\right) + F_r = \rho\,\ddot{u}_r$, $\frac{\partial\sigma_{r\theta}}{\partial r} + \frac{1}{r}\frac{\partial\sigma_{\theta\theta}}{\partial\theta} + \frac{1}{r\sin\theta}\frac{\partial\sigma_{\theta\phi}}{\partial\phi} + \frac{1}{r}\left[(\sigma_{\theta\theta} - \sigma_{\phi\phi})\cot\theta + 3\sigma_{r\theta}\right] + F_\theta = \rho\,\ddot{u}_\theta$, $\frac{\partial\sigma_{r\phi}}{\partial r} + \frac{1}{r}\frac{\partial\sigma_{\theta\phi}}{\partial\theta} + \frac{1}{r\sin\theta}\frac{\partial\sigma_{\phi\phi}}{\partial\phi} + \frac{1}{r}\left(3\sigma_{r\phi} + 2\sigma_{\theta\phi}\cot\theta\right) + F_\phi = \rho\,\ddot{u}_\phi$. The strain tensor in spherical coordinates is $\varepsilon_{rr} = \frac{\partial u_r}{\partial r}$, $\varepsilon_{\theta\theta} = \frac{1}{r}\left(\frac{\partial u_\theta}{\partial\theta} + u_r\right)$, $\varepsilon_{\phi\phi} = \frac{1}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi} + \frac{u_r}{r} + \frac{u_\theta\cot\theta}{r}$, $\varepsilon_{r\theta} = \frac{1}{2}\left(\frac{1}{r}\frac{\partial u_r}{\partial\theta} + \frac{\partial u_\theta}{\partial r} - \frac{u_\theta}{r}\right)$, $\varepsilon_{\theta\phi} = \frac{1}{2}\left(\frac{1}{r\sin\theta}\frac{\partial u_\theta}{\partial\phi} + \frac{1}{r}\frac{\partial u_\phi}{\partial\theta} - \frac{u_\phi\cot\theta}{r}\right)$, $\varepsilon_{r\phi} = \frac{1}{2}\left(\frac{1}{r\sin\theta}\frac{\partial u_r}{\partial\phi} + \frac{\partial u_\phi}{\partial r} - \frac{u_\phi}{r}\right)$. (An)isotropic (in)homogeneous media In isotropic media, the stiffness tensor gives the relationship between the stresses (resulting internal stresses) and the strains (resulting deformations). For an isotropic medium, the stiffness tensor has no preferred direction: an applied force will give the same displacements (relative to the direction of the force) no matter the direction in which the force is applied. In the isotropic case, the stiffness tensor may be written: $C_{ijkl} = K\,\delta_{ij}\delta_{kl} + \mu\left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} - \tfrac{2}{3}\delta_{ij}\delta_{kl}\right)$ where $\delta_{ij}$ is the Kronecker delta, K is the bulk modulus (or incompressibility), and $\mu$ is the shear modulus (or rigidity), two elastic moduli. If the medium is inhomogeneous, the isotropic model is sensible if either the medium is piecewise-constant or weakly inhomogeneous; in the strongly inhomogeneous smooth model, anisotropy has to be accounted for. If the medium is homogeneous, then the elastic moduli will be independent of the position in the medium. The constitutive equation may now be written as: $\sigma_{ij} = K\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\left(\varepsilon_{ij} - \tfrac{1}{3}\varepsilon_{kk}\,\delta_{ij}\right)$. This expression separates the stress into a scalar part on the left which may be associated with a scalar pressure, and a traceless part on the right which may be associated with shear forces. A simpler expression is: $\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij}$ where λ is Lamé's first parameter. Since the constitutive equation is simply a set of linear equations, the strain may be expressed as a function of the stresses as: $\varepsilon_{ij} = \frac{1}{9K}\,\sigma_{kk}\,\delta_{ij} + \frac{1}{2\mu}\left(\sigma_{ij} - \tfrac{1}{3}\sigma_{kk}\,\delta_{ij}\right)$ which is again, a scalar part on the left and a traceless shear part on the right. More simply: $\varepsilon_{ij} = \frac{1}{E}\left[(1+\nu)\,\sigma_{ij} - \nu\,\sigma_{kk}\,\delta_{ij}\right]$ where $\nu$ is Poisson's ratio and $E$ is Young's modulus. Elastostatics Elastostatics is the study of linear elasticity under the conditions of equilibrium, in which all forces on the elastic body sum to zero, and the displacements are not a function of time. The equilibrium equations are then $\sigma_{ji,j} + F_i = 0$. In engineering notation (with tau as shear stress), $\frac{\partial\sigma_x}{\partial x} + \frac{\partial\tau_{yx}}{\partial y} + \frac{\partial\tau_{zx}}{\partial z} + F_x = 0$, $\frac{\partial\tau_{xy}}{\partial x} + \frac{\partial\sigma_y}{\partial y} + \frac{\partial\tau_{zy}}{\partial z} + F_y = 0$, $\frac{\partial\tau_{xz}}{\partial x} + \frac{\partial\tau_{yz}}{\partial y} + \frac{\partial\sigma_z}{\partial z} + F_z = 0$. This section will discuss only the isotropic homogeneous case. Displacement formulation In this case, the displacements are prescribed everywhere in the boundary. In this approach, the strains and stresses are eliminated from the formulation, leaving the displacements as the unknowns to be solved for in the governing equations. First, the strain-displacement equations are substituted into the constitutive equations (Hooke's Law), eliminating the strains as unknowns: $\sigma_{ij} = \lambda\,\delta_{ij}\,u_{k,k} + \mu\left(u_{i,j} + u_{j,i}\right)$. Differentiating (assuming $\lambda$ and $\mu$ are spatially uniform) yields: $\sigma_{ij,j} = \lambda\,u_{k,ki} + \mu\left(u_{i,jj} + u_{j,ij}\right)$. Substituting into the equilibrium equation yields: $\lambda\,u_{k,ki} + \mu\left(u_{i,jj} + u_{j,ij}\right) + F_i = 0$ or (replacing the double (dummy, i.e. summation) indices k,k by j,j and interchanging the order of the partial derivatives ij to ji by virtue of Schwarz's theorem) $(\lambda+\mu)\,u_{j,ij} + \mu\,u_{i,jj} + F_i = 0$ where $\lambda$ and $\mu$ are Lamé parameters. In this way, the only unknowns left are the displacements, hence the name for this formulation. The governing equations obtained in this manner are called the elastostatic equations, the special case of the steady Navier–Cauchy equations given below. Once the displacement field has been calculated, the displacements can be replaced into the strain-displacement equations to solve for strains, which later are used in the constitutive equations to solve for stresses.
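The substitution carried out above can be verified symbolically. The sketch below (sympy assumed; the displacement components are left as arbitrary functions) confirms that the divergence of the isotropic stress equals the Navier–Cauchy operator $(\lambda+\mu)\nabla(\nabla\cdot\mathbf{u}) + \mu\nabla^2\mathbf{u}$ component by component:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
lam, mu = sp.symbols('lambda mu')
u = [sp.Function(f'u{i}')(x, y, z) for i in range(3)]

# Strain from displacements, then isotropic Hooke's law.
eps = [[(sp.diff(u[i], X[j]) + sp.diff(u[j], X[i])) / 2
        for j in range(3)] for i in range(3)]
tr = sum(eps[k][k] for k in range(3))
sigma = [[lam * tr * (1 if i == j else 0) + 2 * mu * eps[i][j]
          for j in range(3)] for i in range(3)]

# div(sigma) should equal (lam + mu) grad(div u) + mu laplacian(u).
div_u = sum(sp.diff(u[k], X[k]) for k in range(3))
for i in range(3):
    div_sigma = sum(sp.diff(sigma[i][j], X[j]) for j in range(3))
    navier = ((lam + mu) * sp.diff(div_u, X[i])
              + mu * sum(sp.diff(u[i], X[j], 2) for j in range(3)))
    assert sp.simplify(div_sigma - navier) == 0
print("divergence of stress matches the Navier-Cauchy operator")
```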
The biharmonic equation The elastostatic equation may be written: $(\lambda+\mu)\,u_{j,ij} + \mu\,u_{i,jj} = -F_i$. Taking the divergence of both sides of the elastostatic equation and assuming the body forces have zero divergence (homogeneous in domain) ($F_{i,i} = 0$) we have $(\lambda+\mu)\,u_{j,iij} + \mu\,u_{i,ijj} = 0$. Noting that summed indices need not match, and that the partial derivatives commute, the two differential terms are seen to be the same and we have: $(\lambda+2\mu)\,u_{j,iij} = 0$ from which we conclude that: $u_{j,iij} = \nabla^2\left(\nabla\cdot\mathbf{u}\right) = 0$. Taking the Laplacian of both sides of the elastostatic equation, and assuming in addition $F_{i,kk} = 0$, we have $(\lambda+\mu)\,u_{j,kkij} + \mu\,u_{i,kkjj} = 0$. From the divergence equation, the first term on the left is zero (Note: again, the summed indices need not match) and we have: $\mu\,u_{i,kkjj} = 0$ from which we conclude that: $u_{i,kkjj} = 0$ or, in coordinate-free notation $\nabla^4\mathbf{u} = 0$ which is just the biharmonic equation in $\mathbf{u}$. Stress formulation In this case, the surface tractions are prescribed everywhere on the surface boundary. In this approach, the strains and displacements are eliminated leaving the stresses as the unknowns to be solved for in the governing equations. Once the stress field is found, the strains are then found using the constitutive equations. There are six independent components of the stress tensor which need to be determined, yet in the displacement formulation, there are only three components of the displacement vector which need to be determined. This means that there are some constraints which must be placed upon the stress tensor, to reduce the number of degrees of freedom to three. Using the constitutive equations, these constraints are derived directly from corresponding constraints which must hold for the strain tensor, which also has six independent components. The constraints on the strain tensor are derivable directly from the definition of the strain tensor as a function of the displacement vector field, which means that these constraints introduce no new concepts or information. It is the constraints on the strain tensor that are most easily understood. If the elastic medium is visualized as a set of infinitesimal cubes in the unstrained state, then after the medium is strained, an arbitrary strain tensor must yield a situation in which the distorted cubes still fit together without overlapping. In other words, for a given strain, there must exist a continuous vector field (the displacement) from which that strain tensor can be derived. The constraints on the strain tensor that are required to assure that this is the case were discovered by Saint Venant, and are called the "Saint Venant compatibility equations". These are 81 equations, 6 of which are independent non-trivial equations, which relate the different strain components. These are expressed in index notation as: $\varepsilon_{ij,km} + \varepsilon_{km,ij} - \varepsilon_{ik,jm} - \varepsilon_{jm,ik} = 0$. In engineering notation, they are: $\frac{\partial^2\varepsilon_x}{\partial y^2} + \frac{\partial^2\varepsilon_y}{\partial x^2} = \frac{\partial^2\gamma_{xy}}{\partial x\,\partial y}$ (and the two analogous relations for the yz and zx planes), together with $2\,\frac{\partial^2\varepsilon_x}{\partial y\,\partial z} = \frac{\partial}{\partial x}\left(-\frac{\partial\gamma_{yz}}{\partial x} + \frac{\partial\gamma_{zx}}{\partial y} + \frac{\partial\gamma_{xy}}{\partial z}\right)$ (and the two analogous relations obtained by cyclic permutation of x, y, z). The strains in this equation are then expressed in terms of the stresses using the constitutive equations, which yields the corresponding constraints on the stress tensor. These constraints on the stress tensor are known as the Beltrami-Michell equations of compatibility: $\sigma_{ij,kk} + \frac{1}{1+\nu}\,\sigma_{kk,ij} = -\frac{\nu}{1-\nu}\,\delta_{ij}\,F_{k,k} - F_{i,j} - F_{j,i}$. In the special situation where the body force is homogeneous, the above equations reduce to $(1+\nu)\,\sigma_{ij,kk} + \sigma_{kk,ij} = 0$. A necessary, but insufficient, condition for compatibility under this situation is $\nabla^4\boldsymbol{\sigma} = 0$ or $\sigma_{ij,kk\ell\ell} = 0$. These constraints, along with the equilibrium equation (or equation of motion for elastodynamics) allow the calculation of the stress tensor field. Once the stress field has been calculated from these equations, the strains can be obtained from the constitutive equations, and the displacement field from the strain-displacement equations.
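The Saint Venant compatibility conditions can also be checked mechanically. In the sympy sketch below (the displacement field is an arbitrary polynomial chosen only for illustration), strains derived from a displacement field satisfy all 81 index combinations identically, while a tampered strain tensor fails:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

# Arbitrary (hypothetical) smooth displacement field.
u = [x**2 * y + z**3, x * y * z, y**2 + x * z]
eps = [[(sp.diff(u[i], X[j]) + sp.diff(u[j], X[i])) / 2
        for j in range(3)] for i in range(3)]

def compatible(e):
    """Check eps_ij,km + eps_km,ij - eps_ik,jm - eps_jm,ik = 0 for all indices."""
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for m in range(3):
                    t = (sp.diff(e[i][j], X[k], X[m]) + sp.diff(e[k][m], X[i], X[j])
                         - sp.diff(e[i][k], X[j], X[m]) - sp.diff(e[j][m], X[i], X[k]))
                    if sp.simplify(t) != 0:
                        return False
    return True

print(compatible(eps))  # True: strains derived from a displacement field
eps[0][0] += y**2       # tamper with one strain component
print(compatible(eps))  # False: no displacement field yields these strains
```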
An alternative solution technique is to express the stress tensor in terms of stress functions which automatically yield a solution to the equilibrium equation. The stress functions then obey a single differential equation which corresponds to the compatibility equations. Solutions for elastostatic cases Thomson's solution - point force in an infinite isotropic medium The most important solution of the Navier–Cauchy or elastostatic equation is for that of a force acting at a point in an infinite isotropic medium. This solution was found by William Thomson (later Lord Kelvin) in 1848 (Thomson 1848). This solution is the analog of Coulomb's law in electrostatics. A derivation is given in Landau & Lifshitz. Defining $\nu$ as Poisson's ratio, the solution may be expressed as $u_i = G_{ik}\,F_k$ where $F_k$ is the force vector being applied at the point, and $G_{ik}$ is a tensor Green's function which may be written in Cartesian coordinates as: $G_{ik} = \frac{1}{16\pi\mu(1-\nu)\,r}\left[(3-4\nu)\,\delta_{ik} + \frac{x_i x_k}{r^2}\right]$. It may also be compactly written as: $G_{ik} = \frac{1}{4\pi\mu}\left[\frac{\delta_{ik}}{r} - \frac{1}{4(1-\nu)}\frac{\partial^2 r}{\partial x_i\,\partial x_k}\right]$ where $r$ is the total distance to the point. It is particularly helpful to write the displacement in cylindrical coordinates for a point force directed along the z-axis. Defining $\hat{\boldsymbol{\rho}}$ and $\hat{\mathbf{z}}$ as unit vectors in the $\rho$ and $z$ directions respectively yields: $\mathbf{u} = \frac{F_z}{16\pi\mu(1-\nu)\,r}\left[\frac{\rho z}{r^2}\,\hat{\boldsymbol{\rho}} + \left(3-4\nu+\frac{z^2}{r^2}\right)\hat{\mathbf{z}}\right]$. It can be seen that there is a component of the displacement in the direction of the force, which diminishes, as is the case for the potential in electrostatics, as 1/r for large r. There is also an additional ρ-directed component. Boussinesq–Cerruti solution - point force at the origin of an infinite isotropic half-space Another useful solution is that of a point force acting on the surface of an infinite half-space. It was derived by Boussinesq for the normal force and Cerruti for the tangential force and a derivation is given in Landau & Lifshitz. In this case, the solution is again written as a Green's tensor which goes to zero at infinity, and the component of the stress tensor normal to the surface vanishes. This solution may be written in Cartesian coordinates in terms of $\nu$, Poisson's ratio. Other solutions Point force inside an infinite isotropic half-space. Point force on a surface of an isotropic half-space. Contact of two elastic bodies: the Hertz solution. See also the page on Contact mechanics. Elastodynamics in terms of displacements Elastodynamics is the study of elastic waves and involves linear elasticity with variation in time. An elastic wave is a type of mechanical wave that propagates in elastic or viscoelastic materials. The elasticity of the material provides the restoring force of the wave. When they occur in the Earth as the result of an earthquake or other disturbance, elastic waves are usually called seismic waves. The linear momentum equation is simply the equilibrium equation with an additional inertial term: $\sigma_{ji,j} + F_i = \rho\,\ddot{u}_i$. If the material is governed by anisotropic Hooke's law (with the stiffness tensor homogeneous throughout the material), one obtains the displacement equation of elastodynamics: $\left(C_{ijkl}\,u_{k,l}\right)_{,j} + F_i = \rho\,\ddot{u}_i$. If the material is isotropic and homogeneous, one obtains the (general, or transient) Navier–Cauchy equation: $(\lambda+\mu)\,u_{j,ij} + \mu\,u_{i,jj} + F_i = \rho\,\ddot{u}_i$. The elastodynamic wave equation can also be expressed as $\left(\delta_{kl}\,\partial_{tt} - A_{kl}[\nabla]\right)u_l = \frac{1}{\rho}\,F_k$ where $A_{kl}[\nabla] = \frac{1}{\rho}\,\partial_i\,C_{iklj}\,\partial_j$ is the acoustic differential operator, and $\delta_{kl}$ is the Kronecker delta. In isotropic media, the stiffness tensor has the form $C_{ijkl} = K\,\delta_{ij}\delta_{kl} + \mu\left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} - \tfrac{2}{3}\delta_{ij}\delta_{kl}\right)$ where $K$ is the bulk modulus (or incompressibility), and $\mu$ is the shear modulus (or rigidity), two elastic moduli. If the material is homogeneous (i.e.
the stiffness tensor is constant throughout the material), the acoustic operator becomes: $A_{kl}[\nabla] = \alpha^2\,\partial_k\partial_l + \beta^2\left(\delta_{kl}\,\nabla^2 - \partial_k\partial_l\right)$. For plane waves, the above differential operator becomes the acoustic algebraic operator: $A_{kl}[\mathbf{n}] = \alpha^2\,n_k n_l + \beta^2\left(\delta_{kl} - n_k n_l\right)$ where $\alpha^2 = \left(K + \tfrac{4}{3}\mu\right)/\rho$ and $\beta^2 = \mu/\rho$ are the eigenvalues of $A[\mathbf{n}]$ with eigenvectors parallel and orthogonal to the propagation direction $\mathbf{n}$, respectively. The associated waves are called longitudinal and shear elastic waves. In the seismological literature, the corresponding plane waves are called P-waves and S-waves (see Seismic wave). Elastodynamics in terms of stresses Elimination of displacements and strains from the governing equations leads to the Ignaczak equation of elastodynamics. In the case of local isotropy, this reduces to a formulation involving only the stresses and the mass density. The principal characteristics of this formulation include: (1) it avoids gradients of compliance but introduces gradients of mass density; (2) it is derivable from a variational principle; (3) it is advantageous for handling traction initial-boundary value problems; (4) it allows a tensorial classification of elastic waves; (5) it offers a range of applications in elastic wave propagation problems; (6) it can be extended to dynamics of classical or micropolar solids with interacting fields of diverse types (thermoelastic, fluid-saturated porous, piezoelectro-elastic...) as well as nonlinear media. Anisotropic homogeneous media For anisotropic media, the stiffness tensor $C_{ijkl}$ is more complicated. The symmetry of the stress tensor $\sigma_{ij}$ means that there are at most 6 different elements of stress. Similarly, there are at most 6 different elements of the strain tensor $\varepsilon_{ij}$. Hence the fourth-order stiffness tensor $C_{ijkl}$ may be written as a matrix $C_{\alpha\beta}$ (a tensor of second order). Voigt notation is the standard mapping for tensor indices: $11\to1,\;22\to2,\;33\to3,\;23,32\to4,\;13,31\to5,\;12,21\to6$. With this notation, one can write the elasticity matrix for any linearly elastic medium as: $C_{\alpha\beta} = \begin{bmatrix} C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16} \\ C_{12} & C_{22} & C_{23} & C_{24} & C_{25} & C_{26} \\ C_{13} & C_{23} & C_{33} & C_{34} & C_{35} & C_{36} \\ C_{14} & C_{24} & C_{34} & C_{44} & C_{45} & C_{46} \\ C_{15} & C_{25} & C_{35} & C_{45} & C_{55} & C_{56} \\ C_{16} & C_{26} & C_{36} & C_{46} & C_{56} & C_{66} \end{bmatrix}$. As shown, the matrix $C_{\alpha\beta}$ is symmetric; this is a result of the existence of a strain energy density function which satisfies $\sigma_{ij} = \partial W/\partial\varepsilon_{ij}$. Hence, there are at most 21 different elements of $C_{\alpha\beta}$. The isotropic special case has 2 independent elements: $\begin{bmatrix} \lambda+2\mu & \lambda & \lambda & 0 & 0 & 0 \\ \lambda & \lambda+2\mu & \lambda & 0 & 0 & 0 \\ \lambda & \lambda & \lambda+2\mu & 0 & 0 & 0 \\ 0 & 0 & 0 & \mu & 0 & 0 \\ 0 & 0 & 0 & 0 & \mu & 0 \\ 0 & 0 & 0 & 0 & 0 & \mu \end{bmatrix}$. The simplest anisotropic case, that of cubic symmetry, has 3 independent elements: $\begin{bmatrix} C_{11} & C_{12} & C_{12} & 0 & 0 & 0 \\ C_{12} & C_{11} & C_{12} & 0 & 0 & 0 \\ C_{12} & C_{12} & C_{11} & 0 & 0 & 0 \\ 0 & 0 & 0 & C_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & C_{44} & 0 \\ 0 & 0 & 0 & 0 & 0 & C_{44} \end{bmatrix}$. The case of transverse isotropy, also called polar anisotropy, (with a single axis (the 3-axis) of symmetry) has 5 independent elements: $\begin{bmatrix} C_{11} & C_{11}-2C_{66} & C_{13} & 0 & 0 & 0 \\ C_{11}-2C_{66} & C_{11} & C_{13} & 0 & 0 & 0 \\ C_{13} & C_{13} & C_{33} & 0 & 0 & 0 \\ 0 & 0 & 0 & C_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & C_{44} & 0 \\ 0 & 0 & 0 & 0 & 0 & C_{66} \end{bmatrix}$. When the transverse isotropy is weak (i.e. close to isotropy), an alternative parametrization utilizing Thomsen parameters is convenient for the formulas for wave speeds. The case of orthotropy (the symmetry of a brick) has 9 independent elements: $\begin{bmatrix} C_{11} & C_{12} & C_{13} & 0 & 0 & 0 \\ C_{12} & C_{22} & C_{23} & 0 & 0 & 0 \\ C_{13} & C_{23} & C_{33} & 0 & 0 & 0 \\ 0 & 0 & 0 & C_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & C_{55} & 0 \\ 0 & 0 & 0 & 0 & 0 & C_{66} \end{bmatrix}$. Elastodynamics The elastodynamic wave equation for anisotropic media can be expressed as $\left(\delta_{kl}\,\partial_{tt} - A_{kl}[\nabla]\right)u_l = \frac{1}{\rho}\,F_k$ where $A_{kl}[\nabla] = \frac{1}{\rho}\,\partial_i\,C_{iklj}\,\partial_j$ is the acoustic differential operator, and $\delta_{kl}$ is the Kronecker delta. Plane waves and Christoffel equation A plane wave has the form $\mathbf{u}(\mathbf{x},t) = U\,f(\mathbf{n}\cdot\mathbf{x} - c\,t)\,\hat{\mathbf{u}}$ with $\hat{\mathbf{u}}$ of unit length. It is a solution of the wave equation with zero forcing, if and only if $c^2$ and $\hat{\mathbf{u}}$ constitute an eigenvalue/eigenvector pair of the acoustic algebraic operator $A_{kl}[\mathbf{n}] = \frac{1}{\rho}\,C_{iklj}\,n_i n_j$. This propagation condition (also known as the Christoffel equation) may be written as $A[\mathbf{n}]\,\hat{\mathbf{u}} = c^2\,\hat{\mathbf{u}}$ where $\mathbf{n}$ denotes the propagation direction and $c$ is the phase velocity. See also Castigliano's method Cauchy momentum equation Clapeyron's theorem Contact mechanics Deformation Elasticity (physics) GRADELA Hooke's law Infinitesimal strain theory Michell solution Plasticity (physics) Signorini problem Spring system Stress (mechanics) Stress functions References Elasticity (physics) Solid mechanics Sound
Linear elasticity
Physics,Materials_science
3,521
51,453,939
https://en.wikipedia.org/wiki/Testis-enhanced%20gene%20transfer%20family
The testis-enhanced gene transcript (TEGT) family includes the testis-enhanced gene transcript proteins of mammals, which are expressed at high levels in the testis, the putative glutamate/aspartate binding proteins of plants and animals, the YccA protein of Escherichia coli and the YetJ protein of Bacillus subtilis. These proteins are about 200-250 residues in length and exhibit seven transmembrane segments (TMSs). Homology Homologues are found in a variety of Gram-negative and Gram-positive bacteria, yeast, fungi, plants, animals and viruses. The E. coli genome encodes three paralogues, YbhL, YbhM and YccA. Distant homologues found in Drosophila melanogaster and the rat are the N-methyl-D-aspartate receptor-associated protein (NMDARA1) and the N-methyl-D-aspartate receptor glutamate-binding chain, respectively. Two others are the rat neural membrane protein 35 and the Arabidopsis thaliana Bax inhibitor-1 (BI-1) protein, which is capable of suppressing Bax-induced cell death in yeast. BI-1 One of these proteins, TEGT or the Bax Inhibitor-1 (TC# 1.A.14.1.1), has a C-terminal domain that forms a Ca2+-permeable channel. BI-1 is an ER-localized protein that protects against apoptosis and ER stress. BI-1 has been proposed to modulate ER Ca2+ homeostasis by acting as a Ca2+-leak channel. These proteins are distantly related to the ionotropic glutamate-binding protein of the N-methyl D-aspartate (NMDA) receptor of man. Homologues include a putative cold shock inducible protein and a SecY-stabilizing protein. Function Based on experimental determination of the BI-1 topology, Bultynck et al. propose that its C-terminal α-helical 20-amino-acid peptide catalyzes Ca2+ flux both in vivo and in vitro. The Ca2+-leak properties are conserved among animal orthologs, but not among plant and yeast orthologs. Mutation of one of the critical aspartate residues (D213) in the proposed Ca2+-channel pore of full-length BI-1 showed that D213 is essential for BI-1-dependent ER Ca2+ leak. Structure Chang et al. published crystal structures of a bacterial homolog, YetJ (TC# 1.A.14.2.3), at 1.9 Å resolution and characterized its calcium-leak activity. Its seven-transmembrane-helix fold features two triple-helix sandwiches wrapped around a central C-terminal helix. Structures obtained in closed and open conformations are reversibly interconvertible by changes in the pH. A hydrogen-bonded, perturbed pair of conserved aspartyl residues explains the pH dependence of this transition, and the pH regulates calcium influx in proteoliposomes. Homology models for human BI-1 provide insight into its cytoprotective activity. Transport Reaction The generalized reaction catalyzed by TEGT channels is: cations (out) ⇌ cations (in) References Protein families Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins
Testis-enhanced gene transfer family
Biology
715
2,910,109
https://en.wikipedia.org/wiki/Phi2%20Cancri
Phi2 Cancri (φ2 Cancri) is a binary star in the constellation Cancer, about 280 light-years from Earth. Both components are white A-type main-sequence dwarfs with apparent magnitudes of +6.3. They are separated by 5.126 arcseconds on the sky, and their mean apparent brightness is +5.55 magnitudes. References A-type main-sequence stars Binary stars Cancri, Phi2 Cancer (constellation) Durchmusterung objects Cancri, 23 041404 71150/1 3310/11
Phi2 Cancri
Astronomy
137
1,409,855
https://en.wikipedia.org/wiki/Information%20economics
Information economics or the economics of information is the branch of microeconomics that studies how information and information systems affect an economy and economic decisions. One application considers information embodied in certain types of commodities that are "expensive to produce but cheap to reproduce." Examples include computer software (e.g., Microsoft Windows), pharmaceuticals and technical books. Once information is recorded "on paper, in a computer, or on a compact disc, it can be reproduced and used by a second person essentially for free." Without the basic research, initial production of high-information commodities may be too unprofitable to market, a type of market failure. Government subsidization of basic research has been suggested as a way to mitigate the problem. The subject of "information economics" is treated under Journal of Economic Literature classification code JEL D8 – Information, Knowledge, and Uncertainty. The present article reflects topics included in that code. There are several subfields of information economics. Information as signal has been described as a kind of negative measure of uncertainty. It includes complete and scientific knowledge as special cases. The first insights in information economics related to the economics of information goods. In recent decades, there have been influential advances in the study of information asymmetries and their implications for contract theory, including market failure as a possibility. Information economics is formally related to game theory through the different types of games that may apply, including games with perfect information, complete information, and incomplete information. Experimental and game-theory methods have been developed to model and test theories of information economics, including potential public-policy applications such as mechanism design to elicit information-sharing and otherwise welfare-enhancing behavior. An example of game theory in practice would be two potential employees going for the same promotion at work and conversing with their employer about the job. One employee, however, may have more information about what the role would entail than the other. While the less informed employee may be willing to accept a lower pay rise for the new job, the other may have more knowledge of the hours and commitment the role would require and so expect higher pay. This is a clear use of incomplete information to give one person the advantage in a given scenario. If they talk about the promotion with each other, in a process called collusion, there may be an expectation that both will be equally informed about the job. However, the employee with more information may misinform the other about the value of the job relative to the work involved, making the promotion appear less appealing and hence not worth pursuing. This brings into play the incentives behind information economics and highlights non-cooperative games. Value of information The starting point for economic analysis is the observation that information has economic value because it allows individuals to make choices that yield higher expected payoffs or expected utility than they would obtain from choices made in the absence of information. Data valuation is an emerging discipline that seeks to understand and measure the economic characteristics of information and data.
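The value of information can be made concrete with a small worked example (all payoffs and probabilities below are hypothetical, chosen only for illustration): it is the difference between the best expected payoff attainable with and without observing the state before deciding.

```python
# Hypothetical decision: invest (+100 in a good state, -80 in a bad
# state) or pass (0 either way); the good state has probability 0.5.
p_good = 0.5
payoff = {("invest", "good"): 100, ("invest", "bad"): -80,
          ("pass", "good"): 0, ("pass", "bad"): 0}

def expected(action):
    return (p_good * payoff[(action, "good")]
            + (1 - p_good) * payoff[(action, "bad")])

# Without information: commit to the single action with the best
# expected payoff (here: invest, worth 10).
best_uninformed = max(expected("invest"), expected("pass"))

# With a perfect signal: choose the best action in each state, then
# average over states (worth 50).
best_informed = sum(p * max(payoff[("invest", s)], payoff[("pass", s)])
                    for p, s in [(p_good, "good"), (1 - p_good, "bad")])

print("value of information:", best_informed - best_uninformed)  # 40
```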
Information, the price mechanism and organizations Much of the literature in information economics was originally inspired by Friedrich Hayek's "The Use of Knowledge in Society" on the uses of the price mechanism in allowing information decentralization to order the effective use of resources. Although Hayek's work was intended to discredit the effectiveness of central planning agencies over a free market system, his proposal that price mechanisms communicate information about scarcity of goods inspired Abba Lerner, Tjalling Koopmans, Leonid Hurwicz, George Stigler and others to further develop the field of information economics. Next to market coordination through the price mechanism, transactions can also be executed within organizations. The information requirements of the transaction are the prime determinant for the actual (mix of) coordination mechanism(s) that we will observe. Information asymmetry Information asymmetry means that the parties in an interaction have different information, e.g. one party has more or better information than the other. Expecting the other side to have better information can lead to a change in behavior: the less informed party may try to prevent the other from taking advantage of him. This change in behavior may cause inefficiency. Examples of this problem are selection (adverse or advantageous) and moral hazard. Adverse selection occurs when one side of a partnership has information the other does not, whether deliberately or by accident due to poor communication. A classic paper on adverse selection is George Akerlof's The Market for Lemons. The most common example of the lemons market is in the automobile industry. As suggested by Akerlof, there are four car types that a buyer could consider: a new or a used car, and a good or a bad car (a "lemon", as it is more commonly known). When considering the market options, there is a possibility of purchasing a new lemon just as there is of purchasing a good used car. The uncertainty that arises from the probability of purchasing a lemon due to asymmetric information can cause the buyer to have doubts about a car's quality and the outcome of a purchase. The same dilemma exists in a multitude of markets where sellers have an incentive not to disclose information about a poor-quality product, knowing that the average standard set by good products across the industry will boost their selling power. The asymmetric information about car quality can lead to a breakdown in the automobile market's overall efficiency, for two reasons: first, because of the uncertainty between buyers and sellers, and second, because in the broader market only sellers with below-average vehicles will be willing to sell at the reduced prices that this uncertainty produces. There are two primary solutions for adverse selection: signaling and screening. Moral hazard involves a partnership between a principal and an agent and occurs when the agent changes their behaviour or actions after a contract has been finalised, which can cause adverse consequences for the principal. Moral hazard is present when there is a change in the agent's behaviour after taking out insurance cover designed to protect them. For example, someone who has purchased car insurance for their vehicle may afterwards hold their responsibility to a lower standard, going over the speed limit or generally driving recklessly.
The Global Financial Crisis of 2008 is another example, in which mortgage-backed securities were formed by collating subprime mortgages and sold to investors without disclosure of the risk involved. For moral hazard, contracting between principal and agent may be describable as a second-best solution, in which only payoffs are observable under information asymmetry. Insurance covers will often include a waiting-period clause to deter agents from changing their behaviour. Signaling Michael Spence originally proposed the idea of signaling. He proposed that in a situation with information asymmetry, it is possible for people to signal their type, thus credibly transferring information to the other party and resolving the asymmetry. This idea was originally studied in the context of looking for a job. An employer is interested in hiring a new employee who is skilled in learning. Of course, all prospective employees will claim to be skilled at learning, but only they know if they really are. This is an information asymmetry. Spence proposed that going to college can function as a credible signal of an ability to learn. Assuming that people who are skilled in learning can finish college more easily than people who are unskilled, then by attending college the skilled people signal their skill to prospective employers. This is true even if they didn't learn anything in school, and school was there solely as a signal. This works because the action they took (going to school) was easier for people who possessed the skill that they were trying to signal (a capacity for learning). Screening Joseph E. Stiglitz pioneered the theory of screening, in which the underinformed party can induce the other party to reveal their information. The underinformed party can provide a menu of choices in such a way that the optimal choice of the other party depends on their private information. By making a particular choice, the other party reveals that they have information that makes that choice optimal. For example, an amusement park wants to sell more expensive tickets to customers who value their time more highly than other customers do. Asking customers their willingness to pay will not work: everyone will claim to have low willingness to pay. But the park can offer a menu of priority and regular tickets, where priority allows skipping the line at rides and is more expensive. This will induce the customers with a higher value of time to buy the priority ticket and thereby reveal their type. Risk and Uncertainty of Information Fluctuations in the availability and accuracy of information can induce some level of risk and uncertainty. Difference between Risk and Uncertainty Risk describes circumstances under which the probability of every outcome is known to the decision-making individual but it is not fully certain which of the possible outcomes will occur. In contrast, uncertainty refers to a situation in which the probability of every outcome is unknown and cannot be accurately estimated; individuals will thus often lack sufficient economic information to make an informed decision. Risk Attitudes Risk attitude directly influences the behaviour of economic agents during decision-making under uncertainty by altering the individuals' perception of the valuation and reliability of information within the market. Stakeholders, particularly managers, will often demonstrate different risk attitudes, which dictate their decision-making towards a variety of investments.
Risk attitude is classified under three main categories: risk aversion, risk neutrality and risk-seeking dispositions. Risk-averse managers tend to prefer investments with a low degree of uncertainty that generate relatively lower expected returns, as opposed to those with a high degree of uncertainty that generate relatively higher expected returns. They are more likely to choose a decision with a guaranteed outcome and minimal risk, even if that means foregoing a potentially higher payoff. Risk-neutral managers primarily focus on maximising the expected outcome irrespective of the level of risk; this indifference leads them to pursue risky investment decisions whenever the potential payoff is greater than the potential loss. Risk-seeking managers, meanwhile, tend to prefer investments with the highest potential return, even if that decision means undertaking a higher degree of risk. Information goods Buying and selling information is not the same as buying and selling most other goods. There are three factors that make the economics of buying and selling information different from that of solid goods: First of all, information is non-rivalrous, which means consuming information does not exclude someone else from also consuming it. A related characteristic that alters information markets is that information has almost zero marginal cost: once the first copy exists, it costs nothing or almost nothing to make a second copy. This makes it easy to sell over and over. However, it makes classic marginal cost pricing completely infeasible. Second, exclusion is not a natural property of information goods, though it is possible to construct exclusion artificially. The nature of information is that, once it is known, it is difficult to exclude others from its use. Since information is likely to be both non-rivalrous and non-excludable, it is frequently considered an example of a public good. Third, the information market does not exhibit high degrees of transparency: to evaluate information, the information must be known, so one has to invest in learning it in order to evaluate it. To evaluate a piece of software one has to learn to use it; to evaluate a movie one has to watch it. The importance of these properties is explained by De Long and Froomkin in The Next Economy. Network effects Carl Shapiro and Hal Varian described network effects (also called network externalities) as products gaining additional value from each additional user of that good or service. Network effects are externalities in which an additional user joining the network provides an immediate benefit by increasing the network size. The total value of the network depends on the total number of adopters, but each new user captures only a marginal share of that benefit. This leads to a direct network effect for each user's adoption of the good, with an increased incentive for adoption as other users adopt and join the network. An indirect network effect occurs as complementary goods benefit from the adoption of the initial product. The stock of data is growing at an exponential rate; however, the application of this data lags far behind its creation. New data brings with it a potential increase in misleading or inaccurate information, which can crowd out the correct information.
This increase in unverified information is due to the easy and free nature of creating online data, which hampers users' ability to find sourced and verified data. Critical mass As new networks are developed, early adopters shape the social dynamics of the greater population, and the product matures toward what is known as critical mass. A product reaches maturity when it becomes self-sustaining, which is more likely to occur when there are positive cash flows, consistent revenue streams, customer retention and brand engagement. To form a following, low initial prices need to be offered, along with widespread marketing, to help create the snowball effect. More information In 2001, the Nobel prize in economics was awarded to George Akerlof, Michael Spence, and Joseph E. Stiglitz "for their analyses of markets with asymmetric information". See also Adverse selection Contract theory Game theory Indigo Era (economics) Information economy Moral hazard Product bundling Screening Signaling Single-crossing condition References Further reading Papers Bakos, Yannis and Brynjolfsson, Erik 2000. "Bundling and Competition on the Internet: Aggregation Strategies for Information Goods" Marketing Science Vol. 19, No. 1 pp. 63–82. Bakos, Yannis and Brynjolfsson, Erik 1999. "Bundling Information Goods: Pricing, Profits and Efficiency" Management Science, Vol. 45, No. 12 pp. 1613–1630. Brynjolfsson, Erik, and Saunders, Adam, 2009. Wired for Innovation: How Information Technology Is Reshaping the Economy. Mas-Colell, Andreu; Michael D. Whinston, and Jerry R. Green, 1995, Microeconomic Theory. Oxford University Press. Chapters 13 and 14 discuss applications of adverse selection and moral hazard models to contract theory. Milgrom, Paul R., 1981. "Good News and Bad News: Representation Theorems and Applications," Bell Journal of Economics, 12(2), pp. 380–391. Nelson, Phillip, 1970. "Information and Consumer Behavior," Journal of Political Economy, 78(2), pp. 311–329. _, 1974. "Advertising as Information," Journal of Political Economy, 82(4), pp. 729–754. Pissarides, C. A., 2001. "Search, Economics of," International Encyclopedia of the Social & Behavioral Sciences, pp. 13760–13768. Abstract. Rothschild, Michael and Joseph Stiglitz, 1976. "Equilibrium in Competitive Insurance Markets: An Essay on the Economics of Imperfect Information," Quarterly Journal of Economics, 90(4), pp. 629–649. Shapiro, Carl, and Hal R. Varian, 1999. Information Rules: A Strategic Guide to the Network Economy. Harvard University Press. Description and scroll to chapter-preview links. Stigler, George J., 1961. "The Economics of Information," Journal of Political Economy, 69(3), pp. 213–225. Stiglitz, Joseph E. and Andrew Weiss, 1981. "Credit Rationing in Markets with Imperfect Information," American Economic Review, 71(3), pp. 393–410. Monographs Birchler, Urs, and Monika Bütler, 2007. Information Economics. London, Routledge. Description and chapter-arrow-page links, pp. vii-xi. Douma, Sytse and Hein Schreuder, 2013. Economic Approaches to Organizations. 5th edition. London: Pearson. Maasoumi, Esfandiar, 1987. "Information theory," The New Palgrave: A Dictionary of Economics, v. 2, pp. 846–51. Marilyn M. Parker, Robert J. Benson, H.E. Trainor, 1988, Information Economics: Linking Business Performance to Information Technology, 978-0134645957. Theil, Henri, 1967. Economics and Information Theory. Amsterdam, North Holland. Dictionaries The New Palgrave Dictionary of Economics, 2008. 2nd Edition, selected entries: "bubbles" by Markus K.
Brunnermeier. "information aggregation and prices" by James Jordan. "information cascades" by Sushil Bikhchandani, David Hirshleifer, and Ivo Welch. "information sharing among firms" by Xavier Vives. "information technology and the world economy" by Dale W. Jorgenson and Khuong Vu. "insider trading" by Andrew Metrick. "learning and information aggregation in networks" by Douglas Gale and Shachar Kariv. "mechanism design" by Roger B. Myerson. "revelation principle" by Roger B. Myerson. "monetary business cycles (imperfect information)" by Christian Hellwig. "prediction markets" by Justin Wolfers and Eric Zitzewitz. "social networks in labour markets" by Antoni Calvó-Armengol and Yannis M. Ioannides. "strategic and extensive form games" by Martin J. Osborne. External links Economics Microeconomics
Information economics
Physics
3,567
773,271
https://en.wikipedia.org/wiki/AC%20power%20plugs%20and%20sockets
AC power plugs and sockets connect devices to mains electricity to supply them with electrical power. A plug is the connector attached to an electrically-operated device, often via a cable. A socket (also known as a receptacle or outlet) is fixed in place, often on the internal walls of buildings, and is connected to an AC electrical circuit. Inserting ("plugging in") the plug into the socket allows the device to draw power from this circuit. Plugs and wall-mounted sockets for portable appliances became available in the 1880s, to replace connections to light sockets. A proliferation of types were subsequently developed for both convenience and protection from electrical injury. Electrical plugs and sockets differ from one another in voltage and current rating, shape, size, and connector type. Different standard systems of plugs and sockets are used around the world, and many obsolete socket types are still found in older buildings. Coordination of technical standards has allowed some types of plug to be used across large regions to facilitate the production and import of electrical appliances and for the convenience of travellers. Some multi-standard sockets allow use of several types of plug. Incompatible sockets and plugs may be used with the help of adaptors, though these may not always provide full safety and performance. Overview of connections Single-phase sockets have two current-carrying connections to the power supply circuit, and may also have a third pin for a safety connection to earth ground. The plug is a male connector, usually with protruding pins that match the openings and female contacts in a socket. Some plugs also have a female contact, used only for the earth ground connection. Typically no energy is supplied to any exposed pins or terminals on the socket. In addition to the recessed contacts of the energised socket, plug and socket systems often have other safety features to reduce the risk of electric shock or damage to appliances. History When commercial electric power was first introduced in the 1880s, it was used primarily for lighting. Other portable appliances (such as vacuum cleaners, electric fans, smoothing irons, and curling-tong heaters) were connected to light-bulb sockets. As early as 1885 a two-pin plug and wall socket format was available on the British market. By about 1910 the first three-pin earthed (grounded) plugs appeared. Over time other safety improvements were gradually introduced to the market. The earliest national standard for plug and wall socket forms was set in 1915. Safety features Protection from accidental contact Designs of plugs and sockets have gradually developed to reduce the risk of electric shock and fire. Plugs are shaped to prevent bodily contact with live parts. Sockets may be recessed and plugs designed to fit closely within the recess to reduce risk of a user contacting the live pins. Contact pins may be sheathed with insulation over part of their length, so as to reduce exposure of energized metal during insertion or removal of the plug. Sockets may have automatic shutters to stop foreign objects from being inserted into energized contacts. Sockets are often set into a surround which prevents accidental contact with the live wires in the wall behind it. Some also have an integrated cover (e.g. a hinged flap) covering the socket itself when not in use, or a switch to turn off the socket. Overcurrent protection Some plugs have a built-in fuse which breaks the circuit if too much current is passed. 
Earthing (grounding) A third contact for a connection to earth is intended to protect against insulation failure of the connected device. Some early unearthed plug and socket types were revised to include an earthing pin or phased out in favour of earthed types. The plug is often designed so that the earth ground contact connects before the energized circuit contacts. The assigned IEC appliance class is governed by the requirement for earthing or equivalent protection. Class I equipment requires an earth contact in the plug and socket, while Class II equipment is unearthed and protects the user with double insulation. Polarisation Where a "neutral" conductor exists in supply wiring, polarisation of the plug can improve safety by preserving the distinction in the equipment. For example, appliances may ensure that switches interrupt the line side of the circuit, or can connect the shell of a screw-base lampholder to neutral to reduce electric shock hazard. In some designs, polarised plugs cannot be mated with non-polarised sockets. In NEMA 1 plugs, for example, the neutral blade is slightly wider than the hot blade, so it can only be inserted one way. Wiring systems where both circuit conductors have a significant potential with respect to earth do not benefit from polarised plugs. Voltage rating of plugs and power cords Plugs and power cords have a rated voltage and current assigned to them by the manufacturer. Using a plug or power cord that is inappropriate for the load may be a safety hazard. For example, high-current equipment can cause a fire when plugged into an extension cord with a current rating lower than necessary. Sometimes the cords used to plug in dual voltage 120 V / 240 V equipment are rated only for 125 V, so care must be taken by travellers to use only cords with an appropriate voltage rating. Extension Various methods can be used to increase the number or reach of sockets. Extension cords Extension cords (extension leads) are used for temporary connections when a socket is not within convenient reach of an appliance's power lead. This may be in the form of a single socket on a flexible cable or a power strip with multiple sockets. A power strip may also have switches, surge voltage protection, or overcurrent protection. Multisocket adaptors Multisocket adaptors (or "splitters") allow the connection of two or more plugs to a single socket. They are manufactured in various configurations, depending on the country and the region in which they are used, with various ratings. This allows connecting more than one electrical consumer item to one single socket and is mainly used for low power devices (TV sets, table lamps, computers, etc.). They are usually rated at 6 A 250 V, 10 A 250 V, or 16 A 250 V. This is the general rating of the adaptor, and indicates the maximum total load in amps, regardless of the number of sockets used (for example, if a 16 A 250 V adaptor has four sockets, it would be fine to plug four different devices into it that each consume 2 A as this represents a total load of only 8 A, whereas if only two devices were plugged into it that each consumed 10 A, the combined 20 A load would overload the circuit). In some countries these adaptors are banned and are not available in shops, as they may lead to fires due to overloading them or can cause excessive mechanical stress to wall-mounted sockets. Adaptors can be made with ceramic, Bakelite, or other plastic bodies. 
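The total-load rule described above is simple enough to state as a check (a hypothetical 16 A adaptor and made-up device currents, mirroring the worked example in the text):

```python
# Illustration of the total-load rule: an adaptor's amp rating caps
# the combined draw, regardless of how many sockets it has.
ADAPTOR_RATING_A = 16  # hypothetical 16 A 250 V four-way adaptor

def combined_load_ok(device_currents_a):
    """True if the summed device currents stay within the adaptor rating."""
    return sum(device_currents_a) <= ADAPTOR_RATING_A

print(combined_load_ok([2, 2, 2, 2]))  # True: 8 A total on a 16 A adaptor
print(combined_load_ok([10, 10]))      # False: 20 A total overloads it
```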
Cross-compatibility Universal sockets "Universal" or "multi-standard" sockets are intended to accommodate plugs of various types. In some jurisdictions, they violate safety standards for sockets. Safety advocates, the United States Army, and a manufacturer of sockets point out a number of safety issues with universal sockets and adaptors, including voltage mismatch, exposure of live pins, lack of a proper earth ground connection, and lack of protection from overload or short circuit. Universal sockets may not meet technical standards for durability, plug retention force, temperature rise of components, or other performance requirements, as they are outside the scope of national and international technical standards. A technical standard may include compatibility of a socket with more than one form of plug. The Thai dual socket is specified in figure 4 of TIS 166-2549 and is designed to accept Thai plugs, and also Type A, B, and C plugs. Chinese dual sockets have both an unearthed socket complying with figure 5 of GB 1002-2008 (both flat pin and 4.8 mm round pin), and an earthed socket complying with figure 4 of GB 1002-2008. Both Thai and Chinese dual sockets also physically accept plugs normally fitted to 120 V appliances (e.g. 120 V rated NEMA 1-15 ungrounded plugs). This can cause an electrical incompatibility, since both countries normally supply residential power only at 220 V. Swappable cables and plugs Commonly, manufacturers provide an IEC 60320 inlet on an appliance, with a detachable power cord (mains flex lead) and appropriate plug, so that they need not manufacture variants of the whole appliance differing only in the type of plug. Alternatively, the plug itself can often be swapped using standard or proprietary connectors. Travel adaptors Adaptors between standards are not included in most standards, and as a result they have no formal quality criteria defined. Physical compatibility does not ensure that the appliance and socket match in frequency or voltage. Adaptors allow travellers to connect devices to foreign sockets, but do not change voltage or frequency. A voltage converter is required for electrical compatibility in places with a different voltage than the device is designed for. A mismatch in frequency between the supply and appliances may still cause problems even at the correct voltage. Some appliances have a switch for the selection of voltage. Standard types in present use The plugs and sockets used in a given area are regulated by local governments. The International Electrotechnical Commission (IEC) maintains a guide with letter designations for generally compatible types of plugs, which expands on earlier guides published by the United States Department of Commerce. This is a de facto naming standard and guide for travellers. Some letter types correspond to several current ratings or different technical standards, so the letter does not uniquely identify a plug and socket within the type family, nor guarantee compatibility. Physical compatibility of the plug and socket does not ensure correct voltage, frequency, or current capacity. Not all plug and socket families have letters in the IEC guide, but those that do are noted in this article, as are some additional letters commonly used by retail vendors. In Europe, CENELEC publishes a list of approved plug and socket technical standards used in the member countries. Argentina IRAM 2073 and 2071 (Type I) The plug and socket system used in Class 1 applications in Argentina is defined by IRAM standards.
These two standards are: IRAM 2073 "Two pole plugs with earthing contact for domestic and similar purposes, rated 10 A and 20 A, 250 V AC" and IRAM 2071 "Two pole socket-outlets with earthing contact for 10 A and 20 A, 250 V AC, for fixed installations." The plug and socket system is similar in appearance to the Australian and Chinese plugs. It has an earthing pin and two flat current-carrying pins forming an inverted V-shape (120°). The flat pins for the 10 A version measure and for the 20 A version, and are set at 30° to the vertical at a nominal pitch of . The pin length is the same as in the Chinese version. The earthing pin length is for the 10 A version and for the 20 A version. On the plugs, the pole length is for the 10 A version and for the 20 A version. The most important difference from the Australian plug is that the Argentine plug is wired with the live and neutral contacts reversed. In Brazil, similar plugs and sockets are still commonly used in old installations for high-power appliances like air conditioners, dishwashers, and household ovens. Although often called the "Argentine plug", it is actually based on the American NEMA 10-20 standard and is incompatible with Argentine IRAM plugs. Since Brazil adopted the NBR 14136 standard, which includes a 20 A version, the original motivation to use the NEMA 10-20 plug has ceased to exist. Australian/New Zealand standard AS/NZS 3112 (Type I), used in Australasia This Australian/New Zealand standard is used in Australia, New Zealand, Fiji, Tonga, Solomon Islands, and Papua New Guinea. It defines a plug with an earthing pin, and two flat current-carrying pins which form an inverted V-shape. The flat pins measure 6.5 by 1.6 mm (0.256 by 0.063 in) and are set at 30° to the vertical at a nominal pitch of 13.7 mm (0.539 in). Australian and New Zealand wall sockets (locally often referred to as power points) almost always have switches on them for extra safety, as in the UK. An unearthed version of this plug with two angled power pins but no earthing pin is used with double-insulated appliances, but the sockets always include an earth contact. There are several AS/NZS 3112 plug variants, including ones with larger or differently shaped pins used for devices drawing 15, 20, 25 and 32 A. These sockets accept plugs of equal or lower current rating, but not higher. For example, a 10 A plug will fit all sockets but a 20 A plug will fit only 20, 25 and 32 A sockets. In New Zealand, PDL 940 "tap-on" or "piggy-back" plugs are available which allow a second 10 A plug to be fitted to the rear of the plug. In Australia these piggy-back plugs are now available only on pre-made extension leads. Australia's standard plug/socket system was originally codified as standard C112 (floated provisionally in 1937, and adopted as a formal standard in 1938), which was based on a design patented by Harvey Hubbell and was superseded by AS 3112 in 1990. The requirement for insulated pins was introduced in the 2004 revision. The current version is AS/NZS 3112:2011, Approval and test specification – Plugs and socket-outlets. Brazilian standard NBR 14136 (Type N) Brazil, which had been using mostly Europlugs and the NEMA 1-15 and NEMA 5-15 standards, adopted a (non-compliant) variant of IEC 60906-1 as the national standard in 1998 under specification NBR 14136 (revised in 2002). It is used in both the 220-volt and 127-volt regions of the country, despite the IEC 60906-2 recommendation that NEMA 5-15 be used for 120 V connections.
There are two types of sockets and plugs in NBR 14136: one for 10 A, with a 4.0 mm pin diameter, and another for 20 A, with a 4.8 mm pin diameter. This differs from IEC 60906-1 which specifies a pin diameter of 4.5 mm and a rating of 16 A. NBR 14136 does not require shutters on the apertures, a further aspect of non-compliance with IEC 60906-1. NBR 14136 was not enforced in that country until 2007, when its adoption was made optional for manufacturers. It became compulsory on 1 January 2010. Few private houses in Brazil have an earthed supply, so even if a three-pin socket is present it is not safe to assume that all three terminals are actually connected. Most large domestic appliances were sold with the option to fit a flying earth tail to be locally earthed, but many consumers were unsure how to use this and so did not connect it. The new standard has an earth pin, which in theory eliminates the need for the flying earth tail. British and compatible standards BS 546 and related types (Type D and M) BS 546, "Two-pole and earthing-pin plugs, socket-outlets and socket-outlet adaptors for AC (50-60 Hz) circuits up to 250 V" describes four sizes of plug rated at 2 A, 5 A (Type D), 15 A (Type M) and 30 A. The plugs have three round pins arranged in a triangle, with the larger top pin being the earthing pin. The plugs are polarised and unfused. Plugs are non-interchangeable between current ratings. Introduced in 1934, the BS 546 type has mostly been displaced in the UK by the BS 1363 standard. According to the IEC, some 40 countries use Type D and 15 countries use Type M. Some, such as India and South Africa, use standards based on BS 546. BS 1363 (Type G) BS 1363 "13 A plugs, socket-outlets, adaptors and connection units" is the main plug and socket type used in the United Kingdom. According to the IEC it is also used in over 50 countries worldwide. Some of these countries have national standards based on BS 1363, including: Bahrain, Hong Kong, Ireland, Cyprus, Malaysia, Malta, Saudi Arabia, Singapore, Sri Lanka, and UAE. This plug has three rectangular pins forming an isosceles triangle. The BS 1363 plug has a fuse rated to protect its flexible cord from overload and consequent fire risk. Modern appliances may only be sold with a fuse of the appropriate size pre-installed. BS 4573 (UK shaver) The United Kingdom, Ireland, and Malta use the BS 4573 two-pin plug and socket for electric shavers and toothbrushes. The plug has insulated sleeves on the pins. Although similar to the Europlug Type C, the diameter and spacing of the pins are slightly different and hence it will not fit into a Schuko socket. There are, however, two-pin sockets and adaptors which will accept both BS 4573 and Europlugs. CEE 7 standard The International Commission on the Rules for the Approval of Electrical Equipment (IECEE) was a standards body which published Specification for plugs and socket-outlets for domestic and similar purposes as CEE Publication 7 in 1951. It was last updated by Modification 4 in March 1983. CEE 7 consists of general specifications and standard sheets for specific connectors. Standard plugs and sockets based on two round pins with centres spaced at 19 mm are in use in Europe, most of which are listed in IEC/TR 60083 "Plugs and socket-outlets for domestic and similar general use standardized in member countries of IEC." EU countries each have their own regulations and national standards; for example, some require child-resistant shutters, while others do not. 
CE marking is neither applicable nor permitted on plugs and sockets. CEE 7/1 unearthed socket and CEE 7/2 unearthed plug CEE 7/1 unearthed sockets accept CEE 7/2 round plugs with 4.8 mm pins. Because they have no earth connections, they have been or are being phased out in most countries. Some countries still permit their use in dry areas, while others allow their sale for replacements only. Older sockets are so shallow that it is possible to accidentally touch the live pins of a plug. CEE 7/1 sockets also accept CEE 7/4, CEE 7/6 and CEE 7/7 plugs without providing an earth connection. The earthed CEE 7/3 and CEE 7/5 sockets do not allow insertion of CEE 7/2 unearthed round plugs. CEE 7/3 socket and CEE 7/4 plug (German "Schuko"; Type F) The CEE 7/3 socket and CEE 7/4 plug are commonly called Schuko, an abbreviation of Schutzkontakt, "protective contact" to earth ("Schuko" itself is a registered trademark of a German association established to own the term). The socket has a circular recess with two round holes and two earthing clips that engage before live pin contact is made. The pins are 4.8 by 19 mm (0.189 by 0.748 in). The Schuko system is unpolarised, allowing live and neutral to be reversed. The socket also accepts Europlugs, CEE 7/17 and CEE 7/7 plugs. It is rated at 16 A. The current German standards are DIN 49441 and DIN 49440. The standard is used in Germany and several other European countries and on other continents. Some countries require child-proof socket shutters; the DIN 49440 standard does not have this requirement. The plug is used in much of Europe, Asia, and Africa, as well as in South Korea, Peru, Chile and Uruguay. The few European countries not using it at all are Belgium, the Czech Republic, Cyprus, Ireland, Liechtenstein, Switzerland, and the UK; countries where it is not the predominant type include Denmark, the Faroe Islands, France, Italy, Monaco, San Marino, and Slovakia. CEE 7/5 socket and CEE 7/6 plug (French; Type E) French standard NF C 61-314 defines the CEE 7/5 socket and CEE 7/6 plug (and also includes CEE 7/7, 7/16 and 7/17 plugs). The socket has a circular recess with two round holes. The round earth pin projecting from the socket connects before the energized contacts touch. The earth pin is centred between the apertures, offset by 10 mm (0.394 in). The plug has two round pins measuring 4.8 by 19 mm (0.189 by 0.748 in), spaced 19 mm (0.748 in) apart, with an aperture for the socket's projecting earth pin. This standard is also used in Belgium, Poland, the Czech Republic, Slovakia and some other countries. Although the plug is polarised, CEE 7 does not define the placement of the live and neutral, and different countries have conflicting standards for that. For example, the French standard NF C 15-100 requires live to be on the right side, while Czech standard ČSN 33 2180 requires live to be on the left side of a socket. Thus, a French plug when plugged into a Czech socket (or a Czech plug when plugged into a French socket) will always have its polarity reversed, with no way for the user to remedy this situation apart from rewiring the plug. One approach for resolving this situation is taken in Poland, where CEE 7/5 sockets are typically installed in pairs, the upper (upside-down) one having the "French" polarity and the lower one having the "Czech" polarity, so that the user can choose what to plug where.
CEE 7/4 (Schuko) plugs are not compatible with the CEE 7/5 socket because of the round earthing pin permanently mounted in the socket; CEE 7/6 plugs are not compatible with Schuko sockets due to the presence of indentations on the side of the recess, as well as the earth clips. CEE 7/7 plugs have been designed to solve this incompatibility by being able to fit in either type of socket. Sales and installation of CEE 7/5 sockets have been legally permitted in Denmark since 2008, but the sockets are hard to find in physical stores and are very rarely installed. CEE 7/7 plug (compatible with E and F) The CEE 7/7 plug fits in either French or Schuko sockets. It is rated at 16 A and looks similar to CEE 7/4 plugs, but with earth contacts to fit both CEE 7/5 sockets and CEE 7/3 ones. It is polarised when used with a French-style CEE 7/5 socket, but can be inserted in two ways into a CEE 7/3 socket. However, with the French socket it is not specified whether the live connection is on the left or right, as this can vary between countries. Earthed appliances are typically sold fitted with non-rewireable CEE 7/7 plugs attached, though rewireable versions are also available. This plug can be inserted into a Danish Type K socket, but the earth contact will not connect. CEE 7/16 plugs The CEE 7/16 unearthed plug is used for unearthed appliances. It has two round 4 by 19 mm (0.157 by 0.748 in) pins, rated at 2.5 A. There are two variants. CEE 7/16 Alternative I Alternative I is a round plug with cutouts to make it compatible with CEE 7/3 and CEE 7/5 sockets. (The similar-appearing CEE 7/17 has larger pins and a higher current rating.) This alternative is seldom used. CEE 7/16 Alternative II "Europlug" (Type C) Alternative II, popularly known as the Europlug, is a flat 2.5 A-rated plug defined by Cenelec standard EN 50075 and national equivalents. The Europlug is not rewirable and must be supplied with a flexible cord. It can be inserted in either direction, so line and neutral are connected arbitrarily. To improve contact with socket parts the Europlug has slightly flexible pins which converge toward their free ends. There is no socket defined to accept only the Europlug. Instead, the Europlug fits a range of sockets in common use in Europe. These sockets include CEE 7/1, CEE 7/3 (German/"Schuko") and CEE 7/5 (French). Most Israeli, Swiss, Danish and Italian sockets were designed to accept pins of various diameters, mainly 4.8 mm, but also 4.0 mm and 4.5 mm, and are usually fed by final circuits with either 10 A or 16 A overcurrent protection devices. Although the standard does not permit extension cables and does not define any socket-outlets, unauthorized extension cables and sockets are manufactured. UK shaver sockets are designed to accept BS 4573 shaver plugs while also accepting Europlugs. In this configuration, the supply connection is rated at only 200 mA. It is not permissible within the UK for the shaver socket to be fitted and used for a higher rated current draw than the 200 mA maximum. The Europlug is also used in parts of the Middle East, Africa, South America, and Asia. CEE 7/17 unearthed plug This is a round plug compatible with CEE 7/1, CEE 7/3, and CEE 7/5 sockets. It has two round pins measuring 4.8 by 19 mm (0.189 by 0.748 in). The pins are not sheathed, in contrast to e.g. CEE 7/16 Europlugs. It may be rated at either 10 A or 16 A. A typical use is for appliances that exceed the 2.5 A rating of CEE 7/16 Europlugs.
It may be used for unearthed Class II appliances (and in South Korea for all domestic non-earthed appliances). It is also defined as the Class II plug in Italian standard CEI 23-50. It is sometimes called a contour plug, because its collar contour follows that of the socket's recess. The collar prevents accidental contact with the non-sheathed pins when inserting or removing the plug in a recessed socket. It can be inserted into Israeli SI 32 outlets with some difficulty, as well as Danish (type K) ones. The Soviet GOST 7396 standard includes both the CEE 7/17 and the CEE 7/16 variant II plug. China GB 2099.1-2008 and GB 1002-2008 (Type A & I) The standard for Mainland Chinese plugs and sockets (excluding Hong Kong and Macau) is set out in GB 2099.1-2008 and GB 1002-2008. As part of China's commitment for entry into the WTO, the new CPCS (Compulsory Product Certification System) has been introduced, and compliant Chinese plugs have been awarded the CCC Mark by this system. The plug is three-wire, earthed, rated at 10 A 250 V and used for Class 1 applications; a slightly larger 16 A version also exists. The nominal pin dimensions of the 10 A version are 1.5 mm thick by 6.4 mm wide; the line and neutral pins are 18 mm long, and the earth pin is 21 mm long. It is similar to the Australian plug. Many three-pin sockets in China include a physical lockout preventing access to the active and neutral terminals unless the earth pin (which is slightly longer than the other two pins) is inserted first. China also uses American/Japanese NEMA 1-15 sockets and plugs for Class II appliances (however, polarized plugs with one prong wider than the other are not accepted); a common socket type that additionally accepts the Europlug (Type C) is also defined in GB 1002. The voltage at a Chinese socket of any type is 220 V. Type I plugs and sockets from different countries have different pin lengths. This means that the uninsulated pins of a Chinese plug may become live while there is still a large enough gap between the faces of the plug and socket to allow a finger to touch the pin. Danish Section 107-2-D1 earthed (Type K) This Danish standard plug is described in the Danish Plug Equipment Section 107-2-D1 Standard sheet (SRAF1962/DB 16/87 DN10A-R). The Danish standard provides for sockets to have child-resistant shutters. The Danish socket will also accept the CEE 7/16 Europlug or CEE 7/17 Schuko-French hybrid plug. CEE 7/4 (Schuko), CEE 7/7 (Schuko-French hybrid), and earthed CEE 7/6 French plugs will also fit into the socket but will not provide an earth connection and may be attached to appliances requiring more than the 13 A maximum rating of the socket. A variation (standard DK 2-5a) of the Danish plug is for use only on surge-protected computer sockets. It fits into the corresponding computer socket and the normal socket, but normal plugs deliberately do not fit into the special computer socket. The plug is often used in companies, but rarely in private homes. There is a variation for hospital equipment with a rectangular left pin, which is used for life support equipment. Traditionally, all Danish sockets were equipped with a switch to prevent touching live pins when connecting/disconnecting the plug. Today, sockets without a switch are allowed, but it is a requirement that the sockets have a cavity to prevent touching the live pins. The shape of the plugs generally makes it difficult to touch the pins when connecting/disconnecting.
Since the early 1990s, earthed sockets have been required in all new electric installations in Denmark. Older sockets need not be earthed, but all sockets, including old installations, had to be protected by earth-fault interrupters (HFI or HPFI in Danish) by 1 July 2008. As of 1 July 2008, wall sockets for the French two-pin, female earth CEE 7/5 are permitted for installations in Denmark. This was done because little electrical equipment sold to private users is equipped with a Danish plug. In Europe, devices are usually sold with the Europlug CEE 7/16 or hybrid plug CEE 7/7, as these fit in most countries. However, in Denmark this often means that the protective earth is not connected. CEE 7/3 sockets were not permitted until 15 November 2011. Many international travel adaptor sets sold outside Denmark fit CEE 7/16 (Europlug) and CEE 7/7 (Schuko-French hybrid) plugs, which can readily be used in Denmark. Though Type K remains by far the most common socket in Danish homes as of January 2024, news sites and industry magazines have warned that plugging a Schuko plug directly into a Type K socket can give electric shocks ranging from noticeably painful to hospitalising or even life-threatening. IEC 60906-1 (Type N) In 1986, the International Electrotechnical Commission published IEC 60906-1, a specification for a plug and socket that look similar, but are not identical, to the Swiss plug and socket. This standard was intended to one day become common for all of Europe and other regions with 230 V mains, but the effort to adopt it as a European Union standard was put on hold in the mid-1990s. The plug and socket are rated 16 A 250 V AC and are intended for use only on systems having nominal voltages between 200 V and 250 V AC. The plug pins are 4.5 mm in diameter; line and neutral are on centres 19 mm apart. The earth pin is offset by 3.0 mm. The line pin is on the right when looking at a socket with the earth pin offset up. Shutters over the line and neutral pins are mandatory. The only country to have officially adopted the standard is South Africa, as SANS 164-2. Brazil developed a plug resembling IEC 60906-1 as the national standard under specification NBR 14136. The NBR 14136 standard has two versions, neither of which has pin dimensions or ratings complying with IEC 60906-1. Use at 127 V is permitted by NBR 14136, which is against the intention of IEC 60906-1. Israel SI32 (Type H) The plug defined in SI 32 (IS16A-R) is used only in Israel, including the Gaza Strip and the West Bank. There are two versions: an older one with flat pins, and a newer one with round pins. The pre-1989 system has three flat pins in a Y-shape, with line and neutral 19 mm (0.75 in) apart. The plug is rated at 16 A. In 1989 the standard was revised to use three round pins in the same locations, designed to allow the socket to accept both older and newer Israeli plugs, and also non-grounded Europlugs (often used in Israel for equipment which does not need to be grounded and does not use more current than the Europlug is rated for). Pre-1989 sockets which accept only old-style plugs have become very rare in Israel. Sockets have a defined polarity; looking at the front, neutral is to the left, ground at the bottom, and line to the right. Italy (Type L) Italian plugs and sockets are defined by the standard CEI 23-50, which superseded CEI 23-16. This includes models rated at 10 A and 16 A that differ in contact diameter and spacing (see below for details).
Both are symmetrical, allowing the line and neutral contacts to be inserted in either direction. This plug is also commonly used in Chile and Uruguay. 10 A plugs and sockets: Pins which are 4 mm in diameter, the centres spaced 19 mm apart. The 10 A three-pin earthed rear entry plug is designated CEI 23-50 S 11 (there are also two side-entry versions, SPA 11 and SPB 11). The 10 A two-pin unearthed plug is designated CEI 23-50 S 10. The 10 A three-pin earthed socket is designated CEI 23-50 P 11, and the 10 A two-pin unearthed socket is designated CEI 23-50 P 10. Both 10 A sockets also accept CEE 7/16 (Europlugs). 16 A plugs and sockets: Pins which are 5 mm in diameter, the centres spaced 26 mm apart. The 16 A three-pin earthed rear entry plug is designated CEI 23-50 S 17 (there are also two side-entry versions, SPA 17 and SPB 17). The 16 A two-pin unearthed plug is designated CEI 23-50 S 16. The 16 A three-pin earthed socket is designated CEI 23-50 P 17; there is no 16 A two-pin unearthed socket. The 16 A socket used to be referred to as per la forza motrice (for electromotive force) or sometimes (inappropriately) industriale (industrial) or even calore (heat). The two standards were initially adopted because, up to the second half of the 20th century, electricity in many regions of Italy was supplied by means of two separate consumer connections – one for powering illumination and one for other purposes – and these generally operated at different voltages, typically 127 V (a single phase from 220 V three-phase) and 220 V (a single phase from three-phase 380 V or two-phase from 220 V three-phase). The electricity on the two supplies was separately metered, was sold at different tariffs, was taxed differently and was supplied through separate and different sockets. Even though the two electric lines (and respective tariffs) were gradually unified beginning in the 1960s (the official, but purely theoretical, date was the summer of 1974), many houses had dual wiring and two electricity meters for years thereafter; in some zones of Lazio the 127 V network was provided for lighting until 1999. The two gauges for plugs and sockets thus became a de facto standard which is now formalized under CEI 23-50. Some older installations have sockets that are limited to either the 10 A or the 16 A style plug, requiring the use of an adaptor if the other gauge needs to be connected. Numerous cross adaptors were used. Almost every appliance sold in Italy nowadays is equipped with CEE 7/7 (German/French), CEE 7/16 or CEE 7/17 plugs, but the standard Italian sockets will not accept the first and the third ones, since the pins of the CEE 7/7 and CEE 7/17 plugs are thicker (4.8 mm) than the Italian ones (4 mm); moreover, the pins are not sheathed, and forcing them into a linear Italian socket may lead to electric shock. Adaptors are standardized in Italy under CEI 23-57 and can be used to connect CEE 7/7 and CEE 7/17 plugs to linear CEI 23-50 sockets. Europlugs are also in common use in Italy; they are standardized under CEI 23-34 S 1 for use with the 10 A socket and can be found fitted to Class II appliances with low current requirement (less than 2.5 A). The current Italian standards provide for sockets to have child-resistant shutters ("Sicury" patent). Italian multiple standard sockets In modern installations in Italy (and in other countries where Type L plugs are used), it is usual to find sockets that can accept more than one standard.
The simplest type, designated CEI 23-50 P 17/11, has a central round hole flanked by two figure-8 shaped holes, allowing the insertion of CEI 23-50 S 10 (Italian 10 A plug unearthed), CEI 23-50 S 11 (Italian 10 A plug earthed), CEI 23-50 S 16 (Italian 16 A plug unearthed), CEI 23-50 S 17 (Italian 16 A plug earthed) and CEE 7/16 (Europlug). The advantage of this socket style is its small, compact face; its drawback is that it accepts neither CEE 7/7 nor CEE 7/17 plugs, which are very commonly found on new appliances sold in Italy. The Vimar brand claims to have patented this socket first in 1975 with its Bpresa model; however, other brands soon started selling similar products, mostly naming them with the generic term presa bipasso (twin-gauge socket) that is now in common use. A second, quite common type is called CEI 23-50 P 30 and looks like a Schuko socket, but adds a central earthing hole (optional according to CEI 23-50, but virtually always present). This design can accept CEE 7/4 (German), CEE 7/7 (German/French), CEE 7/16, CEE 7/17 (Konturenstecker, German/French unearthed), CEI 23-50 S 10 and CEI 23-50 S 11 plugs. Its drawbacks are that it is twice as large as a normal Italian socket, does not accept 16 A Italian plugs, and costs more; for those reasons such sockets were rarely installed in Italy until recent times. Other types may push compatibility even further. The CEI 23-50 P 40 socket, which is quickly becoming the standard in Italy along with CEI 23-50 P 17/11, accepts CEE 7/4, CEE 7/7, CEE 7/16, CEE 7/17, CEI 23-50 S 10, CEI 23-50 S 11, CEI 23-50 S 16 and CEI 23-50 S 17 plugs; its drawback is that it does not accept SPA 11, SPB 11, SPA 17 and SPB 17 side-entry plugs; however, almost no appliances are sold with these types, which are mainly used to replace existing plugs. The Vimar-brand universale (all purpose) socket accepts CEE 7/4, CEE 7/7, CEE 7/16, CEE 7/17, CEI 23-50 S 10, CEI 23-50 S 11, CEI 23-50 S 16, CEI 23-50 S 17 and also NEMA 1-15 (US/Japan) plugs (older versions also had extra holes to accept UK shaver plugs). North America, Central America and IEC 60906-2 Most of North America and Central America, and some of South America, use connectors standardized by the National Electrical Manufacturers Association (NEMA). The devices are named using the format NEMA n-mmX, where n is an identifier for the configuration of pins and blades, mm is the maximum current rating, and X is either P for plug or R for receptacle. For example, NEMA 5-15R is a configuration type 5 receptacle supporting 15 A. Corresponding P and R versions are designed to be mated. Within the series, the arrangement and size of pins differ, to prevent accidental mating of devices with a higher current draw than the receptacle can support. NEMA 1-15 ungrounded (Type A) NEMA-1 plugs have two parallel blades and are rated 15 A at 125 volts. They provide no ground connection but will fit a grounding NEMA 5-15 receptacle. Early versions were not polarised, but most plugs are polarised today via a wider neutral blade. (Unpolarised AC adaptors are a common exception.) Harvey Hubbell patented a parallel blade plug in 1913, in which the blades were of equal width. In 1916 Hubbell received a patent for a polarised version in which one blade was both longer and wider than the other. In the polarised version of NEMA 1-15, introduced in the 1950s, both blades are the same length; only the width varies.
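The NEMA naming format described above (NEMA n-mmX) is regular enough to parse mechanically. A minimal sketch (the function name and dictionary layout are this article's illustration, not part of any NEMA document):

```python
import re

def parse_nema(designation):
    """Split a NEMA designation of the form n-mmX, e.g. '5-15R'."""
    match = re.fullmatch(r"(\d+)-(\d+)([PR])", designation)
    if match is None:
        raise ValueError(f"not of the form n-mmX: {designation!r}")
    config, amps, kind = match.groups()
    return {
        "configuration": int(config),   # pin/blade arrangement family
        "max_amps": int(amps),          # maximum current rating
        "device": "plug" if kind == "P" else "receptacle",
    }

print(parse_nema("5-15R"))
# {'configuration': 5, 'max_amps': 15, 'device': 'receptacle'}
```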
Ungrounded NEMA-1 outlets are not permitted in new construction in the United States and Canada, but can still be found in older buildings. NEMA 5-15 grounded (Type B) The NEMA 5-15 plug has two flat parallel blades like NEMA 1-15, and a ground (earth) pin. It is rated 15 A at 125 volts. The ground pin is longer than the line and neutral blades, such that an inserted plug connects to ground before power. The ground hole is officially D-shaped, although some round holes exist. Both current-carrying blades on grounding plugs are normally narrow, since the ground pin enforces polarity. This socket is recommended in IEC standard 60906-2 for 120-volt 60 Hz installations. The National Electrical Contractors Association's National Electrical Installation Standards (NECA 130-2010) recommends that sockets be mounted with the ground hole up, such that an object falling on a partially inserted connector contacts the ground pin first. However, the inverted orientation (with ground pin downwards) is more commonly used. The ground-down orientation has been called the "sad socket", "dismayed face", or "shocked face" by some. Tamper-resistant sockets may be required in new residential construction, with shutters over the power blade apertures to prevent contact by objects inserted into the socket. In stage lighting, this connector is sometimes known as PBG for Parallel Blade with Ground, Edison or Hubbell, the name of a common manufacturer. NEMA 5-20 The NEMA 5-20 AP variant has blades perpendicular to each other. The receptacle has a T-slot for the neutral blade which accepts either 15 A parallel-blade plugs or 20 A plugs. NEMA 14-50 NEMA 14-50 devices are frequently found in RV parks, since they are used for "shore power" connections of larger recreational vehicles. Also, it was formerly common to connect mobile homes to utility power via a 14-50 device. Newer applications include Tesla's Mobile Connector for vehicle charging, which formerly recommended the installation of a 14-50 receptacle for home use. Other NEMA types 30- and 50-amp rated sockets are often used for high-current appliances such as clothes dryers and electric stoves. JIS C 8303, Class II unearthed The Japanese Class II plug and socket appear physically identical to NEMA 1-15 and are also rated at 15 A. The relevant Japanese Industrial Standard, JIS C 8303, imposes stricter dimensional requirements for the plug housing, different marking requirements, and mandatory testing and type approval. Older Japanese sockets and multi-plug adaptors are unpolarised (the slots in the sockets are the same size) and will accept only unpolarised plugs. Japanese plugs generally fit into most North American sockets without modification, but polarised North American plugs may require adaptors or replacement non-polarised plugs to connect to older Japanese sockets. In Japan the voltage is 100 V, and the frequency is either 50 Hz (Eastern Japan: Tokyo, Yokohama, Tohoku, Kawasaki, Sapporo, Sendai and Hokkaido) or 60 Hz (Western Japan: Osaka, Kyoto, Nagoya, Shikoku, Kyushu and Hiroshima) depending on whether the customer is located on the Osaka or Tokyo grid. Therefore, some North American devices which can be physically plugged into Japanese sockets may not function properly. JIS C 8303, Class I earthed Japan also uses a grounded plug similar to the North American NEMA 5-15. However, it is less common than its NEMA 1-15 equivalent. Since 2005, new Japanese homes are required to have class I grounded sockets for connecting domestic appliances.
This rule does not apply to sockets not intended to be used for domestic appliances, but it is strongly advised to have class I sockets throughout the home. Soviet standard GOST 7396 C 1 unearthed This Soviet plug, still sometimes used in the region, has pin dimensions and spacing equal to the Europlug, but lacks the insulation sleeves. Unlike the Europlug, it is rated 6 A. It has a round body like the European CEE 7/2 or a flat body with a round base like CEE 7/17. The round base has no notches. The pins are parallel and do not converge. The body is made of fire-resistant thermoset plastic. The corresponding 6 A socket accepts the Europlug, but not others, as the 4.5 mm holes are too small to accept the 4.8 mm pins of CEE 7/4, CEE 7/6 or CEE 7/7 plugs. There were also moulded rubber plugs available for devices up to 16 A, similar to CEE 7/17 but with a round base without any notches. They could be altered to fit a CEE 7/5 or CEE 7/3 socket by cutting notches with a sharp knife. Swiss SN 441011 (Type J) The Swiss standard, also used in Liechtenstein and Rwanda (and in other countries alongside other standards), is SN 441011 (until 2019 SN SEV 1011) Plugs and socket-outlets for household and similar purposes. The standard defines a hierarchical system of plugs and sockets with two, three and five pins, and 10 A or 16 A ratings. Sockets will accept plugs with the same or fewer pins and the same or lower ratings. The standard also includes three-phase devices rated at 250 V (phase-to-neutral) / 440 V (phase-to-phase). It does not require the use of child protective shutters. The standard was first described in 1959. 10 A plugs and sockets (Type J) SEV 1011 defines a "Type 1x" series of 10 A plugs and sockets. The type 11 plug is unearthed, with two 4 mm diameter round pins spaced 19 mm apart. The type 12 plug adds a central 4 mm diameter round earth pin, offset by 5 mm. The type 12 socket has no recess, while the type 13 socket is recessed. Both sockets will accept type 11 and type 12 plugs, and also the 2.5 A Europlug. Earlier type 11 and 12 plugs had unsleeved line and neutral pins, which presented a shock hazard when partially inserted into non-recessed sockets. The IEC type J designation refers to SEV 1011's type 12 plugs and type 13 sockets. Unique to Switzerland is a three-phase power socket compatible with single-phase plugs: the type 15 plug has three round pins, of the same dimensions as type 12, plus two smaller flat rectangular pins for two additional power phases. The type 15 socket is recessed, and has five openings (three round and two flat rectangular). It will accept plugs of types 11, 12, 15 and the Europlug. 16 A plugs and sockets SEV 1011 also defines a "Type 2x" series of 16 A plugs and sockets. These are the same as their 10 A "Type 1x" counterparts, but replace the round pins with 4 mm × 5 mm rectangular pins. The sockets will accept "Type 1x" plugs. The unearthed type 21 plug has two rectangular pins, with centres 19 mm apart. The type 23 plug adds a central rectangular earth pin, offset by 5 mm. The recessed type 23 socket will accept plugs of types 11, 12, 21, 23 and the Europlug. Again, the three-phase power socket is compatible with single-phase plugs, either of 10 A or 16 A ratings: the type 25 plug has three rectangular pins of the same dimensions as type 23, plus two rectangular pins of the same dimensions as type 15. The corresponding type 25 socket is recessed and will accept plugs of types 11, 12, 15, 21, 23, 25 and the Europlug.
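The hierarchical acceptance rules above amount to a small lookup table. The sketch below transcribes the socket/plug relationships listed in this section for socket types 13, 15, 23 and 25 ("EU" stands here for the Europlug); it is an illustration of the rule, not part of SEV 1011:

```python
# Plug types accepted by each SN 441011 / SEV 1011 socket type,
# as listed in the text above ("EU" = the 2.5 A Europlug).
ACCEPTS = {
    13: {11, 12, "EU"},
    15: {11, 12, 15, "EU"},
    23: {11, 12, 21, 23, "EU"},
    25: {11, 12, 15, 21, 23, 25, "EU"},
}

def fits(plug_type, socket_type):
    """True if the plug type mates with the socket type per the table."""
    return plug_type in ACCEPTS.get(socket_type, set())

assert fits(12, 13)        # 10 A earthed plug fits the 10 A recessed socket
assert not fits(23, 13)    # a 16 A plug does not fit a 10 A socket
assert fits("EU", 25)      # the Europlug fits every socket in the table
```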
Regulation of adaptors and extensions A 2012 appendix to SEV 1011:2009, SN SEV 1011:2009/A1:2012 Plugs and socket-outlets for household and similar purposes – A1: Multiway and intermediate adaptors, cord sets, cord extension sets, travel adaptors and fixed adaptors, defines the requirements applicable to multiway and intermediate adaptors, cord sets, cord extension sets, and travel and fixed adaptors. It covers the electrical safety and user requirements, including the prohibition of stacking (the connection of one adaptor to another). Non-conforming products had to be withdrawn from the Swiss market before the end of 2018. Thai three-pin plug TIS 166-2549 (Type O) Thai Industrial Standard (TIS) 166-2547 and its subsequent update TIS 166-2549 replaced prior standards which were based on NEMA 1-15 and 5-15, as Thailand uses 220 V electricity. The plug has two round power pins 4.8 mm in diameter and 19 mm in length, insulated for 10 mm and spaced 19 mm apart, with an earthing pin of the same diameter and 21.4 mm in length, located 11.89 mm from the line connecting the two power pins. The earth pin spacing corresponds to that of NEMA 5 and provides compatibility with prior hybrid three-pin sockets, which accept NEMA 1-15, NEMA 5-15 and Europlugs, all of which have been variably used in Thailand. The hybrid socket is also defined in TIS 166-2547, in addition to a plain three-round-pin socket, with plans to replace the former and phase out support for NEMA-compatible plugs. Sockets are polarised (as in NEMA 5-15). The plug is similar to, but not interchangeable with, the Israeli SI32 plug. The Thai plug is designated as "Type O" at IEC World Plugs. Special purpose plugs and sockets Special purpose sockets may be found in residential, industrial, commercial or institutional buildings. Examples of systems using special purpose sockets include: "clean" (low electrical noise) earth for use with computer systems, Device for Connection of Luminaires (DCL), a European standard for ceiling and hanging light fixtures, emergency power supply, uninterruptible power supply for critical or life-support equipment, isolated power for medical instruments, tools used in wet conditions, or electric razors, "balanced" or "technical" power used in audio and video production studios, theatrical lighting, CEE 17, a series of industrial-grade (IP44) three-phase "pin & sleeve" connectors for industrial purposes and for carpentry and gardening appliances, also used as weather-resistant connectors for outdoor use such as mains hook-up at camp-sites for caravans, motorhomes, camper vans and tents, and sockets with higher current ratings for electric clothes dryers, electric ovens, and air conditioners. Special-purpose sockets may be labelled or coloured to identify a reserved use of a system, or may have keys or specially shaped pins to prevent use of unintended equipment. Single-phase electric stove plugs and sockets The plugs and sockets used to power electric stoves from a single-phase line have to be rated for greater current values than those used with three-phase supply because all the power has to be transferred through two contacts, not three. If not hardwired to the supply, electric stoves may be connected to the mains with an appropriate high-power connector. Some countries do not have wiring regulations for single-phase electric stoves. In Russia, an electric stove can often be seen connected with a 25 or 32 A connector.
In Norway a 25 A grounded connector, rectangular in shape with rounded corners, is used for single-phase stoves. The connector has three rectangular pins in a row, with the grounding pin longer than the other two. The corresponding socket is recessed to prevent shocks. The Norwegian standard is NEK 502:2005 – standard sheet X (socket) and sheet XI (plug). They are also known as the two pole and earth variants of CEE 7/10 (socket) and CEE 7/11 (plug). Shaver supply units National wiring regulations sometimes prohibit the use of sockets adjacent to water taps, etc. A special socket, with an isolation transformer, may allow electric razors to be used near a sink. Because the isolation transformer is of low rating, such outlets are not suitable for operating higher-powered appliances such as hair dryers. IEC standard 61558-2-5, adopted by CENELEC and as a national standard in some countries, describes one type of shaver supply unit. Shaver sockets may accept multiple two-pin plug types including Australian (Type I) and BS 4573. The isolation transformer often includes a 115 V output accepting two-pin US plugs (Type A). Shaver supply units must also be current-limited; IEC 61558-2-5 specifies a minimum rating of 20 VA and a maximum of 50 VA. Sockets are marked with a shaver symbol, and may also say "shavers only". Isolation transformers and dedicated NEMA 1-15 shaver receptacles were once standard installation practice in North America, but now a GFCI receptacle is used instead. This provides the full capacity of a standard receptacle but protects the user of a razor or other appliance from leakage current. The BS 4573 plug (Type C) differs from the Europlug (Type C): the BS 4573 plug has round 5 mm contacts with 16 mm spacing, while the Europlug has 4 mm contacts with 19 mm spacing. An adaptor is required in order to plug a Europlug into a BS 4573 socket. Unusual types Lampholder plug A lampholder plug fits into a light socket in place of a light bulb to connect appliances to lighting circuits. Where a lower rate was applied to electric power used for lighting circuits, lampholder plugs enabled consumers to reduce their electricity costs. Lampholder plugs are rarely fused. Edison screw lampholder adaptors (for NEMA 1-15 plugs) are still commonly used in the Americas. Soviet adaptor plugs Some appliances sold in the Soviet Union had a flat unearthed plug with an additional pass-through socket on the top, allowing a stacked arrangement of plugs. The usual Soviet apartment of the 1960s had very few sockets, so this design was very useful, but somewhat unsafe; the brass cylinders of the secondary socket were uncovered at the ends (to allow them to be unscrewed easily), recessed by only 3 mm, and provided bad contact because they relied on the secondary plug's bisected expanding pins. The pins of the secondary plug (which lacked insulation sleeves) could not be inserted into the cylindrical sockets completely, leaving a 5 mm gap between the primary and secondary plugs. The adaptors were mostly used for low power appliances (for example, connecting both a table lamp and a radio to a socket). UK Walsall Gauge plug Unlike standard BS 1363 plugs found in the UK, the Walsall Gauge plug has its earth pin on a horizontal axis and the live and neutral pins on a vertical axis. This style of plug/socket was used by university laboratories (from batteries) and the BBC, and is still in use in parts of the London Underground for 110 V AC supply.
In the 1960s they were used for 240 V DC in the Power laboratory of the Electrical Engineering department of what was then University College, Cardiff. Power was supplied by the public 240 V DC mains, which remained available in addition to the 240 V AC mains until circa 1969, and thereafter from in-house rectifiers. They were also used in the Ministry of Defence Main Building in circuits powered from the standby generators, to stop staff from plugging in unauthorised devices. They were also known to be used in some British Rail offices for the same reason. Italian BTicino brand Magic Security connector In the 1960s, the Italian firm BTicino introduced an alternative to the Europlug or CEI 23-16 connectors then in use, called Magic Security. The socket is rectangular, with lateral key pins and indentations to maintain polarisation, and to prevent insertion of a plug with a different current rating. Three single-phase general-purpose connectors were rated at 10 A, 16 A and 20 A, and a three-phase industrial connector at 10 A; all of them have different key-pin positioning, so plugs and sockets cannot be mismatched. The socket is closed by a safety lid (bearing the word Magic on it) which can be opened only with an even pressure on its surface, thus preventing the insertion of objects (except the plug itself) inside the socket. The contacts are positioned on both sides of the plug; the plug is energised only when it is inserted fully into the socket. The system is not compatible with Italian CEI plugs, nor with Europlugs. Appliances were never sold fitted with these security plugs, and the use of adaptors would defeat the safety features, so the supplied plugs had to be cut off and replaced with the security connector. Even so, the Magic security system had some success at first because its enhanced safety features appealed to customers; standard connectors of the day were not considered safe enough. The decline of the system occurred when safety lids similar to the Magic type were developed for standard sockets. In Italy, the system was never definitively abandoned. Though very rarely seen today, it is still marked as available in BTicino's catalogue (except for the three-phase version, which stopped being produced in July 2011). In Chile, 10 A Magic connectors are commonly used for computer/laboratory power networks, as well as for communications or data equipment. This allows delicate electronics equipment to be connected to an independent circuit breaker, usually including a surge protector or an uninterruptible power supply backup. The different style of plug makes it more difficult for office workers to connect computer equipment to a standard unprotected power line, or to overload the UPS by connecting other office appliances. In Iceland, Magic plugs were widely used in homes and businesses alongside Europlug and Schuko installations. Their installation in new homes was still quite common even in the late 1980s.
See also Anderson Powerpole DC connector History of AC power plugs and sockets IEC 60309 high-power industrial and polyphase connectors IEC 60320 Appliance couplers for household and similar general purposes Industrial and multiphase power plugs and sockets Mains electricity Mains electricity by country lists voltage, frequency, and connector types for over 200 countries Perilex Pattress Plug load Polyphase system Smart plug Stage pin connector Light switch External links Digital Museum of Plugs and Sockets (comprehensive collection of plugs and sockets) Glossary of standards terms
https://en.wikipedia.org/wiki/Amagat%27s%20law
Amagat's law or the law of partial volumes describes the behaviour and properties of mixtures of ideal (as well as some cases of non-ideal) gases. It is of use in chemistry and thermodynamics. It is named after Émile Amagat. Overview Amagat's law states that the extensive volume of a gas mixture is equal to the sum of the volumes of the component gases, if the temperature and the pressure remain the same: $V(T,p) = \sum_{i=1}^{N} V_i(T,p)$. This is the experimental expression of volume as an extensive quantity. According to Amagat's law of partial volumes, the total volume of a non-reacting mixture of gases at constant temperature and pressure should be equal to the sum of the individual partial volumes of the constituent gases. So if $V_1, V_2, \ldots, V_N$ are considered to be the partial volumes of the components in the gaseous mixture, then the total volume $V$ would be represented as $V = V_1 + V_2 + \cdots + V_N = \sum_{i=1}^{N} V_i$. Both Amagat's and Dalton's laws predict the properties of gas mixtures. Their predictions are the same for ideal gases. However, for real (non-ideal) gases, the results differ. Dalton's law of partial pressures assumes that the gases in the mixture are non-interacting (with each other) and each gas independently applies its own pressure, the sum of which is the total pressure. Amagat's law assumes that the volumes of the component gases (again at the same temperature and pressure) are additive; the interactions of the different gases are the same as the average interactions of the components. The interactions can be interpreted in terms of a second virial coefficient for the mixture. For two components, the second virial coefficient for the mixture can be expressed as $B = x_1^2 B_1 + 2 x_1 x_2 B_{1,2} + x_2^2 B_2$, where the subscripts refer to components 1 and 2, the $x_i$ are the mole fractions, and the $B_i$ are the second virial coefficients. The cross term $B_{1,2}$ of the mixture is given by $B_{1,2} = 0$ for Dalton's law and $B_{1,2} = \tfrac{1}{2}(B_1 + B_2)$ for Amagat's law. When the volumes of each component gas (same temperature and pressure) are very similar, then Amagat's law becomes mathematically equivalent to Vegard's law for solid mixtures. Ideal gas mixture When Amagat's law is valid and the gas mixture is made of ideal gases, $\frac{V_i}{V} = \frac{n_i R T / p}{n R T / p} = \frac{n_i}{n} = x_i$, where: $p$ is the pressure of the gas mixture, $V_i$ is the volume of the i-th component of the gas mixture, $V$ is the total volume of the gas mixture, $n_i$ is the amount of substance of the i-th component of the gas mixture (in mol), $n$ is the total amount of substance of the gas mixture (in mol), $R$ is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant, $T$ is the absolute temperature of the gas mixture (in K), and $x_i$ is the mole fraction of the i-th component of the gas mixture. It follows that the mole fraction and volume fraction are the same. This is true also for other equations of state.
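As a quick numerical illustration of the ideal-gas case above, the partial volumes can be computed directly from the amounts of substance (the gas amounts and conditions below are arbitrary example inputs):

```python
R = 8.314  # J/(mol*K), universal gas constant

def partial_volumes(mols, temperature_k, pressure_pa):
    """Ideal-gas partial volumes V_i = n_i * R * T / p; by Amagat's law
    their sum is the total volume of the mixture."""
    return [n * R * temperature_k / pressure_pa for n in mols]

# Example: 1 mol of one gas and 3 mol of another at 300 K and 100 kPa.
vols = partial_volumes([1.0, 3.0], 300.0, 100e3)
total = sum(vols)
print(round(total, 5))                      # total volume in cubic metres
print([round(v / total, 2) for v in vols])  # [0.25, 0.75]
```

The last line checks the statement above that each volume fraction equals the corresponding mole fraction.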
https://en.wikipedia.org/wiki/Test%20card
A test card, also known as a test pattern or start-up/closedown test, is a television test signal, typically broadcast at times when the transmitter is active but no program is being broadcast (often at sign-on and sign-off). Used since the earliest TV broadcasts, test cards were originally physical cards at which a television camera was pointed, allowing for simple adjustments of picture quality. Such cards are still often used for calibration, alignment, and matching of cameras and camcorders. From the 1950s, test card images were built into monoscope tubes, which freed up TV cameras that would otherwise have had to be kept televising physical test cards during downtime hours. Electronically generated test patterns, used for calibrating or troubleshooting the downstream signal path, were introduced in the late 1960s, and became commonly used from the 1970s and 80s. These are generated by test signal generators, which do not depend on the correct configuration (and presence) of a camera, and can also test for additional parameters such as correct color decoding, sync, frames per second, and frequency response. These patterns are specially tailored to be used in conjunction with devices such as a vectorscope, allowing precise adjustments of image equipment. The audio broadcast while test cards are shown is typically a sine wave tone, radio (if associated or affiliated with the television channel) or music (usually instrumental, though some also broadcast jazz or popular music). Digitally generated cards came later, associated with digital television, and add a few features specific to digital signals, such as checking for error correction, chroma subsampling, aspect ratio signaling, and surround sound. More recently, the use of test cards has also expanded beyond television to other digital displays such as large LED walls and video projectors. Technical details Test cards typically contain a set of patterns to enable television cameras and receivers to be adjusted to show the picture correctly (see SMPTE color bars). Most modern test cards include a set of calibrated color bars which will produce a characteristic pattern of "dot landings" on a vectorscope, allowing chroma and tint to be precisely adjusted between generations of videotape or network feeds. SMPTE bars (and several other test cards) include analog black (a flat waveform at 7.5 IRE, or the NTSC setup level), full white (100 IRE), and a "sub-black", or "blacker-than-black" (at 0 IRE), which represents the lowest low-frequency transmission voltage permissible in NTSC broadcasts (though the negative excursions of the colorburst signal may go below 0 IRE). With the color bars, and with the brightness and contrast controls adjusted to the limits of perception of the first sub-black bar, an analog receiver (or other equipment such as VTRs) can be adjusted to provide impressive fidelity. Test cards have also been used to determine actual coverage contours for new television broadcasting antennas and/or networks.
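As an illustration of how simple the classic bar patterns are to generate, the sketch below produces one scanline of 75%-amplitude colour bars as 8-bit full-range RGB values. This is an approximation for a computer display only; real broadcast bars are defined in terms of video signal levels, and setup, sync and colourburst are ignored here:

```python
# Classic bar order, in descending luminance.
BAR_ORDER = ["white", "yellow", "cyan", "green", "magenta", "red", "blue"]
LEVEL = 191  # 75% of 255, the "75% bars" amplitude
RGB = {
    "white":   (LEVEL, LEVEL, LEVEL),
    "yellow":  (LEVEL, LEVEL, 0),
    "cyan":    (0, LEVEL, LEVEL),
    "green":   (0, LEVEL, 0),
    "magenta": (LEVEL, 0, LEVEL),
    "red":     (LEVEL, 0, 0),
    "blue":    (0, 0, LEVEL),
}

def bars_scanline(width):
    """One scanline of 75% colour bars as a list of (r, g, b) pixels."""
    return [RGB[BAR_ORDER[x * len(BAR_ORDER) // width]] for x in range(width)]

line = bars_scanline(720)
print(line[0], line[-1])   # leftmost bar is grey/white, rightmost is blue
```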
In preparation for the new commercial ITV service in the 1950s, the Independent Television Authority (ITA) tasked Belling & Lee, an Enfield-based British electronics company best known for inventing the Belling-Lee connector just over three decades earlier, with designing a series of Pilot Test Transmission test cards and slides intended for potential viewers and DX-enthusiasts to test the ITA's new Band III VHF transmitter network, which was designed with the assistance of the General Post Office (GPO), then the UK's government-run PTT agency. These test cards, some featuring the G9AED call sign assigned by the GPO for the transmissions, featured a squiggly line in a circle in the middle of the card, with an on-screen gauge marked in miles which was used as a guide to estimate the distance between the receiver, the (temporary) transmitter, and any reflecting landscape feature causing ghosting. The test cards were mainly transmitted from temporary mobile transmitters mounted on caravan trailers based at the predicted locations of the ITA's eventual main transmitters, such as Croydon, Lichfield, Emley Moor and Winter Hill. Almost a decade later, the BBC started using a modified SMPTE monochrome test card, transmitted from the Crystal Palace transmitter, to test its new UHF network, which it eventually launched as BBC Two in 1964. Test cards are also used in the broader context of video displays for concerts and live events. There are a variety of different test patterns, each testing a specific technical parameter: gradient monotone bars for testing brightness and color; a crosshatch pattern for aspect ratio, alignment, focus, and convergence; and a single-pixel border for over-scanning and dimensions. History Test cards are as old as TV broadcasts, with documented use by the BBC in the United Kingdom in its early 30-line mechanical Baird transmissions from 1934, and later as simplified "tuning signals" shown before startup, as well as in Occupied France during World War II. They evolved to include gratings for resolution testing, grids to assist with picture geometry adjustments, and grayscale for brightness and contrast adjustments. For example, all these elements can be seen in a Radiodiffusion-Télévision Française 819-line test card introduced in 1953. In North America, most test cards such as the famous Indian-head test pattern of the 1950s and 1960s have long since been relegated to history. The SMPTE color bars occasionally turn up, but with most North American broadcasters now following a 24-hour schedule, these too have become a rare sight. With the introduction of color TV came electronically generated test cards. They are named after their generating equipment (e.g. Grundig VG1000, Philips PM5544, Telefunken FuBK), TV station (e.g. BBC test card) or organization (e.g. SMPTE color bars, EBU colour bars). In developed countries such as Australia, Canada, the United Kingdom, and the United States, the financial imperatives of commercial television broadcasting mean that air-time is now typically filled with programmes and commercials (such as infomercials) 24 hours a day, and non-commercial broadcasters have to match this. A later test card design, introduced in 2005 and fully adapted for HD, SD, 16:9 and 4:3 broadcasts, is defined in ITU-R Rec. BT.1729. It offers markings specifically designed to test format conversions, chroma sampling, etc.
Formerly a common sight, test cards are now only rarely seen outside of television studios, post-production, and distribution facilities. In particular, they are no longer intended to assist viewers in calibration of television sets. Several factors have led to their demise for this purpose: Modern microcontroller-controlled analogue televisions rarely if ever need adjustment, so test cards are much less important than previously. Likewise, modern cameras and camcorders seldom need adjustment for technical accuracy, though they are often adjusted to compensate for scene light levels, and for various artistic effects. Digital interconnect standards, such as CCIR 601 and SMPTE 292M, operate without the non-linearities and other issues inherent to analog broadcasting and do not introduce color shifts or brightness changes; thus the requirement to detect and compensate for them using this reference signal has been virtually eliminated. (Compare with the obsolescence of stroboscopes as used to adjust the speed of record players.) On the other hand, digital test signal generators do include test signals which are intended to stress the digital interface, and many sophisticated generators allow the insertion of jitter, bit errors, and other pathological conditions that can cause a digital interface to fail. Likewise, use of digital broadcasting standards, such as DVB and ATSC, eliminates the issues introduced by modulation and demodulation of analog signals. Test cards including large circles were used to confirm the linearity of the set's deflection systems. As solid-state components replaced vacuum tubes in receiver deflection circuits, linearity adjustments were less frequently required (few newer sets have user-adjustable "VERT SIZE" and "VERT LIN" controls, for example). In LCD and other deflectionless displays, the linearity is a function of the display panel's manufacturing quality; for the display to work, the tolerances will already be far tighter than human perception. For custom-designed video installations, such as LED displays in buildings or at live events, some test images are custom-made to fit the specific size and shape of the setup in question. These custom test images can also be an opportunity for the technicians to hide inside jokes for the crew to see while installing equipment for a show. Monoscope Rather than physical test cards, which had to be televised using a camera, television stations often used a special-purpose camera tube which had the test pattern painted on the inside screen of the tube. Each tube was only capable of generating the one test image, hence it was called a monoscope. A monoscope was similar in construction to an ordinary cathode-ray tube (CRT), but instead of displaying an image on its screen it scanned a built-in image. The monoscope contained a formed metal target in place of the phosphor coating at its "screen" end, and as the electron beam scanned the target, a varying electrical signal was produced, generating a video signal from the etched pattern. Monoscope tubes had the advantage over test cards that a full TV camera was not needed, and the image was always properly framed and in focus. They fell out of use after the 1960s as they were not able to produce color images. Other uses A lesser-known kind of test pattern is used for the calibration of photocopiers. 
Photocopier test patterns are physical sheets that are photocopied, with the differences in the resulting photocopy revealing any tell-tale deviations or defects in the machine's ability to copy. There are also test pattern kits and software developed specifically for many types of consumer electronics. The B&K Television Analyst was developed in the 1960s for testing monochrome TV sets in the NTSC standard and was later modified for European and Australian PAL standards. Among other features, it included a flying-spot scanner on which a test pattern printed on a cellulose acetate slide was shown. When CRT monitors were still commonly used on personal computers, specific test patterns were created for the proper calibration of such monitors in cases where multimedia images could not be shown properly on them. Some VCD and DVD lens cleaner discs, such as the Kyowa Sonic lens cleaning kits from 1997–2001, also included test patterns. More recent examples include the THX Optimizer, which can be accessed in the setup menu of almost every THX-certified DVD, as well as the "HDR sRGB Graphics Test (400 nits)" and "Test Patterns" series available on Netflix, which are meant to test streaming bandwidth on Internet-enabled devices, especially widescreen smart HDR TVs and 4K and 8K displays, and are also used to sync audio and video feeds, which can be affected by, among other factors, Bluetooth and Internet latency. Test patterns are also used to calibrate CCTV cameras and monitors, as well as medical imaging displays and equipment for telemedicine and diagnostic purposes, such as the SMPTE RP-133 medical diagnostic imaging test pattern specification for medical and surgical displays, created around 1983–86, and a later derivative, the TG18-QC test pattern, created by the AAPM in 2001. Test patterns to calibrate X-ray machines, in particular those manufactured by Leeds Test Objects in England, also exist. In numismatics Television has had such an impact on everyday life that it has been the main motif for numerous collectors' coins and medals. One of the most recent examples is the 50 Years of Television commemorative coin minted on 9 March 2005 in Austria. The obverse of the coin shows the centre portion of the Telefunken T05 test card, while the reverse shows several milestones in the history of television. In popular culture The Philips Pattern and SMPTE color bars are widely recognised as iconic popular-culture symbols of the 1980s and 1990s in the markets where they were used. Numerous novelty and collectible items have been patterned after these famous test patterns, including wall clocks, bedsheets, wristwatches, and clothing. The character Sheldon Cooper on The Big Bang Theory wore tees with both patterns, and bloggers identified the SMPTE shirt's use in more than a dozen episodes over the life of the series. The BBC Test Card F features throughout the 2006–07 TV sci-fi detective series Life on Mars. Test card music In Britain, music rather than radio sound was usually played with the test card. The music played by the BBC, and afterwards ITV, was library music, which was licensed on more favourable terms for frequent use than commercially available alternatives. Later, Channel 4 used UK library LPs from publishers like KPM, Joseph Weinberger and Ready Music. Until September 1955, the BBC played 78 RPM commercial records live as an audio background to the test cards. After that date, they switched to using recorded music on tape. 
The following year, the BBC began to build up its own library of specially produced music for the half-hour tapes – initially three tunes in similar style, followed by an identification sign (the three notes B-B-C played on celesta). ITV (which began its first trade transmissions in 1957) continued to use commercially available recordings until the late 1960s, when it also began to make specially produced tapes. For rights reasons, much of the music was recorded by light music orchestras in France and Germany, though sometimes by British musicians, or top international session players using pseudonyms, such as The Oscar Brandenburg Orchestra (an amalgamation of Neil Richardson, Alan Moorhouse and Johnny Pearson) or the Stuttgart Studio Orchestra. Other composers and bandleaders commissioned for this type of work included Gordon Langford, Ernest Tomlinson, Roger Roger, Heinz Kiessling, Werner Tautz, Frank Chacksfield and Syd Dale. During the 1980s, the test card was gradually seen less and less: it was pushed out first by Teletext pages, then by extended programme hours. The same tapes were used to accompany both the test card and Ceefax on BBC channels, but some fans argue that new tapes introduced after Ceefax became the norm in 1983 were less musically interesting. List of TV test cards BBC Tuning Signals and Test Cards A, B, C, D, E, F, G, H, J, W, X (1934–2006, Mechanical 30- and 240-lines, Monochrome, PAL, SDTV, HDTV, 405- and 625-lines) RCA Victor monochrome test pattern (with RCA logos and Nipper the dog illustrations at corners; c. 1933/34–1937, 343-lines) RCA/NBC monochrome test patterns #1 and #2 (1938–39, 441-lines) RCA Indian-head test pattern (1939, 525-lines) ABC/CBS/Crosley-Avco/DuMont/NBC monochrome "bullseye" test patterns (c. 1939–47, 525-lines) RMA 1946 resolution chart (1946, 525- and 625-lines) Marconi Resolution Chart No. 1/English Electric Valve Company Test Chart (c. 1947/c. 1970, 525- and 625-lines) ТИТ-0249, ИТ-72 and таблица 0286 monochrome test cards (1949, c. 1975–78, c. 1990–92, used in Soviet Union and Russia) DuMont Industrial Color Television test pattern (1950, experimentally shown on KE2XDR) DFF (Deutscher Fernsehfunk) monochrome (Q1/QI1, Test nr. 04, modified EBU monochrome) and colour (modified HTV TR.0782) test patterns (1952–1991, SECAM, used in East Germany) Radiodiffusion-Télévision Française "Marly Horses" test card (1953, 819-lines) ТИТ-0154 colour test card (1954, abandoned prototype Soviet Union NIIR/SECAM IV system) ITA/GPO/Belling & Lee G9AED Pilot Test Transmission test cards (1955–56, 405-lines) Associated-Rediffusion–Marconi "diamond" monochrome test card versions 1, 2 and 3 (1955–1958, 625-lines; Version 1 also used by RTV in British Hong Kong, TVM in Crown Colony of Malta and WNTV in the western part of Colonial Nigeria) EIA 1956 resolution chart (1956, 525- and 625-lines) Chequerboard optical and electronic "tea towel" test cards (1950s/60s, monochrome, 625-lines, used in varying forms in West Germany, Italy, Netherlands, Soviet Union, Portugal and Spain) SMPTE optical monochrome test card (1950s?, 525-lines; 1962–1964, 625-lines) Philips PM 5522, 5534, PM 5538, PM 5540, PM 5543, PM 5544, PM 5552, PM 5634, PM 5644 (1960s, 525- and 625-lines, PAL, PALplus, SECAM, NTSC), see Philips circle pattern Telefunken T 05 (early-1960s, 625-lines) EBU electronic monochrome test pattern (1960s?, 625-lines) CBS/NBC color "bullseye" test patterns (c. 
1964/65–early-1990s, NTSC) Telefunken FuBK (late-1960s, PAL) UEIT - Universal Electronic Test Chart (1970, SECAM) HTV TR.0782 test card (1970s, SECAM, used in Hungary, Poland, East Germany and Romania) EZO test card (1971, PAL, used in Czechoslovakia and Estonian SSR) BNT electronic test card (1972, SECAM, used in Bulgaria) TVE colour test card (1975, PAL) SMPTE color bars (1977, NTSC, HDTV, SDTV) EBU colour bars Electronic Test Pattern 1 (1979, PAL) Grundig VG 1001 (1980, PAL) Toolcraft-Goodwood colour test card (c. 1980s–2000s?, PAL, used on various Australian commercial TV stations) KCTV colour test cards (1970s?, mid-1990s, 2017, SECAM then PAL, used in North Korea) Snell & Wilcox SW2 (1990s, TPG20/21 Test Pattern Generators) and SW4 "Zone Plate" (2000s, NTSC, PAL, SDTV) GY/T 254-2011 test card (2011, HDTV, DTMB, used in Mainland China) See also Blue only mode China Girl (filmmaking) Colour chart List of BBC test cards Test Card F Webdriver Torso, YouTube account used for automated performance testing References External links The Test Card Circle, a UK fan site: details of the UK's Trade Test Transmissions including the history of the BBC and ITA Test Cards, a look at the music used and full details about the Trade Test Colour Films shown from the late fifties to 1973. The Test Card Gallery Nostalgia-TV: Television testikuva – test cards in Finland, in Finnish only Broadcast engineering
Test card
Engineering
4,026
65,413,277
https://en.wikipedia.org/wiki/List%20of%20space%20programs%20of%20the%20United%20States
The United States has developed many space programs since the beginning of the spaceflight era in the mid-20th century. The government runs space programs through three primary agencies: NASA for civil space; the United States Space Force for military space; and the National Reconnaissance Office for intelligence space. These entities have invested significant resources to advance the technological approaches needed to meet their objectives. In the late 1980s, commercial interests emerged in the space industry and have expanded dramatically, especially within the last 10 to 15 years. NASA delivers the most visible elements of the U.S. space program. From crewed space exploration and the Apollo 11 landing on the Moon, to the Space Shuttle, International Space Station, Voyager, the Mars rovers, numerous space telescopes, and the Artemis program, NASA delivers on the civil space exploration mandate. NASA also cooperates with other U.S. civil agencies such as the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Geological Survey (USGS) to deliver space assets supporting the weather and civil remote sensing mandates of those organizations. In 2022, NASA's annual budget was approximately $24 billion. The Department of Defense delivers the military space programs. In 2019, the U.S. Space Force started as the primary DoD agent for delivery of military space capability. Systems such as the Global Positioning System, which is ubiquitous among users worldwide, were developed and are maintained by the DoD. Missile warning, defense weather, military satellite communications, and space domain awareness also receive significant annual investment. In 2023, the annual DoD budget request focused on space was $24.5 billion. The Intelligence Community, through entities that include the National Reconnaissance Office (NRO), invests significant resources in space. Surveillance and reconnaissance are the primary focuses of these entities. Commercial space activity in the United States was facilitated by the passage of the Commercial Space Launch Act in October 1984. Commercial crewed program activity was spurred by the establishment of the $10 million Ansari X Prize in May 1996. Definition of space flight Space programs of the United States date to the start of the Space Age in the late 1940s and early 1950s. Programs involve both crewed systems and uncrewed satellites, probes and platforms to meet diverse program objectives. From a definition perspective, the criteria for what constitutes spaceflight vary. In the United States, professional, military, and commercial astronauts who travel above an altitude of are awarded astronaut wings. The Fédération Aéronautique Internationale defines spaceflight as any flight over . This article follows the US definition of spaceflight. Similarly, for uncrewed missions, systems are required to travel above the same altitude thresholds. Government-led programs The following summarizes the major space programs where the United States government plays a leadership role in managing program delivery. Crewed government-led programs Uncrewed government-led programs Commercial space programs The following summarizes the major space programs where private interests play the leadership role in managing program delivery. 
Crewed commercial programs Uncrewed commercial programs See also Space policy of the United States List of European Space Agency programmes and missions Japanese space program List of government space agencies List of rockets of the United States List of NOAA satellites List of NASA missions NASA large strategic science missions List of uncrewed NASA missions Explanatory notes References United States United States space programs
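As a small footnote to the definition of spaceflight above, the sketch below classifies a flight's peak altitude under both criteria. The specific altitude values did not survive extraction in the text above, so the thresholds here are the commonly cited figures of 50 statute miles (about 80.5 km) for US astronaut wings and 100 km for the FAI definition, stated as background knowledge rather than taken from this article.

```python
US_WINGS_KM = 50 * 1.609344   # 50 statute miles, commonly cited US threshold
FAI_KM = 100.0                # Karman line, commonly cited FAI threshold

def spaceflight_status(peak_altitude_km: float) -> dict:
    """Classify a flight's peak altitude under the two common definitions."""
    return {
        "us_spaceflight": peak_altitude_km > US_WINGS_KM,
        "fai_spaceflight": peak_altitude_km > FAI_KM,
    }

# Example: a suborbital hop peaking at 88 km qualifies under the US
# definition but not under the FAI definition.
print(spaceflight_status(88.0))
# {'us_spaceflight': True, 'fai_spaceflight': False}
```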
List of space programs of the United States
Astronomy,Engineering
670
33,603,940
https://en.wikipedia.org/wiki/Web%20Services%20Flow%20Language
Web Services Flow Language 1.0 (WSFL) was an XML programming language proposed by IBM in 2001 for describing Web services compositions. The language considered two types of composition. The first type was for describing business processes as a collection of web services, and the second was for describing interactions between partners. WSFL was proposed to be layered on top of the Web Services Description Language. In 2003 IBM and Microsoft combined WSFL and XLANG into BPEL4WS and submitted it to OASIS for standardization. OASIS published BPEL4WS as WS-BPEL to fit the naming of the other WS-* standards. Web Services Endpoint Language (WSEL) Web Services Endpoint Language (WSEL) was an XML format proposed for describing non-operational characteristics of service endpoints, such as quality-of-service, cost, or security properties. The format was proposed as part of the report that introduced the Web Services Flow Language. It never gained wide acceptance. Notes References Leymann, Frank (2001). "Web Services Flow Language (WSFL 1.0)". IBM Corporation. Hung, Patrick C. K. (2002). "Specifying Conflict of Interest in Web Services Endpoint Language (WSEL)". "ACM SIGecom Exchanges", Volume 3 Issue 3 Web service specifications World Wide Web Consortium standards XML-based standards Web services
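WSFL itself was an XML vocabulary, but the core flow-model idea described above (activities provided by web services, wired together by control links and executed in dependency order) can be sketched in ordinary code. The snippet below is a conceptual analogy only; the activity names and structure are invented for illustration and are not WSFL syntax.

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# A toy "flow model": activities (each standing in for a web-service
# operation) plus control links giving the execution order.
activities = {
    "receive_order": lambda ctx: ctx.update(order="order-42"),
    "check_credit":  lambda ctx: ctx.update(credit_ok=True),
    "ship_goods":    lambda ctx: ctx.update(shipped=ctx["credit_ok"]),
}

# Control links, expressed as: activity -> the activities it depends on.
control_links = {
    "check_credit": {"receive_order"},
    "ship_goods":   {"check_credit"},
}

def run_flow(activities, control_links):
    """Execute activities in an order consistent with the control links."""
    ctx = {}
    for name in TopologicalSorter(control_links).static_order():
        activities[name](ctx)
    return ctx

print(run_flow(activities, control_links))
# {'order': 'order-42', 'credit_ok': True, 'shipped': True}
```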
Web Services Flow Language
Technology
285
2,902,704
https://en.wikipedia.org/wiki/49%20Aquarii
49 Aquarii, abbreviated 49 Aqr, is a star in the zodiac constellation of Aquarius. 49 Aquarii is its Flamsteed designation. It is a dim star with an apparent visual magnitude of 5.53. The distance to 49 Aqr, as determined from its annual parallax shift of , is 266 light years. It is moving closer to the Earth with a heliocentric radial velocity of −13 km/s. This is an aging K-type giant star with a stellar classification of . It shows a spectral anomaly in its absorption lines of cyanogen (CN). This is a red clump giant, indicating that it is generating energy through helium fusion at its core. It is around 950 million years old with 2.2 times the mass of the Sun, and has expanded to nine times the Sun's radius. It is radiating 50 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,954 K. References K-type giants Horizontal-branch stars Aquarius (constellation) Durchmusterung objects Aquarii, 049 191105 110529 8529
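For context on the parallax-to-distance step above: the specific parallax value did not survive extraction, but it can be back-computed from the quoted 266-light-year distance using the standard relation, as a consistency check.

```latex
d\,[\mathrm{pc}] = \frac{1}{\pi\,[\mathrm{arcsec}]},
\qquad
266\ \mathrm{ly} \approx \frac{266}{3.26}\ \mathrm{pc} \approx 81.6\ \mathrm{pc}
\quad\Longrightarrow\quad
\pi \approx \frac{1}{81.6}\ \mathrm{arcsec} \approx 12.3\ \mathrm{mas}.
```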
49 Aquarii
Astronomy
242
585,826
https://en.wikipedia.org/wiki/Invariant%20subspace
In mathematics, an invariant subspace of a linear mapping T : V → V, i.e. from some vector space V to itself, is a subspace W of V that is preserved by T. More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually. For a single operator Consider a vector space $V$ and a linear map $T : V \to V$. A subspace $W \subseteq V$ is called an invariant subspace for $T$, or equivalently, $T$-invariant, if $T$ transforms any vector in $W$ back into $W$. In formulas, this can be written $w \in W \implies Tw \in W$, or $TW \subseteq W$. In this case, $T$ restricts to an endomorphism $T|_W$ of $W$: $T|_W : W \to W$, $T|_W(w) = Tw$. The existence of an invariant subspace also has a matrix formulation. Pick a basis C for W and complete it to a basis B of V. With respect to $B$, the operator $T$ has the form $[T]_B = \begin{bmatrix} [T|_W]_C & A_{12} \\ 0 & A_{22} \end{bmatrix}$ for some matrices $A_{12}$ and $A_{22}$, where $[T|_W]_C$ here denotes the matrix of $T|_W$ with respect to the basis C. Examples Any linear map $T : V \to V$ admits the following invariant subspaces: The vector space $V$, because $T$ maps every vector in $V$ into $V$. The set $\{0\}$, because $T0 = 0$. These are the improper and trivial invariant subspaces, respectively. Certain linear operators have no proper non-trivial invariant subspace: for instance, rotation of a two-dimensional real vector space. However, the axis of a rotation in three dimensions is always an invariant subspace. 1-dimensional subspaces If $U$ is a 1-dimensional invariant subspace for operator $T$ with nonzero vector $u \in U$, then the vectors $u$ and $Tu$ must be linearly dependent. Thus $Tu = \alpha u$ for some scalar $\alpha$. In fact, the scalar $\alpha$ does not depend on $u$. The equation above formulates an eigenvalue problem. Any eigenvector for $T$ spans a 1-dimensional invariant subspace, and vice-versa. In particular, a nonzero invariant vector (i.e. a fixed point of T) spans an invariant subspace of dimension 1. As a consequence of the fundamental theorem of algebra, every linear operator on a nonzero finite-dimensional complex vector space has an eigenvector. Therefore, every such linear operator in at least two dimensions has a proper non-trivial invariant subspace. Diagonalization via projections Determining whether a given subspace W is invariant under T is ostensibly a problem of geometric nature. Matrix representation allows one to phrase this problem algebraically. Write $V$ as the direct sum $W \oplus W'$; a suitable $W'$ can always be chosen by extending a basis of $W$. The associated projection operator P onto W has matrix representation $P = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ with respect to this decomposition. A straightforward calculation shows that W is $T$-invariant if and only if PTP = TP. If 1 is the identity operator, then $1 - P$ is the projection onto $W'$. The equation $PT = TP$ holds if and only if both im(P) and im(1 − P) are invariant under T. In that case, T has the matrix representation $[T] = \begin{bmatrix} T_{11} & 0 \\ 0 & T_{22} \end{bmatrix}$. Colloquially, a projection that commutes with T "diagonalizes" T. Lattice of subspaces As the above examples indicate, the invariant subspaces of a given linear transformation T shed light on the structure of T. When V is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on V are characterized (up to similarity) by the Jordan canonical form, which decomposes V into invariant subspaces of T. Many fundamental questions regarding T can be translated to questions about invariant subspaces of T. The set of $T$-invariant subspaces of $V$ is sometimes called the invariant-subspace lattice of $T$ and written $\operatorname{Lat}(T)$. As the name suggests, it is a (modular) lattice, with meets and joins given by (respectively) set intersection and linear span. A minimal element in $\operatorname{Lat}(T)$ is said to be a minimal invariant subspace. In the study of infinite-dimensional operators, $\operatorname{Lat}(T)$ is sometimes restricted to only the closed invariant subspaces. 
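The projection criterion above ($W$ is $T$-invariant if and only if $PTP = TP$) is easy to check numerically. The sketch below is a minimal numerical illustration, assuming $W$ is given by a matrix whose columns form a basis; it builds the orthogonal projection onto $W$ and tests the identity up to floating-point tolerance.

```python
import numpy as np

def is_invariant(T: np.ndarray, W_basis: np.ndarray, tol: float = 1e-10) -> bool:
    """Check whether span(W_basis) is invariant under T.

    Uses the criterion P T P = T P, where P is a projection onto W.
    Any projection onto W along a complement works; the orthogonal
    one is convenient to build via a QR factorization.
    """
    Q, _ = np.linalg.qr(W_basis)   # orthonormal basis of W
    P = Q @ Q.T                    # orthogonal projection onto W
    return np.allclose(P @ T @ P, T @ P, atol=tol)

# Example: for a block upper-triangular T, the span of the first two
# coordinate vectors is invariant; the span of the last two is not.
T = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])
e = np.eye(3)
print(is_invariant(T, e[:, :2]))   # True
print(is_invariant(T, e[:, 1:]))   # False
```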
For multiple operators Given a collection $\Sigma$ of operators, a subspace is called $\Sigma$-invariant if it is invariant under each $T \in \Sigma$. As in the single-operator case, the invariant-subspace lattice of $\Sigma$, written $\operatorname{Lat}(\Sigma)$, is the set of all $\Sigma$-invariant subspaces, and bears the same meet and join operations. Set-theoretically, it is the intersection $\operatorname{Lat}(\Sigma) = \bigcap_{T \in \Sigma} \operatorname{Lat}(T)$. Examples Let $\Sigma = L(V)$ be the set of all linear operators on $V$. Then $\operatorname{Lat}(\Sigma) = \{\{0\}, V\}$. Given a representation of a group G on a vector space V, we have a linear transformation T(g) : V → V for every element g of G. If a subspace W of V is invariant with respect to all these transformations, then it is a subrepresentation and the group G acts on W in a natural way. The same construction applies to representations of an algebra. As another example, let $T \in L(V)$ and let $\Sigma$ be the algebra generated by {1, T }, where 1 is the identity operator. Then Lat(T) = Lat(Σ). Fundamental theorem of noncommutative algebra Just as the fundamental theorem of algebra ensures that every linear transformation acting on a finite-dimensional complex vector space has a non-trivial invariant subspace, the fundamental theorem of noncommutative algebra asserts that Lat(Σ) contains non-trivial elements for certain Σ. One consequence is that every commuting family in L(V) can be simultaneously upper-triangularized. To see this, note that an upper-triangular matrix representation corresponds to a flag of invariant subspaces, that a commuting family generates a commuting algebra, and that $L(V)$ is not commutative when $\dim V \geq 2$. Left ideals If A is an algebra, one can define a left regular representation Φ on A: Φ(a)b = ab defines a homomorphism from A to L(A), the algebra of linear transformations on A. The invariant subspaces of Φ are precisely the left ideals of A. A left ideal M of A gives a subrepresentation of A on M. If M is a left ideal of A then the left regular representation Φ on M now descends to a representation Φ' on the quotient vector space A/M. If [b] denotes an equivalence class in A/M, Φ'(a)[b] = [ab]. The kernel of the representation Φ' is the set {a ∈ A | ab ∈ M for all b}. The representation Φ' is irreducible if and only if M is a maximal left ideal, since a subspace V ⊂ A/M is invariant under {Φ'(a) | a ∈ A} if and only if its preimage under the quotient map, V + M, is a left ideal in A. Invariant subspace problem The invariant subspace problem concerns the case where V is a separable Hilbert space over the complex numbers, of dimension > 1, and T is a bounded operator. The problem is to decide whether every such T has a non-trivial, closed, invariant subspace. It is unsolved. In the more general case where V is assumed to be a Banach space, Per Enflo (1976) found an example of an operator without an invariant subspace. A concrete example of an operator without an invariant subspace was produced in 1985 by Charles Read. Almost-invariant halfspaces Related to invariant subspaces are so-called almost-invariant halfspaces (AIHSs). A closed subspace $Y$ of a Banach space $X$ is said to be almost-invariant under an operator $T$ if $TY \subseteq Y + E$ for some finite-dimensional subspace $E$; equivalently, $Y$ is almost-invariant under $T$ if there is a finite-rank operator $F$ such that $(T + F)Y \subseteq Y$, i.e. if $Y$ is invariant (in the usual sense) under $T + F$. In this case, the minimum possible dimension of $E$ (or rank of $F$) is called the defect. Clearly, every finite-dimensional and finite-codimensional subspace is almost-invariant under every operator. Thus, to make things non-trivial, we say that $Y$ is a halfspace whenever it is a closed subspace with infinite dimension and infinite codimension. 
The AIHS problem asks whether every operator admits an AIHS. In the complex setting it has already been solved; that is, if $X$ is a complex infinite-dimensional Banach space and $T \in B(X)$, then $T$ admits an AIHS of defect at most 1. It is not currently known whether the same holds if $X$ is a real Banach space. However, some partial results have been established: for instance, any self-adjoint operator on an infinite-dimensional real Hilbert space admits an AIHS, as does any strictly singular (or compact) operator acting on a real infinite-dimensional reflexive space. See also Invariant manifold Lomonosov's invariant subspace theorem References Sources Linear algebra Operator theory Representation theory
Invariant subspace
Mathematics
1,720
5,593,595
https://en.wikipedia.org/wiki/Earth%20structure
An earth structure is a building or other structure made largely from soil. Since soil is a widely available material, it has been used in construction since prehistory. It may be combined with other materials, compressed and/or baked to add strength. Soil is still an economical material for many applications, and may have low environmental impact both during and after construction. Earth structure materials may be as simple as mud, or mud mixed with straw to make cob. Sturdy dwellings may be also built from sod or turf. Soil may be stabilized by the addition of lime or cement, and may be compacted into rammed earth. Construction is faster with pre-formed adobe or mudbricks, compressed earth blocks, earthbags or fired clay bricks. Types of earth structure include earth shelters, where a dwelling is wholly or partly embedded in the ground or encased in soil. Native American earth lodges are examples. Wattle and daub houses use a "wattle" of poles interwoven with sticks to provide stability for mud walls. Sod houses were built on the northwest coast of Europe, and later by European settlers on the North American prairies. Adobe or mud-brick buildings are built around the world and include houses, apartment buildings, mosques and churches. Fujian Tulous are large fortified rammed earth buildings in southeastern China that shelter as many as 80 families. Other types of earth structure include mounds and pyramids used for religious purposes, levees, mechanically stabilized earth retaining walls, forts, trenches and embankment dams. Soil Soil is created from rock that has been chemically or physically weathered, transported, deposited and precipitated. Soil particles include sand, silt and clay. Sand particles are the largest at in diameter and clay the smallest at less than in diameter. Both sand and silt are mostly inert rock particles, including quartz, calcite, feldspar and mica. Clays typically are phyllosilicate minerals with a sheet-like structure. The very small clay particles interact with each other physically and chemically. Even a small proportion of clay affects the physical properties of the soil much more than might be expected. Clays such as kaolinite do not expand or contract when wetted or dried, and are useful for brick-making. Others, such as smectites, expand or contract considerably when wet or dry, and are not suitable for building. Loam is a mix of sand, silt and clay in which none predominates. Soils are given different names depending on the relative proportions of sand, silt and clay such as "Silt Loam", "Clay Loam" and "Silty Clay". Loam construction, the subject of this article, referred to as adobe construction when it uses unfired clay bricks, is an ancient building technology. It was used in the early civilizations of the Mediterranean, Egypt and Mesopotamia, in the Indus, Ganges and Yellow river valleys, in Central and South America. As of 2005 about 1.5 billion people lived in houses built of loam. In recent years, interest in loam construction has revived in the developed world. It is seen as a way to minimize use of fossil fuels and pollution, particularly carbon dioxide, during manufacture, and to create a comfortable living environment through the high mass and high absorption of the material. The two main technologies are stamped or rammed earth, clay or loam, called pise de terre in French, and adobe, typically using sun-dried bricks made of a mud and straw mixture. Materials Earth usually requires some sort of processing for use in construction. 
It may be combined with water to make mud, straw may be added, some form of stabilizing material such as lime or cement may be used to harden the earth, and the earth may be compacted to increase strength. Mud Coursed mud construction is one of the oldest approaches to building walls. Moist mud is formed by hand to make the base of a wall, and allowed to dry. More mud is added and allowed to dry to form successive courses until the wall is complete. With puddled mud, a hand-made mud form is filled with wetter mud and allowed to dry. In Iran, puddled mud walls are called chine construction. Each course is about thick, and about high. Typically the technique is used for garden walls but not for house construction, presumably because of concern about the strength of walls made in this way. A disadvantage of the approach is that a lot of time can be spent waiting for each course to dry. Another technique, used in areas where wood is plentiful, is to build a wood-frame house and to infill it with mud, primarily to provide insulation. In parts of England a similar technique was used with cob. Cob Cob, sometimes referred to as "monolithic adobe", is a natural building material made from soil that includes clay, sand or small stones and an organic material such as straw. Cob walls are usually built up in courses, have no mortar joints and need 30% or more clay in the soil. Cob can be used as in-fill in post-and-beam buildings, but is often used for load-bearing walls, and can bear up to two stories. A cob wall should be at least thick, and the ratio of width to height should be no more than one to ten. It will typically be plastered inside and out with a mix of lime, soil and sand. Cob is fireproof, and its thermal mass helps stabilize indoor temperatures. Tests have shown that cob has some resistance to seismic activity. However, building codes in the developed world may not recognize cob as an approved material. Sod or turf Cut sod bricks, called terrone in Spanish, can be used to make tough and durable walls. The sod is cut from soil that has a heavy mat of grass roots, which may be found in river bottom lands. It is stood on edge to dry before being used in construction. European settlers on the North American Prairies found that the sod least likely to deteriorate due to freezing or rain came from dried sloughs. Turf was once extensively used for the walls of houses in Ireland, Scotland and Iceland, where some turf houses may still be found. A turf house may last fifty years or longer if well-maintained in a cold climate. The Icelanders find that the best quality turf is the Strengur, the top of the grass turf. Stabilized earth Clay is usually hard and strong when dry, but becomes very soft when it absorbs water. The dry clay helps hold an earth wall together, but if the wall is directly exposed to rain, or to water leaking down from the roof, it may become saturated. Earth may be "stabilized" to make it more weather resistant. The practice of stabilizing earth by adding burnt lime is centuries old. Portland cement or bitumen may also be added to earth intended for construction, which adds strength, although the stabilized earth is not as strong as fired clay or concrete. Mixtures of cement and lime, or pozzolana and lime, may also be used for stabilization. Preferably the sand content of the soil will be 65% – 75%. Soils with low clay content, or with no more than 15% non-expansive clay, are suitable for stabilized earth. The clay percentage may be reduced by adding sand, if available. 
If there is more than 15% clay it may take more than 10% cement to stabilize the soil, which adds to the cost. If earth contains little clay and holds 10% or more cement, it is in effect concrete. Cement is not particularly environmentally friendly, since the manufacturing process generates large amounts of carbon dioxide. Low-density stabilized earth will be porous and weak. The earth must therefore be compacted either by a machine that makes blocks or within the wall using the "rammed earth" technique. Rammed earth Rammed earth is a technique for building walls using natural raw materials such as earth, chalk, lime or gravel. A rammed earth wall is built by placing damp soil in a temporary form. The soil is manually or mechanically compacted and then the form is removed. Rammed earth is generally made without much water, and so does not need much time to dry as the building rises. It is susceptible to moisture, so must be laid on a course that stops rising dampness, must be roofed or covered to keep out water from above, and may need protection through some sort of plaster, paint or sheathing. In China, rammed earth walls were built by the Longshan people in 2600–1900 BC, during the period when cities first appeared in the region. Thick sloping walls made of rammed earth became a characteristic of traditional Buddhist monasteries throughout the Himalayas and became very common in northern Indian areas such as Sikkim. The technique spread to the Middle East, and to North Africa, and the city of Carthage was built of rammed earth. From there the technology was brought to Europe by the Romans. Rammed earth structures may be long lasting. Most of the Great Wall of China was made from rammed earth, as was the Alhambra in the Kingdom of Granada. In Northern Europe there are rammed earth buildings up to seven stories high and two hundred years old. Concrete The Romans made durable concrete strong enough for load-bearing walls. Roman concrete contains a rubble of broken bricks and rocks set in mortar. The mortar included lime and pozzolana, a volcanic material that contributed significantly to its strength. Roman concrete structures such as the Colosseum, completed in 80 AD, still stand. Their longevity may be explained by the fact that the builders used a relatively dry mix of mortar and aggregate and compacted it by pounding it down to eliminate air pockets. Although derived from earth products, concrete structures would not usually be considered earth structures. Building units Mud brick or adobe brick Mudbricks or Adobe bricks are preformed modular masonry units of sun-dried mud that were invented at different times in different parts of the world as civilization developed. Construction with bricks avoids the delays while each course of puddled mud dries. Wall murals show that adobe production techniques were highly advanced in Egypt by 2500 BC. Adobe construction is common throughout much of Africa today. Adobe bricks are traditionally made from sand and clay mixed with water to a plastic consistency, with straw or grass as a binder. The mud is prepared, placed in wooden forms, tamped and leveled, and then turned out of the mold to dry for several days. The bricks are then stood on end to air-cure for a month or more. In the southwest United States and Mexico adobe buildings had massive walls and were rarely more than two stories high. Adobe mission churches were never more than about . Since adobe surfaces are fragile, coatings are used to protect them. 
These coatings, periodically renewed, have included mud plaster, lime plaster, whitewash or stucco. Adobe walls were historically made by laying the bricks with mud mortar, which swells and shrinks at the same rate as the bricks when wetted or dried, heated or cooled. Modern adobe may be stabilized with cement and bonded with cement mortars, but cement mortars will cause unstabilized adobe bricks to deteriorate due to the different rates of thermal expansion and contraction. Compressed earth block Compressed earth blocks (CEB) were traditionally made by using a stick to ram soil into a wooden mold. Today they are usually made from subsoil compressed in a hand-operated or powered machine. In the developing world, manual machines can be a cost-effective solution for making uniform building blocks, while the more complex and expensive motorized machines are less likely to be appropriate. Although labor-intensive, CEB construction avoids the cost of buying and transporting materials. Block-making machines may form blocks that have interlocking shapes to reduce the requirement for mortar. The block may have holes or grooves so rods such as bamboo can be inserted to improve earthquake resistance. Suitable earth must be used, with enough clay to hold the block together and resist erosion, but not too much expansive clay. When the block has been made from stabilized earth, which contains cement, the concrete must be given perhaps three weeks to cure. During this time the blocks should be stacked and kept from drying out by sprinkling water over them. This may be a problem in hot, dry climates where water is scarce. Closely stacking the blocks and covering them with a polythene sheet may help reduce water loss. Earthbags Earthbag construction is a natural building technique that has evolved from historic military construction techniques for bunkers. Local subsoil of almost any composition can be used, although an adobe mix would be preferable. The soil is moistened so it will compact into a stable structure when packed into woven polypropylene or burlap sacks or tubes. Plastic mesh is sometimes used. Polypropylene (pp) sacks are most common, since they are durable when covered, cheap, and widely available. The bags are laid in courses, with barbed wire between each course to prevent slipping. Each course is tamped after it is laid. The structure in pp bags is similar to adobe but more flexible. With mesh tubing the structure is like rammed earth. Earthbags may be used to make dome-shaped or vertical wall buildings. With soil stabilization they may also be used for retaining walls. Fired clay brick The technique of firing clay bricks in a kiln dates to about 3500 BC. Fired bricks were being used to build durable masonry across Europe, Asia and North Africa by 1200 BC and still remain an important building material. Modern fired clay bricks are formed from clays or shales, shaped and then fired in a kiln for 8–12 hours at a temperature of 900–1150 °C. The result is a ceramic that is mainly composed of silica and alumina, with other ingredients such as quartz sand. The porosity of the brick depends on the materials and on the firing temperature and duration. The bricks may vary in color depending on the amount of iron and calcium carbonate in the materials used, and the amount of oxygen in the kiln. Bricks may decay due to crystallization of salts on the brick or in its pores, from frost action and from acidic gases. 
Bricks are laid in courses bonded with mortar, a combination of Portland cement, lime and sand. A wall that is one brick thick will include stretcher bricks with their long, narrow side exposed and header bricks crossing from side to side. There are various brickwork "bonds", or patterns of stretchers and headers, including the English, Dutch and Flemish bonds. Examples Earth sheltering Earth sheltering has been used for thousands of years to make energy-efficient dwellings. There are various configurations. At one extreme, an earth sheltered dwelling is completely underground, with perhaps an open courtyard to provide air and light. An earth house may be set into a slope, with windows or door openings in one or more of its sides, or the building may be on ground level, but with earth mounded against the walls, and perhaps with an earth roof. Pit houses made by Hohokam farmers between 100 and 900 AD, in what is now the southwest of the US, were bermed structures, partially embedded in south-facing slopes. Their successful design was used for hundreds of years. At Matmata, Tunisia, most of the ancient homes were built below ground level, and surrounded courtyards about square. The homes were reached through tunnels. Other examples of subterranean, semi-subterranean or cliff-based dwellings in both hot and cold climates are found in Turkey, northern China and the Himalayas, and the southwest USA. A number of Buddhist monasteries built from earth and other materials into cliff sides or caves in Himalayan areas such as Tibet, Bhutan, Nepal and northern India are often perilously placed. Starting in the 1970s, interest in the technique has revived in developed countries. By setting an earth house into the ground, the house will be cooler in the warm season and warmer in the cool season. Native American earth lodge An earth lodge is a circular building made by some of the Native Americans of North America. They have wood post and beam construction and are dome-shaped. A typical structure would have four or more central posts planted in the ground and connected at the top by cross beams. The smoke hole would be left open in the center. Around the central structure there was a larger ring of shorter posts, also connected by cross beams. Rafters radiated from the central cross beams to the outside cross beams, and then split planks or beams formed the slanting or vertical side walls. The structure was covered by sticks and brush or grass, covered in turn by a heavy layer of earth or sod. Some groups plastered the whole structure with mud, which dried to form a shell. Wattle and daub Wattle and daub is an old building technique in which vines or smaller sticks are interwoven between upright poles, and then mud mixed with straw and grass is plastered over the wall. The technique is found around the world, from the Nile Delta to Japan, where bamboo was used to make the wattle. In Cahokia, now in Illinois, USA, wattle and daub houses were built with the floor lowered by below the ground. A variant of the technique is called bajareque in Colombia. In prehistoric Britain simple circular wattle and daub shelters were built wherever adequate clay was available. Wattle and daub is still found as the panels in timber-framed buildings. Generally the walls are not structural, and in interior use the technique in the developed world was replaced by lath and plaster, and then by gypsum wallboard. 
Prairie sod house European pioneer farmers in the prairies of North America, where there is no wood for construction, often made their first home in a dug-out cave in the side of a hill or ravine, with a covering over the entrance. When they had time, they would build a sod house. The farmer would use a plow to cut the sod into bricks, which were then piled up to form the walls. The sod strips were piled grass-side down, staggered in the same way as brickwork, in three side-by-side rows, resulting in a wall over thick. The sod wall was built around door and window frames, and the corners of the wall were secured by rods driven vertically through them. The roof was made with poles or brush, covered with prairie grass, and then sealed with a layer of sod. Sod houses were strong and often lasted many years, but they were damp and dirty unless the interior walls were plastered. The roofs tended to leak, and sometimes collapsed in a rainstorm. Mud brick buildings There are innumerable examples of mud brick or adobe building around the world. The walled city of Shibam in Yemen, designated a World Heritage Site in 1982, is known for its ten-story unreinforced mud-brick buildings. The Djinguereber Mosque of Timbuktu, Mali, was first built at the start of the 14th century AD (8th century AH) from round mud bricks and a stone-mud mixture, and was rebuilt several times afterwards, steadily growing in size. Further south in Mali, the Great Mosque of Djenné, a dramatic example of Sahel mudbrick architecture, was built in 1907, based on the design of an earlier Great Mosque first built on the site in 1280. Mudbrick requires maintenance, and the fundamentalist ruler Seku Amadu had let the previous mosque collapse. The Casa Grande Ruins, now a national monument in Arizona protected by a modern roof, is a massive four-story adobe structure built by Hohokam people between 1200 and 1450 AD. The first European to record the great house was a Jesuit priest, Father Eusebio Kino, who visited the site in 1694. At that time it had long been abandoned. By the time a temporary roof was installed in 1903 the adobe building had been standing empty and unmaintained for hundreds of years. Huaca de la Luna in what is now northern Peru is a large adobe temple built by the Moche people. The building went through a series of construction phases, growing eventually to a height of about , with three main platforms, four plazas and many smaller rooms and enclosures. The walls were covered by striking multi-colored murals and friezes; those visible today date from about 400–610 AD. Tulous A Fujian tulou is a type of rural dwelling of the Hakka people in the mountainous areas of southeastern Fujian, China. They were mostly built between the 13th and the 20th centuries. A tulou is a large, enclosed and fortified earth building, rectangular or circular, with very thick load-bearing rammed earth walls between three and five stories high. A tulou might house up to 80 families. Smaller interior buildings are often enclosed by these huge peripheral walls, which can contain halls, storehouses, wells and living areas. The structure resembles a small fortified city. The walls are formed by compacting earth mixed with stone, bamboo, wood and other readily available materials, and are to thick. The result is a well-lit, well-ventilated, windproof and earthquake-proof building that is warm in winter and cool in summer. 
Mounds and pyramids Ziggurats were elevated temples constructed by the Sumerians between the end of the 4th millennium BC and the 2nd millennium BC, rising in a series of terraces to a temple up to above ground level. The Ziggurat of Ur contained about three million bricks, none more than in length, so construction would have been a huge project. The largest ziggurat was in Babylon, and is thought by some to be the Tower of Babel mentioned in the Bible. It was destroyed by Alexander the Great and only the foundations remain, but originally it stood high on a base about square. Sun-dried bricks were used for the interior and kiln-fired bricks for the facing. The bricks were held together by clay or bitumen. Many pre-Columbian Native American societies of ancient North America built large pyramidal earth structures known as platform mounds. Among the largest and best-known of these structures is Monks Mound at the site of Cahokia in what became Illinois, completed around 1100 AD, which has a base larger than that of the Great Pyramid at Giza. Many of the mounds underwent multiple episodes of mound construction at periodic intervals, some becoming quite large. They are believed to have played a central role in the mound-building peoples' religious life and documented uses include semi-public chief's house platforms, public temple platforms, mortuary platforms, charnel house platforms, earth lodge/town house platforms, residence platforms, square ground and rotunda platforms, and dance platforms. The Pyramid of the Sun in Teotihuacan, Mexico, was started in 100 AD. The stone-faced structure contains two million tons of rammed earth. Earthworks Earthworks are engineering works created through moving or processing quantities of soil or unformed rock. The material may be moved to another location and formed into a desired shape for a purpose. Levees, embankments and dams are types of earthwork. A levee, floodbank or stopbank is an elongated natural ridge or artificially constructed dirt fill wall that regulates water levels. It is usually earthen and often runs parallel to the course of a river in its floodplain or along low-lying coastlines. Mechanically stabilized earth (MSE) retaining walls may be used for embankments. MSE walls combine a concrete leveling pad, wall facing panels, coping, soil reinforcement and select backfill. A variety of designs of wall facing panels may be used. After the leveling pad has been laid and the first row of panels has been placed and braced, the first layer of earth backfill is brought in behind the wall and compacted. The first set of reinforcements is then laid over the earth. The reinforcements, which may be tensioned polymer or galvanized metal strips or grids, are attached to the facing panels. This process is repeated with successive layers of panels, earth and reinforcements. The panels are thus tied into the earth embankment to make a stable structure with balanced stresses. Although construction using the basic principles of MSE has a long history, MSE was developed in its current form in the 1960s. The reinforcing elements used can vary but include steel and geosynthetics. The term MSE is usually used in the US to distinguish it from "Reinforced Earth", a trade name of the Reinforced Earth Company, but elsewhere Reinforced Soil is the generally accepted term. MSE construction is relatively fast and inexpensive, and although labor-intensive, it does not demand high levels of skill. It is therefore suitable for developing as well as developed countries. 
Forts and trenches Earth has been used to construct fortifications for thousands of years, including strongholds and walls, often protected by ditches. Aerial photography in Europe has revealed traces of earth fortifications from the Roman era, and later medieval times. Offa's Dyke is a huge earthwork that stretches along the disputed border between England and Wales. Little is known about the period or the builder, King Offa of Mercia, who died in 796 AD. An early timber and earth fortification might later be succeeded by a brick or stone structure on the same site. Trenches were used by besieging forces to approach a fortification while protected from missiles. Sappers would build "saps", or trenches, that zig-zagged towards the fortress being attacked. They piled the excavated dirt to make a protective wall or gabion. The combined trench depth and gabion height might be . Sometimes the sap was a tunnel, dug several feet below the surface. Sappers were highly skilled and highly paid due to the extreme danger of their work. In the American Civil War (1861−1865) trenches were used for defensive positions throughout the struggle, but played an increasingly important role in the campaigns of the last two years. Military earthworks perhaps culminated in the vast network of trenches built during World War I (1914−1918) that stretched from Switzerland to the North Sea by the end of 1914. The two lines of trenches faced each other, manned by soldiers living in appalling conditions of cold, damp and filth. Conditions were worst in the Allied trenches. The Germans were more willing to accept the trenches as long-term positions, and used concrete blocks to build secure shelters deep underground, often with electrical lighting and heating. Embankment dams An embankment dam is a massive artificial water barrier. It is typically created by the emplacement and compaction of a complex semi-plastic mound of various compositions of soil, sand, clay and/or rock. It has a semi-permanent natural waterproof covering for its surface, and a dense, waterproof core. This makes such a dam impervious to surface or seepage erosion. The force of the impoundment creates a downward thrust upon the mass of the dam, greatly increasing the weight of the dam on its foundation. This added force effectively seals and makes waterproof the underlying foundation of the dam, at the interface between the dam and its stream bed. Such a dam is composed of fragmented independent material particles. The friction and interaction of particles binds the particles together into a stable mass rather than by the use of a cementing substance. The Syncrude Mildred Lake Tailings Dyke in Alberta, Canada, is an embankment dam about long and from high. By volume of fill, as of 2001 it was believed to be the largest earth structure in the world. Structural issues Designing for Earthquakes Regions with low seismic risk are safe for most earth buildings, but historic construction techniques often cannot resist even medium earthquake levels effectively because of earthen buildings' three highly undesirable qualities as a seismic building material: being relatively 'weak, heavy and brittle'. However, earthen buildings can be built to resist seismic loads. Key factors to improved seismic performance are soil strength, construction quality, robust layout and seismic reinforcement. Stronger soils make stronger walls. Adobe builders can test cured blocks for strength by dropping from a specific height or by breaking them with a lever. 
Builders using immediate techniques like earthbag, cob, or rammed earth may prefer approximate crushing tests on smaller samples that can be oven-dried and crushed under a small lever. Builders must understand construction processes and be able to produce consistent quality for strong buildings. Robust layout means buildings more square than elongated, and symmetrical not L-shaped, as well as no 'soft' first stories (stories with large windows, or buildings on unbraced columns). New Zealand's earthen building guidelines check for enough bracing wall length in each of the two principal directions, based on wall thickness, story height, bracing wall spacing, and the roof, loft and second story weight above earthen walls. Seismic-Resistant Construction Techniques Building techniques that are more ductile than brittle, like the contained earth type of earthbag, or tire walls of earthships, may better avoid collapse than brittle unreinforced earth. Contained gravel base courses may add base isolation potential. Wall containment can be added to techniques like adobe to resist loss of material that leads to collapse. Confined masonry, which is effective for adobe against quake forces of 0.3 g, may be useful with earthen masonry. Many types of reinforcement can increase wall strength, such as plastic or wire mesh and reinforcing rods of steel or fiberglass or bamboo. Earth resists compression well but is weak when twisted. Tensile reinforcement must span potential damage points and be well-anchored to increase out-of-plane stability. Bond beams at wall tops are vital and must be well attached to walls. Builders should be aware that organic reinforcements embedded in walls may be destroyed before the building is retired. Attachment details of reinforcement are critical to resist higher forces. The best adobe shear strength came from horizontal reinforcement attached directly to vertical rebar spanning from footing to bond beam. Interlaced wood in earthen walls reduces quake damage if the wood is not damaged by dry rot or insects. Timber lacing includes finely webbed Dhajji and other types. See also , sometimes considered earthen architecture , Chinese cave dwellings Notes References Citations Sources
Earth structure
Engineering
6,159
23,336,882
https://en.wikipedia.org/wiki/Clasper%20%28mathematics%29
In the mathematical field of low-dimensional topology, a clasper is a surface (with extra structure) in a 3-manifold on which surgery can be performed. Motivation Beginning with the Jones polynomial, infinitely many new invariants of knots, links, and 3-manifolds were found during the 1980s. The study of these new 'quantum' invariants expanded rapidly into a sub-discipline of low-dimensional topology called quantum topology. A quantum invariant is typically constructed from two ingredients: a formal sum of Jacobi diagrams (which carry a Lie algebra structure), and a representation of a ribbon Hopf algebra such as a quantum group. It is not clear a priori why either of these ingredients should have anything to do with low-dimensional topology. Thus one of the main problems in quantum topology has been to interpret quantum invariants topologically. The theory of claspers provides such an interpretation. A clasper, like a framed link, is an embedded topological object in a 3-manifold on which one can perform surgery. In fact, clasper calculus can be thought of as a variant of Kirby calculus in which only certain specific types of framed links are allowed. Claspers may also be interpreted algebraically, as a diagram calculus for the braided strict monoidal category Cob of oriented connected surfaces with connected boundary. Additionally, most crucially, claspers may be roughly viewed as a topological realization of Jacobi diagrams, which are purely combinatorial objects. This explains the Lie algebra structure of the graded vector space of Jacobi diagrams in terms of the Hopf algebra structure of Cob. Definition A clasper is a compact surface embedded in the interior of a 3-manifold, equipped with a decomposition into two subsurfaces; the connected components of the first are called the constituents of the clasper, and those of the second its edges. Each edge is a band joining two constituents to one another, or joining one constituent to itself. There are four types of constituents: leaves, disk-leaves, nodes, and boxes. Clasper surgery is most easily defined (after elimination of nodes, boxes, and disk-leaves as described below) as surgery along a link associated to the clasper, obtained by replacing each leaf with its core and each edge by a right-handed Hopf link. Clasper calculus Graphical conventions are used when drawing claspers, and these conventions may be viewed as a definition for boxes, nodes, and disk-leaves. Habiro found 12 moves which relate claspers along which surgery gives the same result. These moves form the core of clasper calculus, and give considerable power to the theory as a theorem-proving tool. Cn-equivalence Two knots, links, or 3-manifolds are said to be $C_n$-equivalent if they are related by $C_n$-moves, the local moves induced by surgery on simple tree claspers without boxes or disk-leaves and with $n+1$ leaves. For a link, a $C_1$-move is a crossing change. A $C_2$-move is a Delta move. Most applications of claspers use only $C_n$-moves. Main results For two knots $K_1$ and $K_2$ and a non-negative integer $n$, the following conditions are equivalent: $K_1$ and $K_2$ are not distinguished by any invariant of type at most $n$. $K_1$ and $K_2$ are $C_{n+1}$-equivalent. The corresponding statement is false for links. Further reading S. Garoufalidis, M. Goussarov, and M. Polyak, Calculus of clovers and finite-type invariants of 3-manifolds, Geom. and Topol., vol. 5 (2001), 75–108. M.N. Goussarov, Variations of knotted graphs. The geometric technique of n-equivalence (Russian) Algebra i Analiz 12(4) (2000), 79–125; translation in St. Petersburg Math. J. 
12(4) (2001) 569–604. M.N. Goussarov, Finite type invariants and n-equivalence of 3-manifolds, C. R. Acad. Sci. Paris Ser. I Math. 329(6) (1999), 517–522. K. Habiro, Claspers and the Vassiliev skein module, PhD thesis, University of Tokyo (1997). K. Habiro, Claspers and finite type invariants of links, Geom. and Topol., vol. 4 (2000), 1–83. S. Matveev, Generalized surgeries of three-dimensional manifolds and representations of homology spheres, Mat. Zametki, 42 (1987) no. 2, 268–278. Low-dimensional topology 3-manifolds Geometric topology Knot theory
Clasper (mathematics)
Mathematics
944
9,542,516
https://en.wikipedia.org/wiki/Neurine
Neurine is an alkaloid found in egg yolk, brain, bile and in cadavers. It is formed during putrefaction of biological tissues by the dehydration of choline. It is a poisonous, syrupy liquid with a fishy odor. Neurine is a quaternary ammonium salt with three methyl groups and one vinyl group attached to the nitrogen atom. Synthetically, neurine can be prepared by the reaction of acetylene with trimethylamine. Neurine is unstable and decomposes readily to form trimethylamine. References Merck Index, 11th Edition, 6393. Alkaloids Quaternary ammonium compounds Vinyl compounds
Neurine
Chemistry
147
36,882,027
https://en.wikipedia.org/wiki/Pablo%20Mach%C3%B3n
Pablo Machón is a computer scientist, libre/free knowledge advocate, libre/free software developer and founding member and President of the Free Knowledge Foundation, an organization that promotes people's rights and freedoms relating to knowledge, software and data/information standards. He was the Spanish Team Coordinator and vice-president of the Free Software Foundation Europe, and he is a visible free software political advocate and promoter, speaking on freedom in the digital age at various international events, to organizations and in the press. He specializes in fostering free knowledge and free software in politics, and in the business and public administration sectors. He speaks fluent Spanish and English. References External links Free Knowledge Foundation Free Knowledge Foundation alternate page FKF Team (in Spanish) Free Software Foundation Europe Pablo Machón's personal website Pablo Machón's alternate personal web site Living people Free software programmers Year of birth missing (living people)
Pablo Machón
Technology
177
12,392,878
https://en.wikipedia.org/wiki/Polyvinyl%20toluene
Polyvinyltoluene (PVT, polyvinyl toluene) is a synthetic polymer of alkylbenzenes with a linear formula [CH2CH(C6H4CH3)]n. Commercial vinyl toluene is a mixture of methyl styrene isomers. Uses PVT can be doped with anthracene or other wavelength-shifting dopants to produce a plastic scintillator. When subjected to ionizing radiation (both particle radiation and gamma radiation), the amount of visible radiation emitted is proportional to the absorbed dose as long as the energy loss per unit length is not too large. A relation applicable to a wide range of values for energy loss per unit length is given by Birks' Law. PVT can be damaged by radiation with high stopping power, e.g. ion beams, or by sufficiently high doses of any kind of ionizing radiation. A review of radiation damage for PVT and other similar plastic scintillators can be found in Instrumentation in High Energy Physics. Such radiation breaks the C-H bonds and creates color centers which absorb the produced light, significantly reducing the light output. Following the increase in interest in vinyl records (as of 2022), PVT is being investigated as a replacement for PVC, the usual and historic material used to make vinyl records. PVT is considered more environmentally friendly than PVC. References Vinyl polymers Phosphors and scintillators
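Birks' Law, referenced above, can be stated compactly. In a common form (a standard result, with S the scintillation efficiency and k_B the Birks coefficient, both material-dependent constants), the light yield L per path length x for a particle losing energy at the rate dE/dx is

\frac{dL}{dx} = \frac{S \, dE/dx}{1 + k_B \, dE/dx}

For small dE/dx the light output is simply proportional to the energy deposited, while for densely ionizing particles (large dE/dx) it saturates, which is why the scintillator response is dose-proportional only when the energy loss per unit length is not too large.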
Polyvinyl toluene
Chemistry
297
44,867,070
https://en.wikipedia.org/wiki/Magnesium%20monohydride
Magnesium monohydride is a molecular gas with formula MgH that exists at high temperatures, such as in the atmospheres of the Sun and stars. It was originally known as magnesium hydride, although that name is now more commonly used when referring to the similar chemical magnesium dihydride. History George Downing Liveing and James Dewar are claimed to be the first to make and observe a spectral line from MgH in 1878. However they did not realise what the substance was. Formation A laser can evaporate magnesium metal to form atoms that react with molecular hydrogen gas to form MgH and other magnesium hydrides. An electric discharge through hydrogen gas at low pressure (20 pascals) containing pieces of magnesium can produce MgH. Thermally produced hydrogen atoms and magnesium vapour can react and condense in a solid argon matrix. This process does not work with solid neon, probably due to the formation of a different species instead. A simple way to produce some MgH is to burn magnesium in a Bunsen burner flame, where there is enough hydrogen to form MgH temporarily. Magnesium arcs in steam also produce MgH, but also produce MgO. Natural formation of MgH happens in stars, brown dwarfs, and large planets, where the temperature is high enough. The reaction that produces it is either Mg + H2 → MgH + H or Mg + H → MgH. Decomposition is by the reverse process. Formation requires the presence of magnesium gas. The amount of magnesium gas is greatly reduced in cool stars by its extraction in clouds of enstatite, a magnesium silicate. Otherwise in these stars, below any magnesium silicate clouds where the temperature is hotter, the concentration of MgH is proportional to the square root of the pressure, to the concentration of magnesium, and to 10^(−4236/T). MgH is the second most abundant magnesium-containing gas (after atomic magnesium) in the deeper, hotter parts of planets and brown dwarfs. The reaction of Mg atoms with H2 (dihydrogen gas) is actually endothermic and proceeds when magnesium atoms are excited electronically. The magnesium atom inserts into the bond between the two hydrogen atoms to create a temporary MgH2 molecule, which spins rapidly and breaks up into a spinning MgH molecule and a hydrogen atom. The MgH molecules produced have a bimodal distribution of rotation rates. When protium is changed for deuterium in this reaction the distribution of rotations remains unchanged. The low rotation rate products also have low vibration levels, and so are "cold". Properties Spectrum The far infrared contains the rotational spectrum of MgH, ranging from 0.3 to 2 THz. This also contains hyperfine structure. 24MgH is predicted to have spectral lines for various rotational transitions in each of several vibrational levels. The infrared vibration-rotation bands are in the range 800–2200 cm−1. The fundamental vibration mode is at 6.7 μm. Three isotopes of magnesium and two of hydrogen multiply the band spectra with six isotopomers: 24MgH, 25MgH, 26MgH, 24MgD, 25MgD and 26MgD. Vibration and rotation frequencies are significantly altered by the different masses of the atoms. The visible band spectrum of magnesium hydride was first observed in the 19th century, and was soon confirmed to be due to a combination of magnesium and hydrogen. Whether there was actually a compound was debated due to no solid material being able to be produced. Despite this the term magnesium hydride was used for whatever made the band spectrum. This term was used before magnesium dihydride was discovered. 
The spectral bands had heads with fluting in the yellow-green, green, and blue parts of the visible spectrum. The yellow-green band of the MgH spectrum is around the wavelength 5622 Å. The blue band is at 4845 Å. The main band of MgH in the visible spectrum is due to the A2Π→X2Σ+ electronic transition combined with transitions in rotational and vibrational states. For each electronic transition, there are different bands for changes between the different vibrational states. The transition between vibrational states is represented using parentheses (n,m), with n and m being numbers. Within each band there are many lines organised into three sets called branches. The P, Q and R branches are distinguished by whether the rotational quantum number decreases by one, stays the same or increases by one. Lines in each branch will have different rotational quantum numbers depending on how fast the molecules are spinning. For the A2Π→X2Σ+ transition the lowest vibrational level transitions are the most prominent; however, the A2Π energy level can have a vibration quantum state up to 13. Any higher level and the molecule has too much energy and shakes apart. For each level of vibrational energy there are a number of different rates of rotation that the molecule can sustain. For level 0 the maximum rotational quantum number is 49. Above this rotation rate it would spin so fast it would break apart. For the subsequently higher vibrational levels, 1 to 13, the maximum rotational quantum number decreases through the sequence 47, 44, 42, 39, 36, 33, 30, 27, 23, 19, 15, 11 and 6. The B'2Σ+→X2Σ+ system is a transition from a slightly higher electronic state to the ground state. It also has lines in the visible spectrum that are observable in sunspots. The bands are headless. The (0,0) band is weak compared to the (0,3), (0,4), (0,5), (0,6), (0,7), (1,3), (1,4), (1,7), and (1,8) vibrational bands. The C2Π state has rotational parameters of B = 6.104 cm−1, D = 0.0003176 cm−1, A = 3.843 cm−1, and p = -0.02653 cm−1. It has an energy level of 41242 cm−1. Another 2Δ electronic level has energy 42192 cm−1 and rotation parameters B = 6.2861 cm−1 and A = -0.168 cm−1. The ultraviolet has many more bands due to higher energy electronic states. The UV spectrum contains band heads at 3100 Å due to the (1,0) vibrational transition, at 2940 Å (2,0), 2720 Å (3,0), 2640 Å (0,1) and 2567 Å (1,3). Physical The magnesium monohydride molecule is a simple diatomic molecule with a magnesium atom bonded to a hydrogen atom. The distance between the hydrogen and magnesium atoms is 1.7297 Å. The ground state of magnesium monohydride is X2Σ+. Due to the simple structure, the symmetry point group of the molecule is C∞v. The moment of inertia of one molecule is 4.805263×10−40 g cm2. The bond has significant covalent character. The dipole moment is 1.215 debye. Bulk properties of the MgH gas include an enthalpy of formation of 229.79 kJ mol−1, entropy of 193.20 J K−1 mol−1 and heat capacity of 29.59 J K−1 mol−1. The dissociation energy of the molecule is 1.33 eV. The ionization potential is around 7.9 eV, with the MgH+ ion formed when the molecule loses an electron. Dimer In noble gas matrices MgH can form two kinds of dimer: HMgMgH, and a rhombic (◊) form in which a dihydrogen molecule bridges the bond between two magnesium atoms. MgH can also form a complex with dihydrogen. Photolysis increases reactions which form the dimer. The energy to break up the dimer HMgMgH into two MgH radicals is 197 kJ/mol. The rhombic form has 63 kJ/mol more energy than HMgMgH. 
In theory, gas-phase HMgMgH can decompose exothermically, releasing 24 kJ/mol of energy. The distance between the magnesium atoms in HMgMgH is calculated to be 2.861 Å. HMgMgH can be considered a formal parent compound for other substances LMgMgL that have a magnesium–magnesium bond. In these, magnesium can be considered to be in oxidation state +1 rather than the normal +2. However, these sorts of compounds are not made from HMgMgH. Related ions can be made by protons hitting magnesium, or by dihydrogen gas interacting with singly ionized magnesium atoms (Mg+). The mono-, di- and trihydride ions are formed from low pressure hydrogen or ammonia over a magnesium cathode. The trihydride ion is produced the most, and in a greater proportion when pure hydrogen is used rather than ammonia. The dihydride ion is produced the least of the three. Related radicals HMgO and HMgS have been theoretically investigated. MgOH and MgSH are lower in energy. Applications The spectrum of MgH in stars can be used to measure the isotope ratio of magnesium, the temperature, and the surface gravity of the star. In hot stars MgH will be mostly dissociated due to the heat breaking the molecules, but it can be detected in cooler G, K and M type stars. It can also be detected in starspots or sunspots. The MgH spectrum can be used to study the magnetic field and nature of starspots. Some MgH spectral lines show up prominently in the second solar spectrum, that is, the spectrum of fractional linear polarization. The lines belong to the Q1 and Q2 branches. The MgH absorption lines are immune to the Hanle effect, in which polarization is reduced in the presence of magnetic fields, such as near sunspots. These same absorption lines do not suffer from the Zeeman effect either. The reason that the Q branch shows up in this way is that Q-branch lines are four times more polarizable than, and twice as intense as, P- and R-branch lines. These lines that are more polarizable are also less subject to magnetic field effects. References Other reading Metal hydrides Magnesium compounds
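The abundance scaling described in the Formation section above can be restated compactly as an equation (a direct rewriting of the prose, with [Mg] the concentration of magnesium gas, P the pressure and T the temperature):

[\mathrm{MgH}] \propto [\mathrm{Mg}] \cdot \sqrt{P} \cdot 10^{-4236/T}

The exponential factor means the abundance falls off steeply as the temperature drops, which is consistent with MgH being most abundant in the hotter, deeper layers of cool stars, brown dwarfs and giant planets.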
Magnesium monohydride
Chemistry
2,088
2,298,609
https://en.wikipedia.org/wiki/Tribune%20%28architecture%29
Tribune is an ambiguous – and often misused – architectural term, which can have several meanings. Today, it most often refers to a dais or stage-like platform or, in a vaguer sense, any place from which a speech can be prominently made. Etymology The English term tribune ("raised platform") was derived as early as 1762 from French (tribune) and Italian (tribuna) words. These in turn stemmed from Medieval Latin tribuna and from Classical Latin tribunal, the elevated placing of a tribune's (or other Roman magistrate's) seat for official functions in the manner of a throne. Meanings In ancient Rome, the term was used of a semicircular apse in a Roman basilica, with a raised platform, where a presiding magistrate (a tribune, or others) sat in an official chair. Subsequently, it applied generally to any raised structure from which speeches were delivered, including makeshift wooden structures in the Roman Forum and even the private box of the emperor at the Circus Maximus. In Medieval, and later, ecclesiastical architecture, the term applies to an area within a vaulted or semi-domed apse in a room or church. In this sense a tribune may contain a high altar or bishop's seat (cathedra). These features were particularly common in Roman and Byzantine church architecture. In these Christian basilicas the term is often retained for the semicircular recess behind the choir, as at San Clemente in Rome, Sant'Apollinare in Classe in Ravenna, San Zeno at Verona, or San Miniato near Florence. A secular example is its use for the celebrated octagon room of the Uffizi Palace at Florence. The sense of the term is sometimes extended to any gallery, balcony, or triforium. (Nikolaus Pevsner, in his book series The Buildings of England (1951–74), is at pains to point out that a tribune and a triforium, while often confused, are not the same thing.) In a church, it may refer to an open arcade overlooking the nave of a church – or indeed any large hall – often situated below a clerestory. The term is also loosely applied to various other raised spaces in secular or ecclesiastical buildings – in the latter sometimes in the place of a pulpit, as in the Priory of Saint-Martin-des-Champs at Paris. Thus, "tribune" can refer to a dais or stage-like platform, or in a vaguer sense any place in a building from which a speech can be prominently made, which seems a return to the original function of the early Roman tribunal. This is the origin of the common metaphorical use of "tribune" in the names of newspapers, magazines and broadcast news programs. Notes References Architectural elements
Tribune (architecture)
Technology,Engineering
570
4,467,477
https://en.wikipedia.org/wiki/Concurrent%20constraint%20logic%20programming
Concurrent constraint logic programming is a version of constraint logic programming aimed primarily at programming concurrent processes rather than (or in addition to) solving constraint satisfaction problems. Goals in constraint logic programming are evaluated concurrently; a concurrent process is therefore programmed as the evaluation of a goal by the interpreter. Syntactically, concurrent constraint logic programs are similar to non-concurrent programs, the only exception being that clauses include guards, which are constraints that may block the applicability of the clause under some conditions. Semantically, concurrent constraint logic programming differs from its non-concurrent versions because a goal evaluation is intended to realize a concurrent process rather than finding a solution to a problem. Most notably, this difference affects how the interpreter behaves when more than one clause is applicable: non-concurrent constraint logic programming recursively tries all clauses; concurrent constraint logic programming chooses only one. This is the most evident effect of an intended directionality of the interpreter, which never revises a choice it has previously taken. Other effects of this are the semantic possibility of having a goal that cannot be proved while the whole evaluation does not fail, and a particular way of equating a goal and a clause head. Constraint handling rules can be seen as a form of concurrent constraint logic programming, but are used for programming a constraint simplifier or solver rather than concurrent processes. Description In constraint logic programming, the goals in the current goal are evaluated sequentially, usually proceeding in a LIFO order in which newer goals are evaluated first. The concurrent version of logic programming allows for evaluating goals in parallel: every goal is evaluated by a process, and processes run concurrently. These processes interact via the constraint store: a process can add a constraint to the constraint store while another one checks whether a constraint is entailed by the store. Adding a constraint to the store is done like in regular constraint logic programming. Checking entailment of a constraint is done via guards to clauses. Guards require a syntactic extension: a clause of concurrent constraint logic programming is written as H :- G | B where G is a constraint called the guard of the clause. Roughly speaking, a fresh variant of this clause can be used to replace a literal in the goal only if the guard is entailed by the constraint store after the equation of the literal and the clause head is added to it. The precise definition of this rule is more complicated, and is given below. The main difference between non-concurrent and concurrent constraint logic programming is that the first is aimed at search, while the second is aimed at implementing concurrent processes. This difference affects whether choices can be undone, whether processes are allowed not to terminate, and how goals and clause heads are equated. The first semantical difference between regular and concurrent constraint logic programming is about the condition when more than one clause can be used for proving a goal. Non-concurrent logic programming tries all possible clauses when rewriting a goal: if the goal cannot be proved while replacing it with the body of a fresh variant of a clause, another clause is tried, if any. This is because the aim is to prove the goal: all possible ways to prove the goal are tried. 
On the other hand, concurrent constraint logic programming aims at programming parallel processes. In general concurrent programming, if a process makes a choice, this choice cannot be undone. The concurrent version of constraint logic programming implements processes by allowing them to take choices, but committing to them once they have been taken. Technically, if more than one clause can be used to rewrite a literal in the goal, the non-concurrent version tries in turn all clauses, while the concurrent version chooses a single arbitrary clause: contrary to the non-concurrent version, the other clauses will never be tried. These two different ways of handling multiple choices are often called "don't know nondeterminism" and "don't care nondeterminism". When rewriting a literal in the goal, the only considered clauses are those whose guard is entailed by the union of the constraint store and the equation of the literal with the clause head. The guards provide a way of telling which clauses are not to be considered at all. This is particularly important given the commitment to a single clause of concurrent constraint logic programming: once a clause has been chosen, this choice will never be reconsidered. Without guards, the interpreter could choose a "wrong" clause to rewrite a literal, while other "good" clauses exist. In non-concurrent programming, this is less important, as the interpreter always tries all possibilities. In concurrent programming, the interpreter commits to a single possibility without trying the other ones. A second effect of the difference between the non-concurrent and the concurrent version is that concurrent constraint logic programming is specifically designed to allow processes to run without terminating. Non-terminating processes are common in general in concurrent processing; the concurrent version of constraint logic programming implements them by not using the condition of failure: if no clause is applicable for rewriting a goal, the process evaluating this goal stops instead of making the whole evaluation fail like in non-concurrent constraint logic programming. As a result, the process evaluating a goal may be stopped because no clause is available to proceed, but at the same time the other processes keep running. Synchronization among processes that are solving different goals is achieved via the use of guards. If a goal cannot be rewritten because all clauses that could be used have a guard that is not entailed by the constraint store, the process solving this goal is blocked until the other processes add the constraints that are necessary to entail the guard of at least one of the applicable clauses. This synchronization is subject to deadlocks: if all goals are blocked, no new constraints will be added and therefore no goal will ever be unblocked. A third effect of the difference between concurrent and non-concurrent logic programming is in the way a goal is equated to the head of a fresh variant of a clause. Operationally, this is done by checking whether the variables in the head can be equated to terms in such a way that the head is equal to the goal. This rule differs from the corresponding rule for constraint logic programming in that it only allows adding constraints in the form variable=term, where the variable is one of the variables of the head. This limitation can be seen as a form of directionality, in that the goal and the clause head are treated differently. 
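As an illustrative sketch of these ideas (a generic, textbook-style example, not drawn from any specific system), a committed-choice process computing the maximum of two values can be written in the guarded-clause syntax H :- G | B introduced above:

max(X, Y, Z) :- X >= Y | Z = X.
max(X, Y, Z) :- Y >= X | Z = Y.

A process evaluating the goal max(A, B, C) blocks until the constraint store entails one of the guards, for example after other processes add constraints such as A = 3 and B = 1. If both guards happen to be entailed (when A = B), either clause may be chosen, and by committed choice the alternative is never reconsidered; in either case the same constraint on C results.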
Precisely, the rule telling whether a fresh variant H :- G | B of a clause can be used to rewrite a goal A is as follows. First, it is checked whether A and H have the same predicate. Second, it is checked whether there exists a way of equating A with H given the current constraint store; contrary to regular logic programming, this is done under one-sided unification, which only allows a variable of the head to be equated to a term. Third, the guard is checked for entailment from the constraint store and the equations generated in the second step; the guard may contain variables that are not mentioned in the clause head: these variables are interpreted existentially. This method for deciding the applicability of a fresh variant of a clause for replacing a goal can be compactly expressed as follows: the current constraint store entails that there exists an evaluation of the variables of the head and the guard such that the head is equal to the goal and the guard is entailed. In practice, entailment may be checked with an incomplete method. An extension to the syntax and semantics of concurrent logic programming is the atomic tell. When the interpreter uses a clause, its guard is added to the constraint store. However, the constraints of the body are also added. Due to commitment to this clause, the interpreter does not backtrack if the constraints of the body are inconsistent with the store. This condition can be avoided by the use of atomic tell, a variant in which the clause contains a sort of "second guard" that is only checked for consistency. Such a clause is written H :- G:D | B. This clause is used to rewrite a literal only if G is entailed by the constraint store and D is consistent with it. In this case, both G and D are added to the constraint store. History The study of concurrent constraint logic programming started at the end of the 1980s, when some of the principles of concurrent logic programming were integrated into constraint logic programming by Michael J. Maher. The theoretical properties of concurrent constraint logic programming were later studied by various authors, including Martin Rinard and Vijay A. Saraswat. See also Curry, a logic functional programming language, which allows programming concurrent systems. ToonTalk Janus Alice References Bibliography Constraint logic programming Programming paradigms Concurrent computing Logic programming
Concurrent constraint logic programming
Technology
1,787
39,940,749
https://en.wikipedia.org/wiki/NGC%203185
NGC 3185 is a spiral galaxy located 20.4 Mpc away in the Leo constellation. NGC 3185 is a member of a four-galaxy group called HCG 44. It is also a member of the NGC 3190 Group of galaxies, which is a member of the Leo II Groups, a series of galaxies and galaxy clusters strung out from the right edge of the Virgo Supercluster. References External links Barred spiral galaxies 3190 05554 030059 +04-24-024 10148+2156 Leo (constellation) 185001??
NGC 3185
Astronomy
118
31,237,981
https://en.wikipedia.org/wiki/MESAdb
mESAdb is a database for the analysis of microRNA sequences and expression. See also MiRTarBase microRNA References External links http://konulab.fen.bilkent.edu.tr/mirna/ Biological databases RNA MicroRNA
MESAdb
Biology
54
59,190,719
https://en.wikipedia.org/wiki/Dorothy%20Hill%20Medal
The Dorothy Hill Medal is awarded annually and honours the contributions of Dorothy Hill to Australian Earth science and her work in opening up tertiary science education to women. The award supports Earth science research carried out mainly in Australia by female researchers up to 10 years after the award of their doctorate. Prior to 2018 the award was known as the Dorothy Hill Award. Recipients Source: Australian Academy of Science See also List of earth sciences awards References Earth sciences awards Australian Academy of Science Awards Australian science and technology awards Awards established in 2002 Science awards honoring women
Dorothy Hill Medal
Technology
106
15,931,374
https://en.wikipedia.org/wiki/Low-flush%20toilet
A low-flush toilet (or low-flow toilet or high-efficiency toilet) is a flush toilet that uses significantly less water than traditional high-flow toilets. Before the early 1990s in the United States, standard flush toilets typically required at least 3.5 gallons (13.2 litres) per flush, and they used float valves that often leaked, increasing their total water use. In the early 1990s, because of concerns about water shortages and because of improvements in toilet technology, some states and then the federal government began to develop water-efficiency standards for appliances, including toilets, mandating that new toilets use less water. The first standards required low-flow toilets to use no more than 1.6 gallons (6.0 litres) per flush. Further improvements in the technology, made to overcome concerns about the poor performance of early models, have further cut the water use of toilets; while the federal standard remains 1.6 gallons per flush, some states have tightened their standards to require that new toilets use no more than 1.28 gallons (4.8 litres) per flush while performing far better than older models. Low-flush toilets include single-flush models and dual-flush toilets, which typically use 1.6 US gallons per flush for the full flush and 1.28 US gallons or less for a reduced flush. Water savings The US Environmental Protection Agency's WaterSense program provides certification that toilets meet the goal of using less than 1.6 US gallons per flush. Units that meet or exceed this standard can carry the WaterSense sticker. The EPA estimates that the average US home will save US$90 per year, and $2,000 over the lifetime of the toilets. Dry toilets can lead to even greater water savings in private homes as they use no water for flushing. Problems The early low-flush toilets in the US often had a poor design that required more than one flush to rid the bowl of solid waste, resulting in limited water savings. In response, US Congressman Joe Knollenberg from Michigan tried to get Congress to repeal the law but was unsuccessful, and the industry worked to redesign and improve toilet functioning. Reductions in sewer flows have caused slight backups or required redesign of wastewater pipes, but overall, very substantial residential water savings have resulted from the change over time to more efficient toilets. History In 1988 Massachusetts became the first state in the US to mandate the use of low-flush toilets in new construction and remodeling. In 1992 US President George H. W. Bush signed the Energy Policy Act. This law made 1.6 gallons per flush a mandatory federal maximum for new toilets. This law went into effect on January 1, 1994, for residential buildings and January 1, 1997, for commercial buildings. The first generation of low-flush toilets were simple modifications of traditional toilets. A valve would open and the water would passively flow into the bowl. The resulting water pressure was often inadequate to carry away waste. Improvements in design now make modern models not only more water-efficient but more effective than old models. In addition to tank-type toilets that "pull" waste down, there are also now pressure-assist models, which use water pressure to effectively "push" waste. See also Low-flow fixtures Dual flush toilet Sewer dosing unit Waterless urinal Residential water use in the U.S. and Canada References Toilets Toilet types Water conservation Water conservation tools Sustainable products Bathrooms
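The scale of the savings discussed above can be illustrated with a back-of-the-envelope calculation (the household size and flush frequency here are illustrative assumptions, not sourced figures). For a household of three people averaging five flushes per person per day, replacing a 3.5-gallon toilet with a 1.28-gallon model saves roughly

(3.5 - 1.28)\ \text{gal/flush} \times 15\ \text{flushes/day} \times 365\ \text{days/year} \approx 12{,}000\ \text{gal/year}

a figure whose order of magnitude is consistent with the EPA's estimate of about US$90 per year in water savings.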
Low-flush toilet
Biology
689
50,402,709
https://en.wikipedia.org/wiki/La%20Compagnie%20des%20Lampes
La Compagnie des Lampes ("The Lamp Company") was a name used by several French companies, all in the area of electrical products, particularly lighting. La Compagnie des Lampes (1888) The original Compagnie des Lampes was set up at Ivry-sur-Seine in 1888. The plant was subsequently attached to the CGE (Compagnie Générale d'Electricité) on its acquisition in 1898. The plant is classified as a historical monument. In 1915, the plant was the second to start manufacturing TM triodes ("Télégraphie Militaire") in France, under their Métal brand (the first was E.C.&A. Grammont (Lyon), under their Radio Fotos brand). Later they made tubes for domestic AC transformer heating, such as the BW604 and BW1010, under their Métal-Secteur brand. La Compagnie des Lampes (1911) Founded in 1911 by Paul Blavier, La Compagnie des Lampes was a light bulb factory workshop, located in Saint-Pierre-Montlimart, near Cholet. The company changed its name in 1918 to become Manufacture de lampes à incandescence, la Française. It was associated with the Thomson group in the 1950s. La Compagnie des Lampes (1921) In 1921 CFTH (Compagnie Française Thomson-Houston) and CGE (Compagnie Générale d'Electricité) jointly created a new Compagnie des Lampes. It later became a major player in the field of lighting in France, notably through its brand MAZDA. Between 1924 and 1939, it was part of the Phoebus cartel, an oligopoly that dominated the market for light bulbs while putting in place an agreement on the principle of planned obsolescence for their products. Besides light bulbs (and like British Ediswan), CdL (1921) also made vacuum tubes under the Mazda brand, for example the 6H8G (1947), 3T100A1 (1949) and E1 (1950); from 1953, as LAMPE MAZDA: the 2G21 (1953), 927 (1954), EL183 (1959) and EF816 (1962). Many of their tubes were also available from Compagnie Industrielle Française des Tubes Electroniques (CIFTE) under their Mazda-Belvu brand (originating from Societé Radio Belvu, which sold Grammont's Fotos tubes). References Electrical engineering companies of France Lighting brands Vacuum tubes
La Compagnie des Lampes
Physics
535
38,448,973
https://en.wikipedia.org/wiki/Tadahiko%20Mizuno
is a Japanese nuclear-chemist known for his work on cold fusion. He was formerly an assistant professor teaching in the Atomic Power Environmental Materials program at Hokkaido University. He was also a member of the Energy Environmental Institute of Engineering at Hokkaido University until 2009. Early life Mizuno graduated from the Department of Applied Physics, Hokkaido University, Faculty of Engineering in March 1968. In March 1970, he graduated with a master's degree from the Department of Applied Physics, Hokkaido University, Faculty of Engineering. In April 1972 he completed the doctoral course in Engineering at Hokkaido University, Faculty of Engineering. In March 1976, he received his doctorate in Engineering for "Study on formation process of hydride on the surface of Ti by d, n reaction". He taught atomic engineering, corrosion, X-ray analysis and electron microscopy, with exercises in mathematics and physical engineering. Awards He was awarded the Giuliano Preparata Medal in 2004 by The International Society for Condensed Matter Nuclear Science (ISCMNS). The ISCMNS is the organizer of a conference and a workshop on cold fusion and related topics. Publications "Sorption of Hydrogen On and In Hydrogen Absorbing Metal in Electrochemical Environments" (Plenum Press) "An understanding of the environment, Global environment and the human life" (Sankyo Publishing, 2006) "Low Energy Nuclear Reactions Sourcebook" (American Chemical Society, 2008) "Nuclear Transmutation: The Reality of Cold Fusion" (Infinite Energy Press, 1998) Academic societies The International Society for Condensed Matter Nuclear Science International Hydrogen Energy Society International Institute of Aeronautics and Astronautics Atomic Energy Society of Japan Japan Society of Applied Physics Japan Cold Fusion Research Society Research activities Electrochemistry, metallurgy, nuclear reactions in condensed matter, elucidation of the peculiar behavior of hydrogen in metals, hydrogen penetration in metals, hydrogen embrittlement, hydrogen production, hydrogen separation and purification, power conversion of hydrogen, elucidation of hydrogen behavior, development of unique methods using hydrogen isotopes, and study of the behavior of hydrogen on metals. Mizuno has also written numerous books on the interaction between hydrogen and metals. Extramural activities Mizuno was involved in anti-terrorism measures as part of international safety measures for Hakodate Customs of the Ministry of Finance. See also International Conference on Condensed Matter Nuclear Science References External links Atomic Energy Society of Japan Japan Society of Applied Physics Living people 1945 births 21st-century Japanese chemists Cold fusion Hokkaido University alumni
Tadahiko Mizuno
Physics,Chemistry
513
42,543,067
https://en.wikipedia.org/wiki/New%20Mexico%20Exoplanet%20Spectroscopic%20Survey%20Instrument
The New Mexico Exoplanet Spectroscopic Survey Instrument (NESSI) is a ground-based near-infrared spectrographic system specifically designed to study the atmospheres of exoplanets. The NESSI instrument was mounted in 2014 on a 2.4 meter telescope at the Magdalena Ridge Observatory in Socorro County, New Mexico, USA, achieving first light on 7 April 2014. Overview NESSI, a $3.5 million instrument, is the first purpose-built device for the analysis of exoplanet atmospheres, and is expected to have a powerful impact on the field of exoplanet characterization. The Principal Investigator is Michelle Creech-Eakman at the New Mexico Institute of Mining and Technology, working with seven co-investigators from New Mexico Tech, Magdalena Ridge Observatory, and NASA JPL. It is partly funded by NASA's Experimental Program to Stimulate Competitive Research, in partnership with the New Mexico Institute of Mining and Technology. The NESSI spectroscope was mounted on the institute's 2.4 meter telescope at the Magdalena Ridge Observatory in Socorro County, New Mexico, USA, and its first exoplanet observations began on April 7, 2014. In 2016 a contract was established with JPL to retrofit NESSI with new foreoptics and a mounting collar for use on the Hale Telescope at the Palomar Observatory. NESSI achieved first light on the Hale Telescope in February 2018 and has been undertaking a series of observations to establish its sensitivity and precision for exoplanet spectroscopy. NESSI will capture the spectra of both the star and the planet during the transit and then allow scientists to deduce the composition of the planet's atmosphere. The novel technology is expected to achieve high-definition readings by using algorithms to calibrate and compensate for time-variable telluric features and instrumental variability throughout an observation. Scientific goals NESSI will be able to detect and study a wide range of wavelengths in the near-infrared region of the light spectrum. NESSI will be used to study about 100 exoplanets, ranging from massive 'super-Earths' to gas giants. It uses a technique called transit spectroscopy, in which a planet is observed as it crosses in front of, then behind, its parent star. The observed light is beamed through a spectrometer that breaks it apart, ultimately revealing chemicals that make up the planet's atmosphere. NESSI is expected to devote about 50 nights per year to surveying exoplanets via infrared spectroscopy. See also HARPS spectrograph List of extrasolar planets List of exoplanet search projects References External links NESSI Home page at the New Mexico Institute of Mining and Technology. NESSI Specifications Astronomical surveys Exoplanet search projects
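The principle behind transit spectroscopy can be summarized with a standard relation that applies to the technique generally (it is not specific to NESSI): when a planet of radius R_p crosses a star of radius R_star, the fractional dimming, or transit depth, is approximately

\delta \approx \left( \frac{R_p}{R_\star} \right)^2

At wavelengths where the planet's atmosphere absorbs strongly, the planet presents a slightly larger effective radius, so the transit is slightly deeper; measuring how the depth varies with wavelength is what reveals the chemicals in the atmosphere.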
New Mexico Exoplanet Spectroscopic Survey Instrument
Astronomy
559
7,585,091
https://en.wikipedia.org/wiki/Squawk%20virtual%20machine
Squawk is a Java micro edition virtual machine for embedded systems and small devices. Most virtual machines for the Java platform are written in low-level native languages such as C/C++ and assembler; what makes Squawk different is that Squawk's core is mostly written in Java (this is called a meta-circular interpreter). A Java implementation provides ease of portability, and integration of virtual machine and application resources such as objects, threads, and operating-system interfaces. The Squawk virtual machine's design goals can be summarized as: Write as much of the VM in Java as possible Target small, resource-constrained devices Enable Java for micro-embedded development The research project was inspired by Squeak. Squawk has a Java ME heritage and features a small memory footprint. It was developed to be simple with minimal external dependencies. Its simplicity made it portable and easy to debug and maintain. Squawk also provides an isolate mechanism by which an application is represented as an object. In Squawk, one or more applications can run in a single JVM. Conceptually, each application is completely isolated from all other applications. See also Sun SPOT Jikes RVM, another JVM written mostly in Java Rubinius, a VM for Ruby written in Ruby MicroEJ VEE, another JVM written mostly in (an extended version of) Java List of Java virtual machines References External links SunSPOTs and Squawk technology Podcast A Java Virtual Machine Architecture for Very Small Devices The Squawk Virtual Machine: Java(TM) on the Bare Metal Javaone 2006 Squawk for Wireless Sensor Networks Application-Driven Customization of an Embedded Java Virtual Machine Ahead of time deployment in ROM of a Java-OS Project Sun Spot Squawk Poster Discontinued Java virtual machines Sun Microsystems software Free software programmed in C Free software programmed in Java (programming language) Software using the GNU General Public License
Squawk virtual machine
Technology
394
61,479,470
https://en.wikipedia.org/wiki/NGC%204305
NGC 4305 is a dwarf spiral galaxy located about 100 million light-years away in the constellation Virgo. The galaxy was discovered by astronomer John Herschel on May 2, 1829. Although considered to be a member of the Virgo Cluster, its high radial velocity and blue luminosity suggest it is in fact a background galaxy. The galaxy has a nearby major companion, NGC 4306. NGC 4305 exhibits well-defined, smooth spiral arms which terminate well outside its central bulge. This spiral structure appears to have been induced by a tidal interaction with NGC 4306. Such a tidal interaction would also explain its deficiency in neutral hydrogen gas (HI). References External links 4305 040030 Virgo (constellation) Astronomical objects discovered in 1829 Dwarf spiral galaxies 07432 Virgo Supercluster Interacting galaxies
NGC 4305
Astronomy
166
36,479,097
https://en.wikipedia.org/wiki/Micro-compounding
Micro-compounding is the mixing or processing of polymer formulations in the melt on a small scale, typically milliliters. It is popular for research and development because it gives faster, more reliable results with smaller samples and less cost. Its applications include pharmaceutical, biomedical, and nutritional areas. Design Micro-compounding is typically performed with a tabletop twin screw micro-compounder, or micro-extruder, with a working volume of 5 or 15 milliliters. With such small volumes, it is difficult to achieve sufficient mixing in a continuous extruder. Therefore, micro-compounders typically have a batch mode (recirculation) and a conical shape. The L/D (length-to-diameter) ratio of a continuous twin screw extruder is mimicked in a batch micro-compounder by the recirculation mixing time, which is controlled by a manual valve. With this valve, the recirculation can be interrupted to unload the formulation either as a strand or into an injection moulder, a film device or a fiber line. Typical recirculation times are one to three minutes, depending on the ease of dispersive and distributive mixing of the formulation. Benefits Micro-compounding can now produce films, fibers, and test samples (rods, rings, tablets) from mixtures as small as 5 ml in less than ten minutes. The small footprint requires less lab space than a parallel twin screw extruder. Micro-extruders have also been used, for example, to test whether drug delivery formulations enable improved bioavailability of poorly soluble drugs or the sustained release of active ingredients. References Polymer chemistry Chemical processes
Micro-compounding
Chemistry,Materials_science,Engineering
344
17,808,584
https://en.wikipedia.org/wiki/Rs6313
In genetics, rs6313, also called T102C or C102T, is a gene variation—a single nucleotide polymorphism (SNP)—in the human HTR2A gene that codes for the 5-HT2A receptor. The SNP is a synonymous substitution located in exon 1 of the gene, where it is involved in coding the 34th amino acid as serine. As 5-HT2A is a neuroreceptor, the SNP has been investigated in connection with brain functions and neuropsychiatric disorders, and it is perhaps the most investigated SNP for its gene. Two other SNPs in HTR2A have also received much attention: rs6311 and His452Tyr (rs6314). The T102C polymorphism has been shown to be in complete linkage disequilibrium with rs6311 (A-1438G). A less well investigated SNP of this gene is rs7997012. Meta-analyses seem to indicate that the SNP is directly associated with schizophrenia and with Alzheimer's disease, while two initial studies seem to indicate that it is not associated with Parkinson's disease. There have been multiple studies of the effect of this SNP on clozapine treatment response in schizophrenia. A meta-analysis published in 1998 found an association. Individual studies Many individual studies have been done to investigate possible effects of the rs6313 polymorphism on phenotypes such as personality traits or disorders and their endophenotypes. The C-allele has been associated with higher extraversion personality scores among borderline personality disorder patients and with the presence of visual and auditory hallucinations in patients with late-onset Alzheimer's disease. Multiple studies have found that individuals with schizophrenia who are homozygous for the C-allele tend to do worse on working memory tasks than do individuals with a T-allele. Rs6313 has also been shown to be associated with novelty seeking among Italian mood disorder patients and with reward dependence in a German population. The SNP may also be associated with rheumatoid arthritis. One study found no association between the SNP and suicidal behavior in a Chinese population, and another found no association with fibromyalgia syndrome. References SNPs on chromosome 13
Rs6313
Biology
487
1,049,599
https://en.wikipedia.org/wiki/Conventional%20landing%20gear
Conventional landing gear, or tailwheel-type landing gear, is an aircraft undercarriage consisting of two main wheels forward of the center of gravity and a small wheel or skid to support the tail. The term taildragger is also used. The term "conventional" persists for historical reasons, but all modern jet aircraft and most modern propeller aircraft use tricycle gear. History In early aircraft, a tailskid made of metal or wood was used to support the tail on the ground. In most modern aircraft with conventional landing gear, a small articulated wheel assembly is attached to the rearmost part of the airframe in place of the skid. This wheel may be steered by the pilot through a connection to the rudder pedals, allowing the rudder and tailwheel to move together. Before aircraft commonly used tailwheels, many aircraft (like a number of First World War Sopwith aircraft, such as the Camel fighter) were equipped with steerable tailskids, which operate similar to a tailwheel. When the pilot pressed the right rudder pedal—or the right footrest of a "rudder bar" in World War I—the skid pivoted to the right, creating more drag on that side of the plane and causing it to turn to the right. While less effective than a steerable wheel, it gave the pilot some control of the direction the craft was moving while taxiing or beginning the takeoff run, before there was enough airflow over the rudder for it to become effective. Another form of control, which is less common now than it once was, is to steer using "differential braking", in which the tailwheel is a simple, freely castering mechanism, and the aircraft is steered by applying brakes to one of the mainwheels in order to turn in that direction. This is also used on some tricycle gear aircraft, with the nosewheel being the freely castering wheel instead. Like the steerable tailwheel/skid, it is usually integrated with the rudder pedals on the craft to allow an easy transition between wheeled and aerodynamic control. Advantages The tailwheel configuration offers several advantages over the tricycle landing gear arrangement, which make tailwheel aircraft less expensive to manufacture and maintain. Due to its position much further from the center of gravity, a tailwheel supports a smaller part of the aircraft's weight allowing it to be made much smaller and lighter than a nosewheel. As a result, the smaller wheel weighs less and causes less parasitic drag. Because of the way airframe loads are distributed while operating on rough ground, tailwheel aircraft are better able to sustain this type of use over a long period of time, without cumulative airframe damage occurring. If a tailwheel fails on landing, the damage to the aircraft will be minimal. This is not the case in the event of a nosewheel failure, which usually results in a prop strike. Due to the increased propeller clearance on tailwheel aircraft, less stone chip damage will result from operating a conventionally geared aircraft on rough or gravel airstrips, making them well suited to bush flying. Tailwheel aircraft are more suitable for operation on skis. Tailwheel aircraft are easier to fit into and maneuver inside some hangars. Disadvantages The conventional landing gear arrangement has disadvantages compared to nosewheel aircraft. Tailwheel aircraft are more subject to "nose-over" accidents due to incorrect application of brakes by the pilot. Conventional geared aircraft are much more susceptible to ground looping. 
A ground loop occurs when directional control is lost on the ground and the tail of the aircraft passes the nose, swapping ends, in some cases completing a full circle. This event can result in damage to the aircraft's undercarriage, tires, wingtips, propeller and engine. Ground-looping occurs because whereas a nosewheel aircraft is steered from ahead of the center of gravity, a taildragger is steered from behind (much like driving a car backwards at high speed), so that on the ground a taildragger is inherently unstable, whereas a nosewheel aircraft will self-center if it swerves on landing. In addition, some tailwheel aircraft must transition from using the rudder to steer to using the tailwheel while passing through a speed range when neither is wholly effective due to the nose high angle of the aircraft and lack of airflow over the rudder. Avoiding ground loops requires more pilot training and skill. Tailwheel aircraft generally suffer from poorer forward visibility on the ground, compared to nose wheel aircraft. Often this requires continuous "S" turns on the ground to allow the pilot to see where they are taxiing. Tailwheel aircraft are more difficult to taxi during high wind conditions, due to the higher angle of attack on the wings which can then develop more lift on one side, making control difficult or impossible. They also suffer from lower crosswind capability and in some wind conditions may be unable to use crosswind runways or single-runway airports. Due to the nose-high attitude on the ground, propeller-powered taildraggers are more adversely affected by P-factor – asymmetrical thrust caused by the propeller's disk being angled to the direction of travel, which causes the blades to produce more lift when going down than when going up due to the difference in angle the blade experiences when passing through the air. The aircraft will then pull to the side of the upward blade. Some aircraft lack sufficient rudder authority in some flight regimes (particularly at higher power settings on takeoff) and the pilot must compensate before the aircraft starts to yaw. Some aircraft, particularly older, higher powered aircraft such as the P-51 Mustang, cannot use full power on takeoff and still safely control their direction of travel. On landing this is less of a factor, however opening the throttle to abort a landing can induce severe uncontrollable yaw unless the pilot is prepared for it. Jet-powered tailwheel aircraft Jet aircraft generally cannot use conventional landing gear, as this orients the engines at a high angle, causing their jet blast to bounce off the ground and back into the air, preventing the elevators from functioning properly. This problem occurred with the third, or "V3" prototype of the German Messerschmitt Me 262 jet fighter. After the first four prototype Me 262 V-series airframes were built with retracting tailwheel gear, the fifth prototype was fitted with fixed tricycle landing gear for trials, with the sixth prototype onwards getting fully retracting tricycle gear. A number of other experimental and prototype jet aircraft had conventional landing gear, including the first successful jet, the Heinkel He 178, the Ball-Bartoe Jetwing research aircraft, and a single Vickers VC.1 Viking, which was modified with Rolls-Royce Nene engines to become the world's first jet airliner. Rare examples of jet-powered tailwheel aircraft that went into production and saw service include the British Supermarine Attacker naval fighter and the Soviet Yakovlev Yak-15. 
Both first flew in 1946 and owed their configurations to being developments of earlier propeller-powered aircraft. The Attacker's tailwheel configuration was a result of it using the Supermarine Spiteful's wing, avoiding expensive design modification or retooling. The engine exhaust was behind the elevator and tailwheel, reducing problems. The Yak-15 was based on the Yakovlev Yak-3 propeller fighter. Its engine was mounted under the forward fuselage. Despite its unusual configuration, the Yak-15 was easy to fly. Although a fighter, it was mainly used as a trainer aircraft to prepare Soviet pilots for flying more advanced jet fighters. Monowheel undercarriage A variation of the taildragger layout is the monowheel landing gear. To minimize drag, many modern gliders have a single wheel, retractable or fixed, centered under the fuselage, which is referred to as monowheel gear or monowheel landing gear. Monowheel gear is also used on some powered aircraft, where drag reduction is a priority, such as the Europa XS. Powered monowheel aircraft use retractable wingtip legs (with small castor wheels attached) to prevent the wingtips from striking the ground. A monowheel aircraft may have a tailwheel (like the Europa) or a nosewheel (like the Schleicher ASK 23 glider). Training Taildragger aircraft require more training time for student pilots to master. This was a large factor in the 1950s switch by most manufacturers to nosewheel-equipped trainers, and for many years nosewheel aircraft have been more popular than taildraggers. As a result, most Private Pilot Licence (PPL) pilots now learn to fly in tricycle gear aircraft (e.g. Cessna 172 or Piper Cherokee) and only later transition to taildraggers. Techniques Landing a conventionally geared aircraft can be accomplished in two ways. Normal landings are done by touching all three wheels down at the same time in a three-point landing. This method does allow the shortest landing distance but can be difficult to carry out in crosswinds, as rudder control may be reduced severely before the tailwheel can become effective. The alternative is the wheel landing. This requires the pilot to land the aircraft on the mainwheels while maintaining the tailwheel in the air with elevator to keep the angle of attack low. Once the aircraft has slowed to a speed at which control will not be lost, but which is still above the speed at which rudder effectiveness is lost, the tailwheel is lowered to the ground. 
Conventional landing gear
Engineering
2,100
66,600,395
https://en.wikipedia.org/wiki/Robert%20Carpick
Robert William Carpick is a Canadian mechanical engineer. He is currently director of diversity, equity, and inclusion and John Henry Towne Professor in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. He is best known for his work in tribology, particularly nanotribology. Education Carpick received his bachelor's degree in physics from the University of Toronto in 1991, and his master's degree and Doctor of Philosophy in physics from the University of California, Berkeley, in 1997. His thesis was entitled "The Study of Contact, Adhesion and Friction at the Atomic Scale by Atomic Force Microscopy". His PhD supervisor was Miquel Salmeron, who pioneered the use of atomic force microscopy (AFM) in tribology. During his PhD, Carpick devised a method to obtain reproducible and quantitative friction measurements using AFM. Research career After his PhD, he spent two years as a postdoctoral appointee at Sandia National Laboratory in the Surface and Interface Science Department, and then the Biomolecular Materials and Interfaces Department, where he worked under the supervision of Dr Alan R. Burns. In 2000, he joined the faculty at the University of Wisconsin-Madison in the Engineering Physics Department. Carpick moved to the University of Pennsylvania in January 2007. He has made a number of important discoveries in the field of nanotribology using AFM. These include that the friction of lamellar 2D materials (e.g. graphene, molybdenum disulfide, niobium diselenide, and hexagonal boron nitride) increases as the number of layers decreases. He has shown that frictional ageing of the contacts between rock surfaces arises from the formation of interfacial chemical bonds. He found that the wear of AFM tips cannot be adequately described by macroscale models and instead is driven by nanoscale mechanochemical processes. His group has also given important insights into the mechanochemical tribofilm formation of the lubricant antiwear additive zinc dialkyldithiophosphate (ZDDP). According to Google Scholar, as of 2021, his work had been cited on over 16,000 occasions. Honours and awards Carpick was named a Fellow of the American Physical Society in 2012, a Fellow of the American Vacuum Society in 2014, a Fellow of the Society of Tribologists and Lubrication Engineers in 2016, a Fellow of the Materials Research Society in 2017, and a Fellow of the American Society of Mechanical Engineers (ASME) in 2019. He received a National Science Foundation CAREER Award in 2001, and was named Outstanding New Mechanics Educator by the American Society for Engineering Education in 2003. In 2009, he was awarded the ASME Burt L. Newkirk Award. Personal life Carpick has been married to his partner since 2003. He is also a fan and practitioner of curling and the organ. References Tribologists University of Pennsylvania faculty Year of birth missing (living people) Living people Canadian LGBTQ scientists Canadian LGBTQ academics 21st-century Canadian LGBTQ people Fellows of the American Physical Society
Robert Carpick
Materials_science
638
40,314,187
https://en.wikipedia.org/wiki/Carlson%20curve
The Carlson curve is a term describing the rate of DNA sequencing, or the cost per sequenced base, as a function of time. It is the biotechnological equivalent of Moore's law. Carlson predicted that the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law.

History
The term was coined by The Economist and is named after author Rob Carlson. Carlson curves illustrate the rapid (in some cases above-exponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis and a range of physical and computational tools used in protein production and in determining protein structures.

Next generation sequencing
Moore's law began to be profoundly outpaced in January 2008, when large sequencing centers transitioned from Sanger sequencing to newer DNA sequencing technologies:
454 sequencing, with average read length of 300-400 bases (10-fold)
Illumina and SOLiD sequencing, with average read length of 50-100 bases (30-fold)

References
DNA sequencing
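The doubling-time claim above is easy to turn into a simple projection. The following is a minimal Python sketch (not part of the article); the starting cost and the halving time are illustrative assumptions, not historical data.

# Sketch of a Carlson-curve-style projection: cost per sequenced base
# falling exponentially with a fixed doubling time for performance,
# i.e. a fixed halving time for cost. All inputs below are placeholders.

def projected_cost_per_base(initial_cost: float, halving_time_years: float,
                            years_elapsed: float) -> float:
    """Cost per base after `years_elapsed`, halving every `halving_time_years`."""
    return initial_cost * 0.5 ** (years_elapsed / halving_time_years)

if __name__ == "__main__":
    start = 1.0      # arbitrary starting cost (currency units per base)
    halving = 2.0    # hypothetical halving time in years (a Moore-like pace)
    for year in range(0, 11, 2):
        print(f"year {year:2d}: {projected_cost_per_base(start, halving, year):.4f}")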
Carlson curve
Chemistry,Biology
209
72,181,213
https://en.wikipedia.org/wiki/Monterrey%20Foundry
The Monterrey Foundry (in Spanish: Fundidora de Fierro y Acero de Monterrey, S.A.) was a Mexican iron and steel foundry founded in 1900 in the city of Monterrey, becoming the first such foundry in Latin America and, for many years, the most important one in the region. At the end of the 19th century, Vicente Ferrara, aware of the existence of numerous iron and coal deposits in the surroundings of Monterrey, and having obtained experience working in steel foundries in the United States, saw the opportunity to found a similar company in Monterrey. To carry out his vision, he gained the support of an international consortium of entrepreneurs, including Antonio Basagoiti (Spain), Eugene Kelly (US), and Leon Signoret (France). As a capital-intensive industry, the enterprise also required significant investments from some of the wealthiest families of the industrialized north of Mexico at the turn of the twentieth century, including the Milmo, Madero, and Garza-Sada clans. Foreign capitalists, including the Guggenheims, also participated to a more limited extent. The company was successful during the first half of the twentieth century. Many significant engineering projects in Latin America were built with structural steel produced by the Monterrey Foundry. These included the Torre Latinoamericana, the world's first major skyscraper successfully built in a highly active seismic zone. After many years in private hands, the firm was nationalized by the Mexican government in 1977 and remained under public operation until its bankruptcy in May 1986. Today, the old site of the foundry has become Fundidora Park. For 60 years the company was dedicated exclusively to the production of non-flat iron and steel articles, such as railways, wire rods, corrugated rods, structural steel, and train wheels, among others.

References
Foundries
Manufacturing companies based in Monterrey
1986 disestablishments in Mexico
Monterrey Foundry
Chemistry
381
69,046,321
https://en.wikipedia.org/wiki/Deudomperidone
Deudomperidone (developmental code name CIN-102; also known as deuterated domperidone) is a dopamine antagonist medication which is under development in the United States for the treatment of gastroparesis. It acts as a selective dopamine D2 and D3 receptor antagonist and has peripheral selectivity. Deudomperidone is a deuterated form of domperidone, and it is suggested that deudomperidone may have improved efficacy, tolerability, and pharmacokinetics compared to domperidone. As of January 2022, deudomperidone is in phase 2 clinical trials for the treatment of gastroparesis. See also Metopimazine Trazpiroben References External links Deudomperidone - AdisInsight Antiemetics Antigonadotropins Benzimidazoles Chloroarenes D2 antagonists D3 antagonists Deuterated compounds Experimental drugs Motility stimulants Peripherally selective drugs Piperidines Prolactin releasers Ureas
Deudomperidone
Chemistry
230
19,733,067
https://en.wikipedia.org/wiki/Nanodisc
A nanodisc is a synthetic model membrane system which assists in the study of membrane proteins. Nanodiscs are discoidal particles in which a lipid bilayer is surrounded by amphipathic molecules, which may be proteins, peptides, or synthetic polymers. In the classic design, a nanodisc is composed of a lipid bilayer of phospholipids with the hydrophobic edge screened by two amphipathic proteins. These proteins are called membrane scaffolding proteins (MSP) and align in a double-belt formation. Nanodiscs are structurally very similar to discoidal high-density lipoproteins (HDL), and the MSPs are modified versions of apolipoprotein A1 (apoA1), the main constituent in HDL. Nanodiscs are useful in the study of membrane proteins because they can solubilise and stabilise membrane proteins and represent a more native environment than liposomes, detergent micelles, bicelles and amphipols. The art of making nanodiscs has progressed past using only the MSPs and lipids to make particles, leading to alternative strategies like peptide nanodiscs that use simpler proteins and synthetic nanodiscs that do not need any proteins for stabilization.

MSP nanodisc
The original nanodisc, produced in 2002, used apoA1-derived MSPs. The size and stability of these discs depend on the size of these proteins, which can be adjusted by truncation and fusion. In general, MSP1 proteins consist of one repeat, and MSP2s are double-sized.

Peptide nanodisc
In peptide nanodiscs, the lipid bilayer is screened by amphipathic peptides instead of two MSPs. Peptide nanodiscs are structurally similar to MSP nanodiscs and the peptides also align in a double belt. They can stabilise membrane proteins, but have higher polydispersity and are structurally less stable than MSP nanodiscs. Recent studies, however, showed that dimerization and polymerization of the peptides make them more stable.

Synthetic/Native nanodisc
Another way to mimic the native lipid membrane is with synthetic polymers. Styrene-maleic acid copolymers (SMAs), which form particles called SMALPs or Lipodisqs, and diisobutylene-maleic acid (DIBMA), which forms DIBMALPs, are such synthetic polymers. They can solubilize membrane proteins directly from cells or raw extract. They have also been used to study the lipid composition of several organisms. It was discovered that all synthetic polymers that contain a styrene and maleic acid group can solubilize proteins. These SMA nanoparticles have also been tested as possible drug delivery vehicles and for the study of folding, post-translational modifications and lipid interactions of membrane proteins by native mass spectrometry.

References
External links
Nanodisc Technology from the Stephen Sligar laboratory
Assembled nanodiscs for application with cell-free lysates
HDL and Nanodiscs: an overview of nanodisc technology at UIUC
Phospholipid Bilayer Nanodiscs: a summary from the Atkins lab at the University of Washington
Purchase the MSP: the plasmid for the MSP is available from AddGene
SMA native nanodiscs website: international research community website using SMA or other polymers (e.g. DIBMA) as an alternative to conventional detergents and the synthetic lipid environment found in MSP nanodiscs
Membrane biology
Nanodisc
Chemistry
722
40,660,701
https://en.wikipedia.org/wiki/Libdash
libdash is a computer software library which provides an object-oriented interface to the Dynamic Adaptive Streaming over HTTP (DASH) standard. It is also the official reference implementation of the ISO/IEC MPEG-DASH standard, and is maintained by the Austrian company Bitmovin. The libdash source code is open source, published on GitHub, and licensed under the GNU Lesser General Public License 2.1+. The project contains a Qt-based sample multimedia player, based on FFmpeg, which uses libdash for the playback of DASH streams.

References
External links
Audio compression
Video libraries
Free video conversion software
Free codecs
Multimedia frameworks
C++ libraries
Cross-platform free software
Free software programmed in C++
Free computer libraries
Libdash
Engineering
156
67,459,232
https://en.wikipedia.org/wiki/Gowers%27%20theorem
In mathematics, Gowers' theorem, also known as Gowers' Ramsey theorem and Gowers' FINk theorem, is a theorem in Ramsey theory and combinatorics. It is a Ramsey-theoretic result about functions with finite support. Timothy Gowers originally proved the result in 1992, motivated by a problem regarding Banach spaces. The result was subsequently generalised by Bartošová, Kwiatkowska, and Lupini.

Definitions
The presentation and notation are taken from Todorčević, and are different from those originally given by Gowers. For a function f: N → N, the support of f is defined as supp(f) = {n : f(n) ≠ 0}. Given k ≥ 1, let FINk be the set of functions f: N → {0, 1, ..., k} such that supp(f) is finite and f(n) = k for at least one n. If f, g ∈ FINk have disjoint supports, we define f + g to be their pointwise sum, where (f + g)(n) = f(n) + g(n). Each FINk is a partial semigroup under +. The tetris operation T is defined by T(f)(n) = max(f(n) − 1, 0). Intuitively, if f is represented as a pile of square blocks, where the nth column has height f(n), then T(f) is the result of removing the bottom row. The name is in analogy with the video game. T^j denotes the jth iterate of T. A block sequence in FINk is a sequence (f1, f2, ...) such that every element of supp(fi) is less than every element of supp(fi+1), for every i.

The theorem
Note that, for a block sequence (fi), numbers j1, ..., jm and indices n1 < ... < nm, the sum T^j1(fn1) + ... + T^jm(fnm) is always defined, since the summands have pairwise disjoint supports. Gowers' original theorem states that, for any finite colouring of FINk, there is a block sequence (fi) such that all elements of the form T^j1(fn1) + ... + T^jm(fnm), with at least one ji equal to 0, have the same colour. The standard proof uses ultrafilters, or equivalently, nonstandard arithmetic.

Generalisation
Intuitively, the tetris operation can be seen as removing the bottom row of a pile of boxes. It is natural to ask what would happen if we tried removing different rows. Bartošová and Kwiatkowska considered the wider class of generalised tetris operations, where we can remove any chosen subset of the rows. Formally, let F: N → N be a nondecreasing surjection. The induced tetris operation T_F is given by composition with F, i.e. T_F(f)(n) = F(f(n)). The generalised tetris operations are the collection of T_F for all nondecreasing surjections F. In this language, the original tetris operation is induced by the map F(i) = max(i − 1, 0). Bartošová and Kwiatkowska showed that the finite version of Gowers' theorem holds for the collection of generalised tetris operations. Lupini later extended this result to the infinite case.

References
Ramsey theory
Theorems in combinatorics
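The finite combinatorics in the definitions above are easy to experiment with. The following is a minimal Python sketch (not from the source): FINk elements are represented as dictionaries mapping positions to nonzero values, and the function names are illustrative only.

# Sketch of the FIN_k combinatorics from Gowers' theorem: finitely
# supported maps N -> {0,...,k}, the tetris operation, and block sums.
# Elements are dicts {position: value} holding only nonzero values.

def support(f):
    return set(f)

def is_fin_k(f, k):
    # f must take values in {1,...,k} on its support and attain k somewhere
    return all(1 <= v <= k for v in f.values()) and k in f.values()

def tetris(f):
    """T(f)(n) = max(f(n) - 1, 0): remove the bottom row of the pile."""
    return {n: v - 1 for n, v in f.items() if v - 1 > 0}

def iterate_tetris(f, j):
    for _ in range(j):
        f = tetris(f)
    return f

def block_sum(fs):
    """Pointwise sum of maps with pairwise disjoint supports."""
    out = {}
    for f in fs:
        assert not (support(f) & support(out)), "supports must be disjoint"
        out.update(f)
    return out

# Example: two blocks in FIN_2 and the combination T^0(f1) + T^1(f2)
f1 = {0: 1, 1: 2}          # supported on {0,1}, attains 2
f2 = {5: 2, 6: 1}          # supported on {5,6}, after f1
print(is_fin_k(f1, 2), is_fin_k(f2, 2))        # True True
print(block_sum([f1, iterate_tetris(f2, 1)]))  # {0: 1, 1: 2, 5: 1}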
Gowers' theorem
Mathematics
496
25,913,116
https://en.wikipedia.org/wiki/Psilocybe%20rzedowskii
Psilocybe rzedowskii is a species of mushroom in the family Hymenogastraceae. The mushroom contains the psychoactive compound psilocybin. See also List of Psilocybin mushrooms Psilocybin mushrooms Psilocybe References Entheogens Psychoactive fungi rzedowskii Psychedelic tryptamine carriers Fungus species
Psilocybe rzedowskii
Biology
74
74,608,764
https://en.wikipedia.org/wiki/JHU-083
JHU-083 is an experimental drug which acts as a glutaminase inhibitor. It is a prodrug which is cleaved in vivo to the active form 6-diazo-5-oxo-L-norleucine. It has been researched for the treatment of various neurological conditions such as depression, Alzheimer's disease, and cerebral malaria, as well as multiple sclerosis, atherosclerosis, hepatitis, and some forms of cancer in which it was found to target senescent cells. References Enzyme inhibitors Experimental psychiatric drugs Diazo compounds Carboxamides Secondary amines Ethyl esters Ketones Dipeptides
JHU-083
Chemistry
138
237,636
https://en.wikipedia.org/wiki/Conjugated%20system
In theoretical chemistry, a conjugated system is a system of connected p-orbitals with delocalized electrons in a molecule, which in general lowers the overall energy of the molecule and increases stability. It is conventionally represented as having alternating single and multiple bonds. Lone pairs, radicals or carbenium ions may be part of the system, which may be cyclic, acyclic, linear or mixed. The term "conjugated" was coined in 1899 by the German chemist Johannes Thiele. Conjugation is the overlap of one p-orbital with another across an adjacent σ bond (in transition metals, d-orbitals can be involved). A conjugated system has a region of overlapping p-orbitals, bridging the interjacent locations that simple diagrams illustrate as not having a π bond. They allow a delocalization of π electrons across all the adjacent aligned p-orbitals. The π electrons do not belong to a single bond or atom, but rather to a group of atoms. Molecules containing conjugated systems of orbitals and electrons are called conjugated molecules, which have overlapping p orbitals on three or more atoms. Some simple organic conjugated molecules are 1,3-butadiene, benzene, and allylic carbocations. The largest conjugated systems are found in graphene, graphite, conductive polymers and carbon nanotubes. Chemical bonding in conjugated systems Conjugation is possible by means of alternating single and double bonds in which each atom supplies a p orbital perpendicular to the plane of the molecule. However, that is not the only way for conjugation to take place. As long as each contiguous atom in a chain has an available p orbital, the system can be considered conjugated. For example, furan is a five-membered ring with two alternating double bonds flanking an oxygen. The oxygen has two lone pairs, one of which occupies a p orbital perpendicular to the ring on that position, thereby maintaining the conjugation of that five-membered ring by overlap with the perpendicular p orbital on each of the adjacent carbon atoms. The other lone pair remains in plane and does not participate in conjugation. In general, any sp2 or sp-hybridized carbon or heteroatom, including ones bearing an empty orbital or lone pair orbital, can participate in conjugated systems. However lone pairs do not always participate in a conjugated system. For example, in pyridine, the nitrogen atom already participates in the conjugated system through a formal double bond with an adjacent carbon, so the lone pair remains in the plane of the ring in an sp2 hybrid orbital and does not participate in the conjugation. A requirement for conjugation is orbital overlap. Thus, the conjugated system must be planar (or nearly so). As a consequence, lone pairs which do participate in conjugated systems will occupy orbitals of pure p character instead of spn hybrid orbitals typical for nonconjugated lone pairs. A common model for the treatment of conjugated molecules is a composite valence bond / Hückel molecular orbital theory (VB/HMOT) treatment, in which the σ framework of the molecule is separated from the π system (or systems) of the molecule (see the article on the sigma-pi and equivalent-orbital models for this model and an alternative treatment). Although σ bonding can be treated using a delocalized approach as well, it is generally the π bonding that is being considered when delocalized bonding is invoked in the context of simple organic molecules. 
Sigma (σ) framework: The σ framework is described by a strictly localized bonding scheme and consists of σ bonds formed from the interactions between sp3-, sp2-, and sp-hybridized atomic orbitals on the main group elements (and 1s atomic orbitals on hydrogen), together with localized lone pairs derived from filled, nonbonding hybrid orbitals. The interaction that results in σ bonding takes the form of head-to-head overlap of the larger lobe of each hybrid orbital (or the single spherical lobe of a hydrogen 1s orbital). Each atomic orbital contributes one electron when the orbitals overlap pairwise to form two-electron σ bonds, or two electrons when the orbital constitutes a lone pair. These localized orbitals (bonding and non-bonding) are all located in the plane of the molecule, with σ bonds mainly localized between nuclei along the internuclear axis.

Pi (π) system or systems: Orthogonal to the σ framework described above, π bonding occurs above and below the plane of the molecule where σ bonding takes place. The π system(s) of the molecule are formed by the interaction of unhybridized p atomic orbitals on atoms employing sp2- and sp-hybridization. The interaction that results in π bonding takes place between p orbitals that are adjacent by virtue of a σ bond joining the atoms and takes the form of side-to-side overlap of the two equally large lobes that make up each p orbital. Atoms that are sp3-hybridized do not have an unhybridized p orbital available for participation in π bonding, and their presence necessarily terminates a π system or separates two π systems. A basis p orbital that takes part in a π system can contribute one electron (which corresponds to half of a formal "double bond"), two electrons (which corresponds to a delocalized "lone pair"), or zero electrons (which corresponds to a formally "empty" orbital). Bonding for π systems formed from the overlap of more than two p orbitals is handled using the Hückel approach to obtain a zeroth order (qualitative) approximation of the π symmetry molecular orbitals that result from delocalized π bonding.

This simple model for chemical bonding is successful for the description of most normal-valence molecules consisting of only s- and p-block elements, although systems that involve electron-deficient bonding, including nonclassical carbocations, lithium and boron clusters, and hypervalent centers, require significant modifications in which σ bonds are also allowed to delocalize and are perhaps better treated with canonical molecular orbitals that are delocalized over the entire molecule. Likewise, d- and f-block organometallics are also inadequately described by this simple model. Bonds in strained small rings (such as cyclopropane or epoxide) are not well-described by strict σ/π separation, as bonding between atoms in the ring consists of "bent bonds" or "banana bonds" that are bowed outward and are intermediate in nature between σ and π bonds. Nevertheless, organic chemists frequently use the language of this model to rationalize the structure and reactivity of typical organic compounds. Electrons in conjugated π systems are shared by all adjacent sp2- and sp-hybridized atoms that contribute overlapping, parallel p atomic orbitals. As such, the atoms and π-electrons involved behave as one large bonded system. These systems are often referred to as "n-center k-electron π-bonds," compactly denoted by the symbol Π, to emphasize this behavior.
For example, the delocalized π electrons in acetate anion and benzene are said to be involved in three-center four-electron and six-center six-electron Π systems, respectively (see the article on three-center four-electron bonding). Generally speaking, these multi-center bonds correspond to the occupation of several molecular orbitals (MOs) with varying degrees of bonding or non-bonding character (filling of orbitals with antibonding character is uncommon). Each one is occupied by one or two electrons in accordance with the Aufbau principle and Hund's rule. Cartoons showing overlapping p orbitals, like those often drawn for benzene, show the basis p atomic orbitals before they are combined to form molecular orbitals. In compliance with the Pauli exclusion principle, overlapping p orbitals do not result in the formation of one large MO containing more than two electrons. Hückel MO theory is a commonly used approach to obtain a zeroth order picture of delocalized π molecular orbitals, including the mathematical sign of the wavefunction at various parts of the molecule and the locations of nodal planes. It is particularly easy to apply for conjugated hydrocarbons and provides a reasonable approximation as long as the molecule is assumed to be planar with good overlap of p orbitals.

Stabilization energy
The quantitative estimation of stabilization from conjugation is notoriously contentious and depends on the implicit assumptions that are made when comparing reference systems or reactions. The energy of stabilization is known as the resonance energy when formally defined as the difference in energy between the real chemical species and the hypothetical species featuring localized π bonding that corresponds to the most stable resonance form. This energy cannot be measured, and a precise definition accepted by most chemists will probably remain elusive. Nevertheless, some broad statements can be made. In general, stabilization is more significant for cationic systems than neutral ones. For buta-1,3-diene, a crude measure of stabilization is the activation energy for rotation of the C2-C3 bond. This places the resonance stabilization at around 6 kcal/mol. Comparison of heats of hydrogenation of 1,4-pentadiene and 1,3-pentadiene estimates a slightly more modest value of 3.5 kcal/mol. For comparison, allyl cation has a gas-phase rotation barrier of around 38 kcal/mol, a much greater penalty for loss of conjugation. Comparison of hydride ion affinities of propyl cation and allyl cation, corrected for inductive effects, results in a considerably lower estimate of the resonance energy at 20–22 kcal/mol. Nevertheless, it is clear that conjugation stabilizes allyl cation to a much greater extent than buta-1,3-diene. In contrast to the usually minor effect of neutral conjugation, aromatic stabilization can be considerable. Estimates for the resonance energy of benzene range from around 36–73 kcal/mol.

Generalizations and related concepts
There are also other types of interactions that generalize the idea of interacting p orbitals in a conjugated system. The concept of hyperconjugation holds that certain σ bonds can also delocalize into a low-lying unoccupied orbital of a π system or an unoccupied p orbital. Hyperconjugation is commonly invoked to explain the stability of alkyl substituted radicals and carbocations.
Hyperconjugation is less important for species in which all atoms satisfy the octet rule, but a recent computational study supports hyperconjugation as the origin of the increased stability of alkenes with a higher degree of substitution (Zaitsev's rule). Homoconjugation is an overlap of two π-systems separated by a non-conjugating group, such as CH2. Unambiguous examples are comparatively rare in neutral systems, due to a comparatively minor energetic benefit that is easily overridden by a variety of other factors; however, they are common in cationic systems in which a large energetic benefit can be derived from delocalization of positive charge (see the article on homoaromaticity for details). Neutral systems generally require constrained geometries favoring interaction to produce significant degrees of homoconjugation. For example, carbonyl stretching frequencies in IR spectra have been used to demonstrate homoconjugation, or the lack thereof, in neutral ground-state molecules. Due to the partial π character of formally σ bonds in a cyclopropane ring, evidence for transmission of "conjugation" through cyclopropanes has also been obtained. Two appropriately aligned π systems whose ends meet at right angles can engage in spiroconjugation or in homoconjugation across the spiro atom. Vinylogy is the extension of a functional group through a conjugated organic bonding system, which transmits electronic effects.

Conjugated cyclic compounds
Cyclic compounds can be partly or completely conjugated. Annulenes, completely conjugated monocyclic hydrocarbons, may be aromatic, nonaromatic or antiaromatic.

Aromatic compounds
Compounds that have a monocyclic, planar conjugated system containing (4n + 2) π-electrons for whole numbers n are aromatic and exhibit an unusual stability. The classic example benzene has a system of six π electrons, which, together with the planar ring of C–C σ bonds containing 12 electrons and radial C–H σ bonds containing six electrons, forms the thermodynamically and kinetically stable benzene ring, the common core of the benzenoid aromatic compounds. For benzene itself, there are two equivalent conjugated contributing Lewis structures (the so-called Kekulé structures) that predominate. The true electronic structure is therefore a quantum-mechanical combination (resonance hybrid) of these contributors, which results in the experimentally observed C–C bonds which are intermediate between single and double bonds and of equal strength and length. In the molecular orbital picture, the six p atomic orbitals of benzene combine to give six molecular orbitals. Three of these orbitals, which lie at lower energies than the isolated p orbital and are therefore net bonding in character (one molecular orbital is strongly bonding, while the other two are equal in energy but bonding to a lesser extent), are occupied by six electrons, while three destabilized orbitals of overall antibonding character remain unoccupied. The result is strong thermodynamic and kinetic aromatic stabilization. Both models describe rings of π electron density above and below the framework of C–C σ bonds.

Nonaromatic and antiaromatic compounds
Not all compounds with alternating double and single bonds are aromatic. Cyclooctatetraene, for example, possesses alternating single and double bonds. The molecule typically adopts a "tub" conformation.
Because the p orbitals of the molecule do not align themselves well in this non-planar molecule, the π bonds are essentially isolated and not conjugated. The lack of conjugation allows the 8 π electron molecule to avoid antiaromaticity, a destabilizing effect associated with cyclic, conjugated systems containing 4n π (n = 0, 1, 2, ...) electrons. This effect is due to the placement of two electrons into two degenerate nonbonding (or nearly nonbonding) orbitals of the molecule, which, in addition to drastically reducing the thermodynamic stabilization of delocalization, would either force the molecule to take on triplet diradical character, or cause it to undergo Jahn-Teller distortion to relieve the degeneracy. This has the effect of greatly increasing the kinetic reactivity of the molecule. Because of the lack of long-range interactions, cyclooctatetraene takes on a nonplanar conformation and is nonaromatic in character, behaving as a typical alkene. In contrast, derivatives of the cyclooctatetraene dication and dianion have been found to be planar experimentally, in accord with the prediction that they are stabilized aromatic systems with 6 and 10 π electrons, respectively. Because antiaromaticity is a property that molecules try to avoid whenever possible, only a few experimentally observed species are believed to be antiaromatic. Cyclobutadiene and cyclopentadienyl cation are commonly cited as examples of antiaromatic systems.

In pigments
In a conjugated pi-system, electrons are able to capture certain photons as the electrons resonate along a certain distance of p-orbitals, similar to how a radio antenna detects photons along its length. Typically, the more conjugated (longer) the pi-system is, the longer the wavelength of photon that can be captured. Compounds whose molecules contain a sufficient number of conjugated bonds can absorb light in the visible region, and therefore appear colorful to the eye, usually appearing yellow or red. Many dyes make use of conjugated electron systems to absorb visible light, giving rise to strong colors. For example, the long conjugated hydrocarbon chain in beta-carotene leads to its strong orange color. When an electron in the system absorbs a photon of light of the right wavelength, it can be promoted to a higher energy level. A simple model of the energy levels is provided by the quantum-mechanical problem of a one-dimensional particle in a box of length L, representing the movement of a π electron along a long conjugated chain of carbon atoms. In this model the lowest possible absorption energy corresponds to the energy difference between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO). For a chain of n C=C bonds or 2n carbon atoms in the molecular ground state, there are 2n π electrons occupying n molecular orbitals, so that the energy gap is
ΔE = E(n+1) − E(n) = (2n + 1) h^2 / (8 m L^2)
Since the box length L increases approximately linearly with the number of C=C bonds n, this means that the energy ΔE of a photon absorbed in the HOMO–LUMO transition is approximately proportional to 1/n. The photon wavelength λ = hc/ΔE is then approximately proportional to n. Although this model is very approximate, λ does in general increase with n (or L) for similar molecules. For example, the HOMO–LUMO absorption wavelengths for conjugated butadiene, hexatriene and octatetraene are 217 nm, 252 nm and 304 nm respectively.
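The 1/n scaling just described, and the Hückel picture mentioned earlier, are both easy to reproduce numerically. The following Python sketch (not part of the original article) uses an assumed box geometry (a fixed carbon spacing of 0.14 nm plus one bond length of padding), so the absolute wavelengths overshoot the experimental 217/252/304 nm values; only the qualitative increase of λ with n is meaningful here, consistent with the caveat that follows.

# Two quick models for a linear polyene with n C=C bonds (2n pi electrons).
# 1) Particle in a box: Delta E = (2n+1) h^2 / (8 m L^2), lambda = h c / Delta E.
#    The box length below is an assumed geometry, not a fitted value.
# 2) Hueckel method: eigenvalues of the chain's adjacency matrix give
#    orbital energies of the form alpha + x*beta (reduced units, beta = 1).
import numpy as np
from scipy.constants import h, c, m_e

def box_wavelength_nm(n: int, spacing_nm: float = 0.14) -> float:
    L = (2 * n + 2) * spacing_nm * 1e-9         # assumed box length, metres
    dE = (2 * n + 1) * h**2 / (8 * m_e * L**2)  # HOMO -> LUMO gap, joules
    return h * c / dE * 1e9

def hueckel_energies(num_carbons: int) -> np.ndarray:
    H = np.zeros((num_carbons, num_carbons))
    for i in range(num_carbons - 1):
        H[i, i + 1] = H[i + 1, i] = 1.0
    return np.linalg.eigvalsh(H)                # energies as multiples of beta

for n, name in [(2, "butadiene"), (3, "hexatriene"), (4, "octatetraene")]:
    print(f"{name}: box-model lambda ~ {box_wavelength_nm(n):.0f} nm")

# Butadiene Hueckel energies come out near [-1.618, -0.618, 0.618, 1.618];
# since beta < 0, the pair alpha + 1.618*beta and alpha + 0.618*beta is
# bonding and holds the four pi electrons.
print(np.round(hueckel_energies(4), 3))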
However, for good numerical agreement of the particle in a box model with experiment, the single-bond/double-bond bond length alternations of the polyenes must be taken into account. Alternatively, one can use the Hückel method, which is also designed to model the electronic structure of conjugated systems. Many electronic transitions in conjugated π-systems are from a predominantly bonding molecular orbital (MO) to a predominantly antibonding MO (π to π*), but electrons from non-bonding lone pairs can also be promoted to a π-system MO (n to π*), as often happens in charge-transfer complexes. A HOMO to LUMO transition is made by an electron if it is allowed by the selection rules for electromagnetic transitions. Conjugated systems of fewer than eight conjugated double bonds absorb only in the ultraviolet region and are colorless to the human eye. With every double bond added, the system absorbs photons of longer wavelength (and lower energy), and the compound ranges from yellow to red in color. Compounds that are blue or green typically do not rely on conjugated double bonds alone. This absorption of light in the ultraviolet to visible spectrum can be quantified using ultraviolet–visible spectroscopy, and forms the basis for the entire field of photochemistry. Conjugated systems that are widely used for synthetic pigments and dyes are diazo and azo compounds and phthalocyanine compounds.

Phthalocyanine compounds
Conjugated systems not only have low-energy excitations in the visible spectral region but they also accept or donate electrons easily. Phthalocyanines, such as Phthalocyanine Blue BN and Phthalocyanine Green G, often contain a transition metal ion; the conjugated system exchanges an electron with the complexed transition metal ion, which easily changes its oxidation state. Pigments and dyes like these are charge-transfer complexes.

Porphyrins and similar compounds
Porphyrins have conjugated molecular ring systems (macrocycles) that appear in many enzymes of biological systems. As a ligand, porphyrin forms numerous complexes with metallic ions like iron in hemoglobin that colors blood red. Hemoglobin transports oxygen to the cells of the body. Porphyrin–metal complexes often have strong colors. A similar molecular structural ring unit called chlorin is similarly complexed with magnesium instead of iron when forming part of the most common forms of chlorophyll molecules, giving them a green color. Another similar macrocycle unit is corrin, which complexes with cobalt when forming part of cobalamin molecules, constituting Vitamin B12, which is intensely red. The corrin unit has six conjugated double bonds but is not conjugated all the way around its macrocycle ring.

Chromophores
Conjugated systems form the basis of chromophores, which are light-absorbing parts of a molecule that can cause a compound to be colored. Such chromophores are often present in various organic compounds and sometimes present in polymers that are colored or glow in the dark. Chromophores often consist of a series of conjugated bonds and/or ring systems, commonly aromatic, which can include C–C, C=C, C=O, or N=N bonds.
Conjugated chromophores are found in many organic compounds, including azo dyes (also artificial food additives), compounds in fruits and vegetables (lycopene and anthocyanidins), photoreceptors of the eye, and some pharmaceutical compounds. Conjugated polymer nanoparticles (Pdots) are assembled from hydrophobic fluorescent conjugated polymers, along with amphiphilic polymers to provide water solubility. Pdots are important labels for single-molecule fluorescence microscopy, owing to their high brightness, lack of blinking or dark fraction, and slow photobleaching.

See also
Conjugated microporous polymer
Cross-conjugation
Hyperconjugation
List of conjugated polymers
Metallic bond
Polyene
Resonance
Vinylogy

Notes
References
Physical organic chemistry
Conjugated system
Chemistry
4,496
66,103,223
https://en.wikipedia.org/wiki/Climate%20Action%20Tracker
Climate Action Tracker (CAT) is an independent scientific project with the aim of monitoring government action on reducing greenhouse gas emissions, measured against international agreements – specifically the globally agreed Paris Agreement aim of "holding warming well below 2°C, and pursuing efforts to limit warming to 1.5°C". It tracks climate action in 39 countries and the EU, which together are responsible for over 85% of global emissions. The CAT is the product of two organisations: NewClimate Institute and Climate Analytics. The actions it tracks are:
- The effect of climate policies and action on emissions.
- The impact of pledges, targets and NDCs on national emissions over the time period to 2030, and where possible beyond.
- The comparability of effort against countries' fair share and modelled domestic pathways.

COP26
Toward the end of the COP26 climate conference, CAT produced a report concluding that the current "wave of netzero emission goals [are] not matched by action on the ground" and that the world is likely headed for more than 2.4 °C of warming by the end of the century.

References
External links
Climate Action Tracker website
Scientific organizations established in 2009
Greenhouse gas emissions
Climate Action Tracker
Chemistry
239
20,558,619
https://en.wikipedia.org/wiki/Bathtub%20refinishing
Bathtub refinishing (also known as bathtub reglazing, bathtub resurfacing, or bathtub re-enameling) is a process of restoring the surface of a bathtub to improve its appearance and durability. It involves applying a new coating or finish on the existing bathtub surface, which can be made of materials such as porcelain, fiberglass, acrylic, or enamel. Bathtub refinishing offers several advantages over traditional bathtub replacement. It is a more cost-effective option, as refinishing is generally less expensive than purchasing and installing a new bathtub. Additionally, the process can be completed relatively quickly, often within a day or two, minimizing disruption to the bathroom and daily routine.

Refinishing process
Surface preparation
The refinishing process starts by assessing any existing damage to the bathtub and determining necessary repairs. Bathtub refinishing is only a cosmetic treatment and does not address any underlying issues. It is important to inspect the bathtub thoroughly and assess how severe the cracks and wear are to determine whether the tub should be replaced or refinished. A bathtub with serious rust, corrosion, or structural cracks most likely cannot be repaired properly and should be replaced. However, a bathtub with surface-level web-shaped cracks can be refinished and last many more years. If there is rust on the bathtub, then that part should be removed and treated as necessary; otherwise, the rust can spread. If the bathtub cannot hold water, chips or cracks can be filled with a polyester putty such as Bondo. Since porcelain, enamel, and fiberglass surfaces are nonporous, they do not provide a good substrate for the new coating to attach to. Therefore, the bathtub surface is prepped with an acid etching or wet sanding, which cleans and creates a porous surface that enables mechanical adhesion. Another possible method is to apply an adhesion-promoting bonding agent like silane to the surface before applying the coating. These two methods can be used in unison or independently. The greatest adhesion is generally achieved by using both methods together; however, some newer refinishing processes claim they do not require etching, by relying on silane alone. It is important to properly prepare the bathtub and the surrounding area to ensure no refinishing products are left behind. The chemicals used in the refinishing process can be very harmful to people. Most refinishers protect themselves and their clients by completely masking the area prior to spraying any chemical coatings. They also set up a professional exhaust system rated to work with the type of coating system being applied. By using at least a 1200-cfm (cubic feet per minute) exhaust unit, the refinisher can see better and may limit the overspray and settling on the surface. The refinisher uses a NIOSH-rated fresh-air supplied breathing apparatus, spray suit and gloves to protect themselves from the chemicals. After spraying is complete, the masking is removed, a new caulk line can be installed, and the drain replaced.

Coatings
Coatings used to create a new bathtub finish can be epoxies, urethanes, hybrid polyester-polyurethane, or polymers. Generally, a catalyzed two-component cross-link synthetic white coating is applied, but this coating lacks the durability or abrasive tolerance of the original glass enamel coating of a factory-new bathtub. Coatings may be rolled, brushed, or sprayed on. In most cases, the coating should be between thick when cured to provide the best, long-term results.
This is typically achieved by spraying two coats of primer, followed by three coats of topcoat. A very experienced refinisher may be able to accomplish this with fewer coats depending on conditions. In general, a professionally refinished surface will act as a new tub surface and be very slippery when wet. Therefore, a slip-resistant area can be added to the bottom of the tub during the refinishing process. Alternatively, a semi-permanent mat can be used; however, these may be more difficult to clean and do not have the life expectancy of many coating systems. Rubber mats are almost always discouraged by the manufacturers, and their use may void the warranty.

Cost
The cost of refinishing a bathtub averages $480 (2023), which is much cheaper than buying a brand-new bathtub. The cost varies depending on the bathtub material and damage. A still cheaper option is do-it-yourself (DIY) kits, which contain all of the necessary equipment for the bathtub owner to do the refinishing themselves. Typically, reglazing kits cost about $100. This approach allows the owner to have more control over how and what is done in the process, as well as save money. Despite these benefits, there are some tradeoffs, such as fewer color choices, the time spent working on the bathtub, and the risk of not doing the process correctly. It ultimately comes down to the owner's decision on how much time and money they are willing to spend working on their bathtub.

Hazards
Findings from the Fatality Assessment and Control Evaluation (FACE) program have identified at least 14 worker deaths since 2000 related to the usage of methylene chloride for bathtub refinishing. Products containing high percentages of methylene chloride are used as stripping agents to remove the old bathtub coating. In an unventilated setting, overexposure to methylene chloride vapors can affect brain function and result in death in the short term, with possible carcinogenic effects in the long term. Once a person can smell the methylene chloride, they have already been overexposed to the chemicals. Measures to prevent overexposure to methylene chloride include using stripping agents that rely on other chemicals, implementing adequate local exhaust ventilation, and using appropriate personal protective equipment (such as respirators). Local exhaust ventilation is necessary, as opening nearby windows and using bathroom fans will not provide enough ventilation. Using long-handled tools can also increase workers' distance from the product, with beneficial effects. Professional refinishers will often provide the end-user with after-care instructions and warranty or guaranty information. Pay attention to the cleaning and care recommendations to avoid warranty issues. Cleaning in most cases can be best accomplished using a mild dish-soap degreaser (like Dawn); avoid cleaners with acids or other chemicals specified in the instructions. Fill the tub with enough water to cover the bottom, mix some cleaner in, and agitate; then rub the surface with the sponge. To clean especially dirty tubs, let the cleaner sit on the surface for up to twenty minutes. This allows the cleaner to break down the dirt and oils to be more easily removed. Waxing the surface around the drain, and in other areas that are not stood or sat on, can also assist in keeping the surface like new and improve coating life. Take care not to rub on the caulk line to the point that it separates from the tub or wall surface; this will allow water to become trapped and cause peeling.
Damaged caulk should always be replaced immediately.

See also
Bathroom
Bathtub
Home improvement
Home repair
Hot tub
Jacuzzi
Shower

References
External links
OSHA/NIOSH Hazard Alert: Methylene Chloride Hazards for Bathtub Refinishers
Home improvement
Maintenance
Reuse
Bathtub refinishing
Engineering
1,546
7,729,301
https://en.wikipedia.org/wiki/Debye%E2%80%93H%C3%BCckel%20theory
The Debye–Hückel theory was proposed by Peter Debye and Erich Hückel as a theoretical explanation for departures from ideality in solutions of electrolytes and plasmas. It is a linearized Poisson–Boltzmann model, which assumes an extremely simplified model of electrolyte solution but nevertheless gave accurate predictions of mean activity coefficients for ions in dilute solution. The Debye–Hückel equation provides a starting point for modern treatments of non-ideality of electrolyte solutions.

Overview
In the chemistry of electrolyte solutions, an ideal solution is a solution whose colligative properties are proportional to the concentration of the solute. Real solutions may show departures from this kind of ideality. In order to accommodate these effects in the thermodynamics of solutions, the concept of activity was introduced: the properties are then proportional to the activities of the ions. Activity, a, is proportional to concentration, c. The proportionality constant is known as an activity coefficient, γ:
a = γc
In an ideal electrolyte solution the activity coefficients for all the ions are equal to one. Ideality of an electrolyte solution can be achieved only in very dilute solutions. Non-ideality of more concentrated solutions arises principally (but not exclusively) because ions of opposite charge attract each other due to electrostatic forces, while ions of the same charge repel each other. In consequence ions are not randomly distributed throughout the solution, as they would be in an ideal solution. Activity coefficients of single ions cannot be measured experimentally because an electrolyte solution must contain both positively charged ions and negatively charged ions. Instead, a mean activity coefficient, γ±, is defined. For example, with the electrolyte NaCl
γ± = (γ(Na+) γ(Cl−))^(1/2)
In general, the mean activity coefficient of a fully dissociated electrolyte of formula AnBm is given by
γ± = (γA^n γB^m)^(1/(n + m))
Activity coefficients are themselves functions of concentration, as the amount of inter-ionic interaction increases as the concentration of the electrolyte increases. Debye and Hückel developed a theory with which single ion activity coefficients could be calculated. By calculating the mean activity coefficients from them, the theory could be tested against experimental data. It was found to give excellent agreement for "dilute" solutions.

The model
A description of Debye–Hückel theory includes a very detailed discussion of the assumptions and their limitations as well as the mathematical development and applications. Picture a snapshot of a 2-dimensional section of an idealized electrolyte solution: the ions can be represented as spheres with unit electrical charge, and the solvent as a uniform medium, without structure. On average, each ion is surrounded more closely by ions of opposite charge than by ions of like charge. These concepts were developed into a quantitative theory involving ions of charge z1e+ and z2e−, where z can be any integer. The principal assumption is that departure from ideality is due to electrostatic interactions between ions, mediated by Coulomb's law: the force of interaction between two electric charges, separated by a distance r in a medium of relative permittivity εr, is given by
F = z1 z2 e^2 / (4π ε0 εr r^2)
It is also assumed that
- The solute is completely dissociated; it is a strong electrolyte.
- Ions are spherical and are not polarized by the surrounding electric field. Solvation of ions is ignored except insofar as it determines the effective sizes of the ions.
- The solvent plays no role other than providing a medium of constant relative permittivity (dielectric constant). There is no electrostriction.
- Individual ions surrounding a "central" ion can be represented by a statistically averaged cloud of continuous charge density, with a minimum distance of closest approach.
The last assumption means that each cation is surrounded by a spherically symmetric cloud of other ions. The cloud has a net negative charge. Similarly each anion is surrounded by a cloud with net positive charge.

Mathematical development
The deviation from ideality is taken to be a function of the potential energy resulting from the electrostatic interactions between ions and their surrounding clouds. To calculate this energy two steps are needed. The first step is to specify the electrostatic potential for ion j by means of Poisson's equation
∇^2 ψ(r) = −ρ(r) / (ε0 εr)
ψ(r) is the total potential at a distance, r, from the central ion and ρ(r) is the averaged charge density of the surrounding cloud at that distance. To apply this formula it is essential that the cloud has spherical symmetry, that is, the charge density is a function only of distance from the central ion, as this allows the Poisson equation to be cast in terms of spherical coordinates with no angular dependence. The second step is to calculate the charge density by means of a Boltzmann distribution,
ni′ = ni exp(−zi e ψ(r) / (kB T))
where ni is the bulk number density of species i, kB is the Boltzmann constant and T is the temperature. This distribution also depends on the potential ψ(r), and this introduces a serious difficulty in terms of the superposition principle. Nevertheless, the two equations can be combined to produce the Poisson–Boltzmann equation. Solution of this equation is far from straightforward. Debye and Hückel expanded the exponential as a truncated Taylor series to first order. The zeroth order term vanishes because the solution is on average electrically neutral (so that Σ ni zi = 0), which leaves us with only the first order term. The result has the form of the Helmholtz equation
∇^2 ψ(r) = κ^2 ψ(r),  with  κ^2 = (e^2 / (ε0 εr kB T)) Σ ni zi^2
which has an analytical solution. This equation applies to electrolytes with equal numbers of ions of each charge. Nonsymmetrical electrolytes require another term with ψ^2. For symmetrical electrolytes, this reduces to the modified spherical Bessel equation
(1/r^2) d/dr (r^2 dψ/dr) = κ^2 ψ
with general solution
ψ(r) = A e^(−κr)/r + A′ e^(κr)/r
The coefficients A and A′ are fixed by the boundary conditions. As r → ∞, ψ must not diverge, so A′ = 0. At r = a0, which is the distance of the closest approach of ions, the force exerted by the central charge should be balanced by the force of the other ions, imposing the condition that the field at a0 match that of the central charge alone, from which
A = (z e / (4π ε0 εr)) e^(κ a0) / (1 + κ a0)
is found, yielding
ψ(r) = (z e / (4π ε0 εr)) (e^(κ a0) / (1 + κ a0)) (e^(−κr) / r)
The electrostatic potential energy, u, of the ion due to its surrounding cloud is
u = −(z^2 e^2 / (4π ε0 εr)) κ / (1 + κ a0)
This is the potential energy of a single ion in a solution. The multiple-charge generalization from electrostatics gives an expression for the potential energy of the entire solution. The mean activity coefficient is given by the logarithm of this quantity as follows
log10 γ± = −A |z+ z−| √I / (1 + B a0 √I)
where I is the ionic strength and a0 is a parameter that represents the distance of closest approach of ions. For aqueous solutions at 25 °C, A = 0.51 mol−1/2 dm3/2 and B = 3.29 nm−1 mol−1/2 dm3/2; each is a constant that depends on temperature. If I is expressed in terms of molality, instead of molarity (as in the equation above and in the rest of this article), then an experimental value for A of water is 1.172 mol−1/2 kg1/2 at 25 °C. It is common to use a base-10 logarithm, in which case we factor out ln 10, so A is 0.509 mol−1/2 kg1/2. The multiplier placed before B a0 √I in some statements of the equation accounts for the units in which a0 is expressed.
When a0 is expressed in units consistent with those of B, the multiplier must be dropped from the equation. The most significant aspect of this result is the prediction that the mean activity coefficient is a function of ionic strength rather than the electrolyte concentration. For very low values of the ionic strength the value of the denominator in the expression above becomes nearly equal to one. In this situation the mean activity coefficient is proportional to the square root of the ionic strength. This is known as the Debye–Hückel limiting law. In this limit the equation is given as follows
log10 γ± = −A |z+ z−| √I
The excess osmotic pressure obtained from Debye–Hückel theory is, in cgs units,
P_ex = −(kB T κ^3) / (24π)
Therefore, the total pressure is the sum of the excess osmotic pressure and the ideal pressure P_id = kB T Σ ni. The osmotic coefficient is then given by
φ = P / P_id = 1 + P_ex / P_id

Nondimensionalization
Taking the differential equation from earlier (as stated above, the equation only holds for low concentrations):
(1/r^2) d/dr (r^2 dψ/dr) = κ^2 ψ
Using the Buckingham π theorem on this problem results in a set of dimensionless groups, among them:
Φ(R), called the reduced scalar electric potential field;
R, called the reduced radius.
The existing groups may be recombined to form two other dimensionless groups for substitution into the differential equation. The first is what could be called the square of the reduced inverse screening length, (κ a0)^2. The second could be called the reduced central ion charge, Z0 (with a capital Z). Note that, though the reduced potential is already dimensionless, without the substitution given below, the differential equation would still be dimensional. To obtain the nondimensionalized differential equation and initial conditions, use the groups to eliminate the dimensional variables in favor of the reduced ones, carrying out the chain rule where the change of radial variable requires it. The resulting equations are the dimensionless form of the spherical equation above, with boundary conditions expressed in terms of (κ a0)^2 and Z0. For table salt in 0.01 M solution at 25 °C, a typical value of (κ a0)^2 is 0.0005636, while a typical value of Z0 is 7.017, highlighting the fact that, in low concentrations, (κ a0)^2 is a target for a zero order of magnitude approximation such as perturbation analysis. Unfortunately, because of the boundary condition at infinity, regular perturbation does not work. The same boundary condition prevents us from finding the exact solution to the equations. Singular perturbation may work, however.

Limitations and extensions
This equation for γ± gives satisfactory agreement with experimental measurements for low electrolyte concentrations, typically less than 10−3 mol/L. Deviations from the theory occur at higher concentrations and with electrolytes that produce ions of higher charges, particularly unsymmetrical electrolytes. Essentially these deviations occur because the model is oversimplified, so there is little to be gained making small adjustments to the model. The individual assumptions can be challenged in turn.
Complete dissociation. Ion association may take place, particularly with ions of higher charge. This was followed up in detail by Niels Bjerrum. The Bjerrum length is the separation at which the electrostatic interaction between two ions is comparable in magnitude to kT.
Weak electrolytes. A weak electrolyte is one that is not fully dissociated. As such it has a dissociation constant. The dissociation constant can be used to calculate the extent of dissociation and hence, make the necessary correction needed to calculate activity coefficients.
Ions are spherical, not point charges and are not polarized.
Many ions, such as the nitrate ion, NO3−, are not spherical. Polyatomic ions are also polarizable.
Role of the solvent. The solvent is not a structureless medium but is made up of molecules. The water molecules in aqueous solution are both dipolar and polarizable. Both cations and anions have a strong primary solvation shell and a weaker secondary solvation shell. Ion–solvent interactions are ignored in Debye–Hückel theory. Moreover, ionic radius is assumed to be negligible, but at higher concentrations, the ionic radius becomes comparable to the radius of the ionic atmosphere.
Most extensions to Debye–Hückel theory are empirical in nature. They usually allow the Debye–Hückel equation to be followed at low concentration and add further terms in some power of the ionic strength to fit experimental observations. The main extensions are the Davies equation, Pitzer equations and specific ion interaction theory. One such extended Debye–Hückel equation is given by:
log10 γ = −A z^2 √I / (1 + B a √I)
where γ is the activity coefficient (the left side being its common logarithm), z is the integer charge of the ion (1 for H+, 2 for Mg2+, etc.), I is the ionic strength of the aqueous solution, and a is the size or effective diameter of the ion in angstroms. The effective hydrated radius of the ion, a, is the radius of the ion and its closely bound water molecules. Large ions and less highly charged ions bind water less tightly and have smaller hydrated radii than smaller, more highly charged ions. Typical values are 3 Å for ions such as H+, Cl−, CN−, and HCOO−. The effective diameter for the hydronium ion is 9 Å. A and B are constants with values of, respectively, 0.5085 and 0.3281 at 25 °C in water. The extended Debye–Hückel equation provides accurate results for μ ≤ 0.1. For solutions of greater ionic strengths, the Pitzer equations should be used. In these solutions the activity coefficient may actually increase with ionic strength. The Debye–Hückel equation cannot be used in solutions of surfactants, where the presence of micelles influences the electrochemical properties of the system (even a rough estimate overestimates γ by ~50%).

Electrolyte mixtures
The theory can be applied also to dilute solutions of mixed electrolytes. Freezing point depression measurements have been used to this purpose.

Conductivity
The treatment given so far is for a system not subject to an external electric field. When conductivity is measured the system is subject to an oscillating external field due to the application of an AC voltage to electrodes immersed in the solution. Debye and Hückel modified their theory in 1926 and their theory was further modified by Lars Onsager in 1927. All the postulates of the original theory were retained. In addition it was assumed that the electric field causes the charge cloud to be distorted away from spherical symmetry. After taking this into account, together with the specific requirements of moving ions, such as viscosity and electrophoretic effects, Onsager was able to derive a theoretical expression to account for the empirical relation known as Kohlrausch's law for the molar conductivity, Λm:
Λm = Λm0 − K √c
Λm0 is known as the limiting molar conductivity, K is an empirical constant and c is the electrolyte concentration. ("Limiting" here means "at the limit of the infinite dilution".) Onsager's expression is
Λm = Λm0 − (A + B Λm0) √c
where A and B are constants that depend only on known quantities such as temperature, the charges on the ions and the dielectric constant and viscosity of the solvent. This is known as the Debye–Hückel–Onsager equation.
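The extended Debye–Hückel equation above is straightforward to evaluate numerically. The following Python sketch (not part of the article) uses only the constants quoted in this section (A = 0.5085, B = 0.3281, water at 25 °C, a in ångströms) and an illustrative choice of ion.

# Sketch: extended Debye-Hueckel equation from the text,
# log10(gamma) = -A z^2 sqrt(I) / (1 + B a sqrt(I)),
# with A = 0.5085 and B = 0.3281 for water at 25 C, and a in angstroms.
from math import sqrt

A, B = 0.5085, 0.3281

def log10_gamma(z: int, a_angstrom: float, ionic_strength: float) -> float:
    return -A * z**2 * sqrt(ionic_strength) / (1 + B * a_angstrom * sqrt(ionic_strength))

# Cl- (z = -1, effective diameter ~ 3 angstroms per the text) at two
# ionic strengths inside the stated validity range (mu <= 0.1):
for I in (0.01, 0.1):
    print(I, 10 ** log10_gamma(-1, 3.0, I))   # ~0.90 and ~0.75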
However, this equation only applies to very dilute solutions and has been largely superseded by other equations due to Fuoss and Onsager (1932 and 1957) and later workers.

Summary of Debye and Hückel's first article on the theory of dilute electrolytes
The English title of the article is "On the Theory of Electrolytes. I. Freezing Point Depression and Related Phenomena". It was originally published in 1923 in volume 24 of the German-language journal Physikalische Zeitschrift. An English translation of the article is included in a book of collected papers presented to Debye by "his pupils, friends, and the publishers on the occasion of his seventieth birthday on March 24, 1954". Another English translation was completed in 2019. The article deals with the calculation of properties of electrolyte solutions that are under the influence of ion-induced electric fields, thus it deals with electrostatics. In the same year they first published this article, Debye and Hückel, hereinafter D&H, also released an article that covered their initial characterization of solutions under the influence of electric fields called "On the Theory of Electrolytes. II. Limiting Law for Electric Conductivity", but that subsequent article is not (yet) covered here. In the following summary (as yet incomplete and unchecked), modern notation and terminology are used, from both chemistry and mathematics, in order to prevent confusion. Also, with a few exceptions to improve clarity, the subsections in this summary are (very) condensed versions of the same subsections of the original article.

Introduction
D&H note that the Guldberg–Waage formula for electrolyte species in chemical reaction equilibrium in classical form is
Π xi^νi = K
where Π is a notation for multiplication, i is a dummy variable indicating the species, s is the number of species participating in the reaction, xi is the mole fraction of species i, νi is the stoichiometric coefficient of species i, and K is the equilibrium constant. D&H say that, due to the "mutual electrostatic forces between the ions", it is necessary to modify the Guldberg–Waage equation by replacing K with γK, where γ is an overall activity coefficient, not a "special" activity coefficient (a separate activity coefficient associated with each species)—which is what is used in modern chemistry. The relationship between the overall activity coefficient and the special activity coefficients γi is
log γ = Σ νi log γi

Fundamentals
D&H use the Helmholtz and Gibbs free entropies, Φ and Ξ, to express the effect of electrostatic forces in an electrolyte on its thermodynamic state. Specifically, they split most of the thermodynamic potentials into classical and electrostatic terms. The Helmholtz free entropy is
Φ = S − U/T = −A/T
where Φ is Helmholtz free entropy, S is entropy, U is internal energy, T is temperature, and A is Helmholtz free energy. D&H give the total differential of Φ as
dΦ = (P/T) dV + (U/T^2) dT
where P is pressure and V is volume. By the definition of the total differential, this means that
∂Φ/∂V = P/T  and  ∂Φ/∂T = U/T^2
which are useful further on. As stated previously, the internal energy is divided into two parts:
U = Uk + Ue
where k indicates the classical part and e indicates the electric part. Similarly, the Helmholtz free entropy is also divided into two parts:
Φ = Φk + Φe
D&H state, without giving the logic, a corresponding relation between Φe and Ue; it would seem that, without some justification, such a relation requires support. Without mentioning it specifically, D&H later give what might be the required justification while arguing that the volume change is negligible, an assumption that the solvent is incompressible. The definition of the Gibbs free entropy Ξ is
Ξ = S − U/T − PV/T = Φ − PV/T = −G/T
where G is Gibbs free energy.
D&H give the total differential of $\Xi$ as $d\Xi = \frac{U + PV}{T^2}\,dT - \frac{V}{T}\,dP$. At this point D&H note that, for water containing 1 mole per liter of potassium chloride (nominal pressure and temperature aren't given), the electric pressure amounts to 20 atmospheres. Furthermore, they note that this level of pressure gives a relative volume change of 0.001. Therefore, they neglect change in volume of water due to electric pressure, writing $\Xi = \Xi_k + \Xi_e$ and put $\Xi_e = \Phi_e$. D&H say that, according to Planck, the classical part of the Gibbs free entropy is $\Xi_k = \sum_{i=0}^{s} N_i \left(\xi_i - k_B \ln x_i\right)$, where $i$ is a species, $s$ is the number of different particle types in solution, $N_i$ is the number of particles of species i, $\xi_i$ is the particle specific Gibbs free entropy of species i, $k_B$ is the Boltzmann constant, $x_i$ is the mole fraction of species i. Species zero is the solvent. The definition of $\xi_i$ is as follows, where lower-case letters indicate the particle specific versions of the corresponding extensive properties: $\xi_i = s_i - \frac{u_i + P v_i}{T}$. D&H don't say so, but the functional form for $\Xi_k$ may be derived from the functional dependence of the chemical potential of a component of an ideal mixture upon its mole fraction. D&H note that the internal energy of a solution is lowered by the electrical interaction of its ions, but that this effect can't be determined by using the crystallographic approximation for distances between dissimilar atoms (the cube root of the ratio of total volume to the number of particles in the volume). This is because there is more thermal motion in a liquid solution than in a crystal. The thermal motion tends to smear out the natural lattice that would otherwise be constructed by the ions. Instead, D&H introduce the concept of an ionic atmosphere or cloud. Like the crystal lattice, each ion still attempts to surround itself with oppositely charged ions, but in a more free-form manner; at small distances away from positive ions, one is more likely to find negative ions and vice versa. The potential energy of an arbitrary ion solution Electroneutrality of a solution requires that $\sum_{i=1}^{s} N_i z_i = 0$, where $N_i$ is the total number of ions of species i in the solution, $z_i$ is the charge number of species i. To bring an ion of species i, initially far away, to a point $\mathbf{r}$ within the ion cloud requires interaction energy in the amount of $z_i q \varphi(\mathbf{r})$, where $q$ is the elementary charge, and $\varphi(\mathbf{r})$ is the value of the scalar electric potential field at $\mathbf{r}$. If electric forces were the only factor in play, the minimal-energy configuration of all the ions would be achieved in a close-packed lattice configuration. However, the ions are in thermal equilibrium with each other and are relatively free to move. Thus they obey Boltzmann statistics and form a Boltzmann distribution. All species' number densities $n_i$ are altered from their bulk (overall average) values $\frac{N_i}{V}$ by the corresponding Boltzmann factor $e^{-\frac{z_i q \varphi}{k_B T}}$, where $k_B$ is the Boltzmann constant, and $T$ is the temperature. Thus at every point in the cloud $n_i = \frac{N_i}{V} e^{-\frac{z_i q \varphi}{k_B T}}$. Note that in the infinite temperature limit, all ions are distributed uniformly, with no regard for their electrostatic interactions. The charge density is related to the number density: $\rho = \sum_i z_i q\, n_i = \sum_i z_i q \frac{N_i}{V} e^{-\frac{z_i q \varphi}{k_B T}}$. When combining this result for the charge density with the Poisson equation from electrostatics, a form of the Poisson–Boltzmann equation results: $\nabla^2 \varphi = -\frac{\rho}{\varepsilon} = -\sum_i \frac{z_i q}{\varepsilon} \frac{N_i}{V} e^{-\frac{z_i q \varphi}{k_B T}}$. This equation is difficult to solve and does not follow the principle of linear superposition for the relationship between the number of charges and the strength of the potential field. It has been solved analytically by the Swedish mathematician Thomas Hakon Gronwall and his collaborators, physical chemists V. K.
La Mer and Karl Sandved in a 1928 article from Physikalische Zeitschrift dealing with extensions to Debye–Hückel theory. However, for sufficiently low concentrations of ions, a first-order Taylor series expansion approximation for the exponential function may be used ($e^x \approx 1 + x$ for $x \ll 1$) to create a linear differential equation. D&H say that this approximation holds at large distances between ions, which is the same as saying that the concentration is low. Lastly, they claim without proof that the addition of more terms in the expansion has little effect on the final solution. Thus the Poisson–Boltzmann equation is transformed to $\nabla^2 \varphi = -\sum_i \frac{z_i q}{\varepsilon} \frac{N_i}{V} \left(1 - \frac{z_i q \varphi}{k_B T}\right) = \left(\sum_i \frac{z_i^2 q^2}{\varepsilon k_B T} \frac{N_i}{V}\right) \varphi$, because the first summation is zero due to electroneutrality. Factor out the scalar potential and assign the leftovers, which are constant, to $\kappa^2$. Also, let $I$ be the ionic strength of the solution: $\kappa^2 = \sum_i \frac{z_i^2 q^2}{\varepsilon k_B T} \frac{N_i}{V} = \frac{2 q^2 I}{\varepsilon k_B T}$, with $I = \frac{1}{2} \sum_i z_i^2 \frac{N_i}{V}$. So, the fundamental equation is reduced to a form of the Helmholtz equation: $\nabla^2 \varphi = \kappa^2 \varphi$. Today, $\frac{1}{\kappa}$ is called the Debye screening length. D&H recognize the importance of the parameter $\kappa$ in their article and characterize it as a measure of the thickness of the ion atmosphere, which is an electrical double layer of the Gouy–Chapman type. The equation may be expressed in spherical coordinates by taking $r = 0$ at some arbitrary ion: $\frac{1}{r^2} \frac{\partial}{\partial r}\left(r^2 \frac{\partial \varphi}{\partial r}\right) = \kappa^2 \varphi(r)$. The equation has the following general solution (keep in mind that $\kappa$ is a positive constant): $\varphi(r) = A_1 \frac{e^{-\kappa r}}{r} + A_2 \frac{e^{\kappa r}}{r}$, where $A_1$ and $A_2$ are undetermined constants. The electric potential is zero at infinity by definition, so $A_2$ must be zero. In the next step, D&H assume that there is a certain radius $a$, beyond which no ions in the atmosphere may approach the (charge) center of the singled out ion. This radius may be due to the physical size of the ion itself, the sizes of the ions in the cloud, and any water molecules that surround the ions. Mathematically, they treat the singled out ion as a point charge to which one may not approach within the radius $a$. The potential of a point charge by itself is $\varphi_{pc}(r) = \frac{1}{4\pi\varepsilon} \frac{z_j q}{r}$. D&H say that the total potential inside the sphere is $\varphi_{sp}(r) = \varphi_{pc}(r) + B_j = \frac{z_j q}{4\pi\varepsilon r} + B_j$, where $B_j$ is a constant that represents the potential added by the ionic atmosphere. No justification for $B_j$ being a constant is given. However, one can see that this is the case by considering that any spherical static charge distribution is subject to the mathematics of the shell theorem. The shell theorem says that no force is exerted on charged particles inside a sphere (of arbitrary charge). Since the ion atmosphere is assumed to be (time-averaged) spherically symmetric, with charge varying as a function of radius $r$, it may be represented as an infinite series of concentric charge shells. Therefore, inside the radius $a$, the ion atmosphere exerts no force. If the force is zero, then the potential is a constant (by definition). In a combination of the continuously distributed model which gave the Poisson–Boltzmann equation and the model of the point charge, it is assumed that at the radius $a$, there is a continuity of $\varphi$ and its first derivative. Thus $\varphi(a) = A_1 \frac{e^{-\kappa a}}{a} = \frac{z_j q}{4\pi\varepsilon a} + B_j = \varphi_{sp}(a)$ and $\varphi'(a) = -A_1 \frac{(1 + \kappa a)\,e^{-\kappa a}}{a^2} = -\frac{z_j q}{4\pi\varepsilon a^2} = \varphi_{sp}'(a)$, so that $A_1 = \frac{z_j q}{4\pi\varepsilon} \frac{e^{\kappa a}}{1 + \kappa a}$ and $B_j = -\frac{z_j q}{4\pi\varepsilon} \frac{\kappa}{1 + \kappa a}$. By the definition of electric potential energy, the potential energy associated with the singled out ion in the ion atmosphere is $u_j = z_j q B_j = -\frac{z_j^2 q^2}{4\pi\varepsilon} \frac{\kappa}{1 + \kappa a}$. Notice that this only requires knowledge of the charge of the singled out ion and the potential of all the other ions.
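As a numerical aside, the screening parameter derived above sets the physical scale of the ion atmosphere. Here is a small Python sketch, assuming a 1:1 electrolyte in water at 25 °C; the relative permittivity 78.4 and the function name are our own choices, and the molar concentration is converted to the number-density ionic strength used in the derivation:

```python
import math

Q = 1.602176634e-19      # elementary charge, C
KB = 1.380649e-23        # Boltzmann constant, J/K
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
NA = 6.02214076e23       # Avogadro constant, 1/mol

def debye_length(c_molar, temperature=298.15, eps_r=78.4):
    """Debye screening length 1/kappa for a 1:1 electrolyte:
    kappa^2 = 2 q^2 I / (eps kB T), with the ionic strength I expressed
    as a number density, I = 1000 * NA * c for a 1:1 salt at c mol/L."""
    ionic_strength = 1000.0 * NA * c_molar   # ions of each sign per m^3
    kappa_sq = 2.0 * Q**2 * ionic_strength / (eps_r * EPS0 * KB * temperature)
    return 1.0 / math.sqrt(kappa_sq)

# A 0.1 M solution at 25 C gives a Debye length of roughly 1 nm
print(f"1/kappa = {debye_length(0.1) * 1e9:.2f} nm")
```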
To calculate the potential energy of the entire electrolyte solution, one must use the multiple-charge generalization for electric potential energy: $U_e = \frac{1}{2} \sum_{j=1}^{s} N_j z_j q B_j = -\frac{\kappa q^2}{8\pi\varepsilon\,(1 + \kappa a)} \sum_{j=1}^{s} N_j z_j^2$. The additional electric term to the thermodynamic potential Experimental verification of the theory To verify the validity of the Debye–Hückel theory, many experimental approaches have been tried, all of them measuring activity coefficients; the problem is that the measurements need to reach very high dilutions. Typical examples are: measurements of vapour pressure, freezing point, osmotic pressure (indirect methods) and measurement of electric potential in cells (direct method). Going towards high dilutions, good results have been found using liquid membrane cells: it has been possible to investigate aqueous media down to 10−4 M, and it has been found that for 1:1 electrolytes (such as NaCl or KCl) the Debye–Hückel equation is entirely correct, but for 2:2 or 3:2 electrolytes it is possible to find negative deviations from the Debye–Hückel limiting law: this strange behavior can be observed only in the very dilute region, and in more concentrated regions the deviation becomes positive. It is possible that the Debye–Hückel equation is not able to predict this behavior because of the linearization of the Poisson–Boltzmann equation, or maybe not: studies of this question began only during the last years of the 20th century, because earlier it wasn't possible to investigate the 10−4 M region, so it is possible that new theories will emerge in the coming years. See also Electrolyte Chemical activity Ionic strength Poisson-Boltzmann equation Debye length Bjerrum length Bates-Guggenheim Convention Ionic atmosphere Electrical double layer Ion association Davies equation Pitzer equation Specific ion Interaction Theory References Thermodynamic models Electrochemistry Equilibrium chemistry Peter Debye
Debye–Hückel theory
Physics,Chemistry
5,339
34,382,035
https://en.wikipedia.org/wiki/Phases%20of%20clinical%20research
The phases of clinical research are the stages in which scientists conduct experiments with a health intervention to obtain sufficient evidence for a process considered effective as a medical treatment. For drug development, the clinical phases start with testing for drug safety in a few human subjects, then expand to many study participants (potentially tens of thousands) to determine if the treatment is effective. Clinical research is conducted on drug candidates, vaccine candidates, new medical devices, and new diagnostic assays. Description Clinical trials testing potential medical products are commonly classified into four phases. The drug development process will normally proceed through all four phases over many years. When expressed specifically, a clinical trial phase is capitalized both in name and Roman numeral, such as "Phase I" clinical trial. If the drug successfully passes through Phases I, II, and III, it will usually be approved by the national regulatory authority for use in the general population. Phase IV trials are 'post-marketing' or 'surveillance' studies conducted to monitor safety over several years. Preclinical studies Before clinical trials are undertaken for a candidate drug, vaccine, medical device, or diagnostic assay, the product candidate is tested extensively in preclinical studies. Such studies involve in vitro (test tube or cell culture) and in vivo (animal model) experiments using wide-ranging doses of the study agent to obtain preliminary efficacy, toxicity and pharmacokinetic information. Such tests assist the developer to decide whether a drug candidate has scientific merit for further development as an investigational new drug. Phase 0 Phase 0 is a designation for optional exploratory trials, originally introduced by the United States Food and Drug Administration's (FDA) 2006 Guidance on Exploratory Investigational New Drug (IND) Studies, but now generally adopted as standard practice. Phase 0 trials are also known as human microdosing studies and are designed to speed up the development of promising drugs or imaging agents by establishing very early on whether the drug or agent behaves in human subjects as was expected from preclinical studies. Distinctive features of Phase 0 trials include the administration of single subtherapeutic doses of the study drug to a small number of subjects (10 to 15) to gather preliminary data on the agent's pharmacokinetics (what the body does to the drugs). A Phase 0 study gives no data on safety or efficacy, being by definition a dose too low to cause any therapeutic effect. Drug development companies carry out Phase 0 studies to rank drug candidates to decide which has the best pharmacokinetic parameters in humans to take forward into further development. They enable go/no-go decisions to be based on relevant human models instead of relying on sometimes inconsistent animal data. Phase I Phase I trials were formerly referred to as "first-in-man studies" but the field generally moved to the gender-neutral language phrase "first-in-humans" in the 1990s; these trials are the first stage of testing in human subjects. They are designed to test the safety, side effects, best dose, and formulation method for the drug. Phase I trials are not randomized, and thus are vulnerable to selection bias. Normally, a small group of 20–100 healthy volunteers will be recruited. These trials are often conducted in a clinical trial clinic, where the subject can be observed by full-time staff. 
These clinical trial clinics are often run by contract research organizations (CROs) who conduct these studies on behalf of pharmaceutical companies or other research investigators. The subject who receives the drug is usually observed until several half-lives of the drug have passed. This phase is designed to assess the safety (pharmacovigilance), tolerability, pharmacokinetics, and pharmacodynamics of a drug. Phase I trials normally include dose-ranging, also called dose escalation studies, so that the best and safest dose can be found and to discover the point at which a compound is too poisonous to administer. The tested range of doses will usually be a fraction of the dose that caused harm in animal testing. Phase I trials most often include healthy volunteers. However, there are some circumstances when clinical patients are used, such as patients who have terminal cancer or HIV and the treatment is likely to make healthy individuals ill. These studies are usually conducted in tightly controlled clinics called Central Pharmacological Units, where participants receive 24-hour medical attention and oversight. In addition to the previously mentioned unhealthy individuals, "patients who have typically already tried and failed to improve on the existing standard therapies" may also participate in Phase I trials. Volunteers are paid a variable inconvenience fee for their time spent in the volunteer center. Before beginning a Phase I trial, the sponsor must submit an Investigational New Drug application to the FDA detailing the preliminary data on the drug gathered from cellular models and animal studies. Phase I trials can be further divided: Phase Ia Single ascending dose (Phase Ia): In single ascending dose studies, small groups of subjects are given a single dose of the drug while they are observed and tested for a period of time to confirm safety. Typically, a small number of participants, usually three, are entered sequentially at a particular dose. If they do not exhibit any adverse side effects, and the pharmacokinetic data are roughly in line with predicted safe values, the dose is escalated, and a new group of subjects is then given a higher dose. If unacceptable toxicity is observed in any of the three participants, an additional number of participants, usually three, are treated at the same dose. This is continued until pre-calculated pharmacokinetic safety levels are reached, or intolerable side effects start showing up (at which point the drug is said to have reached the maximum tolerated dose (MTD)). If an additional unacceptable toxicity is observed, then the dose escalation is terminated and that dose, or perhaps the previous dose, is declared to be the maximally tolerated dose. This particular design assumes that the maximally tolerated dose occurs when approximately one-third of the participants experience unacceptable toxicity. Variations of this design exist, but most are similar.
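The escalation scheme just described is essentially the classical "3+3" design. As a rough illustration, here is a minimal Python simulation of it; the dose levels and their toxicity probabilities are invented for the example, and a real trial would add clinical judgment that the sketch omits:

```python
import random

def three_plus_three(tox_probs, seed=42):
    """Simulate a 3+3 dose escalation over increasing dose levels.

    Treat 3 subjects per level; escalate on 0/3 toxicities; on 1/3,
    expand the cohort by 3 and escalate only if the total stays at 1/6.
    Otherwise the previous level is declared the maximum tolerated dose.
    Returns the MTD level index, or -1 if even the lowest dose fails."""
    rng = random.Random(seed)
    for level, p in enumerate(tox_probs):
        toxicities = sum(rng.random() < p for _ in range(3))
        if toxicities == 1:  # ambiguous: treat three more at the same dose
            toxicities += sum(rng.random() < p for _ in range(3))
        if toxicities > 1:
            return level - 1  # too toxic: previous dose is the MTD
    return len(tox_probs) - 1  # every level was tolerated

# Hypothetical toxicity probabilities at five increasing dose levels
print("estimated MTD: dose level", three_plus_three([0.05, 0.1, 0.2, 0.35, 0.55]))
```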
Food effect A short trial designed to investigate any differences in absorption of the drug by the body, caused by eating before the drug is given. These studies are usually run as crossover studies, with volunteers being given two identical doses of the drug while fasted, and after being fed. Phase II Once a dose or range of doses is determined, the next goal is to evaluate whether the drug has any biological activity or effect. Phase II trials are performed on larger groups (50–300 individuals) and are designed to assess how well the drug works, as well as to continue Phase I safety assessments in a larger group of volunteers and patients. Genetic testing is common, particularly when there is evidence of variation in metabolic rate. When the development process for a new drug fails, this usually occurs during Phase II trials when the drug is discovered not to work as planned, or to have toxic effects. Phase II studies are sometimes divided into Phase IIa and Phase IIb. There is no formal definition for these two sub-categories, but generally: Phase IIa studies are usually pilot studies designed to find an optimal dose and assess safety ('dose finding' studies). Phase IIb studies determine how well the drug works in subjects at a given dose to assess efficacy ('proof of concept' studies). Trial design Some Phase II trials are designed as case series, demonstrating a drug's safety and activity in a selected group of participants. Other Phase II trials are designed as randomized controlled trials, where some patients receive the drug/device and others receive placebo/standard treatment. Randomized Phase II trials have far fewer patients than randomized Phase III trials. Example: cancer design In the first stage, the investigator attempts to rule out drugs that have no or little biologic activity. For example, the researcher may specify that a drug must have some minimal level of activity, say, in 20% of participants. If the estimated activity level is less than 20%, the researcher chooses not to consider this drug further, at least not at that maximally tolerated dose. If the estimated activity level exceeds 20%, the researcher will add more participants to get a better estimate of the response rate. A typical study for ruling out a 20% or lower response rate enters 14 participants. If no response is observed in the first 14 participants, the drug is considered not likely to have a 20% or higher activity level. The number of additional participants added depends on the degree of precision desired, but ranges from 10 to 20. Thus, a typical cancer phase II study might include fewer than 30 people to estimate the response rate.
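The 14-participant first stage in the cancer design above can be motivated with a one-line binomial calculation: if the true response rate were 20%, the probability of seeing zero responses among 14 participants is 0.8^14 ≈ 4.4%, so an all-negative first stage is strong evidence against a 20%-or-better activity level. A minimal sketch of that check (the function name is ours):

```python
from math import comb

def prob_at_most(n, k, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Probability of 0 responses in 14 participants if the true rate is 20%
print(f"P(0 of 14 | p = 0.20) = {prob_at_most(14, 0, 0.20):.3f}")  # ~0.044
```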
Efficacy vs effectiveness When a study assesses efficacy, it is looking at whether the drug given in the specific manner described in the study is able to influence an outcome of interest (e.g. tumor size) in the chosen population (e.g. cancer patients with no other ongoing diseases). When a study is assessing effectiveness, it is determining whether a treatment will influence the disease. In an effectiveness study, it is essential that participants are treated as they would be when the treatment is prescribed in actual practice. That would mean that there should be no aspects of the study designed to increase compliance above those that would occur in routine clinical practice. The outcomes in effectiveness studies are also more generally applicable than in most efficacy studies (for example does the patient feel better, come to the hospital less or live longer in effectiveness studies as opposed to better test scores or lower cell counts in efficacy studies). There is usually less rigid control of the type of participant to be included in effectiveness studies than in efficacy studies, as the researchers are interested in whether the drug will have a broad effect in the population of patients with the disease. Success rate Phase II clinical programs historically have experienced the lowest success rate of the four development phases. In 2010, the percentage of Phase II trials that proceeded to Phase III was 18%, and only 31% of developmental candidates advanced from Phase II to Phase III in a study of trials over 2006–2015. Phase III This phase is designed to assess the effectiveness of the new intervention and, thereby, its value in clinical practice. Phase III studies are randomized controlled multicenter trials on large patient groups (300–3,000 or more depending upon the disease/medical condition studied) and are aimed at being the definitive assessment of how effective the drug is, in comparison with current 'gold standard' treatment. Because of their size and comparatively long duration, Phase III trials are the most expensive, time-consuming and difficult trials to design and run, especially in therapies for chronic medical conditions. Phase III trials of chronic conditions or diseases often have a short follow-up period for evaluation, relative to the period of time the intervention might be used in practice. This is sometimes called the "pre-marketing phase" because it actually measures consumer response to the drug. It is common practice that certain Phase III trials will continue while the regulatory submission is pending at the appropriate regulatory agency. This allows patients to continue to receive possibly lifesaving drugs until the drug can be obtained by purchase. Other reasons for performing trials at this stage include attempts by the sponsor at "label expansion" (to show the drug works for additional types of patients/diseases beyond the original use for which the drug was approved for marketing), to obtain additional safety data, or to support marketing claims for the drug. Studies in this phase are by some companies categorized as "Phase IIIB studies." While not required in all cases, it is typically expected that there be at least two successful Phase III trials, demonstrating a drug's safety and efficacy, to obtain approval from the appropriate regulatory agencies such as FDA (US), or the EMA (European Union). Once a drug has proved satisfactory after Phase III trials, the trial results are usually combined into a large document containing a comprehensive description of the methods and results of human and animal studies, manufacturing procedures, formulation details, and shelf life. This collection of information makes up the "regulatory submission" that is provided for review to the appropriate regulatory authorities in different countries. They will review the submission, and if it is acceptable, give the sponsor approval to market the drug. Most drugs undergoing Phase III clinical trials can be marketed under FDA norms with proper recommendations and guidelines through a New Drug Application (NDA) containing all manufacturing, preclinical, and clinical data.
If any adverse effects are reported anywhere, the drugs need to be recalled immediately from the market. While most pharmaceutical companies refrain from this practice, it is not abnormal to see many drugs that are still undergoing Phase III clinical trials on the market. Adaptive design The design of individual trials may be altered during a trial – usually during Phase II or III – to accommodate interim results for the benefit of the treatment, adjust statistical analysis, or to reach early termination of an unsuccessful design, a process called an "adaptive design". Examples are the 2020 World Health Organization Solidarity trial, European Discovery trial, and UK RECOVERY Trial of hospitalized people with severe COVID-19 infection, each of which applies adaptive designs to rapidly alter trial parameters as results from the experimental therapeutic strategies emerge. Adaptive designs within ongoing Phase II–III clinical trials on candidate therapeutics may shorten trial durations and use fewer subjects, possibly expediting decisions for early termination or success, and coordinating design changes for a specific trial across its international locations. Success rate For vaccines, the probability of success ranges from 7% for non-industry-sponsored candidates to 40% for industry-sponsored candidates. A 2019 review of average success rates of clinical trials at different phases and diseases over the years 2005–15 found a success range of 5–14%. Separated by diseases studied, cancer drug trials were on average only 3% successful, whereas ophthalmology drugs and vaccines for infectious diseases were 33% successful. Trials using disease biomarkers, especially in cancer studies, were more successful than those not using biomarkers. A 2010 review found about 50% of drug candidates either fail during the Phase III trial or are rejected by the national regulatory agency. Cost of trials by phases In the early 21st century, a typical Phase I trial conducted at a single clinic in the United States ranged from $1.4 million for pain or anesthesia studies to $6.6 million for immunomodulation studies. Main expense drivers were operating and clinical monitoring costs of the Phase I site. The amount of money spent on Phase II or III trials depends on numerous factors, with therapeutic area being studied and types of clinical procedures as key drivers. Phase II studies may cost as low as $7 million for cardiovascular projects, and as much as $20 million for hematology trials. Phase III trials for dermatology may cost as low as $11 million, whereas a pain or anesthesia Phase III trial may cost as much as $53 million. An analysis of Phase III pivotal trials leading to 59 drug approvals by the US Food and Drug Administration over 2015–16 showed that the median cost was $19 million, but some trials involving thousands of subjects may cost 100 times more. Across all trial phases, the main expenses for clinical trials were administrative staff (about 20% of the total), clinical procedures (about 19%), and clinical monitoring of the subjects (about 11%). Phase IV A Phase IV trial is also known as a postmarketing surveillance trial or drug monitoring trial to assure long-term safety and effectiveness of the drug, vaccine, device or diagnostic test. Phase IV trials involve the safety surveillance (pharmacovigilance) and ongoing technical support of a drug after it receives regulatory approval to be sold.
Phase IV studies may be required by regulatory authorities or may be undertaken by the sponsoring company for competitive (finding a new market for the drug) or other reasons (for example, the drug may not have been tested for interactions with other drugs, or on certain population groups such as pregnant women, who are unlikely to subject themselves to trials). The safety surveillance is designed to detect any rare or long-term adverse effects over a much larger patient population and longer time period than was possible during the Phase I-III clinical trials. Harmful effects discovered by Phase IV trials may result in a drug being withdrawn from the market or restricted to certain uses; examples include cerivastatin (brand names Baycol and Lipobay), troglitazone (Rezulin) and rofecoxib (Vioxx). Overall cost The entire process of developing a drug from preclinical research to marketing can take approximately 12 to 18 years and often costs well over $1 billion. References Clinical research Design of experiments Life sciences industry
Phases of clinical research
Biology
3,523
25,078,080
https://en.wikipedia.org/wiki/Rhizomucor%20miehei
Rhizomucor miehei (also: Mucor miehei) is a species of fungus. It is commercially used to produce enzymes which can be used to make a microbial rennet to curdle milk and produce cheese. Under experimental conditions, this species grows particularly well at temperatures between 24 and 55 °C, and its growth becomes negligible below 21 °C or above 57 °C. It is also used to produce lipases for interesterification of fats. See also Industrial enzymes References External links Index Fungorum listing Fungi
Rhizomucor miehei
Biology
116
77,546,073
https://en.wikipedia.org/wiki/NGC%207808
NGC 7808 is a lenticular galaxy in the constellation of Cetus. Its velocity with respect to the cosmic microwave background is 8521 ± 24 km/s, which corresponds to a Hubble distance of 125.67 ± 8.80 Mpc (∼410 million light-years). It was discovered by American astronomer Frank Muller in 1886. NGC 7808 is an active Seyfert 1 galaxy. One supernova has been observed in NGC 7808: SN2023qnz (type Ia, mag 20.14) was discovered by Pan-STARRS on 22 August 2023. Star-forming ring NGC 7808 contains an outer star-forming ring, observed in the ultraviolet. According to a 2019 study, the star formation rate is only just above one solar mass per year, and it is expected to decrease over time. Nevertheless, star-forming rings like the one in NGC 7808 still contain enigmatic features and can help astronomers learn more about the evolutionary processes of these galaxies. See also List of NGC objects (7001–7840) References External links 7808 000243 -02-01-013 F00009-1101 Cetus Astronomical objects discovered in 1886 Discoveries by Frank Muller (astronomer) Lenticular galaxies
NGC 7808
Astronomy
252
1,160,012
https://en.wikipedia.org/wiki/Lacteal
A lacteal is a lymphatic capillary that absorbs dietary fats in the villi of the small intestine. Triglycerides are emulsified by bile and hydrolyzed by the enzyme lipase, resulting in a mixture of fatty acids, di- and monoglycerides. These then pass from the intestinal lumen into the enterocyte, where they are re-esterified to form triglyceride. The triglyceride is then combined with phospholipids, cholesterol ester, and apolipoprotein B48 to form chylomicrons. These chylomicrons then pass into the lacteals, forming a milky substance known as chyle. The lacteals merge to form larger lymphatic vessels that transport the chyle to the thoracic duct where it is emptied into the bloodstream at the subclavian vein. At this point, the fats are in the bloodstream in the form of chylomicrons. Once in the blood, chylomicrons are subject to delipidation by lipoprotein lipase. Eventually, enough lipid has been lost and additional apolipoproteins gained, that the resulting particle (now referred to as a chylomicron remnant) can be taken up by the liver. From the liver, the fat released from chylomicron remnants can be re-exported to the blood as the triglyceride component of very low-density lipoproteins. Very low-density lipoproteins are also subject to delipidation by vascular lipoprotein lipase, and deliver fats to tissues throughout the body. In particular, the released fatty acids can be stored in adipose cells as triglycerides. As triglycerides are lost from very low-density lipoproteins, the lipoprotein particles become smaller and denser (since protein is denser than lipid) and ultimately become low-density lipoproteins. LDL particles are highly atherogenic. In contrast to any other route of absorption from the small intestine, the lymphatic system avoids first pass metabolism. References External links - "117. Digestive System: Alimentary Canal jejunum, central lacteals " Digestive system Lymphatic system Lymphatic tissue
Lacteal
Biology
500
28,713,922
https://en.wikipedia.org/wiki/William%20James%20Hubard
William James Hubard (1807 – February 1862) was a British-born artist who worked in England and the United States in the 19th century. He specialized in silhouette and painted portraits. Biography Hubard arrived in the United States from England in 1824. In 1825–1826 he worked in Boston, Massachusetts, setting up an exhibition known as the "Hubard Gallery" at Julien Hall (corner Congress and Milk Streets). At the time Hubard would have been about 18 or 19 years old. A local newspaper reported "there is a great variety of pictures—likenesses, groups of animals, landscape scenery, caricatures, &c.—all cut with a simple pair of scissors, without the aid of any machinery whatever, and which a spectator might, at a hasty glance, take for painting." He received raves in the press: "He exercises his scissors with so much dexterity and skill, that an accurate profile, even of the most 'unmeaning face,' can be procured in twenty-five seconds, without the use of steam." Local resident John George Metcalf visited the gallery in 1825, and wrote in his diary: Hubard Gallery. This is a collection of cuttings of black paper of all the shapes and figures that can possibly be imagined. The figures after being cut out, are arranged and pasted on white paper which are skilfully and tastefully placed about the Hall. This Astonishing genius is a native of Shropshire in England and is now about fifteen years of age. Here, and all done with only a pair of common scissors, you can see the stately structures of Westminster Abbey, the Catholic Church at Glasgow and others all with their due proportion of light and shade. Here Napoleon has burst from the cearments of the grave and is upon his warhorse, as when on the bloody fields of Austerlitz and Marengo. Franklin too has come back, and stands for the patriot and Philosopher as when at the court of London he said "his Master shall pay for it." Kings and princes have left their gilded mausoleums, and at the will of Master Hubard are set up to be gazed at by clown and cobler. Besides these graver scenes we have the lighter ones of Life. Here Doctor Syntax and his whole Tour can be found and all his scenes of fun and merriment stand forth to be looked and laughed at. Fiddlers, Beggars, Bellmen, Irishmen and others ad infinitum, all as natural as life, all the creation of a pair of common scissors, attract the attention and excite the admiration of many a gazer. Horses and Dogs, pigs and pussies, and all that "sort o' thing," can here be found from the size of a thumb-nail to that of a platter. In fine here any one, if he is not made by one of Nature's journeymen, can find fun and frolic enough to last a week. Hubard later moved to Richmond, Virginia where he married Maria Mason Tabb, the daughter of wealthy clients in nearby Gloucester County. He also became friends with Mann S. Valentine, II who supported and promoted his work. On January 14, 1853, he was given exclusive license by the Virginia General Assembly to make bronze copies of the marble statue of George Washington by French sculptor Jean-Antoine Houdon, producing them as of 1856, with a total of six in all. In February 1862, he was killed in an accidental explosion while making munitions in Richmond for the Confederate States of America during the American Civil War. Works by Hubard reside in the collections of Historic New England, the Smithsonian, and The Valentine in Richmond. Selected works References Further reading Louise F. Catterall. "Tabb-Hubard Letters." 
Virginia Magazine of History and Biography, Vol. 56, No. 1 (Jan., 1948), pp. 57–65 William James Hubard, 1807–1862: A concurrent survey and exhibition, January, 1948. Virginia Museum of Fine Arts, 1948 Albert Ten Eyck Gardner. "Southern Monuments: Charles Carroll and William James Hubard." Metropolitan Museum of Art Bulletin, New Series, Vol. 17, No. 1 (Summer, 1958), pp. 19–23. Penley Knipe. Shades and Shadow-Pictures: The Materials and Techniques of American Portrait Silhouettes. 1999. http://cool.conservation-us.org/coolaic/sg/bpg/annual/v18/bp18-07.html External links WorldCat http://www.apva.org/marshall/collection/ldr_hubard.php Time (magazine) Museum of Fine Arts, Boston. Margaret Oliver Colt and Mary Devereux Colt in the Gardens at "Green Mount," Baltimore, 1830. By Hubard. http://digitalgallery.nypl.org/nypldigital/id?EM12221 http://collections.si.edu/search/results.jsp?q=record_ID:npg_NPG.78.266 Metropolitan Museum of Art, NY. Portrait of Charles Carroll of Carrollton, c. 1830 http://richmondthenandnow.com/Newspaper-Articles/William-James-Hubard-Silhouette.html 1807 births 1862 deaths English portrait painters American portrait painters 19th-century English painters English male painters 19th-century American painters American male painters Silhouettists Accidental deaths in Virginia English emigrants to the United States Artists from Richmond, Virginia Painters from Virginia Industrial accident deaths 19th-century American male artists 19th-century English male artists Deaths from explosion
William James Hubard
Chemistry
1,165
6,326,469
https://en.wikipedia.org/wiki/Lifting%20hook
A lifting hook is a device for grabbing and lifting loads by means of a device such as a hoist or crane. A lifting hook is usually equipped with a safety latch to prevent the disengagement of the lifting wire rope sling, chain or rope to which the load is attached. A hook may have one or more built-in pulley sheaves as a block and tackle to multiply the lifting force. See also References American Society of Mechanical Engineers: ASME B30.10 "Hooks" (2014). Lifting equipment
Lifting hook
Physics,Technology
110
56,028,417
https://en.wikipedia.org/wiki/Optical%20cluster%20state
Optical cluster states are a proposed tool to achieve quantum computational universality in linear optical quantum computing (LOQC). As direct entangling operations with photons often require nonlinear effects, probabilistic generation of entangled resource states has been proposed as an alternative path to the direct approach. Creation of the cluster state On a silicon photonic chip, one of the most common platforms for implementing LOQC, there are two typical choices for encoding quantum information, though many more options exist. Photons have useful degrees of freedom in the spatial modes of the possible photon paths or in the polarization of the photons themselves. The way in which a cluster state is generated varies with which encoding has been chosen for implementation. Storing information in the spatial modes of the photon paths is often referred to as dual rail encoding. In a simple case, one might consider the situation where a photon has two possible paths, a horizontal path with creation operator $\hat{h}^\dagger$ and a vertical path with creation operator $\hat{v}^\dagger$, where the logical zero and one states are then represented by $|0\rangle_L = \hat{h}^\dagger |\mathrm{vac}\rangle$ and $|1\rangle_L = \hat{v}^\dagger |\mathrm{vac}\rangle$. Single qubit operations are then performed by beam splitters, which allow manipulation of the relative superposition weights of the modes, and phase shifters, which allow manipulation of the relative phases of the two modes. This type of encoding lends itself to the Nielsen protocol for generating cluster states. In encoding with photon polarization, logical zero and one can be encoded via the horizontal and vertical states of a photon, e.g. $|0\rangle_L = |H\rangle$ and $|1\rangle_L = |V\rangle$. Given this encoding, single qubit operations can be performed using waveplates. This encoding can be used with the Browne-Rudolph protocol. Nielsen protocol In 2004, Nielsen proposed a protocol to create cluster states, borrowing techniques from the Knill-Laflamme-Milburn protocol (KLM protocol) to probabilistically create controlled-Z connections between qubits, which, when performed on a pair of $|+\rangle$ states, gives $|0\rangle|+\rangle + |1\rangle|-\rangle$ (normalization being ignored) and forms the basis for cluster states. While the KLM protocol requires error correction and a fairly large number of modes in order to get a very high probability two-qubit gate, Nielsen's protocol only requires a success probability per gate of greater than one half. Given that the success probability for a connection using $n$ ancilla photons is $\frac{n^2}{(n+1)^2}$, relaxation of the success probability from nearly one to anything over one half presents a major advantage in resources, as well as simply reducing the number of required elements in the photonic circuit. To see how Nielsen brought about this improvement, consider the photons being generated for qubits as vertices on a two dimensional grid, and the controlled-Z operations as probabilistically added edges between nearest neighbors. Using results from percolation theory, it can be shown that as long as the probability of adding edges is above a certain threshold, there will exist a complete grid as a sub-graph with near unit probability. Because of this, Nielsen's protocol doesn't rely on every individual connection being successful, just enough of them that the connections between photons allow a grid.
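This percolation argument can be illustrated numerically: treat the qubits as sites of an L × L grid, keep each nearest-neighbor bond independently with probability p, and ask whether a connected component linking opposite edges survives. For bond percolation on the square lattice the threshold is p = 1/2, which is consistent with a per-gate success probability above one half sufficing. A rough Monte Carlo sketch, where the grid size, trial count, and function names are arbitrary choices of ours:

```python
import random

def spans(L, p, rng):
    """Keep each nearest-neighbor bond of an L x L grid with probability p;
    return True if one connected component touches both left and right edges."""
    parent = list(range(L * L))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for r in range(L):
        for c in range(L):
            if c + 1 < L and rng.random() < p:
                parent[find(r * L + c)] = find(r * L + c + 1)
            if r + 1 < L and rng.random() < p:
                parent[find(r * L + c)] = find((r + 1) * L + c)
    left = {find(r * L) for r in range(L)}
    return any(find(r * L + L - 1) in left for r in range(L))

rng = random.Random(1)
for p in (0.4, 0.5, 0.6):
    hits = sum(spans(40, p, rng) for _ in range(200))
    print(f"p = {p}: spanning fraction {hits / 200:.2f}")  # jumps near p = 0.5
```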
Yoran-Reznik protocol Among the first proposals of utilizing resource states for optical quantum computing was the Yoran-Reznik protocol in 2003. While the resource proposed in this protocol was not exactly a cluster state, it brought many of the same key concepts to the attention of those considering the possibilities of optical quantum computing and still required connecting multiple separate one-dimensional chains of entangled photons via controlled-Z operations. This protocol is somewhat unique in that it utilizes both the spatial mode degree of freedom along with the polarization degree of freedom to help entanglement between qubits. Given a horizontal path, denoted by $h$, and a vertical path, denoted by $v$, a 50:50 beam splitter connecting the paths followed by a $\pi$-phase shifter on path $h$, we can perform the transformations (up to phase convention) $|\sigma\rangle|h\rangle \to \frac{1}{\sqrt{2}}\left(|\sigma\rangle|h\rangle + |\sigma\rangle|v\rangle\right)$ and $|\sigma\rangle|v\rangle \to \frac{1}{\sqrt{2}}\left(|\sigma\rangle|h\rangle - |\sigma\rangle|v\rangle\right)$, where $|\sigma\rangle|p\rangle$ denotes a photon with polarization $\sigma$ on path $p$. In this way, we have the path of the photon entangled with its polarization. This is sometimes referred to as hyperentanglement, a situation in which the degrees of freedom of a single particle are entangled with each other. This, paired with the Hong-Ou-Mandel effect and projective measurements on the polarization state, can be used to create path entanglement between photons in a linear chain. These one-dimensional chains of entangled photons still need to be connected via controlled-Z operations, similar to the KLM protocol. These controlled-Z connections between chains are still probabilistic, relying on measurement dependent teleportation with special resource states. However, due to the fact that this method does not include Fock measurements on the photons being used for computation as the KLM protocol does, the probabilistic nature of implementing controlled-Z operations presents much less of a problem. In fact, as long as connections occur with probability greater than one half, the entanglement present between chains will be enough to perform useful quantum computation, on average. Browne-Rudolph protocol An alternative approach to building cluster states that focuses entirely on photon polarization is the Browne-Rudolph protocol. This method rests on performing parity checks on a pair of photons to stitch together already entangled sets of photons, meaning that this protocol requires entangled photon sources. Browne and Rudolph proposed two ways of doing this, called type-I and type-II fusion. Type-I fusion In type-I fusion, photons with either vertical or horizontal polarization are injected into modes $a$ and $b$, connected by a polarizing beam splitter. Each of the photons sent into this system is part of a Bell pair that this method will try to entangle. Upon passing through the polarizing beam splitter, the two photons will go opposite ways if they have the same polarization or the same way if they have the opposite polarization, e.g. $|H\rangle_a |H\rangle_b \to |H\rangle_a |H\rangle_b$ or $|H\rangle_a |V\rangle_b \to |H\rangle_a |V\rangle_a$. Then on one of these modes, a projective measurement onto the basis $\left\{\frac{1}{\sqrt{2}}(|H\rangle + |V\rangle), \frac{1}{\sqrt{2}}(|H\rangle - |V\rangle)\right\}$ is performed. If the measurement is successful, i.e. if it detects anything, then the detected photon is destroyed, but the remaining photons from the Bell pairs become entangled. Failure to detect anything results in an effective loss of the involved photons in a way that breaks any chain of entangled photons they were on. This can make attempting to make connections between already developed chains potentially risky. Type-II fusion Type-II fusion works similarly to type-I fusion, with the differences being that a diagonal polarizing beam splitter is used and the pair of photons is measured in the two-qubit Bell basis.
A successful measurement here involves measuring the pair to be in a Bell state with no relative phase between the superposition of states (e.g. $\frac{1}{\sqrt{2}}(|HH\rangle + |VV\rangle)$ as opposed to $\frac{1}{\sqrt{2}}(|HH\rangle - |VV\rangle)$). This again entangles any two clusters already formed. A failure here performs local complementation on the local subgraph, making an existing chain shorter rather than cutting it in half. In this way, while it requires the use of more qubits in combining entangled resources, the potential losses for attempts to connect two chains together are not as expensive for type-II fusion as they are for type-I fusion. Computing with cluster states Once a cluster state has been successfully generated, computation can be done with the resource state directly by applying measurements to the qubits on the lattice. This is the model of measurement-based quantum computation (MQC), and it is equivalent to the circuit model. Logical operations in MQC come about from the byproduct operators that occur during quantum teleportation. For example, given a single qubit state $|\psi\rangle$, one can connect this qubit to a plus state $|+\rangle$ via a two-qubit controlled-Z operation. Then, upon measuring the first qubit (the original $|\psi\rangle$) in the Pauli-X basis, the original state of the first qubit is teleported to the second qubit with a measurement outcome dependent extra rotation, which one can see from the partial inner product of the measurement acting on the two-qubit state: $\langle \pm|_1 \, CZ \left(|\psi\rangle_1 \otimes |+\rangle_2\right) = \tfrac{1}{\sqrt{2}}\, X^m H |\psi\rangle_2$, for $m \in \{0, 1\}$ denoting the measurement outcome as either the $+1$ eigenstate of Pauli-X for $m = 0$ or the $-1$ eigenstate for $m = 1$. A two qubit state $|\psi\rangle$ connected by a pair of controlled-Z operations to the two-qubit cluster state $CZ\left(|+\rangle \otimes |+\rangle\right)$ yields a two-qubit operation on the teleported state after measuring the original qubits: the result is $CZ \left(X^{m_1} H \otimes X^{m_2} H\right) |\psi\rangle$ (up to normalization), for measurement outcomes $m_1$ and $m_2$. This basic concept extends to arbitrarily many qubits, and thus computation is performed by the byproduct operators of teleportation down a chain. Adjusting the desired single-qubit gates is simply a matter of adjusting the measurement basis on each qubit, and non-Pauli measurements are necessary for universal quantum computation.
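The single-qubit teleportation identity above is easy to verify numerically. A minimal NumPy sketch, in which the test state is an arbitrary choice of ours: apply CZ to $|\psi\rangle \otimes |+\rangle$, project qubit 1 onto an X-basis outcome $m$, and compare qubit 2 with $X^m H |\psi\rangle$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]])                 # Pauli-X
CZ = np.diag([1.0, 1, 1, -1])                  # controlled-Z
plus = np.array([1, 1]) / np.sqrt(2)

psi = np.array([0.6, 0.8j])                    # arbitrary normalized test state
state = CZ @ np.kron(psi, plus)                # entangle |psi> with |+>

for m, sign in enumerate((1, -1)):             # X outcomes: +1 (m=0), -1 (m=1)
    x_eig = np.array([1, sign]) / np.sqrt(2)
    out = x_eig.conj() @ state.reshape(2, 2)   # project qubit 1, keep qubit 2
    out /= np.linalg.norm(out)
    expected = np.linalg.matrix_power(X, m) @ H @ psi
    # overlap magnitude is 1 (up to a global phase) for both outcomes
    print(m, round(abs(np.vdot(expected, out)), 6))
```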
Experimental Implementations Spatial encoding Path-entangled two qubit states have been generated in laboratory settings on silicon photonic chips in recent years, making important steps in the direction of generating optical cluster states. Among methods of doing this, it has been shown experimentally that spontaneous four-wave mixing can be used with the appropriate use of microring resonators and other waveguides for filtering to perform on-chip generation of two-photon Bell states, which are equivalent to two-qubit cluster states up to local unitary operations. To do this, a short laser pulse is injected into an on-chip waveguide that splits into two paths. This forces the pulse into a superposition of the possible directions it could go. The two paths are coupled to microring resonators that allow circulation of the laser pulse until spontaneous four-wave mixing occurs, taking two photons from the laser pulse and converting them into a pair of photons, called the signal and idler, with different frequencies in a way that conserves energy. In order to prevent the generation of multiple photon pairs at once, the procedure takes advantage of the conservation of energy and ensures that there is only enough energy in the laser pulse to create a single pair of photons. Because of this restriction, spontaneous four-wave mixing can only occur in one of the microring resonators at a time, meaning that the superposition of paths that the laser pulse could take is converted into a superposition of paths the two photons could be on. Mathematically, if $|\alpha\rangle$ denotes the laser pulse and the paths are labeled as $a$ and $b$, the process can be written as $\frac{1}{\sqrt{2}}\left(|\alpha\rangle_a + |\alpha\rangle_b\right) \to \frac{1}{\sqrt{2}}\left(|1_s 1_i\rangle_a |0\rangle_b + |0\rangle_a |1_s 1_i\rangle_b\right)$, where $|1_s 1_i\rangle_p$ is the representation of having one signal and one idler photon on path $p$. With the state of the two photons being in this kind of superposition, they are entangled, which can be verified by tests of Bell inequalities. Polarization encoding Polarization entangled photon pairs have also been produced on-chip. The setup involves a silicon wire waveguide that is split in half by a polarization rotator. This process, like the entanglement generation described for the dual rail encoding, makes use of the nonlinear process of spontaneous four-wave mixing, which can occur in the silicon wire on either side of the polarization rotator. However, the geometry of these wires is designed such that horizontal polarization is preferred in the conversion of laser pump photons to signal and idler photons. Thus when the photon pair is generated, both photons should have the same polarization, i.e. $|H\rangle|H\rangle$. The polarization rotator is then designed with the specific dimensions such that horizontal polarization is switched to vertical polarization. Thus any pairs of photons generated before the rotator exit the waveguide with vertical polarization and any pairs generated on the other end of the wire exit the waveguide still having horizontal polarization. Mathematically, the process is, up to overall normalization, $|\alpha\rangle \to |V\rangle|V\rangle + |H\rangle|H\rangle$. Assuming that equal space on each side of the rotator makes spontaneous four-wave mixing equally likely on each side, the output state of the photons is maximally entangled: $\frac{1}{\sqrt{2}}\left(|V\rangle|V\rangle + |H\rangle|H\rangle\right)$. States generated this way could potentially be used to build a cluster state using the Browne-Rudolph protocol. References Quantum information science Quantum optics
Optical cluster state
Physics
2,424
43,616,736
https://en.wikipedia.org/wiki/Rangil%20water%20treatment%20plant
Rangil water treatment plant is situated on Rangil mountain in Ganderbal district, about 21 km from the commercial centre of Kashmir. The water project was inaugurated by Farooq Abdullah on 15 March 2010 and its cost was estimated at Rs 31 crore. Construction The project was completed by the Economic Reconstruction Agency (ERA) and was handed over to the Public Health Engineering department of Kashmir on the same day. The main aim of the project was to provide fresh drinking water to the people of Srinagar. The construction company under the Rangil water supply project has laid a 51.63 km long pipeline, which has been tested for efficiency across the city. Features The filtration plant has a water storing capacity of 10 MGD (million gallons per day). The water in the plant comes from the Sind River, which runs through the heart of Ganderbal district. The water treatment plant is also installed with electromagnetic and mechanical flow meters at the control room of the plant and at bifurcation points to regulate water supply as per demand. Treatment process The water entering the plant first undergoes screening, and after that alum is added to aid sedimentation. The settled water enters 10 filtration units, from where it is transported for disinfection using chlorine in the disinfection unit. The water thus obtained is stored in storage reservoirs for supply purposes. See also 1 Gallon = 3.785 Litres References Ganderbal district Water treatment facilities
Rangil water treatment plant
Chemistry
292
6,301,946
https://en.wikipedia.org/wiki/Mont%20M%C3%A9gantic%20Observatory
The Mont Mégantic Observatory (, ; OMM) is an astronomical observatory owned and operated jointly by the Université de Montréal (UdeM) and the Université Laval (ULaval). Founded in 1978, the observatory houses the second largest telescope in Eastern Canada after David Dunlap Observatory near Toronto. It is situated at the summit of Mont Mégantic, the highest point of Eastern Canada accessible by car. OMM is about east of Sherbrooke and east of Montreal. The asteroid 4843 Mégantic is named for the observatory. Telescope The Ritchey-Chrétien telescope is equipped with a complement of modern instruments. Imaging, spectroscopy, and polarimetry are routinely conducted at both visible and infrared wavelengths. Light pollution Efforts to control local light pollution, about one-quarter of which is due to the nearby city of Sherbrooke, have led to the establishment of the world's first International Dark-Sky Association (IDA) Dark Sky preserve around the observatory, covering some 5500 square km (2123 square miles). ASTROLab ASTROLab is an astronomy activity centre operated by the Parc national du Mont-Mégantic. There are interactive displays about the history of the Universe, the Earth and life. Visitors can take guided daytime tours of ASTROLab and the Mount Megantic Observatory. There are also astronomy evenings, an astronomy festival, and the Perseid Festival. Solar eclipse of April 8th 2024 The observatory is near the center path of totality of the solar eclipse of April 8, 2024. See also List of astronomical observatories List of largest optical reflecting telescopes References External links Homepage of the Observatoire du Mont-Mégantic Mont Mégantic Observatory Clear Sky Clock Forecasts of observing conditions. Astrolab du parc national du Mont-Mégantic Information about visiting the ASTROlab during the day. 360 interactive panorama featuring Mont Mégantic Observatory Astronomical observatories in Canada Science museums in Canada Museums in Estrie Dark-sky preserves in Canada Tourist attractions in Estrie Buildings and structures in Estrie
Mont Mégantic Observatory
Astronomy
415
37,987,767
https://en.wikipedia.org/wiki/Vertical%20electrical%20sounding
Vertical electrical sounding (VES) is a geophysical method for investigation of a geological medium. The method is based on the estimation of the electrical conductivity or resistivity of the medium. The estimation is performed based on the measurement of the voltage of the electrical field induced by distant grounded electrodes (current electrodes). Measurements Figures 1–4 show the possible configurations of the measurement setup. The electrodes A and B are current electrodes which are connected to a current source; N and M are potential electrodes which are used for the voltage measurements. As source, direct current or low frequency alternating current is used. The interpretation of the measurements can be performed based on the apparent resistivity values. The depth of investigation depends on the distance between the current electrodes. In order to obtain the apparent resistivity as a function of depth, the measurements for each position are performed with several different distances between the current electrodes. The apparent resistivity is calculated as $\rho_k = k \, \frac{\Delta U_{MN}}{I_{AB}}$; here, k is a geometric factor, $\Delta U_{MN}$ is the voltage between electrodes M and N, and $I_{AB}$ is the current in the line AB. The geometric factor is defined by $k = \frac{2\pi}{\frac{1}{r_{AM}} - \frac{1}{r_{BM}} - \frac{1}{r_{AN}} + \frac{1}{r_{BN}}}$; here $r_{XY}$ is the distance between electrodes X and Y. Interpretation of gathered data is performed based on the dependency ρk(AB/2). The application of large electrode arrays allows for reconstructing the complex 3D structure of geological media (see Electrical resistivity tomography). However, the interpretation of such measurements is rather difficult. In this case, advanced interpretation techniques based on numerical methods can be applied. Numerical calculation freeware Solution of inverse problem Solution of forward problem See also Electrical resistivity tomography (ERT) Magnetotellurics Seismo-electromagnetics Telluric current References Geophysical imaging Inverse problems
Vertical electrical sounding
Mathematics
347
59,866,656
https://en.wikipedia.org/wiki/Building%20consent%20authority
Building consent authorities (BCAs) are officials who enforce New Zealand's regulatory building control system. The New Zealand Building Act 2004 sets out a registration and accreditation scheme and technical reviews. The Act creates operational roles for BCAs. Authorities The following are the approved building consent authorities listed on the MBIE Register: Note that the register lists 80 BCAs but some of these are former territorial authorities that have been amalgamated into Auckland Council (such as Franklin District Council and North Shore City Council). Building consents on the Chatham Islands are contracted out to Wellington City Council, and large dams on the Chathams to Environment Canterbury. In addition to the regional and territorial authorities, Housing New Zealand made a decision in 2019 to establish Consentium, a national BCA in Kāinga Ora that is responsible for building consents for public housing (up to and including four storeys) across New Zealand that Kāinga Ora intends to retain. Consentium achieved Accreditation in November 2020 and Registration in March 2021. Ashburton District Council Auckland Council Banks Peninsula District Council Buller District Council Carterton District Council Central Hawkes Bay District Council Central Otago District Council Christchurch City Council Clutha District Council Consentium, a division of Kāinga Ora Dunedin City Council Environment Canterbury Environment Waikato Far North District Council Gisborne District Council Gore District Council Grey District Council Hamilton City Council Hastings District Council Hauraki District Council Horowhenua District Council Hurunui District Council Hutt City Council Invercargill City Council Kaikōura District Council Kaipara District Council Kapiti Coast District Council Kawerau District Council MacKenzie District Council Manawatu District Council Marlborough District Council Masterton District Council Matamata-Piako District Council Napier City Council Nelson City Council New Plymouth District Council Northland District Council Opotiki District Council Otago Regional Council Otorohanga District Council Palmerston North City Council Porirua City Council Queenstown Lakes District Council Rangitikei District Council Rotorua District Council Ruapehu District Council Selwyn District Council South Taranaki District Council South Waikato District Council South Wairarapa District Council Southland District Council Stratford District Council Tararua District Council Tasman District Council Taupo District Council Tauranga City Council Thames-Coromandel District Council Timaru District Council Upper Hutt City Council Waikato District Council Waikato Regional Council Waimakariri District Council Waimate District Council Waipa District Council Wairoa District Council Waitaki District Council Waitomo District Council Wellington City Council Western Bay of Plenty District Council Westland District Council Whakatane District Council Whanganui District Council Whangarei District Council References Local government in New Zealand Urban planning
Building consent authority
Engineering
539
52,572,062
https://en.wikipedia.org/wiki/Legion%20Hacktivist%20Group
Legion is a hacktivist group that has attacked some rich and powerful people in India by hacking their Twitter handles. The group claims to have access to many email servers in India and to hold the encryption keys used by Indian banks over the Internet. History India attacks (2019) Legion came into the news when it launched a series of attacks starting with Rahul Gandhi, a member of the Indian National Congress. Reports say that not only was Rahul Gandhi's Twitter handle hacked, but his mail server was compromised as well. The very next day, the INC's Twitter handle was also hacked and tweeted irrelevant content. The group then hacked the Twitter handles of Vijay Mallya, Barkha Dutt and Ravish Kumar. Hacking of the Russian government (2021) Because the Russian government tried to censor Telegram in 2018–2020, the Legion hacker group hacked a subdomain belonging to the Federal Antimonopoly Service. They did not cause major harm, but they posted a message to the Russian government stating that "The vandalism and destruction Roskomnadzor has caused to internet privacy and Russian anonymity has made them a target of Legion." The text was removed after 16 hours but is still available via the Wayback Machine. References Advocacy groups Internet-based activism Internet terminology 2000s neologisms Culture jamming techniques Hacker culture Hacker groups
Legion Hacktivist Group
Technology
275
48,393,600
https://en.wikipedia.org/wiki/Foturan
Foturan (notation of the manufacturer: FOTURAN) is a photosensitive glass by SCHOTT Corporation developed in 1984. It is a technical glass-ceramic which can be structured without photoresist when it is exposed to shortwave radiation such as ultraviolet light and subsequently etched. In February 2016, Schott announced the introduction of Foturan II at Photonics West. Foturan II is characterized by a higher homogeneity of the photosensitivity, which allows finer microstructures. Composition and Properties Foturan is a lithium aluminosilicate glass system doped with small amounts of silver oxides and cerium oxides. Processing Foturan can be structured via UV exposure, tempering and etching: crystal nuclei grow in Foturan when it is exposed to UV light and heat-treated afterwards. The crystallized areas react much faster with hydrofluoric acid than the surrounding vitreous material, resulting in very fine microstructures, tight tolerances and high aspect ratios. Exposure If Foturan is exposed to light in the ultraviolet range with a wavelength of 320 nm (optionally via photomask, contact lithography or proximity lithography to expose certain patterns), a chemical reaction is started in the exposed areas: the cerium it contains transforms from Ce3+ into Ce4+ and frees an electron. Tempering During the nucleation tempering (~500 °C), the silver ion Ag+ is reduced to Ag0 by scavenging the electron released from Ce3+. This activates the agglomeration of atomic silver to form nanometer-scale silver clusters. During the subsequent crystallization tempering (~560–600 °C), lithium metasilicate (Li2SiO3 glass-ceramic) forms on the silver-cluster nuclei in the exposed areas. The unexposed glass, otherwise amorphous, remains unchanged. Etching After tempering, the crystallized areas can be etched with hydrofluoric acid 20 times faster than the unexposed, still amorphous glass. Thus, structures with an aspect ratio of ca. 10:1 can be created. Ceramization (Optional) After etching, a ceramization of the entire substrate after a second UV exposure and thermal treatment is possible. The crystalline phase in this stage is lithium disilicate, Li2Si2O5. Product characteristics Small structure size: structure sizes of ~25 μm are possible High aspect ratio: etching ratios of >20:1 make aspect ratios of >10:1 and a wall angle of ~1–2° possible High optical transmission in the visible and non-visible spectrum: more than 90% transmission (substrate thickness 1 mm) between 350 nm and 2,700 nm High temperature resistance: Tg > 450 °C Pore-free: suitable for biotech / microfluidics applications Low self-fluorescence Hydrolytic resistance (acc. to DIN ISO 719): HGB 4 Acid resistance (acc. to DIN 12116): S 1 Alkali resistance (acc. to DIN ISO 695): A 2 Foturan in the scientific community Foturan is a widely known material in the material science community. As of October 30, 2015, Google Scholar showed more than 1,000 results for Foturan in scholarly literature across an array of publishing formats and disciplines. Many of those deal with topics such as Micromachining Foturan 3D / laser direct writing in Foturan Using Foturan for optical waveguides Using Foturan for volume gratings Processing Foturan via excimer / femtosecond laser Applications Foturan is mainly used for microstructure applications, where small and complex structures have to be created out of a solid and robust base material. 
Overall there are five main areas for which Foturan is used: Microfluidics / Biotech (such as lab-on-a-chip or organ-on-a-chip components, micromixers, microreactors, printheads, titer plates, chip electrophoresis) Semiconductor (such as FED spacers, packaging elements or interposers for IC components, CMOS or memory modules) Sensors (such as flow or temperature sensors, gyroscopes or accelerometers) RF / MEMS (such as substrates or packaging elements for antennas, capacitors, filters, duplexers, switches or oscillators) Telecom (such as optical alignment chips, optical waveguides or optical interconnects) By thermal diffusion bonding it is possible to bond multiple Foturan layers on top of each other to create complex three-dimensional microstructures. References External links Glass types Glass-ceramics Glass trademarks and brands Transparent materials German brands
Foturan
Physics
1,013
51,793,263
https://en.wikipedia.org/wiki/Air-jet%20loom
An air-jet loom is a shuttleless loom that uses a jet of air to propel the weft yarn through the warp shed. It is one of two types of fluid-jet looms, the other being a water-jet loom, which was developed previously. Fluid-jet looms can operate at a faster speed than predecessor looms such as rapier looms, but they are not as common. The machinery used in fluid-jet weaving consists of a main nozzle, auxiliary or relay nozzles, and a profile reed. Air-jet looms are capable of producing standard household and apparel fabrics for items such as shirts, denim, sheets, towels, and sports apparel, as well as industrial products such as printed circuit board cloths. Heavier yarns are more suitable for air-jet looms than lighter yarns. Air-jet looms are capable of weaving plaids, as well as dobby and jacquard fabrics. Method In an air-jet loom, yarn is pulled from the supply package, and the measuring disc removes a length of yarn equal to the width of the fabric being woven. A clamp holds the yarn, and an auxiliary air nozzle forms it into the shape of a hairpin. The main nozzle blows the yarn, the clamp opens, and the yarn is carried through the shed. At the end of the insertion cycle, the clamp closes, the yarn is beaten in and cut, and the shed is closed. The jets are electronically controlled, with an integrated database. Research has been done to analyze factors that contribute to compressed air use, a major source of energy consumption in air-jet looms. History and production The air-jet loom was invented in Czechoslovakia in the 1950s by Vladimír Svatý and was later refined by Swiss, Dutch, and Japanese companies. Companies that produce air-jet looms include Toyota Industries and Tsudakoma, both based in Japan; RIFA (PICKWELL in India), based in China; Picanol, based in Belgium; Dornier, based in Germany; and Itema, based in Italy. References Further reading Gas technologies Shuttleless looms
Air-jet loom
Engineering
461
50,659,312
https://en.wikipedia.org/wiki/John%20Hadley%20%28philosopher%29
John Hadley (born 27 September 1966) is an Australian philosopher whose research concerns moral and political philosophy, including animal ethics, environmental ethics, and metaethics. He is currently a senior lecturer in philosophy in the School of Humanities and Communication Arts at Western Sydney University. He has previously taught at Charles Sturt University and the University of Sydney, where he studied as an undergraduate and doctoral candidate. In addition to a variety of articles in peer-reviewed journals and edited collections, he is the author of the 2015 monograph Animal Property Rights (Lexington Books) and the 2019 monograph Animal Neopragmatism (Palgrave Macmillan). He is also the co-editor, with Elisa Aaltola, of the 2015 collection Animal Ethics and Philosophy (Rowman & Littlefield International). Hadley is known for his account of animal property rights theory. He proposes that wild animals be offered property rights over their territories, and that guardians be appointed to represent their interests in decision-making procedures. He suggests that this account could be justified directly, on the basis of the interests of the animals concerned, or indirectly, so that natural environments are protected. The theory has received discussion in popular and academic contexts, with critical responses from farming groups and mixed responses from moral and political theorists. Other work has included a defence of a neopragmatist approach to animal ethics, along with criticism of the metaethical and metaphilosophical assumptions of mainstream animal ethicists. Hadley has also conducted research on normative issues related to animal rights extremism, the aiding of others, and utilitarianism. Career Hadley read for a Bachelor of Arts and doctorate in philosophy at the University of Sydney (USYD). His doctoral thesis was supervised by Caroline West, in USYD's Department of Philosophy, and was submitted in 2006 under the title of Animal Property: Reconciling Ecological Communitarianism and Species-egalitarian Liberalism. During his doctoral research, the "basic elements" of his animal property rights theory were "first assembled", leading to the publication of "Nonhuman Animal Property: Reconciling Environmentalism and Animal Rights" in the Journal of Social Philosophy. During this time, he also published in the Journal of Value Inquiry, Philosophy in the Contemporary World, and the Journal of Applied Philosophy, as well as working as a lecturer in the USYD philosophy department and a guest lecturer for the USYD Laboratory Animal Services. After his PhD, Hadley worked as a lecturer in communication ethics in the Charles Sturt University (CSU) School of Communication and a lecturer in philosophy at the CSU School of Humanities and Social Sciences. He then joined the University of Western Sydney School of Humanities and Communication Arts, first as a lecturer in philosophy, and then as a senior lecturer in philosophy. Animal Ethics and Philosophy: Questioning the Orthodoxy, a collection edited by Hadley with the Finnish philosopher Elisa Aaltola, was published in 2015 by Rowman & Littlefield International. The book aimed to move debate in animal ethics beyond developing extensionist accounts and to examine the metaphilosophical and metaethical problems with extensionist accounts. 
Hadley's own contribution drew attention to a perceived inconsistent triad in animal rights philosophy: the idea that moral status is determined by psychological factors (like sentience), and not species; that human and nonhuman animals are of the same kind; and that genomic plasticity offers the best explanation for change in natural selection. In the same year, Hadley published a monograph with Lexington Books entitled Animal Property Rights: A Theory of Territory Rights for Wild Animals. The book, partially building upon his doctoral research, presents a large amount of new material on Hadley's animal property rights theory. A second monograph, Animal Neopragmatism, was published in 2019 by Palgrave Macmillan. This presented a neopragmatist approach to animal ethics. Research Animal property rights Hadley is known for his theory of animal property rights, according to which animals should be afforded property rights over their territory. Hadley has developed his theory of animal property rights through his doctoral research, his 2015 monograph, and other academic works. In addition, he has authored popular articles on the subject for The Guardian, The Conversation and The Ethics Centre. He also discussed the topic on Knowing Animals, a podcast series produced by Siobhan O'Sullivan. His proposal has received attention in the popular press, with strong criticism from farmers' groups and journalists writing on rural affairs. The practical side of Hadley's proposal rests on two key principles: a guardianship system, according to which knowledgeable guardians would be appointed to represent animal property holders in land management decision-making, and the use of animals' territory-marking behaviour to determine the limits of their property. Hadley rejects first occupancy and labour-mixing accounts of appropriation, and instead suggests that there are two ways that his account might fruitfully be justified. First, it might be justified directly, with reference to the interests of animals. This relies upon the fact that wild animals require their territory in order to satisfy their basic needs and the claim that this results in an interest in territory strong enough to ground a right. If animals have a right to use their territory, Hadley claims, then they necessarily have a property right in that territory. Second, it might be justified indirectly, as animals (of some species, at least) might be given property rights as a means of protecting natural environments. Hadley presents his proposal against the backdrop of an explicit pragmatism, and holds that animal property rights theory has the potential to reconcile animal and environmental ethics. Hadley's proposal has been placed in the context of the "political turn" in animal ethics; the emergence of animal ethics literature focused on justice. Another academic who has proposed that wild animals be afforded property rights over their habitats is the British philosopher Steve Cooke. Like Hadley, he utilises an interest-based account of animal rights, but, unlike Hadley, he suggests that sovereignty would be an appropriate tool to protect animals' interest in their habitat if property fails. Other theorists exploring the normative aspects of human relationships with wild animals explicitly deny that they are extending property rights to animals. 
The US-based ethicist Clare Palmer, for instance, argues for a duty to respect wild animals' space, but claims that arguing for a property right for these animals would be "difficult", and instead bases her account on the fact that human actions can make animals "painful, miserable and vulnerable". The Canadian theorists Sue Donaldson and Will Kymlicka are critical of Hadley's proposal to extend property rights to animals, claiming that property rights are insufficient to protect animals' interests. Instead, they argue that animals should be considered sovereign over their territories. They write: "It is one thing to say that a bird has a property right in its nest, or that a wolf has a property right in its den – specific bits of territory used exclusively by one animal family. But the habitat that animals need to survive extends far beyond such specific and exclusive bits of territory – animals often need to fly or roam over vast territories shared by many other animals. Protecting a bird's nest is of little help if the nearby watering holes are polluted, or if tall buildings block its flight path. It's not clear how ideas of property rights can help here." They also compare the possibility of extending property rights to animals to the approach of European colonists, who were prepared to extend property, but not sovereignty, rights to native peoples, resulting in oppression. Hadley, however, is himself critical of Donaldson and Kymlicka's sovereignty proposal, though the British philosopher Josh Milburn suggests that the proposals may not be as far apart as the authors indicate. The British political theorist Alasdair Cochrane also questions the extension of property rights to animals in his Animal Rights Without Liberation. Though describing Hadley's proposal as "ingenious", he criticises it on two grounds. First, he questions Hadley's claim of a relationship between property and basic needs, and, second, denies that animal property rights would appease environmentalists, given that they would allow the destruction of environments which do not contain sentient animals. However, in his Sentientist Politics, Cochrane includes animal property rights as part of his critique of Donaldson and Kymlicka's sovereignty model, writing that it "seems perfectly possible to argue, as John Hadley and others have, that wild animals ought to be granted habitat or property rights over their territories". In a book review, Milburn stresses the significance of Hadley's theory, but questions the extent to which the implementation of animal property rights would be desirable without the achievement of other animal rights and the extent to which Hadley's account is genuinely about property rights. 
The latter is that, given metatheoretical assumptions of contemporary animal ethicists (especially moral realism), any attempt to extend discussion of welfare beyond feelings is met with the accusation that the subject is being changed: hence Hadley's earlier exploration of the "changing the subject problem". In response to these problems, Hadley outlines his vision of "relational hedonism", according to which a concern for the pain of animals underlies a broader concern that extends beyond a narrow sense of animal welfare, and endorses both experiential pluralism (welfare can be affected by things other than pleasure and pain) and expressivism. The theory of "animal neopragmatism", Hadley argues, is able to overcome metalevel problems in mainstream animal rights theory. Other research Hadley has considered the ethics of humans' relationships with wild animals and environments beyond his property rights theory. He argues that there is a duty to aid wild animals in need, and that these duties are essentially no different to humans' duties to aid distant strangers who are severely cognitively impaired. He argues that libertarian property rights, consistent with Robert Nozick's interpretation of the Lockean proviso, should limit the right to destroy human-owned natural environments, and has elsewhere explored libertarian theory's denial of moral powers (including the power to acquire property) to animals. Hadley has conducted research on animal rights extremism, concluding that the phenomenon is a complex one, and that a full understanding of individual extremists' intentions and targets are necessary to understand the ethical acceptability of extremist acts and whether such acts are appropriately classified as terrorism. He holds that while direct action should be tolerated in liberal democracies, this toleration should not extend to certain campaigning tactics used by extremists, such as threat-making. With O'Sullivan, Hadley has conducted research on utilitarianism and the relationship between obligations to animals and obligations to needy humans. The scholars argue that there is a conflict in Singer's philosophy between the obligation to aid needy humans and to protect animals, and that Westerners who own pets should, rather than spending large amounts of money extending the lives of their companions, euthanise severely ill animals and instead donate money to aiding those in the developing world. Hadley has been critical of the views of Tibor Machan and J. Baird Callicott. He has also written on J. M. Coetzee, the ethics of "disenhancing" animals, the ethics of animal testing, the relationship of self-defence theory to abortion and animal ethics, and the ethics of street photography. Selected publications References Cited texts Hadley, John (2019). Animal Neopragmatism. Basingstoke, United Kingdom: Palgrave Macmillan Further reading External links John Hadley at the University of Western Sydney John Hadley at Academia.edu John Hadley at Google Scholar John Hadley on The Conversation 1966 births Living people Animal ethicists Australian animal rights scholars Australian ethicists 20th-century Australian philosophers 21st-century Australian philosophers Environmental ethicists Academics from Sydney University of Sydney alumni Academic staff of Western Sydney University Pragmatists
John Hadley (philosopher)
Environmental_science
2,544
24,081,784
https://en.wikipedia.org/wiki/List%20of%20types%20of%20poison
The following is a list of types of poison by intended use: Biocide – a chemical substance capable of killing living organisms, usually in a selective way Fungicide – a chemical compound or biological organism used to kill or inhibit fungi or fungal spores Microbicide – any compound or substance whose purpose is to reduce the infectivity of microbes Germicide – a disinfectant Bactericide – a substance that kills bacteria Viricide – a chemical agent which "kills" viruses outside the body Herbicide – a substance used to kill unwanted plants Parasiticide – any substance used to kill parasites Pesticide – a substance or mixture of substances used to kill a pest Acaricide – pesticides that kill mites Insecticide – a pesticide used against insects Molluscicide – pesticides against molluscs Nematocide – a type of chemical pesticide used to kill parasitic nematodes (roundworms) Rodenticide – a category of pest control chemicals intended to kill rodents Spermicide – a substance that kills sperm Poisons
List of types of poison
Environmental_science
213
669,992
https://en.wikipedia.org/wiki/Star%20height
In theoretical computer science, more precisely in the theory of formal languages, the star height is a measure for the structural complexity of regular expressions and regular languages. The star height of a regular expression equals the maximum nesting depth of stars appearing in that expression. The star height of a regular language is the least star height of any regular expression for that language. The concept of star height was first defined and studied by Eggan (1963). Formal definition More formally, the star height of a regular expression E over a finite alphabet A is inductively defined as follows: h(∅) = 0, h(ε) = 0, and h(a) = 0 for all alphabet symbols a in A; h(EF) = h(E | F) = max(h(E), h(F)); and h(E*) = h(E) + 1. Here, ∅ is the special regular expression denoting the empty set and ε the special one denoting the empty word; E and F are arbitrary regular expressions. The star height h(L) of a regular language L is defined as the minimum star height among all regular expressions representing L. The intuition here is that if the language L has large star height, then it is in some sense inherently complex, since it cannot be described by means of an "easy" regular expression, of low star height. Examples While computing the star height of a regular expression is easy, determining the star height of a language can sometimes be tricky. For illustration, the regular expression (a*b*)*a over the alphabet A = {a,b} has star height 2. However, the described language is just the set of all words ending in an a: thus the language can also be described by the expression (a|b)*a, which is only of star height 1. To prove that this language indeed has star height 1, one still needs to rule out that it could be described by a regular expression of lower star height. For our example, this can be done by an indirect proof: one proves that a language of star height 0 contains only finitely many words. Since the language under consideration is infinite, it cannot be of star height 0. The star height of a group language is computable: for example, the star height of the language over {a,b} in which the number of occurrences of a and b are congruent modulo 2^n is n. Eggan's theorem In his seminal study of the star height of regular languages, Eggan (1963) established a relation between the theories of regular expressions, finite automata, and directed graphs. In subsequent years, this relation became known as Eggan's theorem. We recall a few concepts from graph theory and automata theory. In graph theory, the cycle rank r(G) of a directed graph (digraph) G = (V, E) is inductively defined as follows: If G is acyclic, then r(G) = 0. This applies in particular if G is empty. If G is strongly connected and E is nonempty, then r(G) = 1 + min_{v∈V} r(G − v), where G − v is the digraph resulting from the deletion of vertex v and all edges beginning or ending at v. If G is not strongly connected, then r(G) is equal to the maximum cycle rank among all strongly connected components of G. In automata theory, a nondeterministic finite automaton with ε-transitions (ε-NFA) is defined as a 5-tuple, (Q, Σ, δ, q0, F), consisting of a finite set of states Q a finite set of input symbols Σ a set of labeled edges δ, referred to as the transition relation: δ ⊆ Q × (Σ ∪ {ε}) × Q, where ε denotes the empty word an initial state q0 ∈ Q a set of states F distinguished as accepting states, F ⊆ Q. A word w ∈ Σ* is accepted by the ε-NFA if there exists a directed path from the initial state q0 to some final state in F using edges from δ, such that the concatenation of all labels visited along the path yields the word w. 
The set of all words w ∈ Σ* accepted by the automaton is the language accepted by the automaton A. When speaking of digraph properties of a nondeterministic finite automaton A with state set Q, we naturally address the digraph with vertex set Q induced by its transition relation. Now the theorem is stated as follows. Eggan's Theorem: The star height of a regular language L equals the minimum cycle rank among all nondeterministic finite automata with ε-transitions accepting L. Proofs of this theorem are given by Eggan (1963), and more recently by Sakarovitch (2009). Generalized star height The above definition assumes that regular expressions are built from the elements of the alphabet A using only the standard operators set union, concatenation, and Kleene star. Generalized regular expressions are defined just as regular expressions, but here also the set complement operator is allowed (the complement is always taken with respect to the set of all words over A). If we alter the definition such that taking complements does not increase the star height, that is, h(E^c) = h(E), we can define the generalized star height of a regular language L as the minimum star height among all generalized regular expressions representing L. It is an open problem whether some languages can only be expressed with a generalized star height greater than one: this is the generalized star-height problem. Note that, whereas it is immediate that a language of (ordinary) star height 0 can contain only finitely many words, there exist infinite languages having generalized star height 0. For instance, the regular expression (a|b)*a, which we saw in the example above, can be equivalently described by the generalized regular expression ∅^c a, since the complement of the empty set is precisely the set of all words over A. Thus the set of all words over the alphabet A ending in the letter a has star height one, while its generalized star height equals zero. Languages of generalized star height zero are also called star-free languages. It can be shown that a language L is star-free if and only if its syntactic monoid is aperiodic (Schützenberger 1965). See also Star height problem Generalized star height problem References Formal languages
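The "easy" direction noted above, computing the star height of a given expression, follows the inductive definition directly. Below is a minimal sketch; the AST class names are ad hoc illustrations, not standard library types:

```python
from dataclasses import dataclass

@dataclass
class Sym:              # a single alphabet symbol, the empty word, or the empty set
    label: str

@dataclass
class Cat:              # concatenation E F
    left: object
    right: object

@dataclass
class Alt:              # union E | F
    left: object
    right: object

@dataclass
class Star:             # Kleene star E*
    inner: object

def star_height(e):
    # h(empty set) = h(empty word) = h(a) = 0
    if isinstance(e, Sym):
        return 0
    # h(E F) = h(E | F) = max(h(E), h(F))
    if isinstance(e, (Cat, Alt)):
        return max(star_height(e.left), star_height(e.right))
    # h(E*) = h(E) + 1
    return 1 + star_height(e.inner)

# (a*b*)*a has star height 2; (a|b)*a describes the same language with height 1
print(star_height(Cat(Star(Cat(Star(Sym("a")), Star(Sym("b")))), Sym("a"))))  # 2
print(star_height(Cat(Star(Alt(Sym("a"), Sym("b"))), Sym("a"))))              # 1
```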
Star height
Mathematics
1,204
24,151,083
https://en.wikipedia.org/wiki/C12H19NO2
The molecular formula C12H19NO2 (molar mass: 209.28 g/mol, exact mass: 209.141579) may refer to: Bamethan 2CD-5EtO 2C-E 2C-G Crisugabalin Dimethoxymethamphetamine 2,5-Dimethoxy-4-methylamphetamine Methyl-DMA Mirogabalin Octyl cyanoacrylate 2-Octyl cyanoacrylate Psi-DOM
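The quoted molar mass can be checked with a short computation from standard atomic weights; in this sketch the rounded weight values are the only inputs, and different rounding conventions shift the final digit slightly:

```python
# Rounded standard atomic weights in g/mol
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(counts):
    return sum(ATOMIC_WEIGHT[element] * n for element, n in counts.items())

# C12 H19 N O2
print(round(molar_mass({"C": 12, "H": 19, "N": 1, "O": 2}), 2))  # 209.29, vs. 209.28 quoted
```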
C12H19NO2
Chemistry
122
78,174,380
https://en.wikipedia.org/wiki/Electrojet%20Zeeman%20Imaging%20Explorer
Electrojet Zeeman Imaging Explorer (EZIE) is a planned NASA heliophysics mission, that will study the Sun and space weather near Earth. It will consist of three CubeSats that will study the auroral electrojets, "by exploring a phenomenon called Zeeman splitting, which is the splitting of a molecule’s light spectrum when placed near a magnetic field". EZIE is expected to be launched in 2025. Further reading References External links Official website 2025 in spaceflight NASA space probes Solar space observatories
Electrojet Zeeman Imaging Explorer
Astronomy
112
6,031,554
https://en.wikipedia.org/wiki/Attenuation%20distortion
Attenuation distortion is the distortion of an analog signal that occurs during transmission when the transmission medium does not have a flat frequency response across the bandwidth of the medium or the frequency spectrum of the signal. Attenuation distortion occurs when some frequencies are attenuated more than other frequencies. When an analog signal of constant amplitude across its frequency spectrum suffers attenuation distortion, some frequencies of the received signal arrive being greater in amplitude (louder), relative to other frequencies. To overcome the effects of attenuation distortion, communications circuits have special equalization equipment attached at the ends of the circuit or in between, designed to attenuate the signal evenly across the frequency spectrum, or to allow the signal to be received at equal amplitude for all frequencies. Attenuation distortion can still occur in a properly equipped circuit if this equalization filter is not properly maintained or adjusted. In DSL circuits, echoes due to impedance mismatch often cause attenuation distortion so severe that some frequencies must be automatically mapped out and not used. References Telecommunications
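As an illustration of the equalization idea described above, the sketch below inverts a non-flat response; the per-band gain figures are invented for the example, not measurements of any real circuit:

```python
# Hypothetical channel response: linear gain measured in a few frequency bands (Hz -> gain)
measured_gain = {300: 0.9, 1000: 0.8, 2400: 0.5, 3400: 0.3}

# An ideal equalizer applies the inverse gain per band, so gain * equalizer == 1 everywhere
equalizer_gain = {freq: 1.0 / g for freq, g in measured_gain.items()}

for freq, g in measured_gain.items():
    end_to_end = g * equalizer_gain[freq]
    print(f"{freq} Hz: end-to-end gain = {end_to_end:.2f}")  # 1.00 in every band
```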
Attenuation distortion
Technology
211
6,974,596
https://en.wikipedia.org/wiki/Land%20cover
Land cover is the physical material at the land surface of Earth. Land covers include flora, concrete, built structures, bare ground, and temporary water. Earth cover is the expression used by ecologist Frederick Edward Clements; its closest modern equivalent is vegetation. The expression continues to be used by the United States Bureau of Land Management. There are two primary methods for capturing information on land cover: field survey and analysis of remotely sensed imagery. Land change models can be built from these types of data to assess changes in land cover over time. One of the major land cover issues (as with all natural resource inventories) is that every survey defines similarly named categories in different ways. For instance, there are many definitions of "forest", sometimes within the same organisation, that may or may not incorporate a number of different forest features (e.g., stand height, canopy cover, strip width, inclusion of grasses, and rates of growth for timber production). Areas without trees may be classified as forest cover "if the intention is to re-plant" (UK and Ireland), while areas with many trees may not be labelled as forest "if the trees are not growing fast enough" (Norway and Finland). Distinction from "land use" "Land cover" is distinct from "land use", despite the two terms often being used interchangeably. Land use is a description of how people utilize the land and of socio-economic activity. Urban and agricultural land uses are two of the most commonly known land use classes. At any one point or place, there may be multiple and alternate land uses, the specification of which may have a political dimension. The origins of the "land cover/land use" couplet and the implications of their confusion are discussed in Fisher et al. (2005). Types The Food and Agriculture Organization (FAO) tabulates global land cover statistics using 14 classes. Mapping Land cover change detection using remote sensing and geospatial data provides baseline information for assessing the climate change impacts on habitats and biodiversity, as well as natural resources, in the target areas. Land cover change detection and mapping is a key component of interdisciplinary land change science, which uses it to determine the consequences of land change on climate. Application of land cover mapping Local and regional planning Disaster management Vulnerability and risk assessments Ecological management Monitoring the effects of climate change Wildlife management Alternative landscape futures and conservation Environmental forecasting Environmental impact assessment Policy development See also Geo-Wiki Land change modeling Pedosphere Cryosphere Hydrosphere References Further reading Ivan Balenovic; et al. (2015). "Quality assessment of high density digital surface model over different land cover classes". Periodicum Biologorum, Vol. 117, No. 4, pp. 459–470, 2015. External links Global land cover maps for 2015 with a spatial resolution of 100 metres based on data from the Copernicus programme Annual Regional Land Cover Monitoring System for the Hindu Kush Himalaya with a spatial resolution of 30 metres based on Landsat images Biogeography Natural resources
Land cover
Biology
615
378,667
https://en.wikipedia.org/wiki/Mechanical%20toy
Mechanical toys are toys powered by mechanical energy. Depending on the mechanism used they can perform a range of motions, from simple to very complex. Types The types of mechanical energy used to power mechanical toys include rubber bands, springs, and flywheels. Mechanical toys use four different types of movement: rotary (going around in a circle), linear (moving in a straight line then stopping), reciprocating (moving backwards and forwards continuously in a straight line) and oscillating (moving backwards and forwards in a curve). Mechanical toys use several types of mechanism. Cam toys are powered by a cam and a cam follower, which transfer the rotation of the cam to the working area of the toy. Because the cam is mounted off-centre, its rotation is transformed into an up-and-down motion that powers the toy. Crank toys are internally based on cams too, but allow more complicated motions. A single rotation of the crank leads to a single action in the working area of the toy, and moving the crank forwards and backwards can produce a reversed motion. Gear toys use gear wheels to transfer power within the toy and to change the speed and direction of motion. They can be powered by hand (with a crank, or a cam and cam follower) or by a wind-up mechanism. The differing numbers of teeth in the gear wheels determine the change of speed from wheel to wheel (a worked example appears below), and by chaining together a number of gear wheels this type of mechanical toy allows very complex motions. Lever toys are mechanical toys that use the mechanical advantage of a lever to transmit and transform movement. Lever toys can use cranks and cams too. Pulley toys: pulleys are very similar to gear wheels, but the two elements are connected by a metal chain or a belt of strong elastic material (for example, rubber). Pulleys allow power to be transferred over a distance much more easily and with smaller losses than a chain of gear wheels. Using pulleys in mechanical toys also allows the angle, the speed and the direction of the motion to be changed. Wind-up toys are typically powered by a metal spring that is tightened by winding it; gear wheels and pulleys can then transfer the power and control the toy's motion. History One of the first mechanical toys is the flying pigeon by Archytas of Tarentum, created around 400–350 BC. In the 16th century Leonardo da Vinci created his mechanical lion as a present for Louis XII; the lion could walk and reveal a cluster of lilies from its chest. In 1738 Jacques de Vaucanson created a mechanical duck that was able to drink and eat. Pierre Jaquet-Droz created The Writer, The Draftsman and The Musician, automata that are still present in the Museum of Art and History in Switzerland. In education The potential educational value of mechanical toys in teaching transversal skills has been recognised by the European Union education project Clockwork objects, enhanced learning: Automata Toys Construction (CLOHE). They also play a part in teaching young children motor skills and are used in some schools for this purpose. Mechanical toys were the subject of the Academy Award-winning 1972 short Dutch documentary, This Tiny World. See also Automaton List of Toy Mechanisms References Mechanical toys
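A worked example of the gear-speed relation mentioned under gear toys; the tooth counts and input speed are made up for illustration:

```python
def output_rpm(input_rpm, tooth_pairs):
    # tooth_pairs is a list of (driver_teeth, driven_teeth) for each meshed pair;
    # each pair scales the rotational speed by driver/driven.
    rpm = input_rpm
    for driver, driven in tooth_pairs:
        rpm *= driver / driven
    return rpm

# A 10-tooth wheel driving a 30-tooth wheel slows a 60 rpm input to 20 rpm;
# adding a second 10:20 pair slows it further to 10 rpm.
print(output_rpm(60, [(10, 30)]))            # 20.0
print(output_rpm(60, [(10, 30), (10, 20)]))  # 10.0
```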
Mechanical toy
Physics,Technology
665
1,749,665
https://en.wikipedia.org/wiki/List%20of%20finite%20simple%20groups
In mathematics, the classification of finite simple groups states that every finite simple group is cyclic, or alternating, or in one of 16 families of groups of Lie type, or one of 26 sporadic groups. The list below gives all finite simple groups, together with their order, the size of the Schur multiplier, the size of the outer automorphism group, usually some small representations, and lists of all duplicates. Summary The following table is a complete list of the 18 families of finite simple groups and the 26 sporadic simple groups, along with their orders. Any non-simple members of each family are listed, as well as any members duplicated within a family or between families. (In removing duplicates it is useful to note that no two finite simple groups have the same order, except that the groups A8 = A3(2) and A2(4) both have order 20160, and that the group Bn(q) has the same order as Cn(q) for q odd, n > 2. The smallest of the latter pairs of groups are B3(3) and C3(3), which both have order 4585351680.) There is an unfortunate conflict between the notations for the alternating groups An and the groups of Lie type An(q). Some authors use various different fonts for An to distinguish them. In particular, in this article we make the distinction by setting the alternating groups An in Roman font and the Lie-type groups An(q) in italic. In what follows, n is a positive integer, and q is a positive power of a prime number p, with the restrictions noted. The notation (a,b) represents the greatest common divisor of the integers a and b. Cyclic groups, Zp Simplicity: Simple for p a prime number. Order: p Schur multiplier: Trivial. Outer automorphism group: Cyclic of order p − 1. Other names: Z/pZ, Cp Remarks: These are the only simple groups that are not perfect. Alternating groups, An, n > 4 Simplicity: Solvable for n ≤ 4, otherwise simple. Order: n!/2 when n > 1. Schur multiplier: 2 for n = 5 or n > 7, 6 for n = 6 or 7; see Covering groups of the alternating and symmetric groups Outer automorphism group: In general 2. Exceptions: for n = 1, n = 2, it is trivial, and for n = 6, it has order 4 (elementary abelian). Other names: Altn. Isomorphisms: A1 and A2 are trivial. A3 is cyclic of order 3. A4 is isomorphic to A1(3) (solvable). A5 is isomorphic to A1(4) and to A1(5). A6 is isomorphic to A1(9) and to the derived group B2(2)′. A8 is isomorphic to A3(2). Remarks: An index 2 subgroup of the symmetric group of permutations of n points when n > 1. Groups of Lie type Notation: n is a positive integer, q > 1 is a power of a prime number p, and is the order of some underlying finite field. The order of the outer automorphism group is written as d⋅f⋅g, where d is the order of the group of "diagonal automorphisms", f is the order of the (cyclic) group of "field automorphisms" (generated by a Frobenius automorphism), and g is the order of the group of "graph automorphisms" (coming from automorphisms of the Dynkin diagram). The outer automorphism group is often, but not always, isomorphic to the semidirect product d:(f × g), where all these groups are cyclic of the respective orders d, f, g, except for type Dn(q) with q odd and n even, where the group of order d = 4 is Z/2Z × Z/2Z, and (only when n = 4) the graph automorphism group is the symmetric group on three elements. The notation (a,b) represents the greatest common divisor of the integers a and b. Chevalley groups, An(q), Bn(q) n > 1, Cn(q) n > 2, Dn(q) n > 3 Chevalley groups, E6(q), E7(q), E8(q), F4(q), G2(q) Steinberg groups, 2An(q^2) n > 1, 2Dn(q^2) n > 3, 2E6(q^2), 3D4(q^3) Suzuki groups, 2B2(2^(2n+1)) Simplicity: Simple for n ≥ 1. 
The group 2B2(2) is solvable. Order: q^2 (q^2 + 1) (q − 1), where q = 2^(2n+1). Schur multiplier: Trivial for n ≠ 1, elementary abelian of order 4 for 2B2(8). Outer automorphism group: 1⋅f⋅1, where f = 2n + 1. Other names: Suz(2^(2n+1)), Sz(2^(2n+1)). Isomorphisms: 2B2(2) is the Frobenius group of order 20. Remarks: Suzuki groups are Zassenhaus groups acting on sets of size (2^(2n+1))^2 + 1, and have 4-dimensional representations over the field with 2^(2n+1) elements. They are the only non-cyclic simple groups whose order is not divisible by 3. They are not related to the sporadic Suzuki group. Ree groups and Tits group, 2F4(2^(2n+1)) Simplicity: Simple for n ≥ 1. The derived group 2F4(2)′ is simple of index 2 in 2F4(2), and is called the Tits group, named for the Belgian mathematician Jacques Tits. Order: q^12 (q^6 + 1) (q^4 − 1) (q^3 + 1) (q − 1), where q = 2^(2n+1). The Tits group has order 17971200 = 2^11 ⋅ 3^3 ⋅ 5^2 ⋅ 13. Schur multiplier: Trivial for n ≥ 1 and for the Tits group. Outer automorphism group: 1⋅f⋅1, where f = 2n + 1. Order 2 for the Tits group. Remarks: Unlike the other simple groups of Lie type, the Tits group does not have a BN pair, though its automorphism group does, so most authors count it as a sort of honorary group of Lie type. Ree groups, 2G2(3^(2n+1)) Simplicity: Simple for n ≥ 1. The group 2G2(3) is not simple, but its derived group 2G2(3)′ is a simple subgroup of index 3. Order: q^3 (q^3 + 1) (q − 1), where q = 3^(2n+1). Schur multiplier: Trivial for n ≥ 1 and for 2G2(3)′. Outer automorphism group: 1⋅f⋅1, where f = 2n + 1. Other names: Ree(3^(2n+1)), R(3^(2n+1)), E2∗(3^(2n+1)). Isomorphisms: The derived group 2G2(3)′ is isomorphic to A1(8). Remarks: 2G2(3^(2n+1)) has a doubly transitive permutation representation on 3^(3(2n+1)) + 1 points and acts on a 7-dimensional vector space over the field with 3^(2n+1) elements. Sporadic groups Mathieu groups, M11, M12, M22, M23, M24 Janko groups, J1, J2, J3, J4 Conway groups, Co1, Co2, Co3 Fischer groups, Fi22, Fi23, Fi24′ Higman–Sims group, HS Order: 2^9 ⋅ 3^2 ⋅ 5^3 ⋅ 7 ⋅ 11 = 44352000 Schur multiplier: Order 2. Outer automorphism group: Order 2. Remarks: It acts as a rank 3 permutation group on the Higman–Sims graph with 100 points, and is contained in Co2 and in Co3. McLaughlin group, McL Order: 2^7 ⋅ 3^6 ⋅ 5^3 ⋅ 7 ⋅ 11 = 898128000 Schur multiplier: Order 3. Outer automorphism group: Order 2. Remarks: Acts as a rank 3 permutation group on the McLaughlin graph with 275 points, and is contained in Co2 and in Co3. Held group, He Order: 2^10 ⋅ 3^3 ⋅ 5^2 ⋅ 7^3 ⋅ 17 = 4030387200 Schur multiplier: Trivial. Outer automorphism group: Order 2. Other names: Held–Higman–McKay group, HHM, F7, HTH Remarks: Centralizes an element of order 7 in the monster group. Rudvalis group, Ru Order: 2^14 ⋅ 3^3 ⋅ 5^3 ⋅ 7 ⋅ 13 ⋅ 29 = 145926144000 Schur multiplier: Order 2. Outer automorphism group: Trivial. Remarks: The double cover acts on a 28-dimensional lattice over the Gaussian integers. Suzuki sporadic group, Suz Order: 2^13 ⋅ 3^7 ⋅ 5^2 ⋅ 7 ⋅ 11 ⋅ 13 = 448345497600 Schur multiplier: Order 6. Outer automorphism group: Order 2. Other names: Sz Remarks: The 6-fold cover acts on a 12-dimensional lattice over the Eisenstein integers. It is not related to the Suzuki groups of Lie type. O'Nan group, O'N Order: 2^9 ⋅ 3^4 ⋅ 5 ⋅ 7^3 ⋅ 11 ⋅ 19 ⋅ 31 = 460815505920 Schur multiplier: Order 3. Outer automorphism group: Order 2. Other names: O'Nan–Sims group, O'NS, O–S Remarks: The triple cover has two 45-dimensional representations over the field with 7 elements, exchanged by an outer automorphism. 
Harada–Norton group, HN Order: 2^14 ⋅ 3^6 ⋅ 5^6 ⋅ 7 ⋅ 11 ⋅ 19 = 273030912000000 Schur multiplier: Trivial. Outer automorphism group: Order 2. Other names: F5, D Remarks: Centralizes an element of order 5 in the monster group. Lyons group, Ly Order: 2^8 ⋅ 3^7 ⋅ 5^6 ⋅ 7 ⋅ 11 ⋅ 31 ⋅ 37 ⋅ 67 = 51765179004000000 Schur multiplier: Trivial. Outer automorphism group: Trivial. Other names: Lyons–Sims group, LyS Remarks: Has a 111-dimensional representation over the field with 5 elements. Thompson group, Th Order: 2^15 ⋅ 3^10 ⋅ 5^3 ⋅ 7^2 ⋅ 13 ⋅ 19 ⋅ 31 = 90745943887872000 Schur multiplier: Trivial. Outer automorphism group: Trivial. Other names: F3, E Remarks: Centralizes an element of order 3 in the monster. Has a 248-dimensional representation which, when reduced modulo 3, leads to containment in E8(3). Baby Monster group, B Order:    2^41 ⋅ 3^13 ⋅ 5^6 ⋅ 7^2 ⋅ 11 ⋅ 13 ⋅ 17 ⋅ 19 ⋅ 23 ⋅ 31 ⋅ 47 = 4154781481226426191177580544000000 Schur multiplier: Order 2. Outer automorphism group: Trivial. Other names: F2 Remarks: The double cover is contained in the monster group. It has a representation of dimension 4371 over the complex numbers (with no nontrivial invariant product), and a representation of dimension 4370 over the field with 2 elements preserving a commutative but non-associative product. Fischer–Griess Monster group, M Order:    2^46 ⋅ 3^20 ⋅ 5^9 ⋅ 7^6 ⋅ 11^2 ⋅ 13^3 ⋅ 17 ⋅ 19 ⋅ 23 ⋅ 29 ⋅ 31 ⋅ 41 ⋅ 47 ⋅ 59 ⋅ 71 = 808017424794512875886459904961710757005754368000000000 Schur multiplier: Trivial. Outer automorphism group: Trivial. Other names: F1, M1, Monster group, Friendly giant, Fischer's monster. Remarks: Contains all but 6 of the other sporadic groups as subquotients. Related to monstrous moonshine. The monster is the automorphism group of the 196,884-dimensional Griess algebra and the infinite-dimensional monster vertex operator algebra, and acts naturally on the monster Lie algebra. Non-cyclic simple groups of small order (complete for orders less than 100,000) The 56 non-cyclic simple groups of order less than a million have been tabulated in the literature. See also List of small groups Notes References Further reading Simple Groups of Lie Type by Roger W. Carter Conway, J. H.; Curtis, R. T.; Norton, S. P.; Parker, R. A.; and Wilson, R. A.: "Atlas of Finite Groups: Maximal Subgroups and Ordinary Characters for Simple Groups." Oxford, England 1985. Daniel Gorenstein, Richard Lyons, Ronald Solomon: The Classification of the Finite Simple Groups (volume 1), AMS, 1994; (volume 3), AMS, 1998. Atlas of Finite Group Representations: contains representations and other data for many finite simple groups, including the sporadic groups. Orders of non-abelian simple groups up to 10^10, and on to 10^48 with restrictions on rank. External links Orders of non-abelian simple groups up to order 10,000,000,000. Mathematics-related lists Group theory Sporadic groups
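The order coincidences noted in the summary (A8 = A3(2) and A2(4) of order 20160; B3(3) and C3(3) of order 4585351680) can be checked directly from the classical order formulas for the families An(q) and Bn(q)/Cn(q). A small sketch, using those standard formulas:

```python
from math import factorial, gcd, prod

def order_A(n, q):
    # |A_n(q)| = q^(n(n+1)/2) * prod_{i=2}^{n+1} (q^i - 1) / gcd(n+1, q-1)
    return q ** (n * (n + 1) // 2) * prod(q ** i - 1 for i in range(2, n + 2)) // gcd(n + 1, q - 1)

def order_BC(n, q):
    # |B_n(q)| = |C_n(q)| = q^(n^2) * prod_{i=1}^{n} (q^(2i) - 1) / gcd(2, q-1)
    return q ** (n * n) * prod(q ** (2 * i) - 1 for i in range(1, n + 1)) // gcd(2, q - 1)

print(factorial(8) // 2, order_A(3, 2), order_A(2, 4))  # 20160 20160 20160
print(order_BC(3, 3))                                   # 4585351680
```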
List of finite simple groups
Mathematics
2,826
740,540
https://en.wikipedia.org/wiki/Engineering%20physics
Engineering physics (EP), sometimes engineering science, is the field of study combining pure science disciplines (such as physics, mathematics, chemistry or biology) and engineering disciplines (computer, nuclear, electrical, aerospace, medical, materials, mechanical, etc.). History The name and subject have been used since 1861 by the German physics teacher J. Frick in his publications. Definition and Terminology It is notable that in many languages and countries, the term for "engineering physics" would be directly translated into English as "technical physics". In some countries, both what would be translated as "engineering physics" and what would be translated as "technical physics" are disciplines leading to academic degrees. In China, for example, the former specializes in nuclear power research (i.e. nuclear engineering), while the latter is closer to engineering physics. In some universities and their institutions, an engineering (or applied) physics major is a discipline or specialization within the scope of engineering science, or applied science. Related Names Several related names have existed since the inception of the interdisciplinary field. For example, some university courses are called, or contain the phrase, "physical technologies", "physical engineering sciences" or "physical technics". In some cases, a program formerly called "physical engineering" has been renamed "applied physics" or has evolved into specialized fields such as "photonics engineering". Other Meanings A "physical design engineer", sometimes improperly called a "physical engineer", is an electrical engineer who is responsible for the design and layout (routing) in CAE, specifically in ASIC/FPGA design. This role could be performed by a person trained in engineering physics if the person has received training in integrated electronics design, but this does not necessarily mean that an engineering physicist is an IC design engineer. Expertise Unlike traditional engineering disciplines, engineering science/physics is not necessarily confined to a particular branch of science, engineering or physics. Instead, engineering science/physics is meant to provide a more thorough grounding in applied physics for a selected specialty such as optics, quantum physics, materials science, applied mechanics, electronics, nanotechnology, microfabrication, microelectronics, computing, photonics, mechanical engineering, electrical engineering, nuclear engineering, biophysics, control theory, aerodynamics, energy, solid-state physics, etc. It is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. The discipline is also meant for cross-functionality and bridges the gap between theoretical science and practical engineering, with an emphasis on research and development, design, and analysis. Degrees In many universities, engineering science programs may be offered at the levels of B.Tech., B.Sc., M.Sc. and Ph.D. 
Usually, a core of basic and advanced courses in mathematics, physics, chemistry, and biology forms the foundation of the curriculum, while typical elective areas may include fluid dynamics, quantum physics, economics, plasma physics, relativity, solid mechanics, operations research, quantitative finance, information technology and engineering, dynamical systems, bioengineering, environmental engineering, computational engineering, engineering mathematics and statistics, solid-state devices, materials science, electromagnetism, nanoscience, nanotechnology, energy, and optics. Awards There are awards for excellence in engineering physics. For example, Princeton University's Jeffrey O. Kephart '80 Prize is awarded annually to the graduating senior with the best record. Since 2002, the German Physical Society has awarded the Georg-Simon-Ohm-Preis for outstanding research in this field. See also Applied physics Engineering Engineering science and mechanics Environmental engineering science Index of engineering science and mechanics articles Notes and references External links "Engineering Physics at Xavier" "The Engineering Physicist Profession" "Engineering Physicist Professional Profile" Society of Engineering Science Inc. Applied and interdisciplinary physics
Engineering physics
Physics,Engineering
817
17,199,168
https://en.wikipedia.org/wiki/Dry%20heat%20sterilization
Dry heat sterilization of an object is one of the earliest forms of sterilization practiced. It uses hot air that is either free from water vapor or contains very little of it, so that this moisture plays a minimal or no role in the process of sterilization. Process The dry heat sterilization process is accomplished by conduction; heat is absorbed by the exterior surface of an item and then passed inward to the next layer. Eventually, the entire item reaches the temperature needed to achieve sterilization. The proper time and temperature for dry heat sterilization is 160 °C (320 °F) for 2 hours or 170 °C (340 °F) for 1 hour, and, in the case of high-velocity hot air sterilisers, 190 °C (375 °F) for 6 to 12 minutes. Items should be dry before sterilization, since water will interfere with the process. Dry heat destroys microorganisms by causing denaturation of proteins. The presence of moisture, as in steam sterilization, significantly speeds up heat penetration. There are two types of hot-air convection sterilizers (convection refers to the circulation of heated air within the chamber of the oven): Gravity convection Mechanical convection Mechanical convection process A mechanical convection oven contains a blower that actively forces heated air throughout all areas of the chamber. The flow created by the blower ensures uniform temperatures and the equal transfer of heat throughout the load. For this reason, the mechanical convection oven is the more efficient of the two processes. High Velocity Hot Air An even more efficient system than convection uses deturbulized hot air forced through a jet curtain at 3,000 ft/min. Instruments used for dry heat sterilization Instruments and techniques used for dry heat sterilization include hot air ovens, incinerators, flaming, radiation, and glass bead sterilizers. Effect on microorganisms Dry heat denatures the proteins in any organism, causes oxidative free-radical damage, dries out cells, and can even burn them to ashes, as in incineration. See also Sterility assurance level References ISO 20857 Notes General references Sterilization (microbiology)
Dry heat sterilization
Chemistry,Biology
447
37,774,736
https://en.wikipedia.org/wiki/WiFi%20Explorer
WiFi Explorer is a wireless network scanner tool for macOS that can help users identify channel conflicts, overlapping channels, and network configuration issues that may be affecting the connectivity and performance of Wi-Fi networks. History WiFi Explorer began as a desktop alternative to WiFi Analyzer, an iPhone app for wireless network scanning that was pulled from Apple's App Store in March 2010 due to its use of private frameworks. Since its first release, WiFi Explorer incorporated features that were not included in the last available version of WiFi Analyzer, such as support for 5 GHz networks and 40 MHz channel widths. Starting in version 1.5, WiFi Explorer included support for 802.11ac networks, as well as 80 and 160 MHz channel widths. On June 22, 2017, a professional version of WiFi Explorer, WiFi Explorer Pro, was released. WiFi Explorer Pro offers additional features especially designed for WLAN and IT professionals. The standard version of WiFi Explorer is also available on Setapp. Features Standard Displays various network parameters: Network name (SSID) and MAC address (BSSID) Manufacturer AP name for certain Cisco and Aruba devices Beacon interval Mode (802.11a/b/g/n/ac) Band (2.4 GHz ISM and 5 GHz UNII-1, 2, 2 Extended, and 3) Channel width (20, 40, 80, and 160 MHz) Secondary channel offset Security mode (WEP, WPA, WPA2) Support for Wi-Fi Protected Setup (WPS) Supported basic, min and max data rates Advertised 802.11 Information Elements Graphical visualization of channel allocation, signal strength or signal-to-noise ratio (SNR) Different sorting and filtering options Displays signal strength and noise values as percentage or dBm Ability to save and load results for later analysis Metrics and network details can be exported to a CSV file format Selectable and sortable columns Adjustable graph timescales Editable column for annotations, comments, etc. Customizable network colors Full screen mode Comprehensive application help Professional Passive and directed scan modes Spectrum analysis integration Apple's iOS AirPort Utility integration Enhanced filtering Support for remote sensors Support for networks with hidden SSIDs Support for external USB Wi-Fi adapters via the External Adapter Support Environment (EASE) Additional organization options for scan results Dark and light themes Limitations Due to limitations of Apple's CoreWLAN framework, the standard version of WiFi Explorer is unable to detect hidden networks (except when connected to one) and does not support external USB Wi-Fi adapters. The Pro edition supports passive scanning, which can detect hidden networks, and can make use of external adapters via the External Adapter Support Environment (EASE). System requirements macOS 10.10 or higher (64-bit) See also iStumbler - An open-source utility for finding wireless networks and devices in macOS. KisMAC - A wireless network discovery tool for macOS. Netspot - A macOS tool for wireless network assessment, scanning and surveys. References External links Wireless networking MacOS network-related software
WiFi Explorer
Technology,Engineering
644
50,652
https://en.wikipedia.org/wiki/Uniform%20convergence
In the mathematical field of analysis, uniform convergence is a mode of convergence of functions stronger than pointwise convergence. A sequence of functions (fn) converges uniformly to a limiting function f on a set E as the function domain if, given any arbitrarily small positive number ε, a number N can be found such that each of the functions fN, fN+1, fN+2, ... differs from f by no more than ε at every point x in E. Described in an informal way, if fn converges to f uniformly, then how quickly the functions fn approach f is "uniform" throughout E in the following sense: in order to guarantee that fn(x) differs from f(x) by less than a chosen distance ε, we only need to make sure that n is larger than or equal to a certain N, which we can find without knowing the value of x ∈ E in advance. In other words, there exists a number N = N(ε) that could depend on ε but is independent of x, such that choosing n ≥ N will ensure that |fn(x) − f(x)| < ε for all x ∈ E. In contrast, pointwise convergence of fn to f merely guarantees that for any x ∈ E given in advance, we can find N = N(ε, x) (i.e., N could depend on the values of both ε and x) such that, for that particular x, fn(x) falls within ε of f(x) whenever n ≥ N (and a different x may require a different, larger N for n ≥ N to guarantee that |fn(x) − f(x)| < ε). The difference between uniform convergence and pointwise convergence was not fully appreciated early in the history of calculus, leading to instances of faulty reasoning. The concept, which was first formalized by Karl Weierstrass, is important because several properties of the functions fn, such as continuity, Riemann integrability, and, with additional hypotheses, differentiability, are transferred to the limit f if the convergence is uniform, but not necessarily if the convergence is not uniform. History In 1821 Augustin-Louis Cauchy published a proof that a convergent sum of continuous functions is always continuous, to which Niels Henrik Abel in 1826 found purported counterexamples in the context of Fourier series, arguing that Cauchy's proof had to be incorrect. Completely standard notions of convergence did not exist at the time, and Cauchy handled convergence using infinitesimal methods. When put into the modern language, what Cauchy proved is that a uniformly convergent sequence of continuous functions has a continuous limit. The failure of a merely pointwise-convergent limit of continuous functions to converge to a continuous function illustrates the importance of distinguishing between different types of convergence when handling sequences of functions. The term uniform convergence was probably first used by Christoph Gudermann, in an 1838 paper on elliptic functions, where he employed the phrase "convergence in a uniform way" when the "mode of convergence" of a series is independent of the variables φ and ψ. While he thought it a "remarkable fact" when a series converged in this way, he did not give a formal definition, nor use the property in any of his proofs. Later Gudermann's pupil Karl Weierstrass, who attended his course on elliptic functions in 1839–1840, coined the term gleichmäßig konvergent (German: uniformly convergent) which he used in his 1841 paper Zur Theorie der Potenzreihen, published in 1894. Independently, similar concepts were articulated by Philipp Ludwig von Seidel and George Gabriel Stokes. G. H. Hardy compares the three definitions in his paper "Sir George Stokes and the concept of uniform convergence" and remarks: "Weierstrass's discovery was the earliest, and he alone fully realized its far-reaching importance as one of the fundamental ideas of analysis." 
Under the influence of Weierstrass and Bernhard Riemann this concept and related questions were intensely studied at the end of the 19th century by Hermann Hankel, Paul du Bois-Reymond, Ulisse Dini, Cesare Arzelà and others. Definition We first define uniform convergence for real-valued functions, although the concept is readily generalized to functions mapping to metric spaces and, more generally, uniform spaces (see below). Suppose E is a set and (fn) is a sequence of real-valued functions on it. We say the sequence (fn) is uniformly convergent on E with limit f if for every ε > 0 there exists a natural number N such that for all n ≥ N and for all x ∈ E, |fn(x) − f(x)| < ε. The notation for uniform convergence of fn to f is not quite standardized and different authors have used a variety of symbols. Frequently, no special symbol is used, and authors simply write fn → f uniformly to indicate that convergence is uniform. (In contrast, the expression fn → f on E without an adverb is taken to mean pointwise convergence on E: for all x ∈ E, fn(x) → f(x) as n → ∞.) Since R is a complete metric space, the Cauchy criterion can be used to give an equivalent alternative formulation for uniform convergence: (fn) converges uniformly on E (in the previous sense) if and only if for every ε > 0, there exists a natural number N such that m, n ≥ N and x ∈ E together imply |fm(x) − fn(x)| < ε. In yet another equivalent formulation, if we define dn = sup{|fn(x) − f(x)| : x ∈ E}, then fn converges to f uniformly if and only if dn → 0 as n → ∞. Thus, we can characterize uniform convergence of (fn) on E as (simple) convergence of (fn) in the function space of real-valued functions on E with respect to the uniform metric (also called the supremum metric), defined by d(f, g) = sup{|f(x) − g(x)| : x ∈ E}. Symbolically, fn → f uniformly if and only if d(fn, f) → 0. The sequence (fn) is said to be locally uniformly convergent with limit f if E is a metric space and for every x ∈ E, there exists an r > 0 such that (fn) converges uniformly on B(x, r) ∩ E. It is clear that uniform convergence implies local uniform convergence, which implies pointwise convergence. Notes Intuitively, a sequence of functions fn converges uniformly to f if, given an arbitrarily small ε > 0, we can find an N so that the functions fn with n > N all fall within a "tube" of width 2ε centered around f (i.e., between f(x) − ε and f(x) + ε) for the entire domain of the function. Note that interchanging the order of quantifiers in the definition of uniform convergence by moving "for all x" in front of "there exists a natural number N" results in a definition of pointwise convergence of the sequence. To make this difference explicit, in the case of uniform convergence, N = N(ε) can only depend on ε, and the choice of N has to work for all x ∈ E, for a specific value of ε that is given. In contrast, in the case of pointwise convergence, N = N(ε, x) may depend on both ε and x, and the choice of N only has to work for the specific values of ε and x that are given. Thus uniform convergence implies pointwise convergence; however, the converse is not true, as the example in the section below illustrates. Generalizations One may straightforwardly extend the concept to functions E → M, where (M, d) is a metric space, by replacing |fn(x) − f(x)| < ε with d(fn(x), f(x)) < ε. The most general setting is the uniform convergence of nets of functions E → X, where X is a uniform space. We say that the net (fα) converges uniformly with limit f : E → X if and only if for every entourage V in X, there exists an α0, such that for every x in E and every α ≥ α0, (fα(x), f(x)) is in V. In this situation, uniform limit of continuous functions remains continuous. Definition in a hyperreal setting Uniform convergence admits a simplified definition in a hyperreal setting. 
Thus, a sequence of functions fn converges to f uniformly if for all hyperreal x in the domain of f* and all infinite n, fn*(x) is infinitely close to f*(x) (see microcontinuity for a similar definition of uniform continuity). In contrast, pointwise continuity requires this only for real x. Examples For x ∈ [0, 1), a basic example of uniform convergence can be illustrated as follows: the sequence (1/2)^(x+n) converges uniformly, while x^n does not. Specifically, assume ε = 1/4. Each function (1/2)^(x+n) is less than or equal to 1/4 when n ≥ 2, regardless of the value of x. On the other hand, x^n is only less than or equal to 1/4 at ever increasing values of n when values of x are selected closer and closer to 1 (explained more in depth further below). Given a topological space X, we can equip the space of bounded real or complex-valued functions over X with the uniform norm topology, with the uniform metric defined by d(f, g) = ‖f − g‖∞ = sup{|f(x) − g(x)| : x ∈ X}. Then uniform convergence simply means convergence in the uniform norm topology: ‖fn − f‖∞ → 0. The sequence of functions fn(x) = x^n on [0, 1] is a classic example of a sequence of functions that converges to a function f pointwise but not uniformly. To show this, we first observe that the pointwise limit of fn as n → ∞ is the function f, given by f(x) = 0 for 0 ≤ x < 1 and f(1) = 1. Pointwise convergence: Convergence is trivial for x = 0 and x = 1, since fn(0) = f(0) = 0 and fn(1) = f(1) = 1, for all n. For x ∈ (0, 1) and given ε > 0, we can ensure that |fn(x) − f(x)| < ε whenever n ≥ N by choosing N = ⌈log ε / log x⌉, which is the minimum integer exponent of x that allows it to reach or dip below ε (here the upper square brackets indicate rounding up, see ceiling function). Hence, fn → f pointwise for all x ∈ [0, 1]. Note that the choice of N depends on the value of ε and x. Moreover, for a fixed choice of ε, N (which cannot be defined to be smaller) grows without bound as x approaches 1. These observations preclude the possibility of uniform convergence. Non-uniformity of convergence: The convergence is not uniform, because we can find an ε > 0 so that no matter how large we choose N, there will be values of x ∈ [0, 1) and n ≥ N such that |fn(x) − f(x)| ≥ ε. To see this, first observe that regardless of how large n becomes, there is always an x0 ∈ [0, 1) such that fn(x0) = 1/2. Thus, if we choose ε = 1/4, we can never find an N such that |fn(x) − f(x)| < ε for all x ∈ [0, 1] and n ≥ N. Explicitly, whatever candidate we choose for N, consider the value of fN at x0 = (1/2)^(1/N). Since |fN(x0) − f(x0)| = |((1/2)^(1/N))^N − 0| = 1/2 > 1/4 = ε, the candidate fails because we have found an example of an x that "escaped" our attempt to "confine" each fn (n ≥ N) to within ε of f for all x. In fact, it is easy to see that ‖fn − f‖∞ = 1 for every n, contrary to the requirement that ‖fn − f‖∞ → 0 if fn → f uniformly. In this example one can easily see that pointwise convergence does not preserve differentiability or continuity. While each function of the sequence is smooth, that is to say that for all n, fn ∈ C∞([0, 1]), the limit f is not even continuous. Exponential function The series expansion of the exponential function can be shown to be uniformly convergent on any bounded subset S of the complex plane using the Weierstrass M-test. Theorem (Weierstrass M-test). Let (fn) be a sequence of functions defined on E and let (Mn) be a sequence of positive real numbers such that |fn(x)| ≤ Mn for all x ∈ E and n = 1, 2, 3, ... If ΣMn converges, then Σfn converges absolutely and uniformly on E. The complex exponential function can be expressed as the series: exp(z) = Σ z^n/n!, the sum running over n ≥ 0. Any bounded subset is a subset of some disc DR of radius R, centered on the origin in the complex plane. The Weierstrass M-test requires us to find an upper bound Mn on the terms of the series, with Mn independent of the position in the disc: |z^n/n!| ≤ Mn for all z ∈ DR. To do this, we notice |z^n/n!| ≤ |z|^n/n! ≤ R^n/n! and take Mn = R^n/n!. If ΣMn is convergent, then the M-test asserts that the original series is uniformly convergent. The ratio test can be used here: Mn+1/Mn = R/(n + 1) → 0 as n → ∞, which means the series over Mn is convergent. Thus the original series converges uniformly for all z ∈ DR, and since S ⊂ DR, the series is also uniformly convergent on S. Properties Every uniformly convergent sequence is locally uniformly convergent. 
Every locally uniformly convergent sequence is compactly convergent. For locally compact spaces local uniform convergence and compact convergence coincide. A sequence of continuous functions on metric spaces, with the image metric space being complete, is uniformly convergent if and only if it is uniformly Cauchy. If K is a compact interval (or in general a compact topological space), and (fn) is a monotone increasing sequence (meaning fn(x) ≤ fn+1(x) for all n and x) of continuous functions with a pointwise limit f which is also continuous, then the convergence is necessarily uniform (Dini's theorem). Uniform convergence is also guaranteed if K is a compact interval and (fn) is an equicontinuous sequence that converges pointwise. Applications To continuity If E and M are topological spaces, then it makes sense to talk about the continuity of the functions fn, f : E → M. If we further assume that M is a metric space, then (uniform) convergence of the fn to f is also well defined. The following result states that continuity is preserved by uniform convergence (the uniform limit theorem): if (fn) is a sequence of continuous functions converging uniformly to f, then f is continuous. This theorem is proved by the "ε/3 trick", and is the archetypal example of this trick: to prove a given inequality (ε), one uses the definitions of continuity and uniform convergence to produce 3 inequalities (ε/3), and then combines them via the triangle inequality to produce the desired inequality. This theorem is an important one in the history of real and Fourier analysis, since many 18th century mathematicians had the intuitive understanding that a sequence of continuous functions always converges to a continuous function. The image above shows a counterexample, and many discontinuous functions could, in fact, be written as a Fourier series of continuous functions. The erroneous claim that the pointwise limit of a sequence of continuous functions is continuous (originally stated in terms of convergent series of continuous functions) is infamously known as "Cauchy's wrong theorem". The uniform limit theorem shows that a stronger form of convergence, uniform convergence, is needed to ensure the preservation of continuity in the limit function. More precisely, this theorem states that the uniform limit of uniformly continuous functions is uniformly continuous; for a locally compact space, continuity is equivalent to local uniform continuity, and thus the uniform limit of continuous functions is continuous. To differentiability If S is an interval and all the functions fn are differentiable and converge to a limit f, it is often desirable to determine the derivative function f′ by taking the limit of the sequence (fn′). This is however in general not possible: even if the convergence is uniform, the limit function need not be differentiable (not even if the sequence consists of everywhere-analytic functions, see Weierstrass function), and even if it is differentiable, the derivative of the limit function need not be equal to the limit of the derivatives. Consider for instance fn(x) = sin(nx)/√n with uniform limit f ≡ 0. Clearly, f′ is also identically zero. However, the derivatives of the sequence of functions are given by fn′(x) = √n cos(nx), and the sequence (fn′) does not converge to f′ or even to any function at all. 
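A quick numerical sketch of the counterexample just described, assuming NumPy and an arbitrary grid size: the sup-distance of fn(x) = sin(nx)/√n from zero shrinks like 1/√n, while the derivatives √n·cos(nx) grow without bound.

import numpy as np

# fn(x) = sin(n x)/sqrt(n) converges uniformly to 0 on [0, 2*pi]:
# sup |fn| = 1/sqrt(n) -> 0.  Its derivative sqrt(n)*cos(n x) does not
# converge: sup |fn'| = sqrt(n) -> infinity.
x = np.linspace(0.0, 2 * np.pi, 10_001)
for n in (1, 10, 100, 1000):
    f_n = np.sin(n * x) / np.sqrt(n)
    df_n = np.sqrt(n) * np.cos(n * x)
    print(n, np.abs(f_n).max(), np.abs(df_n).max())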
In order to ensure a connection between the limit of a sequence of differentiable functions and the limit of the sequence of derivatives, the uniform convergence of the sequence of derivatives plus the convergence of the sequence of functions at at least one point is required: If (fn) is a sequence of differentiable functions on [a, b] such that lim fn(x0) exists (and is finite) for some x0 ∈ [a, b] and the sequence (fn′) converges uniformly on [a, b], then fn converges uniformly to a function f on [a, b], and f′(x) = lim fn′(x) for x ∈ [a, b]. To integrability Similarly, one often wants to exchange integrals and limit processes. For the Riemann integral, this can be done if uniform convergence is assumed: If (fn) is a sequence of Riemann integrable functions defined on a compact interval I which uniformly converge with limit f, then f is Riemann integrable and its integral can be computed as the limit of the integrals of the fn: ∫I f = lim ∫I fn as n → ∞. In fact, for a uniformly convergent family of bounded functions on an interval, the upper and lower Riemann integrals converge to the upper and lower Riemann integrals of the limit function. This follows because, for n sufficiently large, the graph of fn is within ε of the graph of f, and so the upper sum and lower sum of fn are each within ε|I| of the value of the upper and lower sums of f, respectively. Much stronger theorems in this respect, which require not much more than pointwise convergence, can be obtained if one abandons the Riemann integral and uses the Lebesgue integral instead. To analyticity Using Morera's Theorem, one can show that if a sequence of analytic functions converges uniformly in a region S of the complex plane, then the limit is analytic in S. This example demonstrates that complex functions are more well-behaved than real functions, since the uniform limit of analytic functions on a real interval need not even be differentiable (see Weierstrass function). To series We say that Σfn converges: pointwise on E if and only if the sequence of partial sums sn(x) = f1(x) + ... + fn(x) converges for every x ∈ E; uniformly on E if and only if sn converges uniformly as n → ∞; absolutely on E if and only if Σ|fn| converges pointwise on E. With this definition comes the following result: Let x0 be contained in the set E and each fn be continuous at x0. If f = Σfn converges uniformly on E then f is continuous at x0 in E. Suppose that E = [a, b] and each fn is integrable on E. If Σfn converges uniformly on E then f is integrable on E and the series of integrals of the fn is equal to the integral of the series of the fn. Almost uniform convergence If the domain of the functions is a measure space E then the related notion of almost uniform convergence can be defined. We say a sequence of functions (fn) converges almost uniformly on E if for every δ > 0 there exists a measurable set Eδ with measure less than δ such that the sequence of functions (fn) converges uniformly on E \ Eδ. In other words, almost uniform convergence means there are sets of arbitrarily small measure for which the sequence of functions converges uniformly on their complement. Note that almost uniform convergence of a sequence does not mean that the sequence converges uniformly almost everywhere as might be inferred from the name. However, Egorov's theorem does guarantee that on a finite measure space, a sequence of functions that converges almost everywhere also converges almost uniformly on the same set. Almost uniform convergence implies almost everywhere convergence and convergence in measure. See also Uniform convergence in probability Modes of convergence (annotated index) Dini's theorem Arzelà–Ascoli theorem Notes References Konrad Knopp, Theory and Application of Infinite Series; Blackie and Son, London, 1954, reprinted by Dover Publications, . G. H. Hardy, Sir George Stokes and the concept of uniform convergence; Proceedings of the Cambridge Philosophical Society, 19, pp. 
148–156 (1918) Bourbaki; Elements of Mathematics: General Topology. Chapters 5–10 (paperback); Walter Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw–Hill, 1976. Gerald Folland, Real Analysis: Modern Techniques and Their Applications, Second Edition, John Wiley & Sons, Inc., 1999, . William Wade, An Introduction to Analysis, 3rd ed., Pearson, 2005 External links Graphic examples of uniform convergence of Fourier series from the University of Colorado Calculus Mathematical series Topology of function spaces Convergence (mathematics)
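To close the loop on the x^n example worked through above, here is a small numerical check. A minimal sketch, assuming NumPy; note that a finite grid can only approximate the supremum near x = 1.

import numpy as np

# On [0, 1), sup |x^n - 0| stays near 1 for every n (no uniform convergence),
# while on the smaller interval [0, 0.9] it is 0.9**n -> 0 (uniform).
def sup_dist(n, hi, pts=100_000):
    x = np.linspace(0.0, hi, pts, endpoint=False)  # grid approximation of the sup
    return float(np.max(np.abs(x ** n)))

for n in (10, 100, 1000):
    print(n, round(sup_dist(n, 1.0), 4), sup_dist(n, 0.9))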
Uniform convergence
Mathematics
3,553
37,827,185
https://en.wikipedia.org/wiki/B%C3%A4sk
Bäsk is a Swedish-style liquor flavored with wormwood ("malört" in Swedish) or anise. Sweden is one of the few countries that has never banned absinthe or other wormwood-flavored liquors. Bäsk is an old alternative spelling of the word besk which means "bitter". In the United States, the Chicago-based brand Jeppson's Malört is one of the most well-known versions of the liquor. In Sweden, the most popular brand is Bäska droppar by O.P. Anderson Distillery. Bäsk is said to be good for digestion, and therefore is traditionally associated with fatty foods. See also Vermouth References Distilled drinks Culture of Sweden Bitters Swedish distilled drinks
Bäsk
Chemistry
159
1,844,123
https://en.wikipedia.org/wiki/Linnean%20Medal
The Linnean Medal of the Linnean Society of London was established in 1888, and is awarded annually, alternately to a botanist or a zoologist, or (as has been common since 1958) to one of each in the same year. The medal was of gold until 1976, and is for the preceding years often referred to as "the Gold Medal of the Linnean Society", not to be confused with the official Linnean Gold Medal which is seldom awarded. The engraver of the medal was Charles Anderson Ferrier of Dundee, a Fellow of the Linnean Society from 1882. On the obverse of the medal is the head of Linnaeus in profile and the words "Carolus Linnaeus", on the reverse are the arms of the society and the legend "Societas Linnaeana optime merenti"; an oval space is reserved for the recipient's name. Linnean medallists 19th century 1888: Sir Joseph D. Hooker and Sir Richard Owen 1889: Alphonse Louis Pierre Pyrame de Candolle 1890: Thomas Henry Huxley 1891: Jean-Baptiste Édouard Bornet 1892: Alfred Russel Wallace 1893: Daniel Oliver 1894: Ernst Haeckel 1895: Ferdinand Julius Cohn 1896: George James Allman 1897: Jacob Georg Agardh 1898: George Charles Wallich 1899: John Gilbert Baker 1900: Alfred Newton 20th century 1901: Sir George King 1902: Albert von Kölliker 1903: Mordecai Cubitt Cooke 1904: Albert C. L. G. Günther 1905: Eduard Strasburger 1906: Alfred Merle Norman 1907: Melchior Treub 1908: Thomas Roscoe Rede Stebbing 1909: Frederick Orpen Bower 1910: Georg Ossian Sars 1911: Hermann Graf zu Solms-Laubach 1912: Robert Cyril Layton Perkins 1913: Heinrich Gustav Adolf Engler 1914: Otto Butschli 1915: Joseph Henry Maiden 1916: Frank Evers Beddard 1917: Henry Brougham Guppy 1918: Frederick DuCane Godman 1919: Sir Isaac Bayley Balfour 1920: Sir Edwin Ray Lankester 1921: Dukinfield Henry Scott 1922: Sir Edward Bagnall Poulton 1923: Thomas Frederic Cheeseman 1924: William Carmichael McIntosh 1925: Francis Wall Oliver 1926: Edgar Johnson Allen 1927: Otto Stapf 1928: Edmund Beecher Wilson 1929: Hugo de Vries 1930: James Peter Hill 1931: Karl Ritter von Goebel 1932: Edwin Stephen Goodrich 1933: Robert Hippolyte Chodat 1934: Sir Sidney Frederic Harmer 1935: Sir David Prain 1936: John Stanley Gardiner 1937: Frederick Frost Blackman 1938: Sir D'Arcy Wentworth Thompson 1939: Elmer Drew Merrill 1940: Sir Arthur Smith Woodward 1941: Sir Arthur George Tansley 1942: Award suspended 1946: William Thomas Calman and Frederick Ernest Weiss 1947: Maurice Jules Gaston Corneille Caullery 1948: Agnes Arber 1949: D. M. S. Watson 1950: Henry Nicholas Ridley 1951: Theodor Mortensen 1952: Isaac Henry Burkill 1953: Patrick Alfred Buxton 1954: Felix Eugene Fritsch 1955: Sir John Graham Kerr 1956: William Henry Lang 1957: Erik Stensiö 1958: Sir Gavin de Beer and William Bertram Turrill 1959: H. M. Fox and Carl Skottsberg 1960: Libbie H. Hyman and Hugh Hamshaw Thomas 1961: and F. S. Russle [sic] 1962: Norman L. Bor and Guillermo Kuschel 1963: Sidnie M. Manton and William H. Pearsall 1964: Richard E. Holttum and Carl Frederick Abel Pantin 1965: John Hutchinson and John Ramsbottom 1966: George Stuart Carter and Sir Harry Godwin 1967: Charles Sutherland Elton and Charles E. Hubbard 1968: and T. M. Harris 1969: Irene Manton and Ethelwynn Trewavas 1970: E. J. H. Corner and Errol I. White 1971: Charles Russell Metcalfe and James Edward Smith 1972: Arthur Roy Clapham and Alfred Romer 1973: G. Ledyard Stebbins and John Z. Young 1974: E. H. W. Hennig and Josias Braun-Blanquet 1975: A. S. 
Watt and Philip M. Sheppard 1976: William Thomas Stearn 1977: Ernst Mayr and Thomas G. Tutin 1978: and Thomas Stanley Westoll 1979: Robert McNeill Alexander and P. W. Richards 1980: Geoffrey Clough Ainsworth and Roy Crowson 1981: Brian Laurence Burtt and Sir Cyril Astley Clarke 1982: Peter Hadland Davis and Peter H. Greenwood 1983: Cecil T. Ingold and Michael J. D. White 1984: John G. Hawkes and J. S. Kennedy 1985: Arthur Cain and Jeffrey B. Harborne 1986: Arthur Cronquist and Percy C. C. Garnham 1987: Geoffrey Fryer and V. H. Heywood 1988: John L. Harley and Sir Richard Southwood 1989: William Donald Hamilton and Sir David Smith 1990: Sir Ghillean Tolmie Prance and F. Gwendolen Rees 1991: William Gilbert Chaloner and R. M. May 1992: Richard Evans Schultes and Stephen Jay Gould 1993: Barbara Pickersgill and Lincoln Brower 1994: and Sir Alec John Jeffreys 1995: S. M. Walters and John Maynard Smith 1996: Jack Heslop-Harrison and Keith Vickerman 1997: Enrico S. Coen and Rosemary Helen Lowe-McConnell 1998: Mark W. Chase and C. Patterson 1999: Philip Barry Tomlinson and Quentin Bone 2000: Bernard Verdcourt and Michael F. Claridge 21st century 2001: Chris Humphries and 2002: Sherwin Carlquist and 2003: Pieter Baas and Bryan Campbell Clarke 2004: Geoffrey Allen Boxshall and John Dransfield 2005: Paula Rudall and 2006: David Mabberley and Richard A. Fortey 2007: and Thomas Cavalier-Smith 2008: and Stephen Donovan 2009: Peter Shaw Ashton and Michael Akam 2010: Dianne Edwards and Derek Yalden 2011: Brian J. Coppins and H. Charles Godfray 2012: Stephen Blackmore and Peter Holland 2013: Kingsley Wayne Dixon 2014: and 2015: Engkik Soepadmo, , and Rosmarie Honegger 2016: Sandra Knapp and Georgina Mace 2017: and 2018: Kamaljit S. Bawa, , and Sophien Kamoun 2019: Vicki Funk and 2020: Ben Sheldon and 2021: Mary Jane West-Eberhard and 2022: Rohan Pethiyagoda and Sebsebe Demissew 2023: Sandra Diaz 2024: Paul Upchurch See also List of biology awards References Biology awards Linnean Society of London British science and technology awards Commemoration of Carl Linnaeus Awards established in 1888 1888 establishments in the United Kingdom
Linnean Medal
Technology
1,391
29,722,340
https://en.wikipedia.org/wiki/Astronomical%20filter
An astronomical filter is a telescope accessory consisting of an optical filter used by amateur astronomers to improve the details and contrast of celestial objects, either for viewing or for photography. Research astronomers, on the other hand, use various band-pass filters for photometry on telescopes, in order to obtain measurements which reveal objects' astrophysical properties, such as stellar classification and placement of a celestial body on its Wien curve. Most astronomical filters work by blocking a specific part of the color spectrum above and below a bandpass, significantly increasing the signal-to-noise ratio of the interesting wavelengths, and so making the object gain detail and contrast. While the color filters transmit certain colors from the spectrum and are usually used for observation of the planets and the Moon, the polarizing filters work by adjusting the brightness, and are usually used for the Moon. The broad-band and narrow-band filters transmit the wavelengths that are emitted by nebulae (by hydrogen and oxygen atoms), and are frequently used for reducing the effects of light pollution. Filters have been used in astronomy at least since the solar eclipse of May 12, 1706. Solar filters White light filters Solar filters block most of the sunlight to avoid any damage to the eyes. Proper filters are usually made from a durable glass or polymer film that transmits only 0.00001% of the light. For safety, solar filters must be securely fitted over the objective of a refracting telescope or aperture of a reflecting telescope so that the body does not heat up significantly. Small solar filters threaded behind eyepieces do not block the radiation entering the scope body, causing the telescope to heat up greatly, and it is not unknown for them to shatter from thermal shock. Therefore, most experts do not recommend such solar filters for eyepieces, and some stockists refuse to sell them or remove them from telescope packages. According to NASA: "Solar filters designed to thread into eyepieces that are often provided with inexpensive telescopes are also unsafe. These glass filters can crack unexpectedly from overheating when the telescope is pointed at the Sun, and retinal damage can occur faster than the observer can move the eye from the eyepiece." Solar filters are used to safely observe and photograph the Sun, which, despite being white, may appear as a yellow-orange disk. A telescope with these filters attached can directly and properly view details of solar features, especially sunspots and granulation on the surface, as well as solar eclipses and transits of the inferior planets Mercury and Venus across the solar disk. Narrowband filters The Herschel Wedge is a prism-based device combined with a neutral-density filter that directs most of the heat and ultraviolet rays out of the telescope, generally giving better results than most filter types. The H-alpha filter transmits the H-alpha spectral line for viewing solar flares and prominences invisible through common filters. These H-alpha filters are much narrower than those used for night H-alpha observing (see Nebular filters below), passing only 0.05 nm (0.5 angstrom) for one common model, compared with 3 nm-12 nm or more for night filters. Because of the narrow bandpass and its sensitivity to temperature shifts, such a filter is often tunable within about ±0.05 nm. 
NASA included the following filters on the Solar Dynamics Observatory, of which only one is visible to human eyes (450.0 nm): 450.0 nm, 170.0 nm, 160.0 nm, 33.5 nm, 30.4 nm, 19.3 nm, 21.1 nm, 17.1 nm, 13.1 nm, and 9.4 nm. These were chosen for temperature, instead of particular emission lines, as are many narrowband filters such as the H-alpha line mentioned above. Color filters Color filters work by absorption/transmission; their color indicates which part of the spectrum they reflect and which they transmit. Filters can be used to increase contrast and enhance the details of the Moon and planets. Each color of the visible spectrum has a filter, and each color filter is used to bring out certain lunar and planetary features; for example, the #8 yellow filter is used to show Mars's maria and Jupiter's belts. The Wratten system is the standard number system used to refer to the color filter types. The filters were first manufactured by Kodak in 1909. Professional filters are also colored, but their bandpass centers are placed around other midpoints (such as in the UBVRI and Cousins systems). Some common color filters and their uses are: Chromatic aberration filters: Used for reduction of the purplish halo, caused by chromatic aberration of refracting telescopes. Such a halo can obscure features of bright objects, especially the Moon and planets. These filters have no effect on observing faint objects. Red: Reduces sky brightness, particularly during daylight and twilight observations. Improves definition of maria, ice, and polar areas of Mars. Improves contrast of blue clouds against background of Jupiter and Saturn. Deep yellow: Improves resolution of atmospheric features of Venus, Jupiter (especially in polar regions), and Saturn. Increases contrast of polar caps, clouds, ice and dust storms on Mars. Enhances comet tails. Dark green: Improves cloud patterns on Venus. Reduces sky brightness during daylight observation of Venus. Increases contrast of ice and polar caps on Mars. Improves visibility of the Great Red Spot on Jupiter and other features in Jupiter's atmosphere. Enhances white clouds and polar regions on Saturn. Medium blue: Enhances contrast of the Moon. Increases contrast of faint shading of Venus clouds. Enhances surface features, clouds, ice and dust storms on Mars. Enhances definition of boundaries between features in atmospheres of Jupiter and Saturn. Improves definition of comet gas tails. Moon filters Neutral density filters, also known in astronomy as Moon filters, are another approach for contrast enhancement and glare reduction. They work simply by blocking some of the object's light to enhance the contrast. Neutral density filters are mainly used in traditional photography, but are used in astronomy to enhance lunar and planetary observations. Polarizing filters Polarizing filters adjust the brightness of images to a better level for observing, but much less so than solar filters. With these types of filter, the range of transmission varies from 3% to 40%. They are usually used for the observation of the Moon, but may also be used for planetary observation. They consist of two polarizing layers in a rotating aluminum cell, which change the amount of transmission as they are rotated against each other. This reduction in brightness and improvement in contrast can reveal the lunar surface features and details, especially when the Moon is near full. Polarizing filters should not be used in place of solar filters designed specially for observing the Sun. 
Nebular filters Narrowband Narrow-band filters are astronomical filters which transmit only a narrow band of spectral lines from the spectrum (usually 22 nm bandwidth, or less). They are mainly used for nebulae observation. Emission nebulae mainly radiate light from doubly ionized oxygen in the visible spectrum, which emits near the 500 nm wavelength. These nebulae also radiate weakly at 486 nm, the Hydrogen-beta line. There are two main types of narrowband filters: ultra-high contrast (UHC) filters, and specific emission line(s) filters. Specific Emission line filters Specific emission line (or lines) filters are used to isolate lines of specific elements or molecules to see their distribution within nebulae. By combining the images from different filters they may also be used to produce false color images. A common set of filters is used with the Hubble Space Telescope, forming the so-called HST palette, with colors assigned as such: Red = S-II; Green = H-alpha; Blue = O-III. These filters are commonly specified with a second figure in nm, which refers to how wide a band is passed and which determines whether the filter excludes or includes other lines. For example, an H-alpha filter centered at 656 nm may pick up N-II (at 654-658 nm); filters about 3 nm wide will block most of the N-II. Commonly used lines / filters are: H-Alpha Hα / Ha (656 nm) from the Balmer series is emitted by HII Regions and is one of the stronger sources. H-Beta Hβ / Hb (486 nm) from the Balmer series is visible from stronger sources. O-III (496 nm and 501 nm) filters allow both of the Oxygen-III lines to pass through. This is strong in many Emission nebulae. S-II (672 nm) filters show the Sulfur-II line. Less common lines/filters: He-II (468 nm) He-I: (587 nm) O-I: (630 nm) Ar-III: (713 nm) Ca-II Ca-K/Ca-H: (393 and 396 nm) for solar observing, shows the Sun in the K and H Fraunhofer lines N-II (658 nm and 654 nm) Often included in wider H-alpha filters Methane (889 nm) allowing clouds to be seen on the gas giants, Venus and (with a suitable solar filter) the Sun. Ultra-High Contrast filters Known commonly as UHC filters, these pass multiple strong common emission lines and so, like the similar light pollution reduction filters (see below), have the effect of blocking most other light sources. UHC filters typically pass the range from 484 to 506 nm. They transmit both the O-III and H-beta spectral lines, block a large fraction of light pollution, and bring out the details of planetary nebulae and most emission nebulae under a dark sky. Broadband The broadband, or light pollution reduction (LPR), filters are designed to block the sodium and mercury vapor light, and also block natural skyglow such as the auroral light. This allows observing nebulae from the city and light polluted skies. Broadband filters differ from narrowband in the range of wavelengths transmitted. LED lighting is more broadband so it is not blocked, although white LEDs have a considerably lower output around 480 nm, which is close to the O-III and H-beta wavelengths. Broadband filters have a wider range because a narrow transmission range causes a fainter image of sky objects, and since the job of these filters is to reveal the details of nebulae from light polluted skies, they have a wider transmission for more brightness. These filters are particularly designed for galaxy observation and photography, and not useful with other deep sky objects such as emission nebulae. 
However, they can still improve the contrast between the DSOs and the background sky, which may clarify the image. See also Infrared cut-off filter List of telescope parts and construction Optical filter Photographic filter Photometric system UBV photometric system References Optical telescope components Astrophotography Astronomical imaging Optical filters
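The passband arithmetic described in the narrowband section above is easy to sketch in code. A minimal Python illustration; the line list and filter widths are examples for orientation, not a product catalogue, and boundary lines are treated as blocked.

# Check which emission lines fall inside a filter's passband, modeled
# as the open interval (center - width/2, center + width/2) in nm.
LINES_NM = {"H-alpha": 656.3, "N-II a": 654.8, "N-II b": 658.4,
            "O-III a": 495.9, "O-III b": 500.7, "H-beta": 486.1}

def passed(center_nm, width_nm):
    lo, hi = center_nm - width_nm / 2, center_nm + width_nm / 2
    return [name for name, wl in LINES_NM.items() if lo < wl < hi]

# A ~3 nm-wide H-alpha filter excludes the neighboring N-II lines,
# while a ~12 nm-wide one admits them:
print(passed(656.3, 3.0))   # ['H-alpha']
print(passed(656.3, 12.0))  # includes both N-II lines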
Astronomical filter
Chemistry,Technology
2,231
56,357
https://en.wikipedia.org/wiki/Linear%20subspace
In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces. Definition If V is a vector space over a field K, a subset W of V is a linear subspace of V if it is a vector space over K for the operations of V. Equivalently, a linear subspace of V is a nonempty subset W such that, whenever w1, w2 are elements of W and a, b are elements of K, it follows that aw1 + bw2 is in W. The singleton set consisting of the zero vector alone and the entire vector space itself are linear subspaces that are called the trivial subspaces of the vector space. Examples Example I In the vector space V = R3 (the real coordinate space over the field R of real numbers), take W to be the set of all vectors in V whose last component is 0. Then W is a subspace of V. Proof: Given u and v in W, then they can be expressed as u = (u1, u2, 0) and v = (v1, v2, 0). Then u + v = (u1 + v1, u2 + v2, 0). Thus, u + v is an element of W, too. Given u = (u1, u2, 0) in W and a scalar c in R, then cu = (cu1, cu2, 0). Thus, cu is an element of W too. Example II Let the field be R again, but now let the vector space V be the Cartesian plane R2. Take W to be the set of points (x, y) of R2 such that x = y. Then W is a subspace of R2. Proof: Let p = (p1, p2) and q = (q1, q2) be elements of W, that is, points in the plane such that p1 = p2 and q1 = q2. Then p + q = (p1 + q1, p2 + q2); since p1 = p2 and q1 = q2, then p1 + q1 = p2 + q2, so p + q is an element of W. Let p = (p1, p2) be an element of W, that is, a point in the plane such that p1 = p2, and let c be a scalar in R. Then cp = (cp1, cp2); since p1 = p2, then cp1 = cp2, so cp is an element of W. In general, any subset of the real coordinate space Rn that is defined by a homogeneous system of linear equations will yield a subspace. (The equation in example I was z = 0, and the equation in example II was x = y.) Example III Again take the field to be R, but now let the vector space V be the set RR of all functions from R to R. Let C(R) be the subset consisting of continuous functions. Then C(R) is a subspace of RR. Proof: We know from calculus that the zero function is continuous, so C(R) is nonempty. We know from calculus that the sum of continuous functions is continuous. Again, we know from calculus that the product of a continuous function and a number is continuous. Example IV Keep the same field and vector space as before, but now consider the set Diff(R) of all differentiable functions. The same sort of argument as before shows that this is a subspace too. Examples that extend these themes are common in functional analysis. Properties of subspaces From the definition of vector spaces, it follows that subspaces are nonempty, and are closed under sums and under scalar multiples. Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W. Equivalently, it suffices to consider linear combinations of two elements at a time. In a topological vector space X, a subspace W need not be topologically closed, but a finite-dimensional subspace is always closed. The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals). 
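The two closure conditions in the proofs above are mechanical enough to spot-check numerically. A minimal sketch, assuming NumPy and using random samples from the line x = y of Example II; this is a sanity check, not a proof.

import numpy as np

rng = np.random.default_rng(0)

def in_W(v):
    """Membership test for W = {(x, y) : x = y} from Example II."""
    return np.isclose(v[0], v[1])

# Check closure under addition and scalar multiplication on random samples.
for _ in range(1000):
    t, s, c = rng.normal(size=3)
    p, q = np.array([t, t]), np.array([s, s])   # two elements of W
    assert in_W(p + q) and in_W(c * p)
print("closure held on all samples")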
Descriptions Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an n-space that passes through the origin. A natural description of a 1-subspace is the scalar multiplication of one non-zero vector v to all possible scalar values. 1-subspaces specified by two vectors are equal if and only if one vector can be obtained from the other with scalar multiplication. This idea is generalized for higher dimensions with linear span, but criteria for equality of k-spaces specified by sets of k vectors are not so simple. A dual description is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Subspaces of codimension 1 specified by two linear functionals are equal if and only if one functional can be obtained from the other with scalar multiplication (in the dual space). It is generalized for higher codimensions with a system of equations. The following two subsections will present this latter description in detail, and the remaining four subsections further describe the idea of linear span. Systems of linear equations The solution set to any homogeneous system of linear equations with n variables is a subspace in the coordinate space Kn: For example, the set of all vectors (over real or rational numbers) satisfying the equations is a one-dimensional subspace. More generally, given a set of n independent functions, the dimension of the subspace in Kk will be the dimension of the null set of A, the composite matrix of the n functions. Null space of a matrix In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation Ax = 0. The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix Every subspace of Kn can be described as the null space of some matrix (see below for more). Linear parametric equations The subset of Kn described by a system of homogeneous linear parametric equations is a subspace: For example, the set of all vectors (x, y, z) parameterized by the equations x = 2t1 + 3t2, y = 5t1 − 4t2, z = −t1 + 2t2 is a two-dimensional subspace of K3, if K is a number field (such as real or rational numbers). Span of vectors In linear algebra, the system of parametric equations can be written as a single vector equation: (x, y, z) = t1(2, 5, −1) + t2(3, −4, 2). The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace. In general, a linear combination of vectors v1, v2, ... , vk is any vector of the form t1v1 + t2v2 + ⋯ + tkvk. The set of all possible linear combinations is called the span: span(v1, ... , vk) = {t1v1 + ⋯ + tkvk : t1, ... , tk ∈ K}. If the vectors v1, ... , vk have n components, then their span is a subspace of Kn. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v1, ... , vk. Example The xz-plane in R3 can be parameterized by the equations x = t1, y = 0, z = t2. As a subspace, the xz-plane is spanned by the vectors (1, 0, 0) and (0, 0, 1). 
Every vector in the xz-plane can be written as a linear combination of these two: (x, 0, z) = x(1, 0, 0) + z(0, 0, 1). Geometrically, this corresponds to the fact that every point on the xz-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1). Column space and row space A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation: x = At, where t is the vector of parameters. In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space (or image) of the matrix A. It is precisely the subspace of Kn spanned by the column vectors of A. The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below). Independence, basis, and dimension In general, a subspace of Kn determined by k parameters (or spanned by k vectors) has dimension k. However, there are exceptions to this rule. For example, the subspace of K3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the xz-plane, with each point on the plane described by infinitely many different values of the parameters t1, t2, t3. In general, vectors v1, ... , vk are called linearly independent if t1v1 + ⋯ + tkvk ≠ u1v1 + ⋯ + ukvk for (t1, t2, ... , tk) ≠ (u1, u2, ... , uk). If v1, ... , vk are linearly independent, then the coordinates t1, ... , tk for a vector in the span are uniquely determined. A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more). Example Let S be the subspace of R4 defined by the equations x1 = 2x2 and x3 = 5x4. Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for S. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors: (2t1, t1, 5t2, t2) = t1(2, 1, 0, 0) + t2(0, 0, 5, 1). The subspace S is two-dimensional. Geometrically, it is the plane in R4 passing through the points (0, 0, 0, 0), (2, 1, 0, 0), and (0, 0, 5, 1). Operations and relations on subspaces Inclusion The set-theoretical inclusion binary relation specifies a partial order on the set of all subspaces (of any dimension). A subspace cannot lie in any subspace of lesser dimension. If dim U = k, a finite number, and U ⊂ W, then dim W = k if and only if U = W. Intersection Given subspaces U and W of a vector space V, then their intersection U ∩ W := {v ∈ V : v is an element of both U and W} is also a subspace of V. Proof: Let v and w be elements of U ∩ W. Then v and w belong to both U and W. Because U is a subspace, then v + w belongs to U. Similarly, since W is a subspace, then v + w belongs to W. Thus, v + w belongs to U ∩ W. Let v belong to U ∩ W, and let c be a scalar. Then v belongs to both U and W. Since U and W are subspaces, cv belongs to both U and W. Since U and W are vector spaces, then 0 belongs to both sets. Thus, 0 belongs to U ∩ W. For every vector space V, the set {0} and V itself are subspaces of V. Sum If U and W are subspaces, their sum is the subspace U + W = {u + w : u ∈ U, w ∈ W}. For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality max(dim U, dim W) ≤ dim(U + W) ≤ dim U + dim W. Here, the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. 
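These dimension relations (the inequality above, and the intersection formula stated next) can be checked numerically with matrix ranks, since dim(U + W) is the rank of the stacked bases. A minimal sketch assuming NumPy; the subspaces are random, so the typical outcome is the generic one.

import numpy as np

rng = np.random.default_rng(1)

# Random subspaces of R^6: U spanned by 3 vectors, W spanned by 2.
B_u = rng.normal(size=(3, 6))
B_w = rng.normal(size=(2, 6))

dim_u = np.linalg.matrix_rank(B_u)
dim_w = np.linalg.matrix_rank(B_w)
dim_sum = np.linalg.matrix_rank(np.vstack([B_u, B_w]))  # dim(U + W)
dim_cap = dim_u + dim_w - dim_sum                        # dim(U ∩ W) by the formula below

assert max(dim_u, dim_w) <= dim_sum <= dim_u + dim_w
print(dim_u, dim_w, dim_sum, dim_cap)  # typically 3 2 5 0 for random data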
The dimension of the intersection and the sum are related by the following equation: dim(U + W) = dim U + dim W − dim(U ∩ W). A set of subspaces is independent when the only intersection between any pair of subspaces is the trivial subspace. The direct sum is the sum of independent subspaces, written as U ⊕ W. An equivalent restatement is that a direct sum is a subspace sum under the condition that every subspace contributes to the span of the sum. The dimension of a direct sum is the same as the sum of subspaces, but may be shortened because the dimension of the trivial subspace is zero. Lattice of subspaces The operations intersection and sum make the set of all subspaces a bounded modular lattice, where the {0} subspace, the least element, is an identity element of the sum operation, and the identical subspace V, the greatest element, is an identity element of the intersection operation. Orthogonal complements If V is an inner product space and N is a subset of V, then the orthogonal complement of N, denoted N⊥, is again a subspace. If V is finite-dimensional and N is a subspace, then the dimensions of N and N⊥ satisfy the complementary relationship dim N + dim N⊥ = dim V. Moreover, no vector is orthogonal to itself, so N ∩ N⊥ = {0} and V is the direct sum of N and N⊥. Applying orthogonal complements twice returns the original subspace: (N⊥)⊥ = N for every subspace N. This operation, understood as negation (¬), makes the lattice of subspaces a (possibly infinite) orthocomplemented lattice (although not a distributive lattice). In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces N such that N ∩ N⊥ ≠ {0}. As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra). Algorithms Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix, until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties: The reduced matrix has the same null space as the original. Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original. Row reduction does not affect the linear dependence of the column vectors. Basis for a row space Input An m × n matrix A. Output A basis for the row space of A. Use elementary row operations to put A into row echelon form. The nonzero rows of the echelon form are a basis for the row space of A. See the article on row space for an example. If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of Kn are equal. Subspace membership Input A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v with n components. Output Determines whether v is an element of S Create a (k + 1) × n matrix A whose rows are the vectors b1, ... , bk and v. Use elementary row operations to put A into row echelon form. If the echelon form has a row of zeroes, then the vectors {b1, ..., bk, v} are linearly dependent, and therefore v is an element of S. Basis for a column space Input An m × n matrix A Output A basis for the column space of A Use elementary row operations to put A into row echelon form. Determine which columns of the echelon form have pivots. 
The corresponding columns of the original matrix are a basis for the column space. See the article on column space for an example. This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns. Coordinates for a vector Input A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v in S Output Numbers t1, t2, ..., tk such that v = t1b1 + ⋯ + tkbk Create an augmented matrix A whose columns are b1, ..., bk, with the last column being v. Use elementary row operations to put A into reduced row echelon form. Express the final column of the reduced echelon form as a linear combination of the first k columns. The coefficients used are the desired numbers t1, t2, ..., tk. (These should be precisely the first k entries in the final column of the reduced echelon form.) If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S. Basis for a null space Input An m × n matrix A. Output A basis for the null space of A Use elementary row operations to put A in reduced row echelon form. Using the reduced row echelon form, determine which of the variables x1, x2, ..., xn are free. Write equations for the dependent variables in terms of the free variables. For each free variable xi, choose a vector in the null space for which xi = 1 and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of A. See the article on null space for an example. Basis for the sum and intersection of two subspaces Given two subspaces U and W of V, a basis of the sum U + W and the intersection U ∩ W can be calculated using the Zassenhaus algorithm. Equations for a subspace Input A basis {b1, b2, ..., bk} for a subspace S of Kn Output An (n − k) × n matrix whose null space is S. Create a matrix A whose rows are b1, ..., bk. Use elementary row operations to put A into reduced row echelon form. Let c1, c2, ..., cn be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots. This results in a homogeneous system of n − k linear equations involving the variables c1, ..., cn. The matrix corresponding to this system is the desired matrix with nullspace S. Example If the reduced row echelon form of A is then the column vectors satisfy the equations It follows that the row vectors of A satisfy the equations In particular, the row vectors of A are a basis for the null space of the corresponding matrix. See also Cyclic subspace Invariant subspace Multilinear subspace learning Quotient space (linear algebra) Signal subspace Subspace topology Notes Citations Sources Textbook Web External links Linear algebra Articles containing proofs Operator theory Functional analysis
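The row-reduction algorithms above map directly onto a computer algebra system. A minimal sketch using SymPy, one plausible toolchain among many; the example matrix is arbitrary.

from sympy import Matrix

A = Matrix([[1, 3, 0, 2],
            [2, 6, 1, 9],
            [1, 3, 1, 7]])

# Reduced row echelon form and pivot columns drive the algorithms above.
rref, pivots = A.rref()
print(rref, pivots)

row_basis = [rref.row(i) for i in range(len(pivots))]   # basis for the row space
col_basis = [A.col(j) for j in pivots]                  # basis for the column space
null_basis = A.nullspace()                              # basis for the null space

# Subspace membership: v lies in the row space iff appending it as a new
# row leaves the rank unchanged.
v = Matrix([[1, 3, 1, 7]])
print(A.col_join(v).rank() == A.rank())  # True for this v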
Linear subspace
Mathematics
3,886
163,156
https://en.wikipedia.org/wiki/Information%20commissioner
The role of information commissioner differs from nation to nation. Most commonly it is a title given to a government regulator in the fields of freedom of information and the protection of personal data in the widest sense. The office often functions as a specialist ombudsman service. Australia The Office of the Australian Information Commissioner (OAIC) has functions relating to freedom of information and privacy, as well as information policy. The Office of the Privacy Commissioner, which was the national privacy regulator, was integrated into the OAIC on 1 November 2010. There are three independent commissioners in the OAIC: the Australian Information Commissioner, the Freedom of Information Commissioner, and the Privacy Commissioner. Bangladesh The Information Commission of Bangladesh promotes and protects access to information. It is formed under the Right to Information Act, 2009, whose stated object is to empower the citizens by promoting transparency and accountability in the working of the public and private organizations, with the ultimate aim of decreasing corruption and establishing good governance. The Act creates a regime through which the citizens of the country may have access to information under the control of public and other authorities. Canada The Information Commissioner of Canada is an independent ombudsman appointed by the Parliament of Canada who investigates complaints from people who believe they have been denied rights provided under Canada's Access to Information Act. Similar bodies at provincial level include the Information and Privacy Commissioner (Ontario). Germany The Federal Commissioner for Data Protection and Freedom of Information (BfDI) is the federal commissioner not only for data protection but also (since commencement of the German Freedom of Information Act on January 1, 2006) for freedom of information. Hong Kong The Privacy Commissioner for Personal Data (PCPD) is charged with education and enforcement of the Personal Data (Privacy) Ordinance, which first came into force in 1997. The commissioner has the power to investigate and impose fines for violations. Reforms in 2021 gave it powers to investigate and prosecute suspected doxxing incidents. India The Central Information Commission, and State Information Commissions, receive and inquire into complaints from anyone who has been refused access to any information requested under the Right to Information Act, or whose rights under that Act have otherwise been obstructed, for example by being prevented from submitting a data request or being required to pay an excessive fee. Ireland The Office of the Information Commissioner in Ireland was set up under the terms of the Freedom of Information Act 1997, which came into effect in April 1998. The Information Commissioner may conduct reviews of the decisions of public bodies in relation to requests for access to information. Since its creation, the office has been held simultaneously with that of the Ombudsman. The Information Commissioner also holds the role of Commissioner for Environmental Information. Switzerland The Federal Data Protection and Information Commissioner is responsible for the supervision of federal authorities and private bodies with respect to data protection and freedom of information legislation. 
United Kingdom In the United Kingdom, the Information Commissioner's Office is responsible for regulating compliance with the Data Protection Act 2018, Freedom of Information Act 2000 and the Environmental Information Regulations 2004. The Freedom of Information (Scotland) Act 2002 is the responsibility of the Scottish Information Commissioner. Other European States All other countries of the European Union and the European Economic Area have equivalent officials created under their versions of Directive 95/46. The Europa website gives links to such bodies around Europe. Cooperation among information commissioners The Global Privacy Enforcement Network is a transnational organization for the coordination of privacy laws among its 59 member states and the European Union. See also Information privacy Freedom of information laws by country Information minister References External links List of national data protection authorities (European Union) International Conference of Data Protection and Privacy Commissioners Europe's Information Society Commissioner Information privacy
Information commissioner
Engineering
744
13,229,499
https://en.wikipedia.org/wiki/Bochner%20identity
In mathematics — specifically, differential geometry — the Bochner identity is an identity concerning harmonic maps between Riemannian manifolds. The identity is named after the American mathematician Salomon Bochner. Statement of the result Let M and N be Riemannian manifolds and let u : M → N be a harmonic map. Let du denote the derivative (pushforward) of u, ∇ the gradient, Δ the Laplace–Beltrami operator, RiemN the Riemann curvature tensor on N and RicM the Ricci curvature tensor on M. Then See also Bochner's formula References External links Differential geometry Mathematical identities
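The displayed identity itself is missing from this copy of the article. For orientation, one common form of the harmonic-map Bochner identity is sketched below in LaTeX; sign and curvature conventions differ between authors, so treat this as indicative rather than authoritative.

\frac{1}{2}\,\Delta |du|^{2}
  = |\nabla du|^{2}
  + \sum_{i} \big\langle du(\operatorname{Ric}^{M} e_{i}),\, du(e_{i}) \big\rangle
  - \sum_{i,j} \big\langle \operatorname{Riem}^{N}\!\big(du(e_{i}), du(e_{j})\big)\, du(e_{j}),\, du(e_{i}) \big\rangle

Here {e_i} denotes a local orthonormal frame on M. Setting N = R recovers the classical Bochner formula for harmonic functions mentioned under "See also".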
Bochner identity
Mathematics
130
22,679,567
https://en.wikipedia.org/wiki/Helvella%20acetabulum
Helvella acetabulum is a species of fungus in the family Helvellaceae, order Pezizales. This relatively large cup-shaped fungus is characterized by a tan fruit body with prominent branching ribs resembling a cabbage leaf; for this reason it is commonly known as the cabbage leaf Helvella. Other colloquial names include the vinegar cup and the brown ribbed elfin cup. The fruit bodies reach dimensions of by tall. It is found in Eurasia and North America, where it grows in sandy soils, under both coniferous and deciduous trees. Taxonomy The fungus was first named as Peziza acetabulum by Carl Linnaeus in his 1753 Species Plantarum. It was given its current name by French mycologist Lucien Quélet in 1874 after having been placed in various Peziza segregates: Joachim Christian Timm placed it in Octospora (1788), Samuel Frederick Gray in Macroscyphus (1821), and Leopold Fuckel in Acetabula (1870). The trend continued, with Claude Casimir Gillet placing it in Aleuria in 1879, and Otto Kuntze placing it in his new Paxina (of which it would later be designated the type species) in 1891. Described independently as Peziza sulcata by Persoon in 1801, it was placed under that name in both Paxina and Acetabula, alongside its precursor, as both taxa were still considered separate at the time. Finally, Frederic Clements renamed Acetabula as Phleboscyphus in 1903 and improperly reused Fuckel's name as the basionym of his Phleboscyphus vulgaris. The specific epithet acetabulum means "little vinegar cup", and was the Latin word for a small vessel used for storing vinegar (see acetabulum). Common names include the "cabbage leaf Helvella", the "vinegar cup", the "ribbed-stalk cup", and the "brown ribbed elfin cup". Description Helvella acetabulum has a deeply cup-shaped fruit body (technically an apothecium) that is up to in diameter and deep. The cream-colored stem is typically tall by thick, with ribs extending almost to the top of the fruit body. The fruit body's exterior surface is cream-colored towards the stem and may feel subtly grainy near the margin. The inner spore-bearing surface, the hymenium, is brownish and may be smooth or slightly wavy. The mushroom's odor and taste are not distinctive. The spores are smooth, elliptical, translucent (hyaline), and contain a single central oil droplet; they have dimensions of 18–20 by 12–14 μm. The spore-bearing cells, the asci, measure 350–400 by 15–20 μm and are operculate, meaning they have an apical "lid" that releases the spores. The tips of the asci are inamyloid, so they do not absorb iodine when stained with Melzer's reagent. The paraphyses are club-shaped and pale brown, with tips that are up to 10 μm thick. Similar species Helvella queletii has a roughly similar form and appearance, but the ribbing in that species does not extend up the margin as it does in H. acetabulum. H. griseoalba has ribs that extend halfway up the sides of the fruit body, but the color of the cup is pale to dark gray rather than cream. The ribs of H. solitaria and Dissingia leucomelaena barely touch the cap. H. costifera produces similar fruit bodies but has a grayish to grayish-brown hymenium; like H. acetabulum, it has ribs that extend over most of the outside of the fruit body. There are sometimes intermediate forms between the two species, making them difficult to distinguish. H. robusta is also similar to H. acetabulum, but has a lighter-colored hymenium, a robust stem, and the margin of the fruit body is often bent over the stem at maturity. In contrast, H.
acetabulum never has the edge of the fruit body bent over the stem, and the stem is "indistinct or prominent, but never robust". Other similar species include H. leucomelaena and Gyromitra perlata. Distribution and habitat This fungus is widespread in North America and Europe. In North America, the distribution extends north to Alberta, Canada. In Mexico, it has been collected from the State of Mexico, Guanajuato, Guerrero, and Tlaxcala. It is also found in Israel, Jordan, Turkey, Iran, China (Xinjiang), and Japan. The fruit bodies grow solitary, scattered, or clustered together on soil in both coniferous and deciduous woods, typically in spring and summer. A preference for growing in association with coast live oak (Quercus agrifolia) has been noted for Californian populations. Potential toxicity Although the edibility of the fruit bodies is often listed as "unknown", consumption of this fungus is not recommended, as similar species in the family Helvellaceae contain varying levels of monomethylhydrazine (MMH). Although MMH can be removed by boiling in a well-ventilated area, consumption of any MMH-producing mushroom is not advisable (as with Gyromitra esculenta). Roger Phillips lists the species as poisonous. References acetabulum Fungi described in 1753 Fungi of North America Fungi of Europe Fungi of Asia Inedible fungi Taxa named by Carl Linnaeus Fungus species
Helvella acetabulum
Biology
1,182
9,258,009
https://en.wikipedia.org/wiki/Digital%20media%20player
A digital media player (also known as a streaming device or streaming box) is a type of consumer electronics device designed for the storage, playback, or viewing of digital media content. They are typically designed to be integrated into a home cinema configuration, and attached to a television or AV receiver or both. The term is most synonymous with devices designed primarily for the consumption of content from streaming media services such as internet video, including subscription-based over-the-top content services. These devices usually have a compact form factor (either as a compact set-top box, or a dongle designed to plug into an HDMI port), and contain a 10-foot user interface with support for a remote control and, in some cases, voice commands, as control schemes. Some services may support remote control on digital media players using their respective mobile apps, while Google's Chromecast ecosystem is designed around integration with the mobile apps of content services. A digital media player's operating system may provide a search engine for locating content available across multiple services and installed apps. Many digital media players offer internal access to digital distribution platforms, where users can download or purchase content such as films, television episodes, and apps. In addition to internet sources, digital media players may support the playback of content from other sources, such as external media (including USB drives or memory cards), or streamed from a computer or media server. Some digital media players may also support video games, though their complexity (which can range from casual games to ports of larger games) depends on operating system and hardware support; except on devices marketed as microconsoles, games are not usually promoted as the device's main function. Digital media players do not usually include a tuner for receiving terrestrial television, nor disc drives for Blu-rays or DVDs. Some devices, such as standalone Blu-ray players, may include similar functions to digital media players (often in a reduced form), as do recent generations of video game consoles, while smart TVs integrate similar functions into the television itself. Some TV makers have, in turn, licensed operating system platforms from digital media players as middleware for their smart TVs—such as Android TV, Amazon Fire TV, and Roku—which typically provide a similar user experience to their standalone counterparts, but with TV-specific features and settings reflected in their user interface. Overview In the 2010s, with the popularity of portable media players and digital cameras, as well as fast Internet download speeds and relatively cheap mass storage, many people came into possession of large collections of digital media files that cannot be played on a conventional analog HiFi without connecting a computer to an amplifier or television. The ability to play these files on a network-connected digital media player that is permanently connected to a television is seen as a convenience. The rapid growth in the availability of online content has made it easier for consumers to use these devices and obtain content. YouTube, for instance, is a common plug-in available on most networked devices. Netflix has also struck deals with many consumer-electronics makers to make its interface available in device menus for its streaming subscribers.
This symbiotic relationship between Netflix and consumer electronics makers has helped propel Netflix to become the largest subscription video service in the U.S., using up to 20% of U.S. bandwidth at peak times. Media players are often designed for compactness and affordability, and tend to have small or non-existent hardware displays other than simple LED lights to indicate whether the device is powered on. Interface navigation on the television is usually done with an infrared remote control, while more-advanced digital media players come with high-performance remote controls which allow control of the interface using integrated touch sensors. Some remotes also include accelerometers for air-mouse features which allow basic motion gaming. Most digital media player devices are unable to play physical audio or video media directly, and instead require a user to convert these media into playable digital files using a separate computer and software. They are also usually incapable of recording audio or video. In the 2010s, it also became common to find digital media player functionality integrated into other consumer-electronics appliances, such as DVD players, set-top boxes, smart TVs, or even video game consoles. Terminology Digital media players are also commonly referred to as digital media extenders, digital media streamers, digital media hubs, digital media adapters, or digital media receivers (not to be confused with AV receivers). Digital media player manufacturers use a variety of names to describe their devices. Some more commonly used alternative names include: Connected DVD Connected media player Digital audio receiver Digital media adapter Digital media connect Digital media extender Digital media hub Digital media player Digital media streamer Digital media receiver Digital media renderer Digital video receiver Digital video streamer HD Media Player HDD media player Media Extender Media Regulator Net connected media player Network connected media player Network media player Networked Digital Video Disc Networked entertainment gateway OTT player Over-the-Top player Smart Television media player Smart Television player Streaming media box Streaming media player Streaming video player Wireless Media Adapter YouTube Player Support History By November 2000, an audio-only digital media player was demonstrated by a company called SimpleDevices, which was awarded two patents covering this invention in 2006. Developed under the SimpleFi name by Motorola in late 2001, the design was based on a Cirrus ARM7 processor and the wireless HomeRF networking standard, which pre-dated 802.11b in residential markets. Other early market entrants in 2001 included the Turtle Beach AudioTron, Rio Receiver, and SliMP3 digital media players. An early version of a video-capable digital media player was presented by F.C. Jeng et al. at the International Conference on Consumer Electronics in 2002. It included a network interface card, a media processor for audio and video decoding, an analog video encoder (for video playback to a TV), an audio digital-to-analog converter for audio playback, and an infrared (IR) receiver for the remote-control interface. A concept of a digital media player was also introduced by Intel in 2002 at the Intel Developer Forum as part of their Extended Wireless PC Initiative. Intel's digital media player was based on an XScale PXA210 processor and supported 802.11b wireless networking.
Intel was among the first to use the Linux embedded operating system and UPnP technology for its digital media player. Networked audio and DVD players were among the first consumer devices to integrate digital media player functionality. Examples include the Philips Streamium range of products that allowed for remote streaming of audio, the GoVideo D2730 Networked DVD player which integrated DVD playback with the capability to stream Rhapsody audio from a PC, and the Buffalo LinkTheater which combined a DVD player with a digital media player. Later, the Xbox 360 gaming console from Microsoft was among the first gaming devices to integrate a digital media player. With the Xbox 360, Microsoft also introduced the concept of a Windows Media Center Extender, which allows users to access the Media Center capabilities of a PC remotely, through a home network. More recently, Linksys, D-Link, and HP introduced a generation of digital media players that support 720p and 1080p high-resolution video playback and may integrate both Windows Extender and traditional digital media player functionality. Typical features A digital media player can connect to the home network using either a wireless (IEEE 802.11a, b, g, and n) or wired Ethernet connection. Digital media players include a user interface that allows users to navigate through their digital media library, search for, and play back media files. Some digital media players only handle music; some handle music and pictures; some handle music, pictures, and video; while others go further to allow internet browsing or controlling live TV from a PC with a TV tuner. Some other capabilities offered by digital media players include: Play, catalog, and store music from a local hard disk, flash drive, or memory card; play music CDs and view CD album art; view digital photos; and watch DVD, Blu-ray, or other videos. Stream movies, music, and photos (media) over the wired or wireless network using technologies like DLNA. View digital pictures (one by one or as picture slideshows). Stream online video to a TV from services such as Netflix and YouTube. Play video games. Browse the Web, check email, and access social networking services through downloadable Internet applications. Video conference by connecting a webcam and microphone. In the 2010s, there were stand-alone digital media players on the market from AC Ryan, Asus, Apple (e.g., Apple TV), NetGear (e.g., NTV and NeoTV models), Dune, iOmega, Logitech, Pivos Group, Micca, Sybas (Popcorn Hour), Amkette EvoTV, D-Link, EZfetch, Fire TV, Android TV, Pinnacle, Xtreamer, and Roku, just to name a few. The models change frequently, so it is advisable to visit their web sites for current model names. Processors These devices come with low-power processors or SoCs (systems on chip), most commonly based on MIPS or ARM architecture processors combined with an integrated DSP and GPU in a SoC (or MPSoC) package. They also include RAM and some type of built-in non-volatile memory (flash memory). Internal hard-drive capabilities An HD media player or HDD media player (HDMP) is a consumer product that combines a digital media player with a hard drive enclosure and all the hardware and software for playing audio, video, and photos on a television. All these can play computer-based media files to a television without the need for a separate computer or network connection, and some can even be used as a conventional external hard drive.
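As a rough illustration of the media-cataloging step such hard-drive players perform over local storage, the sketch below (in Python, purely illustrative; not any vendor's firmware) walks a storage root and groups playable files by extension. The extension whitelist and the mount point are assumptions for the example:

```python
from pathlib import Path

# Hypothetical extension whitelist; real players support
# device-specific sets of containers and codecs.
MEDIA_EXTENSIONS = {".mp3", ".flac", ".ogg", ".jpg", ".png", ".mpg", ".mp4", ".mkv"}

def build_catalog(root: str) -> dict:
    """Walk a storage root and group playable files by extension."""
    catalog = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in MEDIA_EXTENSIONS:
            catalog.setdefault(path.suffix.lower(), []).append(path)
    return catalog

if __name__ == "__main__":
    # "/media/internal-hdd" is a placeholder mount point, not a real product path.
    for ext, files in sorted(build_catalog("/media/internal-hdd").items()):
        print(f"{ext}: {len(files)} file(s)")
```

A real player would additionally index metadata such as ID3 tags and thumbnails into a database for its on-screen library, but a directory walk of this kind is the core of the cataloging step.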
These types of digital media players are sometimes sold as empty shells to allow the user to fit their own choice of hard drive (some can manage unlimited hard disk capacity and others only a certain capacity, e.g. 1 TB, 2 TB, 3 TB, or 4 TB), and the same model is sometimes sold with or without an internal hard drive already fitted. Formats, resolutions and file systems Digital media players can usually play H.264 (SD and HD), MPEG-4 Part 2 (SD and HD), MPEG-1, MPEG-2 (.mpg and .ts), VOB, and ISO image video, with PCM, MP3, and AC3 audio tracks. They can also display images (such as JPEG and PNG) and play music files (such as FLAC, MP3, and Ogg). Operating system While most media players have traditionally run proprietary or open-source software frameworks based on Linux as their operating systems, many newer network-connected media players are based on the Android platform, which gives them an advantage in terms of applications and games from the Google Play store. Even without Android, some digital media players still have the ability to run applications (sometimes available via an app store), interactive on-demand media, personalized communications, and social networking features. Connections There are two ways to connect an extender to its central media server: wired or wireless. Streaming and communication protocols While early digital media players used proprietary communication protocols to interface with media servers, today most digital media players either use standards-based protocols such as SMB/CIFS (Samba) or NFS, or rely on some version of the UPnP (Universal Plug and Play) and DLNA (Digital Living Network Alliance) standards (a minimal discovery sketch appears at the end of this passage). DLNA compliance is meant to guarantee a minimum set of functionality and proper interoperability among digital media players and servers regardless of the manufacturer, but not every manufacturer follows the standards perfectly, which can lead to incompatibility. Media server Some digital media players will only connect to specific media server software installed on a PC to stream music, pictures, and recorded or live TV originating from the computer. Apple iTunes can, for example, be used this way with the Apple TV hardware that connects to a TV. Apple has developed a tightly integrated device and content management ecosystem with their iTunes Store, personal computers, iOS devices, and the Apple TV digital media receiver. The most recent version of the Apple TV has lost the hard drive that was included in its predecessor and depends fully on either streamed internet content or another computer on the home network for media. Connection ports Television connection is usually made via composite, SCART, component, or HDMI video, with optical audio (TOSLINK/S-PDIF); players connect to the local network and broadband internet using either a wired Ethernet or a wireless Wi-Fi connection, and some also have built-in Bluetooth support for remotes and game-pads or joysticks. Some players come with USB (USB 2.0 or USB 3.0) ports which allow local media content playback. Use Market impact on traditional television services The convergence of content, technology, and broadband access allows consumers to stream television shows and movies to their high-definition television in competition with pay television providers. The research company SNL Kagan expects 12 million households, roughly 10%, to go without cable, satellite, or telco video service by 2015 using over-the-top services.
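As promised in the protocols discussion above, here is a minimal sketch of SSDP discovery, the multicast step a UPnP/DLNA player performs to find media servers on the home network. The multicast address, port, and request headers come from the UPnP specification; the MediaServer search target and the timeout are illustrative choices, and the sketch simply prints raw responses rather than parsing device descriptions:

```python
import socket

# SSDP multicast address and port, as defined by the UPnP specification.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

# M-SEARCH request asking UPnP media servers to announce themselves.
# MX is the maximum response delay in seconds; ST is the search target.
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: urn:schemas-upnp-org:device:MediaServer:1",
    "", "",
])

def discover(timeout: float = 3.0) -> list:
    """Broadcast an SSDP search and collect raw responses from the LAN."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH.encode("ascii"), (SSDP_ADDR, SSDP_PORT))
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            # Keep only the status line of each HTTP-style response.
            responses.append(f"{addr[0]}: {data.decode(errors='replace').partition(chr(13))[0]}")
    except socket.timeout:
        pass  # no more replies within the window
    finally:
        sock.close()
    return responses

if __name__ == "__main__":
    for line in discover():
        print(line)
```

In a real player, the LOCATION header of each response would then be fetched and parsed to enumerate the server's content directories.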
This represents a new trend in the broadcast television industry, as the list of options for watching movies and TV over the Internet grows at a rapid pace. Research also shows that even as traditional television service providers are trimming their customer base, they are adding broadband Internet customers. Nearly 76.6 million U.S. households get broadband from leading cable and telephone companies, although only a portion have sufficient speeds to support quality video streaming. Convergence devices for home entertainment will likely play a much larger role in the future of broadcast television, effectively shifting traditional revenue streams while providing consumers with more options. According to a report from the researcher NPD In-Stat, only about 12 million U.S. households have either their Web-capable TVs or digital media players connected to the Internet, although In-Stat estimates about 25 million U.S. TV households own a set with built-in network capability. Also, In-Stat predicts that 100 million homes in North America and western Europe will own digital media players and television sets that blend traditional programs with Internet content by 2016. Use for illegal streaming Since at least 2015, dealers have marketed digital media players, often running the Android operating system and branded as being "fully-loaded", that are promoted as offering free streaming access to copyrighted media content, including films and television programs, as well as live feeds of television channels. These players are commonly bundled with the open source media player software Kodi, which is in turn pre-loaded with plug-ins enabling access to services streaming this content without the permission of their respective copyright holders. These "fully-loaded" set-top boxes are often sold through online marketplaces such as Amazon.com and eBay, as well as local retailers. The spread of these players has been attributed to their low cost and ease of use, with user experiences similar to legal subscription services such as Netflix. "Fully-loaded" set-top boxes have been subject to legal controversies, especially given that their user experiences made them accessible to end-users who may not always realize that they are actually streaming pirated content. In the United Kingdom, the Federation Against Copyright Theft (FACT) has taken court actions on behalf of rightsholders against those who market digital media players pre-loaded with access to copyrighted content. In January 2017, an individual seller pleaded not guilty to charges of marketing and distributing devices that circumvent technological protection measures. In March 2017, the High Court of Justice ruled that BT Group, Sky plc, TalkTalk, and Virgin Media must block servers that had been used on such set-top boxes to illegally stream Premier League football games. Later in the month, Amazon UK banned the sale of "certain media players" that had been pre-loaded with software to illegally stream copyrighted content. On 26 April 2017, the European Court of Justice ruled that the distribution of set-top boxes with access to unauthorized streams of copyrighted works violated the exclusive rights to communicate them to the public. In September 2017, a British seller of such boxes pleaded guilty to violations of the Copyright, Designs and Patents Act for selling devices that can circumvent effective technical protection measures.
In Canada, it was initially believed that these set-top boxes fell within a legal grey area, as the transient nature of streaming content did not necessarily mean that the content was being downloaded in violation of Canadian copyright law. However, on 1 June 2016, a consortium of Canadian media companies (BCE Inc., Rogers Communications, and Videotron) obtained a temporary federal injunction against five retailers of Android-based set-top boxes, alleging that their continued sale was causing "irreparable harm" to their television businesses, and that the devices' primary purpose was to facilitate copyright infringement. The court rejected an argument by one of the defendants, who stated that they were only marketing a hardware device with publicly available software, ruling that the defendants were "deliberately encourag[ing] consumers and potential clients to circumvent authorized ways of accessing content." Eleven additional defendants were subsequently added to the suit. The lawyer of one of the defendants argued that retailers should not be responsible for the actions of their users, as any type of computing device could theoretically be used for legal or illegal purposes. In April 2017, the Federal Court of Appeal blocked an appeal requesting that the injunction be lifted pending the outcome of the case. Although the software is free to use, the developers of Kodi have not endorsed any add-on or Kodi-powered device intended for facilitating copyright infringement. Nathan Betzen, president of the XBMC Foundation (the non-profit organization which oversees the development of the Kodi software), argued that the reputation of Kodi had been harmed by third-party retailers who "make a quick buck modifying Kodi, installing broken piracy add-ons, advertising that Kodi lets you watch free movies and TV, and then vanishing when the user buys the box and finds out that the add-on they were sold on was a crummy, constantly breaking mess." Betzen stated that the XBMC Foundation was willing to enforce its trademarks against those who use them to promote Kodi-based products which facilitate copyright infringement. Following a lawsuit by Dish Network against TVAddons, a website that offered streaming add-ons that were often used with Kodi and on such devices, the group shut down its add-ons and website in June 2017. A technology analyst speculated that the service could eventually re-appear under a different name in the future, as torrent trackers have. In June, the service's operator was also sued by the Bell/Rogers/Videotron consortium for inducing copyright infringement. In June 2017, Televisa was granted a court order banning the sale of all Roku products in Mexico, as it was alleged that third parties had been operating subscription television services for the devices that contain unlicensed content. The content is streamed through unofficial apps that are added to the devices through hacking. Roku objected to the allegations, stating that these services were not certified by the company or part of its official Channels platform, whose terms of service require that they have rights to stream the content that they offer. Roku also stated that it actively cooperates with reports of channels that infringe copyrights. The ruling was overturned in October 2018 after Roku took additional steps to remove channels with unauthorized content from the platform.
In May 2018, the Federal Communications Commission sent letters to the CEOs of Amazon.com and eBay, asking for their help in removing such devices from their marketplaces. The letter cited malware risks, fraudulent use of FCC certification marks, and how their distribution through major online marketplaces may incorrectly suggest that they are legal and legitimate products. In Saudi Arabia, the practice of using digital media players for pirated television content first became popular during the Qatar diplomatic crisis, after Qatari pay television network beIN Sports was banned from doing business in the country. The pirate subscription television service BeoutQ operated a satellite television service featuring repackaged versions of the beIN Sports channels, but its Android-based satellite boxes also included a pre-loaded app store offering apps for multiple streaming and subscription services dealing primarily in copyrighted media. See also Comparison of digital media players Cord-cutting Digital Living Network Alliance Digital video recorder List of smart TV platforms Second screen Streaming media System on a chip Tivoization Tekpix References External links HP MediaSmart Connect Wins Popular Mechanics Editor's Choice Award at CES 2008 CNET Editors' Best Network Music Players Universal remote codes IPTV Smarters PC Magazine Media Hub & Receiver Finder AudioFi Reviews of wireless players PC World's Future Gear: PC on the HiFi, and the TV Consumer electronics Media players Networking hardware Television technology Multimedia Android (operating system) devices Digital audio
Digital media player
Technology,Engineering
4,327
22,722,671
https://en.wikipedia.org/wiki/Ponte%20del%20Diavolo
Ponte del Diavolo (Italian for "Devil's bridge") is a territorial game (with connective elements similar to Go) in which two players create islands and then add bridges to connect them. It was created by Martin Ebel and published by Hans im Glück in 2007 and by Rio Grande Games in 2008. Games magazine named Ponte del Diavolo its "Best New Abstract Strategy Game" winner for 2009. External links "Rules for the game 'Ponte del Diavolo'" at Yucata References Board games introduced in 2007 Abstract strategy games Mathematical games
Ponte del Diavolo
Mathematics
121
25,171,328
https://en.wikipedia.org/wiki/A.B.C.%20Liniment
A.B.C. Liniment was a patent medicine liniment sold from approximately 1880 to 1935 as a topical pain-relieving agent. It was sold for relief of pain caused by various ailments, including lumbago (lower back pain), sciatica, neuralgia, rheumatism, and stiffness after exercise. It was named for its three primary ingredients: aconite, belladonna, and chloroform. There were numerous cases of poisoning from the mixture, resulting in at least one death. References Ointments Patent medicines Toxins
A.B.C. Liniment
Chemistry
124
38,134,506
https://en.wikipedia.org/wiki/TradeCard
TradeCard, Inc. was an American software company. Its main product, also called TradeCard, was a SaaS collaboration platform designed to allow companies to manage their extended supply chains, including tracking the movement of goods and payments. TradeCard software helped to improve visibility, cash flow, and margins for over 10,000 retailers and brands, factories and suppliers, and service providers (financial institutions, logistics service providers, customs brokers, and agents) operating in 78 countries. On January 7, 2013, TradeCard and GT Nexus announced plans for a merger of equals, creating a global supply-chain management company that would employ about 1,000 people and serve about 20,000 businesses in industries including manufacturing, retail, and pharmaceuticals. The combined company rebranded itself as GT Nexus. History TradeCard was founded in 1999 by Kurt Cavano as a privately owned firm. In 2003, Warburg Pincus led three funding rounds, with TradeCard closing $10 million. In 2010, Deloitte cited TradeCard for its entrepreneurial and disruptive cloud-technology enterprise resource planning solution, which provides new IT architectures designed to address unmet needs of enterprises. In 2011, TradeCard's revenue grew by 36% over the previous year, and the company claimed on its website that its platform handled $25 billion in sourcing volume, generated by 10,000 organizations and 45,000 unique users. In 2012, founder and CEO Kurt Cavano transitioned to the Chairman role and Sean Feeney was appointed CEO. TradeCard was headquartered in New York City, with offices in San Francisco, Amsterdam, Hong Kong, Shenzhen, Shanghai, Taipei, Seoul, Colombo, and Ho Chi Minh City. Clients TradeCard provided global supply chain and financial supply chain products to retail companies, factories and suppliers, and service providers (financial institutions, logistics service providers, customs brokers, and agents). Clients included retailers and brands such as Coach, Inc., Levi Strauss & Co., Columbia Sportswear, Guess, Rite Aid, and Perry Ellis International. Awards 2012 Best Platform Connecting Buyers, Suppliers and Financial Institutions by Global Finance 2012 Supply and Demand Chain 100 2012 Pros to Know by Supply and Demand Chain 2011 Top Innovator by Apparel Magazine 2011 Great Supply Chain Partner by SupplyChainBrain References External links Supply chain software companies American companies established in 1999 Companies based in New York City ERP software companies Software distribution Software industry Cloud platforms ERP software Private equity portfolio companies Business software companies Warburg Pincus companies
TradeCard
Technology,Engineering
499