source | text |
|---|---|
https://en.wikipedia.org/wiki/Lustre%20%28file%20system%29 | Lustre is a type of parallel distributed file system, generally used for large-scale cluster computing. The name Lustre is a portmanteau word derived from Linux and cluster. Lustre file system software is available under the GNU General Public License (version 2 only) and provides high performance file systems for computer clusters ranging in size from small workgroup clusters to large-scale, multi-site systems. Since June 2005, Lustre has consistently been used by at least half of the top ten, and more than 60 of the top 100 fastest supercomputers in the world,
including the world's No. 1 ranked TOP500 supercomputer in November 2022, Frontier, as well as previous top supercomputers such as Fugaku, Titan and Sequoia.
Lustre file systems are scalable and can be part of multiple computer clusters with tens of thousands of client nodes, hundreds of petabytes (PB) of storage on hundreds of servers, and tens of terabytes per second (TB/s) of aggregate I/O throughput. This makes Lustre file systems a popular choice for businesses with large data centers, including those in industries such as meteorology, simulation, artificial intelligence and machine learning, oil and gas, life science, rich media, and finance. The I/O performance of Lustre has widespread impact on these applications and has attracted broad attention.
History
The Lustre file system architecture began as a research project in 1999 by Peter J. Braam, who was on the staff of Carnegie Mellon University (CMU) at the time. Braam went on to found his own company, Cluster File Systems, in 2001, building on work on the InterMezzo file system in the Coda project at CMU.
Lustre was developed under the Accelerated Strategic Computing Initiative Path Forward project funded by the United States Department of Energy, which included Hewlett-Packard and Intel.
In September 2007, Sun Microsystems acquired the assets of Cluster File Systems Inc., including its "intellectual property".
Sun included Lustre with its high-p |
https://en.wikipedia.org/wiki/Klein%20graphs | In the mathematical field of graph theory, the Klein graphs are two different but related regular graphs, each with 84 edges. Each can be embedded in the orientable surface of genus 3, in which they form dual graphs.
The cubic Klein graph
This is a 3-regular (cubic) graph with 56 vertices and 84 edges, named after Felix Klein.
It is Hamiltonian, has chromatic number 3, chromatic index 3, radius 6, diameter 6 and girth 7. It is also a 3-vertex-connected and a 3-edge-connected graph. It has book thickness 3 and queue number 2.
It can be embedded in the genus-3 orientable surface (which can be represented as the Klein quartic), where it forms the Klein map with 24 heptagonal faces, Schläfli symbol {7,3}8.
According to the Foster census, the Klein graph, referenced as F056B, is the only cubic symmetric graph on 56 vertices which is not bipartite.
It can be derived from the 28-vertex Coxeter graph.
Algebraic properties
The automorphism group of the Klein graph is the group PGL2(7) of order 336, which has
PSL2(7) as a normal subgroup. This group acts transitively on its half-edges, so the Klein graph is a symmetric graph.
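The group orders quoted here follow from the standard formulas |PGL(2, q)| = q(q − 1)(q + 1) and |PSL(2, q)| = |PGL(2, q)| / gcd(2, q − 1); a quick arithmetic sketch (no group-theory library assumed):

```python
from math import gcd

def pgl2_order(q: int) -> int:
    # |PGL(2, q)| = q * (q - 1) * (q + 1)
    return q * (q - 1) * (q + 1)

def psl2_order(q: int) -> int:
    # PSL(2, q) is the quotient of SL(2, q) by its center;
    # its order is |PGL(2, q)| / gcd(2, q - 1)
    return pgl2_order(q) // gcd(2, q - 1)

print(pgl2_order(7))  # 336, the order of the Klein graph's automorphism group
print(psl2_order(7))  # 168, the normal subgroup PSL(2, 7)
```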
The characteristic polynomial of this 56-vertex Klein graph is equal to
The 7-regular Klein graph
This is a 7-regular graph with 24 vertices and 84 edges, named after Felix Klein.
It is Hamiltonian, has chromatic number 4, chromatic index 7, radius 3, diameter 3 and girth 3.
It can be embedded in the genus-3 orientable surface, where it forms the dual of the Klein map, with 56 triangular faces, Schläfli symbol {3,7}8.
It is the unique distance-regular graph with intersection array ; however, it is not a distance-transitive graph.
Algebraic properties
The automorphism group of the 7-valent Klein graph is the same group of order 336 as for the cubic Klein map, likewise acting transitively on its half-edges.
The characteristic polynomial of this 24-vertex Klein graph is equal to . |
https://en.wikipedia.org/wiki/Major%20sublingual%20duct | The excretory ducts of the sublingual gland are from eight to twenty in number. Of the smaller sublingual ducts (ducts of Rivinus), some join the submandibular duct; others open separately into the mouth, on the elevated crest of mucous membrane (plica sublingualis), caused by the projection of the gland, on either side of the frenulum linguae. One or more join to form the major sublingual duct (larger sublingual duct, duct of Bartholin), which opens into the submandibular duct. |
https://en.wikipedia.org/wiki/KLRA1 | Killer cell lectin-like receptor subfamily A (KLRA, alternative nomenclature Ly49) is a gene cluster coding for proteins of the Ly49 family, which are membrane receptors expressed mainly on the surface of NK cells and other cells of the immune system in some mammals, including rodents and cattle but not humans. The mouse Klra gene cluster is located on chromosome 6 and comprises 20-30 genes and pseudogenes, e.g. Klra1 (Ly49A). The Klra gene family is highly polymorphic and polygenic, and various mouse strains encode different numbers of Klra genes.
The homologous human KLRAP1 gene has been classified as a transcribed pseudogene because all associated transcripts are candidates for nonsense-mediated decay (NMD). |
https://en.wikipedia.org/wiki/Discredited%20HIV/AIDS%20origins%20theories | Various fringe theories have arisen to speculate about purported alternative origins for the human immunodeficiency virus (HIV) and the acquired immunodeficiency syndrome (AIDS), with claims ranging from accidental exposure to supposedly purposeful acts. Several inquiries and investigations have been carried out as a result, and each of these theories has consequently been determined to be based on unfounded and/or false information. HIV has been shown to have evolved from, or be closely related to, the simian immunodeficiency virus (SIV) in West Central Africa sometime in the early 20th century. HIV was first identified in the 1980s by the French scientist Luc Montagnier; before then, both the virus and the deadly disease it causes were unknown.
Discredited theories
Smallpox vaccination theory
In 1987 there was some consideration given to the possibility that the "AIDS epidemic may have been triggered by the mass vaccination campaign which eradicated smallpox". An article in The Times suggested this, attributing to an unnamed "adviser to WHO" the quote "I believe the smallpox vaccine theory is the explanation to the explosion of AIDS". It is now thought that the smallpox vaccine causes serious complications for people who already have impaired immune systems, and the Times article described the case of a military recruit with "dormant HIV" who died within months of receiving it. But no citation was provided regarding people who did not previously have HIV. Currently several professional publications describe HIV as a contraindication for the smallpox vaccine—both for an infected person and their sexual partners and household members. Some conspiracy theorists propose an expanded hypothesis in which the smallpox vaccine was deliberately contaminated with HIV.
In contrast, a research article was published in 2010 suggesting that it might have been the actual eradication of smallpox and the subsequent "ending" of the mass vaccination campaign that contributed to the sudden emerg |
https://en.wikipedia.org/wiki/Pterotheca%20attenuata | Pterotheca attenuata is a fossil species from the Ordovician upper Midwestern United States. It has been variously classified as a monoplacophoran, bellerophont, or another type of gastropod.
Remains of the animal were found in deposits laid down in shallow marine waters, such as the Decorah Shale and Platteville Limestone of the upper Midwestern United States. It is often misclassified in museum collections because of its unusual morphology, and documentation of its range and abundance is therefore poor.
Preservation and morphology
The shell is composed of two slightly concave sub-triangular layers that connect along the anteriormost side. The dorsal layer has a ridge that extends perpendicularly from the rest of the shell along the median plane. The underside of the ventralmost layer is effaced and smooth, but the top of the ventral layer and the bottom of the dorsal layer both show a slight medial ridge.
P. attenuata is most often preserved in fine-grained sedimentary rock like shale and limestone, but its range likely extends outside of these facies. The shell is often found broken and the two layers separated. P. attenuata's unusual shape and the fragmentary nature of many of its fossils are both causes of its frequent misclassification as a brachiopod. |
https://en.wikipedia.org/wiki/Mirimanoff%27s%20congruence | In number theory, a branch of mathematics, a Mirimanoff congruence is one of a collection of expressions in modular arithmetic which, if they hold, entail the truth of Fermat's Last Theorem. Since the theorem has now been proven, these are now of mainly historical significance, though the Mirimanoff polynomials are interesting in their own right. The theorem is due to Dmitry Mirimanoff.
Definition
The nth Mirimanoff polynomial for the prime p is
φ_n(t) = 1^(n-1) t + 2^(n-1) t^2 + ... + (p-1)^(n-1) t^(p-1).
In terms of these polynomials, if t is one of the six values {-X/Y, -Y/X, -X/Z, -Z/X, -Y/Z, -Z/Y} where X^p + Y^p + Z^p = 0 is a solution to Fermat's Last Theorem, then
φ_(p-1)(t) ≡ 0 (mod p)
φ_(p-2)(t) φ_2(t) ≡ 0 (mod p)
φ_(p-3)(t) φ_3(t) ≡ 0 (mod p)
...
φ_((p+1)/2)(t) φ_((p-1)/2)(t) ≡ 0 (mod p)
Other congruences
Mirimanoff also proved the following:
If an odd prime p does not divide one of the numerators of the Bernoulli numbers B_(p-3), B_(p-5), B_(p-7) or B_(p-9), then the first case of Fermat's Last Theorem, where p does not divide X, Y or Z in the equation X^p + Y^p + Z^p = 0, holds.
If the first case of Fermat's Last Theorem fails for the prime p, then 3^(p-1) ≡ 1 (mod p^2). A prime number with this property is sometimes called a Mirimanoff prime, in analogy to a Wieferich prime, which is a prime such that 2^(p-1) ≡ 1 (mod p^2). The existence of primes satisfying such congruences was recognized long before their implications for the first case of Fermat's Last Theorem became apparent; but while the discovery of the first Wieferich prime came after these theoretical developments and was prompted by them, the first instance of a Mirimanoff prime is so small that it was already known before Mirimanoff formulated the connection to FLT in 1910, which fact may explain the reluctance of some writers to use the name. As early as his 1895 paper (p. 298), Mirimanoff alludes to a rather complicated test for the primes now known by his name, deriving from a formula published by Sylvester in 1861, which is of little computational value but great theoretical interest. This test |
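The Mirimanoff condition 3^(p-1) ≡ 1 (mod p^2) is easy to test directly with modular exponentiation; the sketch below scans small primes (the bound 1000 is an arbitrary choice) and recovers the small instance p = 11:

```python
def is_prime(n: int) -> bool:
    # Trial division; adequate for a small scan
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def mirimanoff_primes(limit: int) -> list:
    # p is a Mirimanoff prime when 3**(p-1) is congruent to 1 modulo p**2
    return [p for p in range(2, limit)
            if is_prime(p) and pow(3, p - 1, p * p) == 1]

print(mirimanoff_primes(1000))  # [11]
```

The only other known Mirimanoff prime, 1006003, lies far beyond this toy bound.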
https://en.wikipedia.org/wiki/Pythagorean%20quadruple | A Pythagorean quadruple is a tuple of integers a, b, c, and d, such that a^2 + b^2 + c^2 = d^2. They are solutions of a Diophantine equation, and often only positive integer values are considered. However, to provide a more complete geometric interpretation, the integer values can be allowed to be negative and zero (thus allowing Pythagorean triples to be included), with the only condition being that d > 0. In this setting, a Pythagorean quadruple defines a cuboid with integer side lengths |a|, |b|, and |c|, whose space diagonal has integer length d; with this interpretation, Pythagorean quadruples are thus also called Pythagorean boxes. In this article we will assume, unless otherwise stated, that the values of a Pythagorean quadruple are all positive integers.
Parametrization of primitive quadruples
A Pythagorean quadruple is called primitive if the greatest common divisor of its entries is 1. Every Pythagorean quadruple is an integer multiple of a primitive quadruple. The set of primitive Pythagorean quadruples for which a is odd can be generated by the formulas
a = m^2 + n^2 - p^2 - q^2,
b = 2(mq + np),
c = 2(nq - mp),
d = m^2 + n^2 + p^2 + q^2,
where m, n, p, q are non-negative integers with greatest common divisor 1 such that m + n + p + q is odd. Thus, all primitive Pythagorean quadruples are characterized by the identity
(m^2 + n^2 + p^2 + q^2)^2 = (m^2 + n^2 - p^2 - q^2)^2 + (2mq + 2np)^2 + (2nq - 2mp)^2.
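The parametrization just described can be spot-checked numerically; a minimal sketch, assuming the conventional symbols a = m^2 + n^2 - p^2 - q^2, b = 2(mq + np), c = 2(nq - mp), d = m^2 + n^2 + p^2 + q^2 (the exact symbols were lost in extraction):

```python
from itertools import product

def quadruple(m, n, p, q):
    # Candidate quadruple from the four-parameter formulas
    a = m*m + n*n - p*p - q*q
    b = 2 * (m*q + n*p)
    c = 2 * (n*q - m*p)
    d = m*m + n*n + p*p + q*q
    return a, b, c, d

# The defining identity a^2 + b^2 + c^2 = d^2 holds for every choice
for m, n, p, q in product(range(4), repeat=4):
    a, b, c, d = quadruple(m, n, p, q)
    assert a*a + b*b + c*c == d*d

print(quadruple(1, 0, 1, 1))  # (-1, 2, -2, 3): the (1, 2, 2, 3) box up to sign
```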
Alternate parametrization
All Pythagorean quadruples (including non-primitives, and with repetition, though a, b, and c do not appear in all possible orders) can be generated from two positive integers a and b as follows:
If a and b have different parity, let p be any factor of a^2 + b^2 such that p^2 < a^2 + b^2. Then c = (a^2 + b^2 - p^2)/(2p) and d = (a^2 + b^2 + p^2)/(2p). Note that p = d - c.
A similar method exists for generating all Pythagorean quadruples for which a and b are both even. Let l = a/2 and m = b/2, and let n be any factor of l^2 + m^2 such that n^2 < l^2 + m^2. Then c = (l^2 + m^2 - n^2)/n and d = (l^2 + m^2 + n^2)/n. This method generates all Pythagorean quadruples exactly once each when a and b run through all pairs of natural numbers and the factor runs through all permissible values for each pair.
No such method exists if both a and b are odd, in which case no solutions exist, as can be seen from the parametrization in the previous section.
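A sketch of the different-parity branch of this method; the explicit formulas c = (a^2 + b^2 - p^2)/(2p) and d = (a^2 + b^2 + p^2)/(2p), for each divisor p of a^2 + b^2 with p^2 < a^2 + b^2, are reconstructed assumptions, since the equations were lost in extraction:

```python
def quadruples_from(a: int, b: int):
    # a and b of different parity => a^2 + b^2 is odd, so every divisor p
    # is odd and c, d below are automatically integers
    assert (a + b) % 2 == 1
    s = a * a + b * b
    out = []
    for p in range(1, s):
        if p * p >= s:
            break
        if s % p == 0:
            c = (s - p * p) // (2 * p)
            d = (s + p * p) // (2 * p)
            out.append((a, b, c, d))
    return out

# Every generated tuple satisfies a^2 + b^2 + c^2 = d^2,
# since d^2 - c^2 = ((s + p^2)^2 - (s - p^2)^2) / (4 p^2) = s
for a, b, c, d in quadruples_from(2, 3):
    assert a*a + b*b + c*c == d*d

print(quadruples_from(2, 3))  # s = 13, only p = 1 qualifies: [(2, 3, 6, 7)]
```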
Pr |
https://en.wikipedia.org/wiki/Cloud-computing%20comparison | The following is a comparison of cloud-computing software and providers.
IaaS (Infrastructure as a service)
Providers
General
SaaS (Software as a Service)
General
Supported hosts
Supported guests
PaaS (Platform as a service)
Providers
Providers on IaaS
PaaS providers which can run on IaaS providers ("itself" means the provider is both PaaS and IaaS): |
https://en.wikipedia.org/wiki/Buffalo%20wallow | A buffalo wallow or bison wallow is a natural topographical depression in flat prairie land that holds rain water and runoff.
Though thriving bison herds roamed and grazed the great prairies of North America for thousands of years, they left few permanent markings on the landscape. Exceptions are the somewhat rare yet still visible ancient buffalo wallows found occasionally on the North American prairie flatlands.
Originally these naturally occurring depressions would have served as temporary watering holes for wildlife, including the American bison (buffalo). Wallowing bison that drank from and bathed in these shallow water holes gradually altered their pristine nature. Each time the animals left, they carried mud away from the hole, thus enlarging the wallow. Furthermore, the wallowing action abraded hair, natural body oils and cellular debris from their hides, leaving the debris in the water and in the soil after the water evaporated. Every year debris accumulated in the soil in increasing concentration, forming a water-impenetrable layer that prevented rain water and runoff from percolating into the lower layers of the soil. Ultimately the water remained for long periods, which attracted more wildlife. Even when stagnant, the water would be eagerly drunk by thirsty animals.
Buffalo wallows are also made by the Asian water buffalo and the African buffalo.
In popular culture
In 1953, the writer Charles Tenney Jackson (1874–1955) published The Buffalo Wallow: A Prairie Childhood, an autobiographical novel about two boys (cousins) growing up during pioneer days in an almost empty stretch of Nebraska, where their favorite hideaway is a buffalo wallow. |
https://en.wikipedia.org/wiki/Noise%20%28video%29 | Noise, commonly known as static, white noise, static noise, or snow, in analog video and television, is a random dot pixel pattern of static displayed when no transmission signal is obtained by the antenna receiver of television sets and other display devices.
Description
The random pixel pattern, superimposed on the picture or the television screen and visible as a random flicker of "dots", "snow" or "fuzzy zig-zags" on some television sets, is the result of electronic noise and radiated electromagnetic noise accidentally picked up by the antenna, whether over the air or by cable. This effect is most commonly seen with analog TV sets, blank VHS tapes, or other display devices.
There are many sources of electromagnetic noise which cause the characteristic display patterns of static. Atmospheric sources of noise are the most ubiquitous, and include electromagnetic signals prompted by cosmic microwave background radiation, or more localized radio wave noise from nearby electronic devices.
The display device itself is also a source of noise, due in part to thermal noise produced by the inner electronics. Most of this noise comes from the first transistor the antenna is attached to.
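The thermal contribution can be put in numbers: the RMS Johnson–Nyquist noise voltage across a resistance R at temperature T over bandwidth Δf is sqrt(4·k·T·R·Δf). The component values below (a 50 Ω source, room temperature, a 6 MHz analog TV channel) are illustrative assumptions, not figures from the text:

```python
from math import sqrt

K_BOLTZMANN = 1.380649e-23  # J/K (exact value in the 2019 SI)

def johnson_noise_vrms(r_ohm: float, t_kelvin: float, bw_hz: float) -> float:
    # RMS open-circuit thermal noise voltage of a resistor
    return sqrt(4 * K_BOLTZMANN * t_kelvin * r_ohm * bw_hz)

# Hypothetical front-end: 50 ohm antenna impedance, 290 K, 6 MHz bandwidth
v = johnson_noise_vrms(50.0, 290.0, 6e6)
print(f"{v * 1e6:.2f} uV")  # ~2.19 uV of thermal noise at the first transistor
```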
Names
UK viewers used to see "snow" on black after sign-off, instead of "bugs" on white, a purely technical artifact of old 405-line British transmitters using positive rather than the negative video modulation used in Canada, the US, and (currently) the UK as well. Since one impression of the "snow" is of fast-flickering black bugs on a white background, the phenomenon is often called myrornas krig in Swedish, myrekrig in Danish, hangyák háborúja in Hungarian, Ameisenkrieg in German, and semut bertengkar in Indonesian, all of which translate to 'war of the ants'.
It is also known as ekran karıncalanması in Turkish, meaning 'ants on the screen', hangyafoci 'ant football' in Hungarian, and purici 'fleas' in Romanian. In French however, this phenomenon is mostly called neige 'snow', just like in Dutc |
https://en.wikipedia.org/wiki/PKCS%208 | In cryptography, PKCS #8 is a standard syntax for storing private key information. PKCS #8 is one of the family of standards called Public-Key Cryptography Standards (PKCS) created by RSA Laboratories. The latest version, 1.2, is available as RFC 5208.
The PKCS #8 private key may be encrypted with a passphrase using one of the PKCS #5 standards defined in RFC 2898, which supports multiple encryption schemes.
A new version 2 was proposed by S. Turner in 2010 as RFC 5958 and may eventually obsolete RFC 5208.
PKCS #8 private keys are typically exchanged in the PEM base64-encoded format, for example:
-----BEGIN PRIVATE KEY-----
MIIBVgIBADANBgkqhkiG9w0BAQEFAASCAUAwggE8AgEAAkEAq7BFUpkGp3+LQmlQ
Yx2eqzDV+xeG8kx/sQFV18S5JhzGeIJNA72wSeukEPojtqUyX2J0CciPBh7eqclQ
2zpAswIDAQABAkAgisq4+zRdrzkwH1ITV1vpytnkO/NiHcnePQiOW0VUybPyHoGM
/jf75C5xET7ZQpBe5kx5VHsPZj0CBb3b+wSRAiEA2mPWCBytosIU/ODRfq6EiV04
lt6waE7I2uSPqIC20LcCIQDJQYIHQII+3YaPqyhGgqMexuuuGx+lDKD6/Fu/JwPb
5QIhAKthiYcYKlL9h8bjDsQhZDUACPasjzdsDEdq8inDyLOFAiEAmCr/tZwA3qeA
ZoBzI10DGPIuoKXBd3nk/eBxPkaxlEECIQCNymjsoI7GldtujVnr1qT+3yedLfHK
srDVjIT3LsvTqw==
-----END PRIVATE KEY-----

-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIBrzBJBgkqhkiG9w0BBQ0wPDAbBgkqhkiG9w0BBQwwDgQImQO8S8BJYNACAggA
MB0GCWCGSAFlAwQBKgQQ398SY1Y6moXTJCO0PSahKgSCAWDeobyqIkAb9XmxjMmi
hABtlIJBsybBymdIrtPjtRBTmz+ga40KFNfKgTrtHO/3qf0wSHpWmKlQotRh6Ufk
0VBh4QjbcNFQLzqJqblW4E3v853PK1G4OpQNpFLDLaPZLIyzxWOom9c9GXNm+ddG
LbdeQRsPoolIdL61lYB505K/SXJCpemb1RCHO/dzsp/kRyLMQNsWiaJABkSyskcr
eDJBZWOGQ/WJKl1CMHC8XgjqvmpXXas47G5sMSgFs+NUqVSkMSrsWMa+XkH/oT/x
P8ze1v0RDu0AIqaxdZhZ389h09BKFvCAFnLKK0tadIRkZHtNahVWnFUks5EP3C1k
2cQQtWBkaZnRrEkB3H0/ty++WB0owHe7Pd9GKSnTMIo8gmQzT2dfZP3+flUFHTBs
RZ9L8UWp2zt5hNDtc82hyNs70SETaSsaiygYNbBGlVAWVR9Mp8SMNYr1kdeGRgc3
7r5E
-----END ENCRYPTED PRIVATE KEY----- |
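The unencrypted example key above can be inspected with nothing but the standard library: stripping the PEM armor and base64-decoding yields a DER-encoded PrivateKeyInfo, i.e. an ASN.1 SEQUENCE containing version 0, an AlgorithmIdentifier (here the rsaEncryption OID 1.2.840.113549.1.1.1), and the key material as an OCTET STRING:

```python
import base64

PEM = """-----BEGIN PRIVATE KEY-----
MIIBVgIBADANBgkqhkiG9w0BAQEFAASCAUAwggE8AgEAAkEAq7BFUpkGp3+LQmlQ
Yx2eqzDV+xeG8kx/sQFV18S5JhzGeIJNA72wSeukEPojtqUyX2J0CciPBh7eqclQ
2zpAswIDAQABAkAgisq4+zRdrzkwH1ITV1vpytnkO/NiHcnePQiOW0VUybPyHoGM
/jf75C5xET7ZQpBe5kx5VHsPZj0CBb3b+wSRAiEA2mPWCBytosIU/ODRfq6EiV04
lt6waE7I2uSPqIC20LcCIQDJQYIHQII+3YaPqyhGgqMexuuuGx+lDKD6/Fu/JwPb
5QIhAKthiYcYKlL9h8bjDsQhZDUACPasjzdsDEdq8inDyLOFAiEAmCr/tZwA3qeA
ZoBzI10DGPIuoKXBd3nk/eBxPkaxlEECIQCNymjsoI7GldtujVnr1qT+3yedLfHK
srDVjIT3LsvTqw==
-----END PRIVATE KEY-----"""

def pem_to_der(pem: str) -> bytes:
    # Drop the BEGIN/END armor lines and base64-decode the body
    body = "".join(line for line in pem.splitlines() if "-----" not in line)
    return base64.b64decode(body)

der = pem_to_der(PEM)
assert der[0] == 0x30                       # outer SEQUENCE (PrivateKeyInfo)
assert der[4:7] == b"\x02\x01\x00"          # version INTEGER 0
# AlgorithmIdentifier: SEQUENCE { OID 1.2.840.113549.1.1.1, NULL }
assert der[7:9] == b"\x30\x0d"
assert der[9:20] == bytes.fromhex("06092a864886f70d010101")
print(len(der))  # 346 bytes of DER
```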
https://en.wikipedia.org/wiki/Sakuma%E2%80%93Hattori%20equation | In physics, the Sakuma–Hattori equation is a mathematical model for predicting the amount of thermal radiation, radiometric flux or radiometric power emitted from a perfect blackbody or received by a thermal radiation detector.
History
The Sakuma–Hattori equation was first proposed by Fumihiro Sakuma, Akira Ono and Susumu Hattori in 1982. In 1996, a study investigated the usefulness of various forms of the Sakuma–Hattori equation. This study showed the Planckian form to provide the best fit for most applications. This study was done for 10 different forms of the Sakuma–Hattori equation containing not more than three fitting variables. In 2008, BIPM CCT-WG5 recommended its use for radiation thermometry uncertainty budgets below 960 °C.
General form
The Sakuma–Hattori equation gives the electromagnetic signal from thermal radiation based on an object's temperature. The signal can be the electromagnetic flux or the signal produced by a detector measuring this radiation. It has been suggested that below the silver point, a method using the Sakuma–Hattori equation be used. In its general form it looks like
S(T) = C / (exp(c2 / (λ_x T)) - 1)
where:
C is the scalar coefficient
c2 is the second radiation constant (0.014387752 m⋅K)
λ_x is the temperature-dependent effective wavelength in meters
T is the absolute temperature in kelvins
Planckian form
Derivation
The Planckian form is realized by the substitution
λ_x = A + B/T.
Making this substitution yields the Sakuma–Hattori equation in the Planckian form:
S(T) = C / (exp(c2 / (A T + B)) - 1).
Inverse equation
Solving the Planckian form for temperature gives
T = c2 / (A ln(C/S + 1)) - B/A.
First derivative
Discussion
The Planckian form is recommended for use in calculating uncertainty budgets for radiation thermometry and infrared thermometry. It is also recommended for use in calibration of radiation thermometers below the silver point.
The Planckian form resembles Planck's law.
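A numeric sketch of the Planckian form S(T) = C / (exp(c2/(A·T + B)) − 1) and its algebraic inverse; the coefficients A, B, C below are arbitrary illustrative values, not a fitted instrument model:

```python
from math import exp, log

C2 = 0.014387752  # second radiation constant, m*K

def sakuma_hattori(t, a, b, c):
    # Planckian form: detector signal for a blackbody at absolute temperature t
    return c / (exp(C2 / (a * t + b)) - 1.0)

def inverse_sakuma_hattori(s, a, b, c):
    # Exact algebraic inverse of the Planckian form
    return C2 / (a * log(c / s + 1.0)) - b / a

# Illustrative (assumed) coefficients: effective wavelength near 1 um
A, B, C = 1.0e-6, 5.0e-9, 1.0

t = 1200.0  # kelvins
s = sakuma_hattori(t, A, B, C)
print(s)
print(inverse_sakuma_hattori(s, A, B, C))  # recovers ~1200.0
```

The forward/inverse pair is what makes the Planckian form convenient for calibration: no numerical root-finding is needed to convert a measured signal back to temperature.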
However the Sakuma–Hattori equation becomes very useful when considering low-temperature, wide-band radiation thermometry. To use Plan |
https://en.wikipedia.org/wiki/Atrazine | Atrazine is a chlorinated herbicide of the triazine class. It is used for pre-emergence control of broadleaf weeds in crops such as maize (corn), soybean and sugarcane, and on turf such as golf courses and residential lawns. Atrazine's primary manufacturer is Syngenta, and it is one of the most widely used herbicides in United States, Canadian, and Australian agriculture. Its use was banned in the European Union in 2004, when the EU found groundwater levels exceeding the limits set by regulators, and Syngenta could show neither that this could be prevented nor that these levels were safe.
At least two significant Canadian farm well studies showed that atrazine was the most common contaminant found. Atrazine was also the most commonly detected pesticide contaminating drinking water in the U.S. Studies suggest it is an endocrine disruptor, an agent that can alter the natural hormonal system. However, in 2006 the U.S. Environmental Protection Agency (EPA) stated that under the Food Quality Protection Act "the risks associated with the pesticide residues pose a reasonable certainty of no harm", and in 2007 the EPA said that atrazine does not adversely affect amphibian sexual development and that no additional testing was warranted. The EPA's 2009 review concluded that "the agency's scientific bases for its regulation of atrazine are robust and ensure prevention of exposure levels that could lead to reproductive effects in humans". However, in its 2016 Refined Ecological Risk Assessment for Atrazine, the agency stated that "it is difficult to make definitive conclusions about the impact of atrazine at a given concentration but multiple studies have reported effects to various endpoints at environmentally-relevant concentrations." The EPA started a registration review in 2013.
The EPA's review has been criticized, and the safety of atrazine remains controversial. EPA has however stated that "If at any time EPA determines there are urgent human or environmental risks from atrazine ex |
https://en.wikipedia.org/wiki/Summa%20de%20arithmetica | Summa de arithmetica, geometria, proportioni et proportionalita (Summary of arithmetic, geometry, proportions and proportionality) is a book on mathematics written by Luca Pacioli and first published in 1494. It contains a comprehensive summary of Renaissance mathematics, including practical arithmetic, basic algebra, basic geometry and accounting, written for use as a textbook and reference work.
Written in vernacular Italian, the Summa is the first printed work on algebra, and it contains the first published description of the double-entry bookkeeping system. It set a new standard for writing and argumentation about algebra, and its impact upon the subsequent development and standardization of professional accounting methods was so great that Pacioli is sometimes referred to as the "father of accounting".
Contents
The Summa de arithmetica as originally printed consists of ten chapters on a series of mathematical topics, collectively covering essentially all of Renaissance mathematics. The first seven chapters form a summary of arithmetic in 222 pages. The eighth chapter explains contemporary algebra in 78 pages. The ninth chapter discusses various topics relevant to business and trade, including barter, bills of exchange, weights and measures and bookkeeping, in 150 pages. The tenth and final chapter describes practical geometry (including basic trigonometry) in 151 pages.
The book's mathematical content draws heavily on the traditions of the abacus schools of contemporary northern Italy, where the children of merchants and the middle class studied arithmetic on the model established by Fibonacci's Liber Abaci. The emphasis of this tradition was on facility with computation, using the Hindu–Arabic numeral system, developed through exposure to numerous example problems and case studies drawn principally from business and trade. Pacioli's work likewise teaches through examples, but it also develops arguments for the validity of its solutions through reference to general principles, axioms and logical proof. In this way the Su |
https://en.wikipedia.org/wiki/The%20Cactaceae | The Cactaceae is a monograph on plants of the cactus family written by the American botanists Nathaniel Lord Britton and Joseph Nelson Rose and published in multiple volumes between 1919 and 1923. It was a landmark study that extensively reorganized cactus taxonomy and is still considered a cornerstone of the field. It was illustrated with drawings and color plates principally by the British botanical artist Mary Emily Eaton, as well as with black-and-white photographs.
History
Nathaniel Lord Britton was a Columbia University geology and biology professor who left the university in 1895 to become the founding director of the New York Botanical Garden. Much of his own field work was done in the Caribbean. Joseph Nelson Rose was an authority on several plant families, including parsley (Apiaceae) and cacti (Cactaceae). He had been a plant curator at the Smithsonian since 1896, and while working there he made several field trips to Mexico, collecting specimens for the Smithsonian and for Britton's newly founded New York Botanical Garden. Together, Britton and Rose published many articles on the stonecrop family (Crassulaceae) before embarking in 1904 on research leading towards The Cactaceae. With the support of Douglas T. MacDougal, director of the Carnegie Institution for Science's Desert Botanical Laboratory, the Carnegie Institution agreed to fund the project. Rose took a leave of absence from the Smithsonian to pursue it, and both Rose and Britton were named Carnegie Institution Research Associates in 1912, when more focused work on the project began. Between 1912 and 1916 Rose and Britton did extensive field work, collecting specimens and touring the botanical gardens and notable collections of Europe, the Caribbean, and North, Central, and South America.
In this period, cactus taxonomy was in a disorganized state with only a few very large and heterogeneous genera. Britton and Rose broke these old-style catch-all genera into smaller, more defined genera, ultimat |
https://en.wikipedia.org/wiki/Clothing%20technology | Clothing technology encompasses the manufacturing and materials innovations that have been developed and used in clothing. The timeline of clothing and textiles technology includes major changes in the manufacture and distribution of clothing.
From clothing in the ancient world into modernity, the use of technology has dramatically influenced clothing and fashion in the modern age. Industrialization brought changes in the manufacture of goods. In many nations, homemade goods crafted by hand have largely been replaced by factory produced goods on assembly lines purchased in a consumer culture. Innovations include man-made materials such as polyester, nylon, and vinyl as well as features like zippers and velcro. The advent of advanced electronics has resulted in wearable technology being developed and popularized since the 1980s.
Design is an important part of the industry beyond utilitarian concerns and the fashion and glamour industries have developed in relation to clothing marketing and retail. Environmental and human rights issues have also become considerations for clothing and spurred the promotion and use of some natural materials such as bamboo that are considered environmentally friendly.
Production
The advent of industrialization included factories, specialized and technologically advanced equipment, and production lines for the mass production of textiles like natural and synthetic fibers. Globalization and advances in trade increased sourcing of materials and competition for wares across borders. The swadeshi movement in India was an effort to counteract the economic control and influence that British factories exerted over the one-time colony. Concerns have also been raised over the use of so-called sweat shops.
Clothing lines based on famous designers have been featured and advertised in magazines and other media. Branding and marketing are features of the advertising age. Some designers have also become television and media personalities. In recent years fashi |
https://en.wikipedia.org/wiki/Sylvester%27s%20criterion | In mathematics, Sylvester’s criterion is a necessary and sufficient criterion to determine whether a Hermitian matrix is positive-definite. It is named after James Joseph Sylvester.
Sylvester's criterion states that an n × n Hermitian matrix M is positive-definite if and only if all of the following matrices have a positive determinant:
the upper left 1-by-1 corner of M,
the upper left 2-by-2 corner of M,
the upper left 3-by-3 corner of M,
M itself.
In other words, all of the leading principal minors must be positive. By using appropriate permutations of rows and columns of M, it can also be shown that the positivity of any nested sequence of n principal minors of M is equivalent to M being positive-definite.
An analogous theorem holds for characterizing positive-semidefinite Hermitian matrices, except that it is no longer sufficient to consider only the leading principal minors:
a Hermitian matrix M is positive-semidefinite if and only if all principal minors of M are nonnegative.
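The criterion is easy to check numerically. A self-contained sketch in pure Python (the example matrices are my own, chosen so one passes and one fails):

```python
def det(m):
    # Laplace expansion along the first row; fine for small matrices
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def sylvester_positive_definite(m):
    # Sylvester's criterion: every leading principal minor must be positive
    n = len(m)
    return all(det([row[:k] for row in m[:k]]) > 0 for k in range(1, n + 1))

tridiag = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]  # minors 2, 3, 4 -> PD
indef   = [[1, 2], [2, 1]]                        # minors 1, -3 -> not PD
print(sylvester_positive_definite(tridiag), sylvester_positive_definite(indef))
# True False
```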
Simple proof for special case
Suppose M is an n × n Hermitian matrix. Let M_k, for k = 1, ..., n, be the leading principal submatrices, i.e. the k × k upper-left corner matrices. It will be shown that if M is positive definite, then the leading principal minors are positive; that is, det M_k > 0 for all k.
Each M_k is positive definite. Indeed, choosing
x = (x_1, ..., x_k, 0, ..., 0)^T
we can notice that 0 < x* M x = y* M_k y, where y = (x_1, ..., x_k)^T. Equivalently, the eigenvalues of M_k are positive, and this implies that det M_k > 0, since the determinant is the product of the eigenvalues.
To prove the reverse implication, we use induction. The general form of an (n+1) × (n+1) Hermitian matrix is
M_(n+1) = [ M_n  b ; b*  d ],
where M_n is an n × n Hermitian matrix, b is a vector and d is a real constant.
Suppose the criterion holds for M_n. Assuming that all the leading principal minors of M_(n+1) are positive implies that det M_(n+1) > 0, det M_n > 0, and that M_n is positive definite by the inductive hypothesis. Denote x = (y, x_(n+1))^T with y of length n; then
x* M_(n+1) x = y* M_n y + x_(n+1) y* b + conj(x_(n+1)) b* y + d |x_(n+1)|^2.
By completing the squares, this last expression is equal to
(y + x_(n+1) c)* M_n (y + x_(n+1) c) + |x_(n+1)|^2 (d - b* M_n^(-1) b),
where c = M_n^(-1) b (note that M_n^(-1) exists because the eigenvalues of M_n are all positive). The first term is positive by the inductive hypothesis. We now exam |
https://en.wikipedia.org/wiki/List%20of%20e-Science%20infrastructures | This is a list of e-Science infrastructures, that is, of computer systems created to support the computational demands of e-Science.
World Wide LHC Computing Grid
European Grid Infrastructure
Open Science Grid
Nordic Data Grid Facility |
https://en.wikipedia.org/wiki/Molecular%20descriptor | Molecular descriptors play a fundamental role in chemistry, pharmaceutical sciences, environmental protection policy, and health research, as well as in quality control, being the way molecules, thought of as real bodies, are transformed into numbers, allowing some mathematical treatment of the chemical information contained in the molecule. This was defined by Todeschini and Consonni as:
"The molecular descriptor is the final result of a logic and mathematical procedure which transforms chemical information encoded within a symbolic representation of a molecule into a useful number or the result of some standardized experiment."
By this definition, the molecular descriptors are divided into two main categories: experimental measurements, such as log P, molar refractivity, dipole moment, polarizability, and, in general, additive physico-chemical properties, and theoretical molecular descriptors, which are derived from a symbolic representation of the molecule and can be further classified according to the different types of molecular representation.
The main classes of theoretical molecular descriptors are: 1) 0D-descriptors (i.e. constitutional descriptors, count descriptors), 2) 1D-descriptors (i.e. list of structural fragments, fingerprints), 3) 2D-descriptors (i.e. graph invariants), 4) 3D-descriptors (such as, for example, 3D-MoRSE descriptors, WHIM descriptors, GETAWAY descriptors, quantum-chemical descriptors, size, steric, surface and volume descriptors), 5) 4D-descriptors (such as those derived from GRID or CoMFA methods, Volsurf).
Invariance properties of molecular descriptors
The invariance properties of molecular descriptors can be defined as the ability of the algorithm for their calculation to give a descriptor value that is independent of the particular characteristics of the molecular representation, such as atom numbering or labeling, spatial reference frame, molecular conformations, etc. Invariance to molecular numbering or labeling is assumed |
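As a minimal illustration of the simplest (0D, count-descriptor) class above, atom counts can be tallied straight from a molecular formula; such counts are trivially invariant to atom numbering and conformation. A hedged sketch (the `atom_counts` helper is mine and handles only plain formulas, without brackets, isotopes, or charges):

```python
import re

def atom_counts(formula):
    # A simple 0D "count descriptor": tally each element symbol in a
    # molecular formula such as "C6H12O6". Two-letter symbols (Cl, Br,
    # ...) are recognized; an absent count means one atom.
    counts = {}
    for symbol, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if symbol:
            counts[symbol] = counts.get(symbol, 0) + (int(num) if num else 1)
    return counts
```

Real descriptor toolkits compute far richer 1D–3D descriptors, but the invariance idea is the same: the result depends only on the molecule, not on how it is written down.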
https://en.wikipedia.org/wiki/Parallel%20SCSI | Parallel SCSI (formally, SCSI Parallel Interface, or SPI) is the earliest of the interface implementations in the SCSI family. SPI is a parallel bus; there is one set of electrical connections stretching from one end of the SCSI bus to the other. A SCSI device attaches to the bus but does not interrupt it. Both ends of the bus must be terminated.
SCSI is a peer-to-peer peripheral interface. Every device attaches to the SCSI bus in a similar manner. Depending on the version, up to 8 or 16 devices can be attached to a single bus. There can be multiple hosts and multiple peripheral devices but there should be at least one host. The SCSI protocol defines communication from host to host, host to a peripheral device, and peripheral device to a peripheral device. The Symbios Logic 53C810 chip is an example of a PCI host interface that can act as a SCSI target.
SCSI-1 and SCSI-2 have the option of parity bit error checking. Starting with SCSI-U160 (part of SCSI-3) all commands and data are error checked by a cyclic redundancy check.
History
The first two formal SCSI standards, SCSI-1 and SCSI-2, described parallel SCSI. The SCSI-3 standard then split the framework into separate layers which allowed the introduction of other data interfaces beyond parallel SCSI. The original SCSI-1 version of the parallel bus was 8 bits wide (plus a ninth parity bit). The SCSI-2 standard allowed for faster operation (10 MHz) and wider buses (16-bit or 32-bit). The 16-bit option became the most popular.
At 10 MHz with a bus width of 16 bits it is possible to achieve a data rate of 20 MB/s. Subsequent extensions to the SCSI standard allowed for faster speeds: 20 MHz, 40 MHz, 80 MHz, 160 MHz and finally 320 MHz. At 320 MHz x 16 bits there is a theoretical maximum peak data rate of 640 MB/s.
Due to the technical constraints of a parallel bus system, SCSI has since evolved into faster serial interfaces, mainly Serial Attached SCSI and Fibre Channel. The iSCSI protocol doesn't describe |
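The peak-rate arithmetic above (clock rate times bus width, divided by 8 bits per byte) can be sketched as:

```python
def scsi_peak_mb_per_s(clock_mhz, bus_width_bits):
    # Peak transfer rate: one transfer per clock cycle across the full
    # bus width, expressed in MB/s (decimal megabytes, as in the text).
    return clock_mhz * bus_width_bits / 8
```

This reproduces the figures in the text: 10 MHz × 16 bits gives 20 MB/s, and 320 MHz × 16 bits gives the 640 MB/s theoretical maximum.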
https://en.wikipedia.org/wiki/Loudspeaker%20acoustics | Loudspeaker acoustics is a subfield of acoustical engineering concerned with the reproduction of sound and the parameters involved in doing so in actual equipment.
Engineers measure the performance of drivers and complete speaker systems to characterize their behavior, often in an anechoic chamber, outdoors, or using time-windowed measurement systems, all to avoid including room effects (e.g., reverberation) in the measurements.
Designers use models (from electrical filter theory) to predict the performance of drive units in different enclosures, now almost always based on the work of A. N. Thiele and Richard Small.
Important driver characteristics are:
Frequency response
Off-axis response dispersion pattern, lobing
Sensitivity (dB SPL for 1 watt input)
Maximum power handling
Non-linear distortion
Colouration (i.e., delayed resonances).
It is the performance of a loudspeaker/listening room combination that really matters, as the two interact in multiple ways. There are two approaches to high-quality reproduction. One ensures the listening room is reasonably 'alive' with reverberant sound at all frequencies, in which case the speakers should ideally have equal dispersion at all frequencies in order to equally excite the reverberant fields created by reflections off room surfaces. The other attempts to arrange the listening room to be 'dead' acoustically, leaving mostly direct sound, in which case the dispersion of the speakers need only be sufficient to cover the listening positions.
A dead or inert acoustic may be best, especially if supplemented with 'surround' reproduction, so that the reverberant field of the original space is reproduced realistically. This is currently quite hard to achieve, and so the ideal loudspeaker systems for stereo reproduction would have a uniform dispersion at all frequencies. Listening to sound in an anechoic "dead" room is quite different from listening in a conventional room, and, while revealing about loudspeaker behaviour it h |
https://en.wikipedia.org/wiki/Eorhynchochelys | Eorhynchochelys (meaning "dawn-beaked turtle" in Greek) is an extinct genus of stem-turtle from the Late Triassic Xiaowa Formation (or Wayao Member of the Falang Formation) of southwestern China.
Description
Eorhynchochelys is notable for its unusual combination of a turtle-style skull and a conventional reptilian body. The skull, for example, has an edentulous beak typical of all members of Testudinata. However, the thorax region is markedly different from Pappochelys and Odontochelys and more similar to Eunotosaurus in lacking a shell, even though the ribs were wide and flat. The skull also has a single pair of holes behind the skull, unlike the presence of two pairs of holes in Pappochelys. Unlike other stem-turtles, Eorhynchochelys had twelve dorsal vertebrae. It reached up to in total length, which is much larger than Pappochelys. |
https://en.wikipedia.org/wiki/Xi%20baryon | The Xi baryons or cascade particles are a family of subatomic hadron particles which have the symbol Ξ and may have an electric charge (Q) of +2 e, +1 e, 0, or −1 e, where e is the elementary charge.
Like all conventional baryons, Ξ particles contain three quarks. Ξ baryons, in particular, contain either one up or one down quark and two other, more massive quarks. The two more massive quarks are any two of strange, charm, or bottom (doubles allowed). For notation, the assumption is that the two heavy quarks in the Ξ are both strange; subscripts "c" and "b" are added for each even heavier charm or bottom quark that replaces one of the two presumed strange quarks.
They are historically called the cascade particles because of their unstable state; they are typically observed to decay rapidly into lighter particles, through a chain of decays (cascading decays). The first discovery of a charged Xi baryon was in cosmic ray experiments by the Manchester group in 1952. The first discovery of the neutral Xi particle was at Lawrence Berkeley Laboratory in 1959. It was also observed as a daughter product from the decay of the omega baryon (Ω−) observed at Brookhaven National Laboratory in 1964. The Xi spectrum is important to nonperturbative quantum chromodynamics (QCD), such as lattice QCD.
History
The Ξb− particle is also known as the cascade B particle and contains quarks from all three families. It was discovered by the D0 and CDF experiments at Fermilab. The discovery was announced on 12 June 2007. It was the first known particle made of quarks from all three quark generations – namely, a down quark, a strange quark, and a bottom quark. The D0 and CDF collaborations reported consistent masses for the new state. The Particle Data Group world average mass is .
For notation, the assumption is that the two heavy quarks are both strange, denoted by a simple Ξ; a subscript "c" is added for each constituent charm quark, and a "b" for each bottom quark. Hence Ξ, Ξc, Ξcc, Ξb, etc.
Unless specified, |
https://en.wikipedia.org/wiki/Semimembranosus%20muscle | The semimembranosus muscle is the most medial of the three hamstring muscles in the thigh. It is so named because it has a flat, membranous tendon of origin. It lies posteromedially in the thigh, deep to the semitendinosus muscle. It extends the hip joint and flexes the knee joint.
Structure
The semimembranosus muscle, so called from its membranous tendon of origin, is situated at the back and medial side of the thigh. It is wider, flatter, and deeper than the semitendinosus (with which it shares very close insertion and attachment points). The muscle overlaps the upper part of the popliteal vessels.
Origin
The semimembranosus muscle originates by a thick tendon from the superolateral aspect of the ischial tuberosity. It arises above and medial to the biceps femoris muscle and semitendinosus muscle. The tendon of origin expands into an aponeurosis, which covers the upper part of the anterior surface of the muscle; from this aponeurosis, muscular fibers arise, and converge to another aponeurosis which covers the lower part of the posterior surface of the muscle and contracts into the tendon of insertion.
Insertion
The semimembranosus muscle inserts on the:
medial condyle of the tibia.
medial margin of the tibia.
lateral condyle of the femur.
fascia of the popliteus muscle.
The tendon of insertion gives off certain fibrous expansions: one, of considerable size, passes upward and laterally to be inserted into the posterior lateral condyle of the femur, forming part of the oblique popliteal ligament of the knee-joint; a second is continued downward to the fascia which covers the popliteus muscle; while a few fibers join the medial collateral ligament of the joint and the fascia of the leg.
Nerve supply
The semimembranosus is innervated by the tibial part of the sciatic nerve. The sciatic nerve consists of the anterior divisions of ventral nerve roots from L4 through S3. These nerve roots are part of the larger nerve network–the sacral plexus. The tibial part of the sc |
https://en.wikipedia.org/wiki/Absoluteness%20%28logic%29 | In mathematical logic, a formula is said to be absolute to some class of structures (also called models), if it has the same truth value in each of the members of that class. One can also speak of absoluteness of a formula between two structures, if it is absolute to some class which contains both of them. Theorems about absoluteness typically establish relationships between the absoluteness of formulas and their syntactic form.
There are two weaker forms of partial absoluteness. If the truth of a formula in each substructure N of a structure M follows from its truth in M, the formula is downward absolute. If the truth of a formula in a structure N implies its truth in each structure M extending N, the formula is upward absolute.
Issues of absoluteness are particularly important in set theory and model theory, fields where multiple structures are considered simultaneously. In model theory, several basic results and definitions are motivated by absoluteness. In set theory, the issue of which properties of sets are absolute is well studied. The Shoenfield absoluteness theorem, due to Joseph Shoenfield (1961), establishes the absoluteness of a large class of formulas between a model of set theory and its constructible universe, with important methodological consequences. The absoluteness of large cardinal axioms is also studied, with positive and negative results known.
In model theory
In model theory, there are several general results and definitions related to absoluteness. A fundamental example of downward absoluteness is that universal sentences (those with only universal quantifiers) that are true in a structure are also true in every substructure of the original structure. Conversely, existential sentences are upward absolute from a structure to any structure containing it.
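An illustrative instance of these two facts (my example, not from the article), in the language of groups:

```latex
% Downward absoluteness of a universal sentence:
\varphi \;\equiv\; \forall x\,\forall y\,(x \cdot y = y \cdot x)
% If a group G \models \varphi (G is abelian), then every subgroup
% H \subseteq G also satisfies \varphi: the universal quantifiers
% simply range over a smaller domain.

% Upward absoluteness of an existential sentence:
\psi \;\equiv\; \exists x\,(\,x \neq e \;\wedge\; x \cdot x = e\,)
% A witness for \psi in H remains a witness in any group G \supseteq H.
```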
Two structures are defined to be elementarily equivalent if they agree about the truth value of all sentences in their shared language, that is, if all sentences in their language are a |
https://en.wikipedia.org/wiki/Two-moment%20decision%20model | In decision theory, economics, and finance, a two-moment decision model is a model that describes or prescribes the process of making decisions in a context in which the decision-maker is faced with random variables whose realizations cannot be known in advance, and in which choices are made based on knowledge of two moments of those random variables. The two moments are almost always the mean—that is, the expected value, which is the first moment about zero—and the variance, which is the second moment about the mean (or the standard deviation, which is the square root of the variance).
The most well-known two-moment decision model is that of modern portfolio theory, which gives rise to the decision portion of the Capital Asset Pricing Model; these employ mean-variance analysis, and focus on the mean and variance of a portfolio's final value.
Two-moment models and expected utility maximization
Suppose that all relevant random variables are in the same location-scale family, meaning that the distribution of every random variable is the same as the distribution of some linear transformation of any other random variable. Then for any von Neumann–Morgenstern utility function, using a mean-variance decision framework is consistent with expected utility maximization, as illustrated in example 1:
Example 1: Let there be one risky asset with random return $r$, and one riskfree asset with known return $r_f$, and let an investor's initial wealth be $w_0$. If the amount $q$, the choice variable, is to be invested in the risky asset and the amount $w_0 - q$ is to be invested in the safe asset, then, contingent on $q$, the investor's random final wealth will be $w = (w_0 - q)(1 + r_f) + q(1 + r)$. Then for any choice of $q$, $w$ is distributed as a location-scale transformation of $r$. If we define random variable $x$ as equal in distribution to $\frac{w - \mu_w}{\sigma_w}$, then $w$ is equal in distribution to $\mu_w + \sigma_w x$, where μ represents an expected value and σ represents a random variable's standard deviation (the square root of its second moment). Thus we can write expected u |
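A small Monte Carlo sketch of Example 1 (parameter values and variable names are mine): because final wealth is an affine, hence location-scale, transformation of the risky return, its mean and standard deviation follow directly from those of the return, with no further distributional information needed.

```python
import random
import statistics

def final_wealth(w0, q, rf, r):
    # w = (w0 - q)(1 + rf) + q(1 + r): the safe portion grows at the
    # known rate rf, the risky portion at the realized return r.
    return (w0 - q) * (1 + rf) + q * (1 + r)

random.seed(0)
w0, q, rf = 100.0, 40.0, 0.02            # illustrative values only
returns = [random.gauss(0.05, 0.20) for _ in range(100_000)]
wealth = [final_wealth(w0, q, rf, r) for r in returns]

# Location-scale in action: the two moments of w are determined by
# the two moments of r through the same affine map.
mu_w = (w0 - q) * (1 + rf) + q * (1 + statistics.mean(returns))
sd_w = q * statistics.stdev(returns)
```

This is why a mean-variance (two-moment) ranking of portfolios is consistent with expected-utility maximization in the location-scale setting: knowing μ and σ of the return pins down the whole distribution of wealth up to the common shape.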
https://en.wikipedia.org/wiki/Wet%20process%20engineering | Wet Processing Engineering is one of the major streams in Textile Engineering or Textile manufacturing which refers to the engineering of textile chemical processes and associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are involved or carried out in an aqueous stage. Hence, it is called a wet process which usually covers pre-treatment, dyeing, printing, and finishing.
The wet process is usually done in the manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness, and adequate mechanical strength to give it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric.
All of these stages require an aqueous medium, which is created by water. A massive amount of water is required in these processes each day. It is estimated that, on average, 50–100 liters of water are used to process just 1 kilogram of textile goods, depending on the process engineering and applications. Water can be of various qualities and attributes. Not all water can be used in textile processes; it must have certain properties, quality, and colour to be suitable for use. This is why water is a prime concern in wet processing engineering.
Water
Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent of various dyes and chemicals and it is used in washing or rinsing baths in different steps. Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology which may vary mill to mill and material composition. Longer processing sequences, processing of extra dark colors and reprocessing lead |
https://en.wikipedia.org/wiki/Pyomyositis | Pyomyositis is a bacterial infection of the skeletal muscles which results in an abscess. Pyomyositis is most common in tropical areas but can also occur in temperate zones.
Pyomyositis can be classified as primary or secondary. Primary pyomyositis is a skeletal muscle infection arising from hematogenous infection, whereas secondary pyomyositis arises from localized penetrating trauma or contiguous spread to the muscle.
Diagnosis
Diagnosis is done via the following manner:
Pus discharge culture and sensitivity
X-ray of the part to rule out osteomyelitis
Creatine phosphokinase (more than 50,000 units)
MRI is useful
Ultrasound guided aspiration
Treatment
The abscesses within the muscle must be drained surgically (patients without an abscess do not always require surgery). Antibiotics are given for a minimum of three weeks to clear the infection.
Epidemiology
Pyomyositis is most often caused by the bacterium Staphylococcus aureus. The infection can affect any skeletal muscle, but most often infects the large muscle groups such as the quadriceps or gluteal muscles.
Pyomyositis is mainly a disease of children and was first described by Scriba in 1885. Most patients are aged 2 to 5 years, but infection may occur in any age group. Infection often follows minor trauma and is more common in the tropics, where it accounts for 4% of all hospital admissions. In temperate countries such as the US, pyomyositis was a rare condition (accounting for 1 in 3000 pediatric admissions), but has become more common since the appearance of the USA300 strain of MRSA.
Gonococcal pyomyositis is a rare infection caused by Neisseria gonorrhoeae.
Additional images |
https://en.wikipedia.org/wiki/List%20of%20NUTS%20regions%20in%20the%20European%20Union%20by%20GDP | This is a list of NUTS regions in the European Union by GDP. The European Union uses a classification for subnational territory called the Nomenclature of Territorial Units for Statistics (commonly abbreviated as NUTS). The NUTS 1 classification is applied to a group of regions, NUTS 2 to regions and NUTS 3 to subdivisions of regions. There are also two levels (NUTS 4 and 5) which relate to local administrative unit levels. Countries agree a NUTS classification with the European Commission. Geddes notes that NUTS level 2 is "particularly important", because such regions often exist as territorial-government divisions and are used for regional policies by countries. NUTS 1 regions typically have a population of 3–7 million; NUTS 2, 0.8–3 million; and NUTS 3, 150,000–800,000. As of 2015, there are 98 regions at NUTS 1 level, 276 regions at NUTS 2 level and 1,342 regions at NUTS 3 level (as a result, statistics at NUTS level 3 are found as an external link to this article). EU regional policy classifies NUTS 2 regions as less developed regions, transition regions and more developed regions.
The EU's Structural Funds and Cohesion Fund direct funding to NUTS level 2 regions based on their GDP (PPS) per capita in comparison to the EU average: less developed regions (less than 75%), transition regions (between 75% and 90%) and more developed regions (over 90%). For the period 2014–20, EUR 351 billion will be invested in the EU's regions, with most being directed to the less developed regions.
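The funding thresholds above can be expressed as a small classifier (a sketch; the function name is mine, and the handling of the exact 75% and 90% boundaries is my assumption):

```python
def cohesion_category(gdp_pps_pct_of_eu_avg):
    # Cohesion-policy categories for NUTS 2 regions, keyed on GDP (PPS)
    # per capita as a percentage of the EU average.
    if gdp_pps_pct_of_eu_avg < 75:
        return "less developed"
    if gdp_pps_pct_of_eu_avg <= 90:
        return "transition"
    return "more developed"
```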
NUTS level 1 (data in 2017)
NUTS level 2
See also
Economy of the European Union
List of metropolitan areas in the European Union by GDP
List of European regions by GDP |
https://en.wikipedia.org/wiki/Celestial%20spheres | The celestial spheres, or celestial orbs, were the fundamental entities of the cosmological models developed by Plato, Eudoxus, Aristotle, Ptolemy, Copernicus, and others. In these celestial models, the apparent motions of the fixed stars and planets are accounted for by treating them as embedded in rotating spheres made of an aetherial, transparent fifth element (quintessence), like gems set in orbs. Since it was believed that the fixed stars did not change their positions relative to one another, it was argued that they must be on the surface of a single starry sphere.
In modern thought, the orbits of the planets are viewed as the paths of those planets through mostly empty space. Ancient and medieval thinkers, however, considered the celestial orbs to be thick spheres of rarefied matter nested one within the other, each one in complete contact with the sphere above it and the sphere below. When scholars applied Ptolemy's epicycles, they presumed that each planetary sphere was exactly thick enough to accommodate them. By combining this nested sphere model with astronomical observations, scholars calculated what became generally accepted values at the time for the distances to the Sun: about , to the other planets, and to the edge of the universe: about . The nested sphere model's distances to the Sun and planets differ significantly from modern measurements of the distances, and the size of the universe is now known to be inconceivably large and continuously expanding.
Albert Van Helden has suggested that from about 1250 until the 17th century, virtually all educated Europeans were familiar with the Ptolemaic model of "nesting spheres and the cosmic dimensions derived from it". Even following the adoption of Copernicus's heliocentric model of the universe, new versions of the celestial sphere model were introduced, with the planetary spheres following this sequence from the central Sun: Mercury, Venus, Earth-Moon, Mars, Jupiter and Saturn.
Mainstream belief in |
https://en.wikipedia.org/wiki/Arctur-2 | Arctur-2 is a supercomputer located in Slovenia which is used by scientists and industry professionals to run intensive workloads and computer simulations such as aerodynamics simulations and steel casting simulations.
The Arctur-2 High Performance Computer (HPC) is located in Nova Gorica (Slovenia) and was put into operation in early 2017.
Arctur-2 is a system built by Sugon and consists of 30 nodes, each with two Intel Xeon E5-2690v4 processors; 8 of these nodes are equipped with 4 Nvidia Tesla M60 GPUs each, and another 8 of them have a large memory capacity of 1024 GB per node.
The supercomputer is managed by Arctur. |
https://en.wikipedia.org/wiki/Low-voltage%20differential%20signaling | Low-voltage differential signaling (LVDS), also known as TIA/EIA-644, is a technical standard that specifies electrical characteristics of a differential, serial signaling standard. LVDS operates at low power and can run at very high speeds using inexpensive twisted-pair copper cables. LVDS is a physical layer specification only; many data communication standards and applications use it and add a data link layer as defined in the OSI model on top of it.
LVDS was introduced in 1994, and has become popular in products such as LCD-TVs, in-car entertainment systems, industrial cameras and machine vision, notebook and tablet computers, and communications systems. The typical applications are high-speed video, graphics, video camera data transfers, and general purpose computer buses.
Early on, the notebook computer and LCD display vendors commonly used the term LVDS instead of FPD-Link when referring to their protocol, and the term LVDS has mistakenly become synonymous with Flat Panel Display Link in the video-display engineering vocabulary.
Differential vs. single-ended signaling
LVDS is a differential signaling system, meaning that it transmits information as the difference between the voltages on a pair of wires; the two wire voltages are compared at the receiver. In a typical implementation, the transmitter injects a constant current of 3.5 mA into the wires, with the direction of current determining the digital logic level. The current passes through a termination resistor of about 100 to 120 ohms (matched to the cable's characteristic impedance to reduce reflections) at the receiving end, and then returns in the opposite direction via the other wire. From Ohm's law, the voltage difference across the resistor is therefore about 350 mV. The receiver senses the polarity of this voltage to determine the logic level.
As long as there is tight electric- and magnetic-field coupling between the two wires, LVDS reduces the generation of electromagnetic noise. This nois |
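The Ohm's-law arithmetic above is easy to sketch (the function name and default values are mine; the nominal current and termination are those given in the text):

```python
def lvds_swing_mv(current_ma=3.5, termination_ohm=100.0):
    # Ohm's law, V = I * R: milliamps times ohms gives millivolts,
    # so the nominal 3.5 mA across a 100 ohm termination is 350 mV.
    return current_ma * termination_ohm
```

Since the receiver only senses the polarity of this differential swing, modest attenuation along the cable still yields a valid logic level.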
https://en.wikipedia.org/wiki/List%20of%20dock%20applications | The following is a list of dock applications.
Application launchers
dock applications |
https://en.wikipedia.org/wiki/Dolichonychia | Dolichonychia is a medical condition in which the nails of the fingers and toes are abnormally long and slender, specifically with a fingernail index of 1.30 or more. It is a common feature in people with connective tissue disorders, such as Ehlers–Danlos syndromes, Marfan syndrome, and hypohidrotic ectodermal dysplasia. It often appears alongside arachnodactyly (abnormally long, slender fingers and toes) and/or dolichostenomelia (abnormally long, slender limbs).
See also
Arachnodactyly
Dolichostenomelia
Marfan syndrome
Ehlers–Danlos syndromes |
https://en.wikipedia.org/wiki/Nucleic%20acid%20methods | Nucleic acid methods are the techniques used to study nucleic acids: DNA and RNA.
Purification
DNA extraction
Phenol–chloroform extraction
Minicolumn purification
RNA extraction
Boom method
Synchronous coefficient of drag alteration (SCODA) DNA purification
Quantification
Abundance in weight: spectroscopic nucleic acid quantitation
Absolute abundance in number: real-time polymerase chain reaction (quantitative PCR)
High-throughput relative abundance: DNA microarray
High-throughput absolute abundance: serial analysis of gene expression (SAGE)
Size: gel electrophoresis
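For the spectroscopic quantitation entry above, a common worked form (not from this list; the ~50/40/33 µg/mL-per-A260 factors are standard laboratory approximations, and the helper name is mine) is:

```python
# Standard approximate conversion factors, in ug/mL per A260 unit.
FACTORS = {"dsDNA": 50.0, "RNA": 40.0, "ssDNA": 33.0}

def concentration_ug_per_ml(a260, kind="dsDNA", dilution=1.0):
    # Spectrophotometric quantitation: concentration is absorbance at
    # 260 nm times the nucleic-acid-specific factor times any dilution.
    return a260 * FACTORS[kind] * dilution
```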
Synthesis
De novo: oligonucleotide synthesis
Amplification: polymerase chain reaction (PCR)
Kinetics
Multi-parametric surface plasmon resonance
Dual-polarization interferometry
Quartz crystal microbalance with dissipation monitoring (QCM-D)
Gene function
RNA interference
Other
Bisulfite sequencing
DNA sequencing
Expression cloning
Fluorescence in situ hybridization
Lab-on-a-chip
Comparison of nucleic acid simulation software
Northern blot
Nuclear run-on assay
Radioactivity in the life sciences
Southern blot
Differential centrifugation (sucrose gradient)
Toeprinting assay
Several bioinformatics methods, as seen in list of RNA structure prediction software
See also
CSH Protocols
Current Protocols |
https://en.wikipedia.org/wiki/Automotive%20pixel%20link | Automotive pixel link, or APIX, is a serial high speed gigabit multichannel link to interconnect displays, cameras and control units over one single cable targeting automotive applications. APIX2 transmits up to two independent high-resolution real-time video channels and has bidirectional protected data communication with Ethernet, SPI, and I2C, including 8 channels for audio.
APIX, APIX2 and APIX3 were designed by INOVA Semiconductors based in Munich, Germany. INOVA provides its own chips and licences the IP to other semiconductor suppliers such as Fujitsu, Toshiba, Analog Devices, Cypress, and Socionext.
Popularity
The standard is used by a growing number of car OEMs especially in the infotainment area.
It was licensed in 2008 by Fujitsu for use in their automotive controllers. There are also Commercial off-the-shelf boards available from e.g. Congatec. There are also implementations as IP block for Xilinx FPGAs.
Specifications
APIX (or APIX1), which is on lifetime buy through the end of 2019, allows for:
Two- or Four-Wire Full Duplex Link
Up to 1 Gbit/s Downstream Link
Up to 62.5 Mbit/s Upstream Link
Low EMI
Wide spread spectrum pixel clock
More than 15 m distance with low profile STP cables
10/12/18/24 bit pixel Interface
APIX2 allows for higher link speeds and more flexibility:
It can mix various data streams, including SPI, I2C or Ethernet
500 Mbit/s, 1 Gbit/s and 3 Gbit/s sustained downstream link bandwidth
187.5 Mbit/s upstream link bandwidth
Single/dual channel LVDS 18 or 24 bit
Video resolution up to 1280×720×24 bit (720p) / 2 × 1280×720×18 bit / 1600×600×24 bit @ 60 Hz
APIX3 adds higher link speeds:
Support STP (shielded twisted pair), QTP (quad twisted pair) and co-axial
Link bandwidth of 6 Gbit/s over STP and 12 Gbit/s over QTP
Video resolution up to HD and Ultra HD
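To put the link rates in context, the raw pixel payload of a video mode can be estimated with a back-of-envelope calculation (ignoring blanking intervals and protocol overhead; the helper is mine): 720p at 24 bit and 60 Hz needs roughly 1.33 Gbit/s, comfortably within APIX2's 3 Gbit/s downstream bandwidth.

```python
def video_payload_gbit_s(width, height, bits_per_pixel, fps=60):
    # Raw pixel payload only; blanking intervals and link-protocol
    # overhead (audio/Ethernet side channels, etc.) are not counted.
    return width * height * bits_per_pixel * fps / 1e9
```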
Notes |
https://en.wikipedia.org/wiki/Brachialis%20muscle | The brachialis (brachialis anticus), also known as the Teichmann muscle, is a muscle in the upper arm that flexes the elbow. It lies beneath the biceps brachii, and makes up part of the floor of the region known as the cubital fossa (elbow pit). It originates from the anterior aspect of the distal humerus; it inserts onto the tuberosity of the ulna. It is innervated by the musculocutaneous nerve, and commonly also receives additional innervation from the radial nerve. The brachialis is the prime mover of elbow flexion generating about 50% more power than the biceps.
Structure
Origin
The brachialis originates from the anterior surface of the distal half of the humerus, near the insertion of the deltoid muscle, which it embraces by two angular processes. Its origin extends below to within 2.5 cm of the margin of the articular surface of the humerus at the elbow joint.
Insertion
Its fibers converge to a thick tendon which is inserted into the tuberosity of the ulna, and the rough depression on the anterior surface of the coronoid process of the ulna.
Innervation
The brachialis muscle is innervated by the musculocutaneous nerve, which runs on its superficial surface, between it and the biceps brachii. However, in 70-80% of people, the muscle has double innervation with the radial nerve (C5-T1). The divide between the two innervations is at the insertion of the deltoid.
Blood supply
The brachialis is supplied by muscular branches of the brachial artery and by the recurrent radial artery.
Variation
The muscle is occasionally doubled; additional muscle slips to the supinator, pronator teres, biceps brachii, lacertus fibrosus, or radius are more rarely found.
Function
The brachialis flexes the arm at the elbow joint. Unlike the biceps, the brachialis does not insert on the radius, and does not participate in pronation and supination of the forearm.
History
Etymology
In classical Latin, bracchialis means "of or belonging to the arm", and is deri |
https://en.wikipedia.org/wiki/Spatial%20application | A spatial application is a technological application (such as video) requiring high spatial resolution, possibly at the expense of reduced temporal positioning accuracy, such as increased jerkiness.
Examples of spatial applications include the requirement to display small characters and to resolve fine detail in still video, or in motion video that contains very limited motion. |
https://en.wikipedia.org/wiki/Leprosy | Leprosy, also known as Hansen's disease (HD), is a long-term infection by the bacteria Mycobacterium leprae or Mycobacterium lepromatosis. Infection can lead to damage of the nerves, respiratory tract, skin, and eyes. This nerve damage may result in a lack of ability to feel pain, which can lead to the loss of parts of a person's extremities from repeated injuries or infection through unnoticed wounds. An infected person may also experience muscle weakness and poor eyesight. Leprosy symptoms may begin within one year, but, for some people, symptoms may take 20 years or more to occur.
Leprosy is spread between people, although extensive contact is necessary. Leprosy has a low pathogenicity, and 95% of people who contract M. leprae do not develop the disease. Spread is thought to occur through a cough or contact with fluid from the nose of a person infected by leprosy. Genetic factors and immune function play a role in how easily a person catches the disease. Leprosy does not spread during pregnancy to the unborn child or through sexual contact. Leprosy occurs more commonly among people living in poverty. There are two main types of the disease – paucibacillary and multibacillary, which differ in the number of bacteria present. A person with paucibacillary disease has five or fewer poorly pigmented, numb skin patches, while a person with multibacillary disease has more than five skin patches. The diagnosis is confirmed by finding acid-fast bacilli in a biopsy of the skin.
Leprosy is curable with multidrug therapy. Treatment of paucibacillary leprosy is with the medications dapsone, rifampicin, and clofazimine for six months. Treatment for multibacillary leprosy uses the same medications for 12 months. A number of other antibiotics may also be used. These treatments are provided free of charge by the World Health Organization.
Leprosy is not highly contagious. People with leprosy can live with their families and go to school and work. In the 1980s, there were 5.2 mi |
https://en.wikipedia.org/wiki/The%20Void%20%28philosophy%29 | The Void is the philosophical concept of nothingness manifested. The notion of the Void is relevant to several realms of metaphysics.
The manifestation/obtaining of nothingness is closely associated with the contemplation of emptiness, and with human attempts to identify and personify it. As such, the concept of the Void, and ideas similar to it, have a significant and historically evolving presence in artistic and creative expression, as well as in academic, scientific and philosophical debate surrounding the nature of the human condition.
In Western mystical traditions, it was often argued that the transcendent 'Ground of Being' could therefore be approached through aphairesis, a form of negation.
Philosophy
Western philosophers have discussed the existence and nature of void since Parmenides suggested it did not exist and used this to argue for the non-existence of change, motion, and differentiation, among other things. In response to Parmenides, Democritus described the universe as only being composed of atoms and void.
Aristotle, in Book IV of Physics, denied the existence of the Void with his rejection of finite entities.
Stoic philosophers admitted the subsistence of four incorporeals among which they included void: "Outside of the world is diffused the infinite void, which is incorporeal. By incorporeal is meant that which, though capable of being occupied by body, is not so occupied. The world has no empty space within it, but forms one united whole. This is a necessary result of the sympathy and tension which binds together things in heaven and earth. Chrysippus discusses the void in his work On Void and in the first book of his Physical Sciences; so too Apollophanes in his Physics, Apollodorus, and Posidonius in his Physical Discourse, book ii."
There were questions as to whether void was truly nothing or if it was in fact filled with other things, with theories of aether being suggested in the 18th century to fill the void.
Particle physics
In |
https://en.wikipedia.org/wiki/Swiss%20Vapeur%20Parc | The Swiss Vapeur Parc is a miniature park in Le Bouveret, a village on Lac Léman, Switzerland. It was opened on June 6, 1989, with an International Festival of Steam. When the park opened, its total surface area was 9,000 m2 (2.2 acres); the park has since expanded and, as of 2007, covers 17,000 m2 (4.2 acres). In 1989, the park possessed only two locomotives (one running on benzine and one on steam). As of 2007, the number of benzine-powered trains has sextupled, while the number of steam trains has increased to nine. By March 31, 2007, the park had received 2,126,000 visitors.
Every June the park is host to the International Steam Festival.
Image gallery
External links
The Official Website
Press Communique 2007
Tourist attractions in Valais
Amusement parks in Switzerland
1989 establishments in Switzerland
Buildings and structures in Valais
Miniature parks
Amusement parks opened in 1989
20th-century architecture in Switzerland |
https://en.wikipedia.org/wiki/Splanchnopleuric%20mesenchyme | In the anatomy of an embryo, the splanchnopleuric mesenchyme is a structure created during embryogenesis when the lateral mesodermal germ layer splits into two layers. The inner (or splanchnic) layer adheres to the endoderm, and with it forms the splanchnopleure (mesoderm external to the coelom plus the endoderm).
Post-development, the somato- and splanchnopleuric junction lies at the duodeno-jejunal flexure.
See also
somatopleure
mesenchyme |
https://en.wikipedia.org/wiki/Smith%E2%80%93Waterman%20algorithm | The Smith–Waterman algorithm performs local sequence alignment; that is, for determining similar regions between two strings of nucleic acid sequences or protein sequences. Instead of looking at the entire sequence, the Smith–Waterman algorithm compares segments of all possible lengths and optimizes the similarity measure.
The algorithm was first proposed by Temple F. Smith and Michael S. Waterman in 1981. Like the Needleman–Wunsch algorithm, of which it is a variation, Smith–Waterman is a dynamic programming algorithm. As such, it has the desirable property that it is guaranteed to find the optimal local alignment with respect to the scoring system being used (which includes the substitution matrix and the gap-scoring scheme). The main difference from the Needleman–Wunsch algorithm is that negative scoring matrix cells are set to zero, which renders the (thus positively scoring) local alignments visible. The traceback procedure starts at the highest-scoring matrix cell and proceeds until a cell with score zero is encountered, yielding the highest-scoring local alignment. Because of its cubic time complexity, it often cannot be practically applied to large-scale problems and has been superseded by computationally more efficient alternatives such as those of Gotoh (1982), Altschul and Erickson (1986), and Myers and Miller (1988).
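The scoring recurrence can be sketched in Python. This is a minimal sketch of the common linear-gap-penalty variant (quadratic time, in the spirit of Gotoh's simplification) rather than the original cubic general-gap formulation; the scores (match +3, mismatch −3, gap −2) are illustrative:

```python
def smith_waterman(a, b, match=3, mismatch=-3, gap=-2):
    """Build the local-alignment score matrix H with a linear gap penalty."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best, best_pos = 0, (0, 0)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Clamping negative cells to zero is the key difference from
            # Needleman-Wunsch: it makes local alignments visible.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    return H, best, best_pos

def traceback(H, a, b, pos, match=3, mismatch=-3, gap=-2):
    """Walk back from the highest-scoring cell until a zero cell is reached."""
    i, j = pos
    out_a, out_b = [], []
    while H[i][j] > 0:
        if H[i][j] == H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i, j = i - 1, j - 1
        elif H[i][j] == H[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j-1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b))

H, score, pos = smith_waterman("TGTTACGG", "GGTTGACTA")
print(score, *traceback(H, "TGTTACGG", "GGTTGACTA", pos))
```

The traceback only follows one optimal path; ties between diagonal and gap moves are broken in favor of the diagonal.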
History
In 1970, Saul B. Needleman and Christian D. Wunsch proposed a heuristic homology algorithm for sequence alignment, also referred to as the Needleman–Wunsch algorithm. It is a global alignment algorithm that requires O(mn) calculation steps (m and n are the lengths of the two sequences being aligned). It uses the iterative calculation of a matrix for the purpose of showing global alignment. In the following decade, Sankoff, Reichert, Beyer and others formulated alternative heuristic algorithms for analyzing gene sequences. Sellers introduced a system for measuring sequence distances. In 1976, Waterman et al. added the concept of gaps into the origina |
https://en.wikipedia.org/wiki/Moyal%20bracket | In physics, the Moyal bracket is the suitably normalized antisymmetrization of the phase-space star product.
The Moyal bracket was developed in about 1940 by José Enrique Moyal, but Moyal only succeeded in publishing his work in 1949 after a lengthy dispute with Paul Dirac. In the meantime this idea was independently introduced in 1946 by Hip Groenewold.
Overview
The Moyal bracket is a way of describing the commutator of observables in the phase space formulation of quantum mechanics when these observables are described as functions on phase space. It relies on schemes for identifying functions on phase space with quantum observables, the most famous of these schemes being the Wigner–Weyl transform. It underlies Moyal’s dynamical equation, an equivalent formulation of Heisenberg’s quantum equation of motion, thereby providing the quantum generalization of Hamilton’s equations.
Mathematically, it is a deformation of the phase-space Poisson bracket (essentially an extension of it), the deformation parameter being the reduced Planck constant ħ. Thus, its group contraction ħ → 0 yields the Poisson bracket Lie algebra.
Up to formal equivalence, the Moyal Bracket is the unique one-parameter Lie-algebraic deformation of the Poisson bracket. Its algebraic isomorphism to the algebra of commutators bypasses the negative result of the Groenewold–van Hove theorem, which precludes such an isomorphism for the Poisson bracket, a question implicitly raised by Dirac in his 1926 doctoral thesis, the "method of classical analogy" for quantization.
For instance, in a two-dimensional flat phase space, and for the Weyl-map correspondence, the Moyal bracket reads,

{{f, g}} = (1/iħ)(f★g − g★f) = {f, g} + O(ħ²),

where ★ is the star-product operator in phase space (cf. Moyal product), while f and g are differentiable phase-space functions, and {f, g} is their Poisson bracket.

More specifically, in operational calculus language, this equals

{{f, g}} = (2/ħ) f(x, p) sin((ħ/2)(←∂_x →∂_p − ←∂_p →∂_x)) g(x, p).

The left & right arrows over the partial derivatives denote the left & right partial derivati |
https://en.wikipedia.org/wiki/COVID-19%20pandemic%20and%20animals | The COVID-19 pandemic has affected animals directly and indirectly. SARS-CoV-2, the virus that causes COVID-19, is zoonotic and likely originated in animals such as bats and pangolins. Human impact on wildlife and animal habitats may be causing such spillover events to become much more likely. The largest incident to date was the culling of 14 to 17 million mink in Denmark after it was discovered that they were infected with a mutant strain of the virus.
While research is inconclusive, pet owners reported that their animals contributed to better mental health and lower loneliness during COVID-19 lockdowns. However, contact with humans infected with the virus could have adverse effects on pet animals.
Background
SARS-CoV-2 is believed to have zoonotic origins and has close genetic similarity to bat coronaviruses, suggesting it emerged from a bat-borne virus.
Cases
A small number of pet animals have been infected. There have been several cases of zoo animals testing positive for the virus, and some became sick. The virus has also been detected in wild animals.
Cats, dogs, ferrets, fruit bats, gorillas, pangolins, hamsters, mink, sea otters, pumas, snow leopards, tigers, lions, hyenas, hippos, tree shrews and whitetail deer can be infected with and have tested positive at least once for the virus. According to the US Centers for Disease Control and Prevention, the risk of transmission from animals to humans and vice versa is considerably low but further studies are yet to be conducted. Mice were initially unsusceptible but researchers showed that a type of mutation (called aromatic substitution in position 501 or position 498 but not both) in the SARS-CoV-2 virus spike protein can mouse-adapt the novel Coronavirus. Some animals which were only thought to have been susceptible at low levels were later found to have experienced higher levels of infection than previously realized, either due to viral mutations or improved surveillance technology. Dogs, |
https://en.wikipedia.org/wiki/148th%20meridian%20west | The meridian 148° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, North America, the Pacific Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 148th meridian west forms a great circle with the 32nd meridian east.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 148th meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Beaufort Sea
| style="background:#b0e0e6;" | Passing just west of Cross Island, Alaska, (at )
|-valign="top"
|
! scope="row" |
| Alaska — Mainland, Esther Island, Perry Island, the mainland again, Chenega Island, Evans Island and Latouche Island
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" | Passing just west of Montague Island, Alaska, (at ) Passing just east of Tikehau atoll, (at ) Passing just west of Rangiroa atoll, (at ) Passing just east of Mehetia island, (at )
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Southern Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" | Antarctica
| Unclaimed territory
|-
|}
See also
147th meridian west
149th meridian west
148th meridian west |
https://en.wikipedia.org/wiki/Genetic%20heterogeneity | Genetic heterogeneity occurs through the production of single or similar phenotypes through different genetic mechanisms. There are two types of genetic heterogeneity: allelic heterogeneity, which occurs when a similar phenotype is produced by different alleles within the same gene; and locus heterogeneity, which occurs when a similar phenotype is produced by mutations at different loci.
Role in medical disorders
Marked genetic heterogeneity is correlated to multiple levels of causation in many common human diseases including cystic fibrosis, Alzheimer's disease, autism spectrum disorders, inherited predisposition to breast cancer, and non-syndromic hearing loss. These levels of causation are complex and occur through: (1) rare, individual mutations that when combined contribute to the development of common diseases; (2) the accumulation of many different rare, individual mutations within the same gene that contribute to the development of the same common disease within different individuals; (3) the accumulation of many different rare, individual mutations within the same gene that contribute to the development of different phenotypic variations of the same common disease within different individuals; and (4) the development of the same common disease in different individuals through different mutations. Increased understanding of the role of genetic heterogeneity and the mechanisms through which it produces common disease phenotypes will facilitate the development of effective prevention and treatment methods for these diseases.
Cystic Fibrosis
Cystic fibrosis is an inherited autosomal recessive genetic disorder that occurs through a mutation in a single gene that codes for the cystic fibrosis transmembrane conductance regulator. Research has identified over 2,000 cystic fibrosis associated mutations in the gene encoding for the cystic fibrosis transmembrane conductance regulator at varying degrees of frequency within the disease carrying population. These mu |
https://en.wikipedia.org/wiki/Cryptographic%20primitive | Cryptographic primitives are well-established, low-level cryptographic algorithms that are frequently used to build cryptographic protocols for computer security systems. These routines include, but are not limited to, one-way hash functions and encryption functions.
Rationale
When creating cryptographic systems, designers use cryptographic primitives as their most basic building blocks. Because of this, cryptographic primitives are designed to do one very specific task in a precisely defined and highly reliable fashion.
Since cryptographic primitives are used as building blocks, they must be very reliable, i.e. perform according to their specification. For example, if an encryption routine claims to be only breakable with number of computer operations, and it is broken with significantly fewer than operations, then that cryptographic primitive has failed. If a cryptographic primitive is found to fail, almost every protocol that uses it becomes vulnerable. Since creating cryptographic routines is very hard, and testing them to be reliable takes a long time, it is essentially never sensible (nor secure) to design a new cryptographic primitive to suit the needs of a new cryptographic system. The reasons include:
The designer might not be competent in the mathematical and practical considerations involved in cryptographic primitives.
Designing a new cryptographic primitive is very time-consuming and very error-prone, even for experts in the field.
Since algorithms in this field are not only required to be designed well but also need to be tested well by the cryptologist community, even if a cryptographic routine looks good from a design point of view it might still contain errors. Successfully withstanding such scrutiny gives some confidence (in fact, so far, the only confidence) that the algorithm is indeed secure enough to use; security proofs for cryptographic primitives are generally not available.
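The practical consequence of these points is to build on vetted primitives exposed by a standard library rather than inventing new ones. A minimal Python sketch using SHA-256, a well-established one-way hash function, via the standard hashlib module:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Hash data with SHA-256, a primitive that has withstood decades of
    public cryptanalysis -- exactly the scrutiny a home-grown routine lacks."""
    return hashlib.sha256(data).hexdigest()

print(fingerprint(b"attack at dawn"))
# A one-character change in the input yields an unrelated digest
# (the avalanche effect expected of a good hash primitive):
print(fingerprint(b"attack at dusk"))
```

The design choice here mirrors the rationale above: the security argument rests on the primitive's public track record, not on the application author's own cryptographic design skill.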
Cryptographic primitives are similar in some ways to prog |
https://en.wikipedia.org/wiki/Server%20change%20number | The Server Change Number (SCN) is a counter variable used in client/server architecture systems to determine whether the server state is synchronized with the state of the client. If the numbers differ, there has evidently been a communication problem.
The number is incremented once the server has successfully integrated changes coming from the client in the case of a server-side event. The counter is incremented once more if the changes made by the programmer are committed.
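A toy sketch of the scheme in Python; the class and method names are hypothetical, since the article does not prescribe an interface:

```python
class ServerState:
    """Server side of a hypothetical change-number (SCN) protocol."""
    def __init__(self):
        self.scn = 0

    def integrate_client_changes(self, changes):
        # First increment: client changes were successfully integrated.
        self.scn += 1

    def commit(self):
        # Second increment: the programmer's changes are committed.
        self.scn += 1

class ClientState:
    def __init__(self):
        self.scn = 0

    def in_sync_with(self, server):
        # Matching counters indicate the states could be synchronized.
        return self.scn == server.scn

server, client = ServerState(), ClientState()
server.integrate_client_changes({"x": 1})
client.scn = 1                          # client learns the new SCN from the reply
assert client.in_sync_with(server)
server.commit()                         # server moves ahead without the client
assert not client.in_sync_with(server)  # difference => resynchronization needed
```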
Servers (computing) |
https://en.wikipedia.org/wiki/A/B%20testing | A/B testing (also known as bucket testing, split-run testing, or split testing) is a user experience research methodology. A/B tests consist of a randomized experiment that usually involves two variants (A and B), although the concept can be also extended to multiple variants of the same variable. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare multiple versions of a single variable, for example by testing a subject's response to variant A against variant B, and determining which of the variants is more effective.
Overview
"A/B testing" is a shorthand for a simple randomized controlled experiment, in which a number of samples (e.g. A and B) of a single vector-variable are compared. These values are similar except for one variation which might affect a user's behavior. A/B tests are widely considered the simplest form of controlled experiment, especially when they only involve two variants. However, by adding more variants to the test, its complexity grows.
A/B tests are useful for understanding user engagement and satisfaction of online features like a new feature or product. Large social media sites like LinkedIn, Facebook, and Instagram use A/B testing to make user experiences more successful and as a way to streamline their services.
Today, A/B tests are being used also for conducting complex experiments on subjects such as network effects when users are offline, how online services affect user actions, and how users influence one another. A/B testing is used by data engineers, marketers, designers, software engineers, and entrepreneurs, among others. Many positions rely on the data from A/B tests, as they allow companies to understand growth, increase revenue, and optimize customer satisfaction.
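The two-sample hypothesis test mentioned above can be sketched as a pooled two-proportion z-test using only the Python standard library; the conversion counts below are illustrative, not from any real experiment:

```python
from math import erfc, sqrt

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled proportion under H0
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))              # two-sided, via the normal CDF
    return z, p_value

# 10% conversion for control A vs. 13% for treatment B, 2000 users each:
z, p = two_proportion_ztest(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
print(round(z, 2), round(p, 4))  # positive z: variant B converts better
```

At a conventional 5% significance level, a p-value below 0.05 would lead the experimenter to prefer variant B; real deployments also account for sample-size planning and multiple testing.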
Version A might be used at present (thus forming the control group), while version B is modified in some respect vs. A (thus forming the treatment group) |
https://en.wikipedia.org/wiki/Fritillaria%20pudica | Fritillaria pudica, the yellow fritillary, is a small perennial plant found in the sagebrush country in the western United States (Idaho, Montana, Oregon, Washington, Wyoming, very northern California, Nevada, northwestern Colorado, North Dakota and Utah) and Canada (Alberta and British Columbia). It is a member of the lily family Liliaceae. Another common (but somewhat ambiguous) name is "yellow bells", since it has a bell-shaped yellow flower. It may be found in dryish, loose soil; it is amongst the first plants to flower after the snow melts, but the flower does not last very long; as the petals age, they turn a brick-red colour and begin to curl outward. The flowers grow singly or in pairs on the stems, and the floral parts grow in multiples of threes. The species produces a small corm which forms rice-grain-like offset cormlets, earning the genus the nickname 'riceroot'. During his historic journey, Meriwether Lewis collected a specimen while passing through Idaho in 1806.
The corm can be dug up and eaten fresh or cooked; it served Native Americans as a good source of food in times past, and is still eaten occasionally. Today these plants are not common, so digging and eating the corms is not encouraged. The plant is called in Sahaptin. |
https://en.wikipedia.org/wiki/Asbestiform | Asbestiform is a crystal habit. It describes a mineral that grows in a fibrous aggregate of high tensile strength, flexible, long, and thin crystals that readily separate. The most common asbestiform mineral is chrysotile, commonly called "white asbestos", a magnesium phyllosilicate part of the serpentine group. Other asbestiform minerals include riebeckite, an amphibole whose fibrous form is known as crocidolite or "blue asbestos", and brown asbestos, a cummingtonite-grunerite solid solution series.
The United States Environmental Protection Agency explains that, "In general, exposure may occur only when the asbestos-containing material is disturbed or damaged in some way to release particles and fibers into the air."
"Mountain leather" is an old-fashioned term for flexible, sheet-like natural formations of asbestiform minerals which resemble leather. Asbestos-containing minerals known to form mountain leather include: actinolite, palygorskite, saponite, sepiolite, tremolite, and zeolite.
See also
Chrysotile |
https://en.wikipedia.org/wiki/NFA%20minimization | In automata theory (a branch of theoretical computer science), NFA minimization is the task of transforming a given nondeterministic finite automaton (NFA) into an equivalent NFA that has a minimum number of states, transitions, or both. While efficient algorithms exist for DFA minimization, NFA minimization is PSPACE-complete. No efficient (polynomial time) algorithms are known, and under the standard assumption P ≠ PSPACE, none exist. The most efficient known algorithm is the Kameda‒Weiner algorithm.
Non-uniqueness of minimal NFA
Unlike deterministic finite automata, minimal NFAs may not be unique. There may be multiple NFAs of the same size which accept the same regular language, but for which there is no equivalent NFA or DFA with fewer states. |
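This non-uniqueness is easy to demonstrate by brute force. The two structurally different two-state NFAs below (illustrative examples, not from the article) both accept the language of strings over {0, 1} containing at least one 1, for which no one-state NFA exists: a single accepting state would wrongly accept the empty string, and a single non-accepting state would accept nothing.

```python
from itertools import product

def accepts(nfa, start, accept, s):
    """Simulate an NFA by tracking the set of currently reachable states."""
    states = {start}
    for ch in s:
        states = {q for st in states for q in nfa.get((st, ch), ())}
    return bool(states & accept)

# NFA 1 stays in state 0 nondeterministically after reading a 1;
# NFA 2 moves to state 1 deterministically. Both have two states.
nfa1 = {(0, "0"): {0}, (0, "1"): {0, 1}, (1, "0"): {1}, (1, "1"): {1}}
nfa2 = {(0, "0"): {0}, (0, "1"): {1},    (1, "0"): {1}, (1, "1"): {1}}

# Check agreement on every string of length < 8 (a bounded sanity check,
# not a proof of equivalence).
same = all(accepts(nfa1, 0, {1}, "".join(w)) == accepts(nfa2, 0, {1}, "".join(w))
           for n in range(8) for w in product("01", repeat=n))
print(same)
```

Both automata are minimal for this language yet not isomorphic, illustrating why minimal NFAs, unlike minimal DFAs, need not be unique.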
https://en.wikipedia.org/wiki/Theoretical%20and%20Applied%20Genetics | Theoretical and Applied Genetics is a peer-reviewed scientific journal published by Springer Science+Business Media. The journal publishes articles in the fields of plant genetics, genomics, and biotechnology. It was established in 1929 as Der Züchter and was renamed to its current title in 1968. Previous editors include H. Stubbe (1946–1976) and H. F. Linskens (1977–1987). The journal publishes only original research articles.
Abstracting and indexing
Theoretical and Applied Genetics is abstracted and indexed in Aquatic Sciences and Fisheries Abstracts, Chemical Abstracts, Current Contents, PubMed, Science Citation Index, and Scopus. According to the Journal Citation Reports, its 2010 impact factor is 3.264. This ranks it second out of 74 journals in the category "Agronomy", first out of 30 in "Horticulture", 23rd out of 187 in "Plant Sciences", and 59th out of 156 in "Genetics & Heredity". |
https://en.wikipedia.org/wiki/Dispersity | In chemistry, the dispersity is a measure of the heterogeneity of sizes of molecules or particles in a mixture. A collection of objects is called uniform if the objects have the same size, shape, or mass. A sample of objects that have an inconsistent size, shape and mass distribution is called non-uniform. The objects can be in any form of chemical dispersion, such as particles in a colloid, droplets in a cloud, crystals in a rock,
or polymer macromolecules in a solution or a solid polymer mass. Polymers can be described by molecular mass distribution; a population of particles can be described by size, surface area, and/or mass distribution; and thin films can be described by film thickness distribution.
IUPAC has deprecated the use of the term polydispersity index, having replaced it with the term dispersity, represented by the symbol Đ (pronounced D-stroke) which can refer to either molecular mass or degree of polymerization. It can be calculated using the equation ĐM = Mw/Mn, where Mw is the weight-average molar mass and Mn is the number-average molar mass. It can also be calculated according to degree of polymerization, where ĐX = Xw/Xn, where Xw is the weight-average degree of polymerization and Xn is the number-average degree of polymerization. In certain limiting cases where ĐM = ĐX, it is simply referred to as Đ. IUPAC has also deprecated the terms monodisperse, which is considered to be self-contradictory, and polydisperse, which is considered redundant, preferring the terms uniform and non-uniform instead. The terms monodisperse and polydisperse are however still preferentially used to describe particles in an aerosol.
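The formula ĐM = Mw/Mn can be computed directly from a molar-mass distribution given as (count, molar mass) pairs; a minimal Python sketch with illustrative numbers:

```python
def dispersity(counts_and_masses):
    """Dispersity D = Mw / Mn for (n_i, M_i) pairs of a polymer sample."""
    n_total = sum(n for n, M in counts_and_masses)
    mass_total = sum(n * M for n, M in counts_and_masses)
    Mn = mass_total / n_total                                       # number average
    Mw = sum(n * M * M for n, M in counts_and_masses) / mass_total  # weight average
    return Mw / Mn

# A perfectly uniform sample gives D = 1; any spread pushes D above 1.
print(dispersity([(100, 10_000)]))             # uniform sample -> 1.0
print(dispersity([(50, 5_000), (50, 15_000)])) # bimodal sample -> 1.25
```

The same code computes ĐX when the second element of each pair is a degree of polymerization rather than a molar mass.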
Overview
A uniform polymer (often referred to as a monodisperse polymer) is composed of molecules of the same mass. Nearly all natural polymers are uniform. Synthetic near-uniform polymer chains can be made by processes such as anionic polymerization, a method using an anionic catalyst to produce chains that are similar in length. This te |
https://en.wikipedia.org/wiki/Windows%20Calculator | Windows Calculator is a software calculator developed by Microsoft and included in Windows. In its Windows 10 incarnation it has four modes: standard, scientific, programmer, and a graphing mode. The standard mode includes a number pad and buttons for performing arithmetic operations. The scientific mode takes this a step further and adds exponents and trigonometric functions, and programmer mode allows the user to perform operations related to computer programming. In 2020, a graphing mode was added to the Calculator, allowing users to graph equations on a coordinate plane.
The Windows Calculator is one of a few applications that have been bundled in all versions of Windows, starting with Windows 1.0. Since then, the calculator has been upgraded with various capabilities.
In addition, the calculator has also been included with Windows Phone and Xbox One.
History
A simple arithmetic calculator was first included with Windows 1.0.
In Windows 3.0, a scientific mode was added, which included exponents and roots, logarithms, factorial-based functions, trigonometry (supporting radian, degree and gradian angles), base conversions (2, 8, 10, 16), logic operations, and statistical functions such as single-variable statistics and linear regression.
Windows 9x
Until Windows 95, it uses an IEEE 754-1985 double-precision floating-point representation, and the highest number representable by the calculator is 2^1024, which is slightly above 10^308 (~1.80 × 10^308).
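The same IEEE 754 double-precision ceiling can be observed in any language that uses native doubles; a short Python illustration (Python's unbounded integers play the role the later arbitrary-precision library plays in the Calculator):

```python
import sys

# IEEE 754 doubles top out just below 2**1024:
print(sys.float_info.max)      # ~1.7976931348623157e+308

x = 2.0 ** 1023                # still representable as a double
assert x * 2 == float("inf")   # doubling it (2**1024) overflows to infinity

# Arbitrary-precision integers have no such ceiling:
print(len(str(2 ** 1024)))     # 2**1024 has 309 decimal digits
```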
In Windows 98 and later, it uses an arbitrary-precision arithmetic library, replacing the standard IEEE floating-point library. It offers bignum precision for basic operations (addition, subtraction, multiplication, division) and 32 digits of precision for advanced operations (square root, transcendental functions). The largest value that can be represented on the Windows Calculator is currently and the smallest is . (Also, the factorial key ! is implemented via the gamma function, which is defined over all real numbers excluding the negative integers.)
|
https://en.wikipedia.org/wiki/Eyach%20virus | Eyach virus (EYAV) is a virus (genus Coltivirus) in the family Reoviridae, transmitted by a tick vector. It has been isolated from Ixodes ricinus and I. ventalloi ticks in Europe.
Transmission and clinical syndromes
Eyach virus is acquired by tick bite. The tick becomes infected after a blood meal from a vertebrate host, suspected to be the European rabbit O. cuniculus. Eyach virus has been linked to tick-borne encephalitis, as well as polyradiculoneuritis and meningopolyneuritis, based on serological samples from patients with these neurological disorders.
https://en.wikipedia.org/wiki/Associative%20magic%20square | An associative magic square is a magic square for which each pair of numbers symmetrically opposite to the center sum up to the same value. For an n × n square, filled with the numbers from 1 to n2, this common sum must equal n2 + 1. These squares are also called associated magic squares, regular magic squares, regmagic squares, or symmetric magic squares.
Examples
For instance, the Lo Shu Square – the unique 3 × 3 magic square – is associative, because each pair of opposite points forms a line of the square together with the center point, so the sum of the two opposite points equals the sum of a line minus the value of the center point regardless of which two opposite points are chosen. The 4 × 4 magic square from Albrecht Dürer's 1514 engraving – also found in a 1765 letter of Benjamin Franklin – is also associative, with each pair of opposite numbers summing to 17.
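The defining property is easy to check mechanically: for an n × n square filled with 1..n², every pair of centrally opposite cells must sum to n² + 1. A short Python sketch verifying both examples above:

```python
def is_associative(sq):
    """True if every centrally opposite pair of cells sums to n**2 + 1."""
    n = len(sq)
    target = n * n + 1
    return all(sq[i][j] + sq[n - 1 - i][n - 1 - j] == target
               for i in range(n) for j in range(n))

lo_shu = [[4, 9, 2],          # the unique 3 x 3 magic square
          [3, 5, 7],
          [8, 1, 6]]

durer = [[16,  3,  2, 13],    # from Duerer's 1514 engraving; pairs sum to 17
         [ 5, 10, 11,  8],
         [ 9,  6,  7, 12],
         [ 4, 15, 14,  1]]

print(is_associative(lo_shu), is_associative(durer))  # True True
```

Note the check visits each pair twice (and the center cell once for odd n), which is harmless since the condition is symmetric.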
Existence and enumeration
The numbers of possible associative n × n magic squares for n = 3,4,5,..., counting two squares as the same whenever they differ only by a rotation or reflection, are:
1, 48, 48544, 0, 1125154039419854784, ...
The number zero for n = 6 is an example of a more general phenomenon: associative magic squares do not exist for values of n that are singly even (equal to 2 modulo 4). Every associative magic square of even order forms a singular matrix, but associative magic squares of odd order can be singular or nonsingular. |
https://en.wikipedia.org/wiki/Leucine-responsive%20regulatory%20protein | Leucine-responsive regulatory protein, or Lrp, is a global regulator protein; among its targets, it regulates the biosynthesis of leucine as well as of the other branched-chain amino acids, valine and isoleucine. In bacteria, it is encoded by the lrp gene.
Lrp alternately activates and represses the expression of the several isoenzymes of acetolactate synthase (ALS). In E. coli, Lrp, along with Dam, plays a role in the regulation of the fim operon, a group of genes needed for successful synthesis and trafficking of type I pili. These hair-like structures are important virulence factors for different pathogenic bacterial strains, as they can mediate biofilm formation and adhesion to host epithelia. Other examples include Salmonella enterica serovar Typhimurium and Klebsiella pneumoniae.
More generally, Lrp facilitates the proliferation and pathogenesis of bacteria in their hosts. |
https://en.wikipedia.org/wiki/RVT-802 | RVT-802 (allogeneic cultured postnatal thymus-derived tissue) is a medication being developed by Enzyvant Therapeutics Ireland Limited for the treatment of congenital athymia (absence of a thymus gland), especially in the context of DiGeorge syndrome.
Development history
Enzyvant licensed the technology underlying RVT-802 from Duke University in 2017. In the same year, the Food and Drug Administration granted Regenerative Medicine Advanced Therapy status – only the second such designation ever granted – to RVT-802. In December 2019, the Food and Drug Administration raised concerns about the manufacturing of RVT-802 and declined to approve it, instead issuing a Complete Response Letter. In April 2021, Enzyvant resubmitted its Biologics License Application. The review of RVT-802 is expected to conclude in October 2021 (PDUFA date).
Method of action
RVT-802 is an investigational treatment for congenital athymia, primarily associated with DiGeorge syndrome. It is a tissue-based therapy consisting of donor thymus-derived tissue that is cultured and surgically implanted into the recipient.
In patients with congenital athymia, the thymus gland is absent. Because of the crucial role the thymus gland plays in the maturation and differentiation of T cells, athymia results in severe immunodeficiency, typically resulting in death within the first two years of life.
RVT-802 is manufactured by extracting thymus tissue from infants undergoing cardiac surgery, depleting it of immature T cells to prevent graft-versus-host disease, then implanting the processed tissue into the recipient's leg, where it fulfils the immunological role of the thymus. |
https://en.wikipedia.org/wiki/Fundamental%20vector%20field | In the study of mathematics and especially differential geometry, fundamental vector fields are an instrument that describes the infinitesimal behaviour of a smooth Lie group action on a smooth manifold. Such vector fields find important applications in the study of Lie theory, symplectic geometry, and the study of Hamiltonian group actions.
Motivation
Important to applications in mathematics and physics is the notion of a flow on a manifold. In particular, if $M$ is a smooth manifold and $X$ is a smooth vector field, one is interested in finding integral curves to $X$. More precisely, given $p \in M$ one is interested in curves $\gamma_p : \mathbb{R} \to M$ such that
$$\gamma_p'(t) = X(\gamma_p(t)), \qquad \gamma_p(0) = p,$$
for which local solutions are guaranteed by the existence and uniqueness theorem for ordinary differential equations. If $X$ is furthermore a complete vector field, then the flow of $X$, defined as the collection of all integral curves for $X$, is a diffeomorphism of $M$. The flow $\phi_X : \mathbb{R} \times M \to M$ given by $\phi_X(t, p) = \gamma_p(t)$ is in fact an action of the additive Lie group $(\mathbb{R}, +)$ on $M$.
Conversely, every smooth action $A : \mathbb{R} \times M \to M$ defines a complete vector field $X$ via the equation
$$X(p) = \left.\frac{d}{dt}\right|_{t=0} A(t, p).$$
It is then a simple result that there is a bijective correspondence between $\mathbb{R}$ actions on $M$ and complete vector fields on $M$.
In the language of flow theory, the vector field $X$ is called the infinitesimal generator. Intuitively, the behaviour of the flow at each point corresponds to the "direction" indicated by the vector field. It is a natural question to ask whether one may establish a similar correspondence between vector fields and more arbitrary Lie group actions on $M$.
Definition
Let $G$ be a Lie group with corresponding Lie algebra $\mathfrak{g}$. Furthermore, let $M$ be a smooth manifold endowed with a smooth action $\alpha : G \times M \to M$. Denote by $\alpha_p : G \to M$ the map such that $\alpha_p(g) = \alpha(g, p)$, called the orbit map of $\alpha$ corresponding to $p$. For $X \in \mathfrak{g}$, the fundamental vector field $X_M$ corresponding to $X$ is given by any of the following equivalent definitions:
$$X_M(p) = (d\alpha_p)_e(X) = d\alpha_{(e,p)}(X, 0) = \left.\frac{d}{dt}\right|_{t=0} \alpha(\exp(tX), p),$$
where $d$ is the differential of a smooth map and $0$ is the zero vector in the vector space $T_pM$.
The map $\mathfrak{g} \to \Gamma(TM)$, $X \mapsto X_M$ can then be shown to be a Lie algebra homomorphism.
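The definition can be made concrete with a small computer algebra sketch. The following illustrative example (not from the source; the names `action` and `theta` are ours) uses SymPy to compute the fundamental vector field of the rotation action of SO(2) on the plane by differentiating the orbit map at the identity:

```python
import sympy as sp

theta, x, y = sp.symbols('theta x y', real=True)

# Smooth action of SO(2) on M = R^2: rotation of the point (x, y) by angle theta.
action = sp.Matrix([x*sp.cos(theta) - y*sp.sin(theta),
                    x*sp.sin(theta) + y*sp.cos(theta)])

# Fundamental vector field of the generator d/dtheta of the Lie algebra so(2):
# differentiate the orbit map and evaluate at the identity (theta = 0).
X = action.diff(theta).subs(theta, 0)
print(X.T)  # Matrix([[-y, x]]), i.e. the rotational field -y d/dx + x d/dy
```

As expected, the generator of rotations produces the familiar rotational vector field on the plane.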
Application |
https://en.wikipedia.org/wiki/Granuloma%20annulare | Granuloma annulare (GA) is a common, sometimes chronic skin condition which presents as reddish bumps on the skin arranged in a circle or ring. It can initially occur at any age, though two-thirds of patients are under 30 years old, and it is seen most often in children and young adults. Females are two times as likely to have it as males.
Signs and symptoms
Aside from the visible rash, granuloma annulare is usually asymptomatic. Sometimes the rash may burn or itch. People with GA usually notice a ring of small, firm bumps (papules) over the backs of the forearms, hands or feet, often centered on joints or knuckles. The bumps are caused by the clustering of T cells below the skin. These papules start as very small, pimple-like bumps, which spread over time from that size to dime, quarter, and half-dollar size and beyond. Occasionally, multiple rings may join into one. Rarely, GA may appear as a firm nodule under the skin of the arms or legs. It also occurs on the sides and circumferentially at the waist, and without therapy can continue to be present for many years. Outbreaks continue to develop at the edges of the aging rings.
Causes
The condition is usually seen in otherwise healthy people. Occasionally, it may be associated with diabetes or thyroid disease. It has also been associated with autoimmune diseases such as systemic lupus erythematosus, rheumatoid arthritis, Lyme disease and Addison's disease. At this time, no conclusive common cause has been established among patients.
Pathology
Granuloma annulare microscopically consists of dermal epithelioid histiocytes around a central zone of mucin—a so-called palisaded granuloma.
Pathogenesis
Granuloma annulare is an idiopathic condition, though many catalysts have been proposed. Among these is skin trauma, UV exposure, vaccinations, tuberculin skin testing, and Borrelia and viral infections.
The mechanisms proposed at a molecular level vary even more. In 1977, Dahl et al. proposed that since the lesions of GA often |
https://en.wikipedia.org/wiki/Dendrobranchiata | Dendrobranchiata is a suborder of decapods, commonly known as prawns. There are 540 extant species in seven families, and a fossil record extending back to the Devonian. They differ from related animals, such as Caridea and Stenopodidea, by the branching form of the gills and by the fact that they do not brood their eggs, but release them directly into the water. They may grow to considerable length and mass, and are widely fished and farmed for human consumption.
Shrimp and prawns
While Dendrobranchiata and Caridea belong to different suborders of Decapoda, they are very similar in appearance, and in many contexts such as commercial farming and fisheries, they are both often referred to as "shrimp" and "prawn" interchangeably. In the United Kingdom, the word "prawn" is more common on menus than "shrimp", while the opposite is the case in North America. The term "prawn" is also loosely used to describe any large shrimp, especially those that come 15 (or fewer) to the pound (such as "king prawns", yet sometimes known as "jumbo shrimp"). Australia and some other Commonwealth nations follow this British usage to an even greater extent, using the word "prawn" almost exclusively.
Description
Together with other swimming Decapoda, Dendrobranchiata show the "caridoid facies", or shrimp-like form. The body is typically robust, and can be divided into a cephalothorax (head and thorax fused together) and a pleon (abdomen). The body is generally slightly flattened side-to-side. The largest species is Penaeus monodon.
Head
The most conspicuous appendages arising from the head are the antennae. The first pair are biramous (having two flagella), except in Luciferidae, and are relatively small. The second pair can be 2–3 times the length of the body and are always uniramous (having a single flagellum). The mouthparts comprise pairs of mandibles, maxillules and maxillae, arising from the head, and three pairs of maxillipeds, arising from t |
https://en.wikipedia.org/wiki/Alpha%20recursion%20theory | In recursion theory, α recursion theory is a generalisation of recursion theory to subsets of admissible ordinals $\alpha$. An admissible set is closed under $\Sigma_1(L_\alpha)$ functions, where $L_\xi$ denotes a rank of Gödel's constructible hierarchy. $\alpha$ is an admissible ordinal if $L_\alpha$ is a model of Kripke–Platek set theory. In what follows $\alpha$ is considered to be fixed.
The objects of study in α recursion are subsets of $\alpha$. These sets are said to have some properties:
A set $A \subseteq \alpha$ is said to be α-recursively-enumerable if it is $\Sigma_1$ definable over $L_\alpha$, possibly with parameters from $L_\alpha$ in the definition.
A is α-recursive if both A and $\alpha \setminus A$ (its relative complement in $\alpha$) are α-recursively-enumerable. It's of note that α-recursive sets are members of $L_{\alpha+1}$ by definition of $L$.
Members of $L_\alpha$ are called α-finite and play a similar role to the finite numbers in classical recursion theory.
Members of are called -arithmetic.
There are also some similar definitions for functions mapping $\alpha$ to $\alpha$:
A function mapping $\alpha$ to $\alpha$ is α-recursively-enumerable, or α-partial recursive, iff its graph is $\Sigma_1$-definable on $L_\alpha$.
A function mapping $\alpha$ to $\alpha$ is α-recursive iff its graph is $\Delta_1$-definable on $L_\alpha$.
Additionally, a function mapping $\alpha$ to $\alpha$ is α-arithmetical iff there exists some $n \in \omega$ such that the function's graph is $\Sigma_n$-definable on $L_\alpha$.
Additional connections between recursion theory and α recursion theory can be drawn, although explicit definitions may not have yet been written to formalize them:
The functions $\Delta_0$-definable in $L_\alpha$ play a role similar to those of the primitive recursive functions.
We say R is a reduction procedure if it is α-recursively enumerable and every member of R is of the form $\langle H, J, K \rangle$ where H, J, K are all α-finite.
A is said to be α-recursive in B if there exist reduction procedures $R_0, R_1$ such that:
$K \subseteq A \leftrightarrow \exists H, J \, [\langle H, J, K \rangle \in R_0 \wedge H \subseteq B \wedge J \subseteq \alpha \setminus B]$
$K \subseteq \alpha \setminus A \leftrightarrow \exists H, J \, [\langle H, J, K \rangle \in R_1 \wedge H \subseteq B \wedge J \subseteq \alpha \setminus B]$
If A is recursive in B this is written $A \le_\alpha B$. By this definition A is recursive in $\varnothing$ (the empty set) if and only if A is recursive. However A being recursive in B is not equivalent to A being $\Sigma_1(L_\alpha[B])$.
We say A is regular if $\forall \beta < \alpha : A \cap \beta \in L_\alpha$, or in other words if every initial portion of A is α-finite.
https://en.wikipedia.org/wiki/Rising%20Sun%20Flag | The is a Japanese flag that consists of a red disc and sixteen red rays emanating from the disc. Like the Japanese national flag, the Rising Sun Flag symbolizes the sun.
The flag was originally used by feudal warlords in Japan during the Edo period (1603–1868 CE). On May 15, 1870, as a policy of the Meiji government, it was adopted as the war flag of the Imperial Japanese Army, and on October 7, 1889, it was adopted as the naval ensign of the Imperial Japanese Navy.
At present, the flag is flown by the Japan Maritime Self-Defense Force, and an eight-ray version is flown by the Japan Self-Defense Forces and the Japan Ground Self-Defense Force. The rising sun design is also seen in numerous scenes in daily life in Japan, such as in fishermen's banners hoisted to signify large catches of fish, flags to celebrate childbirth, and in flags for seasonal festivities.
The flag is controversial in Korea and China, where it is associated with Japanese militarism and imperialism.
History and design
The flag of Japan and the symbolism of the rising Sun has held symbolic meaning in Japan since the Asuka period (538–710 CE). The Japanese archipelago is east of the Asian mainland, and is thus where the Sun "rises". In 607 CE, an official correspondence that began with "from the Emperor of the rising sun" was sent to Chinese Emperor Yang of Sui. Japan is often referred to as "the land of the rising sun". In the 12th century work The Tale of the Heike, it was written that different samurai carried drawings of the Sun on their fans.
The Japanese word for Japan is 日本, which is pronounced Nippon or Nihon, and literally means "the origin of the sun". The character 日 means "sun" or "day"; 本 means "base" or "origin". The compound therefore means "origin of the sun" and is the source of the popular Western epithet "Land of the Rising Sun". The red disc symbolizes the Sun and the red lines are light rays shining from the rising sun.
The design of the Rising Sun Flag (Asahi) has been widely used si |
https://en.wikipedia.org/wiki/Yeast%20artificial%20chromosome | Yeast artificial chromosomes (YACs) are genetically engineered chromosomes derived from the DNA of the yeast Saccharomyces cerevisiae, which is then ligated into a bacterial plasmid. By inserting large fragments of DNA, from 100–1000 kb, the inserted sequences can be cloned and physically mapped using a process called chromosome walking. This is the process that was initially used for the Human Genome Project; however, due to stability issues, YACs were abandoned in favour of bacterial artificial chromosomes (BACs).
The bakers' yeast S. cerevisiae is one of the most important experimental organisms for studying eukaryotic molecular genetics.
Beginning with the initial research of Rankin et al., Struhl et al., and Hsiao et al., the inherently fragile chromosome was stabilized by discovering the necessary autonomously replicating sequence (ARS); a refined YAC utilizing this data was described in 1983 by Murray et al.
The primary components of a YAC are the ARS, centromere , and telomeres from S. cerevisiae. Additionally, selectable marker genes, such as antibiotic resistance and a visible marker, are utilized to select transformed yeast cells. Without these sequences, the chromosome will not be stable during extracellular replication, and would not be distinguishable from colonies without the vector.
Construction
A YAC is built using an initial circular DNA plasmid, which is typically cut into a linear DNA molecule using restriction enzymes; DNA ligase is then used to ligate a DNA sequence or gene of interest into the linearized DNA, forming a single large, circular piece of DNA. The basic generation of linear yeast artificial chromosomes can be broken down into 6 main steps:
Full chromosome III
Chromosome III is the third smallest chromosome in S. cerevisiae; its size was estimated from pulsed-field gel electrophoresis studies to be 300–360 kb.
This chromosome has been the subject of intensive study, not least because it contains the three genetic loci invo |
https://en.wikipedia.org/wiki/Markov%20algorithm | In theoretical computer science, a Markov algorithm is a string rewriting system that uses grammar-like rules to operate on strings of symbols. Markov algorithms have been shown to be Turing-complete, which means that they are suitable as a general model of computation and can represent any mathematical expression from its simple notation. Markov algorithms are named after the Soviet mathematician Andrey Markov, Jr.
Refal is a programming language based on Markov algorithms.
Description
Normal algorithms are verbal, that is, intended to be applied to strings in different alphabets.
The definition of any normal algorithm consists of two parts: an alphabet, which is a set of symbols, and a scheme. The algorithm is applied to strings of symbols of the alphabet. The scheme is a finite ordered set of substitution formulas. Each formula can be either simple or final. Simple substitution formulas are represented by strings of the form $L \to D$, where $L$ and $D$ are two arbitrary strings in the alphabet. Similarly, final substitution formulas are represented by strings of the form $L \to \cdot D$.
Here is an example of a normal algorithm scheme in the five-letter alphabet :
The process of applying the normal algorithm to an arbitrary string $V$ in the alphabet of this algorithm is a discrete sequence of elementary steps, consisting of the following. Let's assume that $V'$ is the word obtained in the previous step of the algorithm (or the original word $V$, if the current step is the first). If among the substitution formulas there is no left-hand side which is included in $V'$, then the algorithm terminates, and the result of its work is considered to be the string $V'$. Otherwise, the first of the substitution formulas whose left side is included in $V'$ is selected. If the substitution formula is of the final form $L \to \cdot D$, then out of all possible representations of the string $V'$ in the form $RLS$ (where $R$ and $S$ are arbitrary strings) the one with the shortest $R$ is chosen. Then the algorithm terminates and the result of its work
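The step rule described above is short enough to sketch in code. This is an illustrative interpreter (not from the source); rules are ordered (left, right, is_final) triples, and the leftmost occurrence of the first applicable left-hand side is rewritten at each step:

```python
def run_markov(rules, word, max_steps=10_000):
    """Apply a normal (Markov) algorithm scheme to a word.

    rules: ordered list of (left, right, is_final) substitution formulas.
    """
    for _ in range(max_steps):
        for left, right, is_final in rules:
            idx = word.find(left)                      # leftmost occurrence
            if idx != -1:
                word = word[:idx] + right + word[idx + len(left):]
                if is_final:
                    return word                        # final formula: stop
                break                                  # restart from formula 1
        else:
            return word                                # nothing applies: stop
    raise RuntimeError("step limit exceeded")

# Toy scheme over the alphabet {0, 1}: rewrite every 0 to 1.
print(run_markov([("0", "1", False)], "0101"))  # -> 1111
```

The `else` on the inner `for` fires only when no formula matched, which is exactly the termination condition of the scheme.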
https://en.wikipedia.org/wiki/Jugular%20lymph%20trunk | The jugular trunk is a lymphatic vessel in the neck. It is formed by vessels that emerge from the superior deep cervical lymph nodes and unite to efferents of the inferior deep cervical lymph nodes.
On the right side, this trunk ends in the junction of the internal jugular and subclavian veins, called the venous angle. On the left side it joins the thoracic duct. |
https://en.wikipedia.org/wiki/Powell%27s%20dog%20leg%20method | Powell's dog leg method, also called Powell's hybrid method, is an iterative optimisation algorithm for the solution of non-linear least squares problems, introduced in 1970 by Michael J. D. Powell. Similarly to the Levenberg–Marquardt algorithm, it combines the Gauss–Newton algorithm with gradient descent, but it uses an explicit trust region. At each iteration, if the step from the Gauss–Newton algorithm is within the trust region, it is used to update the current solution. If not, the algorithm searches for the minimum of the objective function along the steepest descent direction, known as the Cauchy point. If the Cauchy point is outside of the trust region, it is truncated to the boundary of the latter and taken as the new solution. If the Cauchy point is inside the trust region, the new solution is taken at the intersection between the trust region boundary and the line joining the Cauchy point and the Gauss–Newton step (dog leg step).
The name of the method derives from the resemblance between the construction of the dog leg step and the shape of a dogleg hole in golf.
Formulation
Given a least squares problem in the form
$$F(x) = \frac{1}{2} \| f(x) \|^2,$$
with $f : \mathbb{R}^n \to \mathbb{R}^m$, Powell's dog leg method finds the optimal point $x^*$ by constructing a sequence $x_k$ that converges to $x^*$. At a given iteration, the Gauss–Newton step is given by
$$\delta_{gn} = -\left( J^\top J \right)^{-1} J^\top f(x_k),$$
where $J$ is the Jacobian matrix, while the steepest descent direction is given by
$$\delta_{sd} = -J^\top f(x_k).$$
The objective function is linearised along the steepest descent direction:
$$F(x_k + t\,\delta_{sd}) \approx \frac{1}{2} \left\| f(x_k) + t\, J \delta_{sd} \right\|^2.$$
To compute the value of the parameter $t$ at the Cauchy point, the derivative of the last expression with respect to $t$ is imposed to be equal to zero, giving
$$t = \frac{\left\| J^\top f(x_k) \right\|^2}{\left\| J J^\top f(x_k) \right\|^2}.$$
Given a trust region of radius $\Delta$, Powell's dog leg method selects the update step $\delta_k$ as equal to:
$\delta_{gn}$, if the Gauss–Newton step is within the trust region ($\| \delta_{gn} \| \le \Delta$);
$\frac{\Delta}{\| \delta_{sd} \|} \delta_{sd}$ if both the Gauss–Newton and the steepest descent steps are outside the trust region ($\| t\,\delta_{sd} \| \ge \Delta$);
$t\,\delta_{sd} + s \left( \delta_{gn} - t\,\delta_{sd} \right)$ with $s$ such that $\| \delta_k \| = \Delta$, if the Gauss–Newton step is outside the trust region but the steepest descent step is inside (
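The three cases can be sketched as a single step-selection routine. This is an illustrative NumPy implementation (the function name and the quadratic solve for the interpolation parameter are ours), not a production trust-region solver:

```python
import numpy as np

def dogleg_step(J, f, radius):
    """Select one Powell dog-leg update step inside a trust region of given radius."""
    g = J.T @ f                                  # gradient of 0.5*||f||^2
    d_gn = -np.linalg.solve(J.T @ J, g)          # Gauss-Newton step
    if np.linalg.norm(d_gn) <= radius:
        return d_gn                              # case 1: GN step inside region
    t = (g @ g) / (g @ (J.T @ (J @ g)))          # Cauchy-point parameter
    d_sd = -t * g                                # steepest-descent (Cauchy) step
    if np.linalg.norm(d_sd) >= radius:
        return -radius * g / np.linalg.norm(g)   # case 2: truncate to boundary
    # Case 3: intersect the dog-leg segment d_sd -> d_gn with the boundary,
    # i.e. find s >= 0 with ||d_sd + s*(d_gn - d_sd)|| = radius.
    diff = d_gn - d_sd
    a, b = diff @ diff, 2 * (d_sd @ diff)
    c = d_sd @ d_sd - radius**2
    s = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return d_sd + s * diff
```

In a full solver this step would be accepted or rejected by comparing actual versus predicted reduction, with the radius adjusted accordingly.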
https://en.wikipedia.org/wiki/Mass%20chromatogram | A mass chromatogram is a representation of mass spectrometry data as a chromatogram, where the x-axis represents time and the y-axis represents signal intensity. The source data contains mass information; however, it is not graphically represented in a mass chromatogram in favor of visualizing signal intensity versus time. The most common use of this data representation is when mass spectrometry is used in conjunction with some form of chromatography, such as in liquid chromatography–mass spectrometry or gas chromatography–mass spectrometry. In this case, the x-axis represents retention time, analogous to any other chromatogram. The y-axis represents signal intensity or relative signal intensity. There are many different types of metrics that this intensity may represent, depending on what information is extracted from each mass spectrum.
Total ion current (TIC) chromatogram
The total ion current (TIC) chromatogram represents the summed intensity across the entire range of masses being detected at every point in the analysis. The range is typically several hundred mass-to-charge units or more. In complex samples, the TIC chromatogram often provides limited information as multiple analytes elute simultaneously, obscuring individual species.
Base peak chromatogram
The base peak chromatogram is similar to the TIC chromatogram, however it monitors only the most intense peak in each spectrum. This means that the base peak chromatogram represents the intensity of the most intense peak at every point in the analysis. Base peak chromatograms often have a cleaner look and thus are more informative than TIC chromatograms because the background is reduced by focusing on a single analyte at every point.
Extracted-ion chromatogram (EIC or XIC)
In an extracted-ion chromatogram (EIC or XIC), also called a reconstructed-ion chromatogram (RIC), one or more m/z values representing one or more analytes of interest are recovered ('extracted') from the entire data set for a chrom |
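The three chromatogram types described above reduce to simple column operations on a scans-by-m/z intensity matrix. A toy illustration (the data and variable names are ours), assuming NumPy:

```python
import numpy as np

# Toy LC-MS run: rows are scans (time points), columns are m/z channels.
mz = np.array([100.0, 150.0, 200.0])           # hypothetical m/z axis
intensities = np.array([[10.0,  0.0,  5.0],
                        [20.0, 30.0,  5.0],
                        [ 0.0, 80.0, 10.0]])

tic = intensities.sum(axis=1)                  # total ion current per scan
base_peak = intensities.max(axis=1)            # most intense peak per scan
xic_150 = intensities[:, mz == 150.0].ravel()  # extracted-ion trace at m/z 150
print(tic, base_peak, xic_150)
```

Real instrument data would use an m/z tolerance window for the extracted-ion trace rather than exact equality.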
https://en.wikipedia.org/wiki/Adaptive%20chosen-ciphertext%20attack | An adaptive chosen-ciphertext attack (abbreviated as CCA2) is an interactive form of chosen-ciphertext attack in which an attacker first sends a number of ciphertexts to be decrypted chosen adaptively, and then uses the results to distinguish a target ciphertext without consulting the oracle on the challenge ciphertext. In an adaptive attack, the attacker is further allowed adaptive queries to be asked after the target is revealed (but the target query is disallowed). It extends the indifferent (non-adaptive) chosen-ciphertext attack (CCA1), where the second stage of adaptive queries is not allowed. Charles Rackoff and Dan Simon defined CCA2 and suggested a system building on the non-adaptive CCA1 definition and system of Moni Naor and Moti Yung (which was the first treatment of chosen ciphertext attack immunity of public key systems).
In certain practical settings, the goal of this attack is to gradually reveal information about an encrypted message, or about the decryption key itself. For public-key systems, adaptive-chosen-ciphertexts are generally applicable only when they have the property of ciphertext malleability — that is, a ciphertext can be modified in specific ways that will have a predictable effect on the decryption of that message.
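Ciphertext malleability is easy to demonstrate with textbook (unpadded) RSA, whose multiplicative homomorphism underlies Bleichenbacher-style attacks. An illustrative toy (the tiny key is ours; real deployments use padding precisely to block this):

```python
# Textbook RSA is malleable: multiplying a ciphertext by s^e (mod n)
# multiplies the hidden plaintext by s.  Toy key: p=61, q=53.
n, e, d = 3233, 17, 2753
m = 65
c = pow(m, e, n)              # encrypt

s = 2
c_forged = (c * pow(s, e, n)) % n
print(pow(c_forged, d, n))    # decrypts to (m * s) % n = 130
```

The attacker never sees m, yet produces a ciphertext whose decryption relates to m in a predictable way.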
Practical attacks
Adaptive-chosen-ciphertext attacks were perhaps considered to be a theoretical concern, but not to have been manifested in practice, until 1998, when Daniel Bleichenbacher (then of Bell Laboratories) demonstrated a practical attack against systems using RSA encryption in concert with the PKCS#1 v1 encoding function, including a version of the Secure Sockets Layer (SSL) protocol used by thousands of web servers at the time.
The Bleichenbacher attacks, also known as the million message attack, took advantage of flaws within the PKCS #1 function to gradually reveal the content of an RSA encrypted message. Doing this requires sending several million test ciphertexts to the decryption device (e.g. SSL |
https://en.wikipedia.org/wiki/PortMidi | PortMidi is a computer library for real time input and output of MIDI data. It is designed to be portable to many different operating systems. PortMidi is part of the PortMusic project.
See also
PortAudio
External links
portmidi.h – definition of the API and contains the documentation for PortMidi
Audio libraries
Computer libraries |
https://en.wikipedia.org/wiki/S-box | In cryptography, an S-box (substitution-box) is a basic component of symmetric key algorithms which performs substitution. In block ciphers, they are typically used to obscure the relationship between the key and the ciphertext, thus ensuring Shannon's property of confusion. Mathematically, an S-box is a nonlinear vectorial Boolean function.
In general, an S-box takes some number of input bits, m, and transforms them into some number of output bits, n, where n is not necessarily equal to m. An m×n S-box can be implemented as a lookup table with 2^m words of n bits each. Fixed tables are normally used, as in the Data Encryption Standard (DES), but in some ciphers the tables are generated dynamically from the key (e.g. the Blowfish and the Twofish encryption algorithms).
Example
One good example of a fixed table is the S-box from DES (S5), mapping 6-bit input into a 4-bit output:
Given a 6-bit input, the 4-bit output is found by selecting the row using the outer two bits (the first and last bits), and the column using the inner four bits. For example, an input "011011" has outer bits "01" and inner bits "1101"; the corresponding output would be "1001".
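The row/column indexing can be sketched directly in code. The S5 table below is transcribed from the published DES standard (FIPS 46); the helper name `s5` is ours:

```python
# DES S-box S5: 4 rows x 16 columns of 4-bit outputs (FIPS 46).
S5 = [
    [ 2, 12,  4,  1,  7, 10, 11,  6,  8,  5,  3, 15, 13,  0, 14,  9],
    [14, 11,  2, 12,  4,  7, 13,  1,  5,  0, 15, 10,  3,  9,  8,  6],
    [ 4,  2,  1, 11, 10, 13,  7,  8, 15,  9, 12,  5,  6,  3,  0, 14],
    [11,  8, 12,  7,  1, 14,  2, 13,  6, 15,  0,  9, 10,  4,  5,  3],
]

def s5(six_bits: int) -> int:
    """Outer two bits (first and last) select the row; inner four the column."""
    row = ((six_bits >> 4) & 0b10) | (six_bits & 0b01)
    col = (six_bits >> 1) & 0b1111
    return S5[row][col]

print(f"{s5(0b011011):04b}")  # input 011011 -> row 01, column 1101 -> 1001
```

This reproduces the worked example from the text: input "011011" maps to output "1001".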
Analysis and properties
When DES was first published in 1977, the design criteria of its S-boxes were kept secret to avoid compromising the technique of differential cryptanalysis (which was not yet publicly known). As a result, research in what made good S-boxes was sparse at the time. Rather, the eight S-boxes of DES were the subject of intense study for many years out of a concern that a backdoor (a vulnerability known only to its designers) might have been planted in the cipher. As the S-boxes are the only nonlinear part of the cipher, compromising those would compromise the entire cipher.
The S-box design criteria were eventually published (in ) after the public rediscovery of differential cryptanalysis, showing that they had been carefully tuned to increase resistance against this specific attack such that |
https://en.wikipedia.org/wiki/Umbrella%20%28song%29 | "Umbrella" is a song by Barbadian singer Rihanna, released worldwide on March 29, 2007, through Def Jam Recordings as the lead single and opening track from her third studio album, Good Girl Gone Bad (2007). Its featured artist, American rapper Jay-Z, co-wrote the song with its producers Tricky Stewart and Kuk Harrell, with additional writing contributions coming from The-Dream.
"Umbrella" was a global success, topping the charts in 17 countries such as Australia, Canada, Germany, Spain, the Republic of Ireland, Sweden, Switzerland, the United Kingdom, and the United States. In the UK, where the song's chart performance coincided with prolonged rain and flooding, it was one of the most played songs on radio in the 2000s decade. It managed to stay atop the UK Singles Chart for 10 consecutive weeks, the longest run at number one for any single of that decade, and is also one of the few songs to top the chart for at least 10 weeks. As one of the highest digital debuts in the United States at the time, it remained atop the US Billboard Hot 100 for seven consecutive weeks.
The single's accompanying music video, directed by Chris Applebaum and featuring Rihanna's nude body covered in silver paint, earned her a Video of the Year at the 2007 MTV Video Music Awards and Most Watched Video on MuchMusic.com at MuchMusic Video Awards. "Umbrella" has been covered by several notable performers across various musical genres, including All Time Low, the Baseballs, Train, Manic Street Preachers, McFly, Mike Shinoda of Linkin Park, OneRepublic, Taylor Swift, and Vanilla Sky. Rihanna performed the song at the 2007 MTV Movie Awards and at the 2008 BRIT Awards, and also included it as the closing number of the Good Girl Gone Bad Tour (2008), the Last Girl on Earth (2010), and the Loud Tour (2011) as well as in the Diamonds World Tour (2013), and the Anti World Tour (2016). "Umbrella" is also a playable song in the 2012 video game Just Dance 4.
Background and development
Amer |
https://en.wikipedia.org/wiki/David%20Chalmers | David John Chalmers (; born 20 April 1966) is an Australian philosopher and cognitive scientist specializing in the areas of philosophy of mind and philosophy of language. He is a professor of philosophy and neural science at New York University, as well as co-director of NYU's Center for Mind, Brain and Consciousness (along with Ned Block). In 2006, he was elected a Fellow of the Australian Academy of the Humanities. In 2013, he was elected a Fellow of the American Academy of Arts & Sciences.
Chalmers is best known for formulating the hard problem of consciousness.
He and David Bourget cofounded PhilPapers, a database of journal articles for philosophers.
Early life and education
Chalmers was born in Sydney, New South Wales, in 1966, and subsequently grew up in Adelaide, South Australia, where he attended Unley High School. His father, Alan Chalmers, is also a noted philosopher of science.
As a child, he experienced synesthesia. He began coding and playing computer games at age 10 on a PDP-10 at a medical center. He also performed exceptionally in mathematics, and secured a bronze medal in the International Mathematical Olympiad. When Chalmers was 13 he read Douglas Hofstadter's 1979 book Gödel, Escher, Bach, which awakened an interest in philosophy.
Chalmers received his undergraduate degree in pure mathematics from the University of Adelaide in Australia. After graduating Chalmers spent six months reading philosophy books while hitchhiking across Europe, before continuing his studies at the University of Oxford, where he was a Rhodes Scholar but eventually withdrew from the course. In 1993, Chalmers received his PhD in philosophy and cognitive science from Indiana University Bloomington under Douglas Hofstadter, writing a doctoral thesis entitled Toward a Theory of Consciousness. He was a postdoctoral fellow in the Philosophy-Neuroscience-Psychology program directed by Andy Clark at Washington University in St. Louis from 1993 to 1995.
Career
In 1994, Ch |
https://en.wikipedia.org/wiki/Autoimmune%20autonomic%20ganglionopathy | Autoimmune autonomic ganglionopathy is a type of immune-mediated autonomic failure that is associated with antibodies against the ganglionic nicotinic acetylcholine receptor present in sympathetic, parasympathetic, and enteric ganglia. Typical symptoms include gastrointestinal dysmotility, orthostatic hypotension, and tonic pupils. Many cases have a sudden onset, but others worsen over time, resembling degenerative forms of autonomic dysfunction. For milder cases, supportive treatment is used to manage symptoms. Plasma exchange, intravenous immunoglobulin, corticosteroids, or immunosuppression have been used successfully to treat more severe cases.
Signs and symptoms
Although symptoms of AAG can vary from patient to patient, the symptoms are those of dysautonomia. Hallmarks include:
Gastrointestinal dysmotility, including lack of appetite, nausea, constipation, diarrhea
Anhidrosis (decreased ability to sweat), often preceded by excessive sweating
Bladder dysfunction (neurogenic bladder)
Small fiber peripheral neuropathy
Severe orthostatic hypotension
Pupillary dysfunction
Syncope (fainting)
Sicca syndrome (chronic dryness of the eyes and mouth)
No indication from the history or physical examination of cerebellar, striatal, pyramidal, and extrapyramidal dysfunction, as these features suggest the more serious multiple system atrophy.
Causes
The cause is generally either paraneoplastic syndrome or idiopathic. In idiopathic AAG, the body's own immune system targets a receptor in the autonomic ganglia, which is part of a peripheral nerve fiber. If the AAG is paraneoplastic, they have a form of cancer, and their immune system has produced paraneoplastic antibodies in response to the cancer.
Diagnosis
Traditional autonomic testing is used to aid in the diagnosis of AAG. These tests can include a tilt table test (TTT), thermoregulatory sweat test (TST), quantitative sudomotor autonomic reflex testing (QSART) and various blood panels. Additionally, a blood test showi |
https://en.wikipedia.org/wiki/Chlorarachniophyte | The chlorarachniophytes are a small group of exclusively marine algae widely distributed in tropical and temperate waters. They are typically mixotrophic, ingesting bacteria and smaller protists as well as conducting photosynthesis. Normally they have the form of small amoebae, with branching cytoplasmic extensions that capture prey and connect the cells together, forming a net. They may also form flagellate zoospores, which characteristically have a single subapical flagellum that spirals backwards around the cell body, and walled coccoid cells.
The chloroplasts were presumably acquired by ingesting some green alga. They are surrounded by four membranes, the outermost of which is continuous with the endoplasmic reticulum, and contain a small nucleomorph between the middle two, which is a remnant of the alga's nucleus. This contains a small amount of DNA and divides without forming a mitotic spindle. The origin of the chloroplasts from green algae is supported by their pigmentation, which includes chlorophylls a and b, and by genetic similarities. The only other groups of algae that contain nucleomorphs are a few species of dinoflagellates, which also have plastids originating from green algae, and the cryptomonads, which acquired their chloroplasts from a red alga.
The chlorarachniophytes only include five genera, which show some variation in their life-cycles and may lack one or two of the stages described above. Genetic studies place them among the Cercozoa, a diverse group of amoeboid and amoeboid-like protozoa.
The chlorarachniophytes were placed before in the order Rhizochloridales, class Xanthophyceae (e.g., Smith, 1938), as algae, or in order Rhizochloridea, class Xanthomonadina (e.g., Deflandre, 1956), as protozoa.
So far sexual reproduction has only been reported in two species; Chlorarachnion reptans and Cryptochlora perforans.
Phylogeny
Based on the work of Hirakawa et al. 2011.
Taxonomy
Class Chlorarachniophyceae Hibberd & Norris 1984 [Chlorar |
https://en.wikipedia.org/wiki/George%20E.%20Pake%20Prize | The George E. Pake Prize is a prize that has been awarded annually by the American Physical Society since 1984. The recipients are chosen for "outstanding work by physicists combining original research accomplishments with leadership in the management of research or development in industry". The prize is named after George E. Pake (1924–2004), founding director of Xerox PARC, and as of 2007 it is valued at $5,000.
Recipients
Source: American Physical Society
See also
List of physics awards
External links
George E. Pake Prize, American Physical Society
Awards of the American Physical Society |
https://en.wikipedia.org/wiki/Potassium%20carbonate | Potassium carbonate is the inorganic compound with the formula K2CO3. It is a white salt, which is soluble in water and forms a strongly alkaline solution. It is deliquescent, often appearing as a damp or wet solid. Potassium carbonate is mainly used in the production of soap and glass.
History
Potassium carbonate is the primary component of potash and the more refined pearl ash or salts of tartar. The first patent issued by the US Patent Office was awarded to Samuel Hopkins in 1790 for an improved method of making potash and pearl ash.
In late 18th-century North America, before the development of baking powder, pearl ash was used as a leavening agent for quick breads.
Production
Potassium lye (in this context also called potash), a substance containing potassium carbonate, potassium bicarbonate, and potassium hydroxide, was historically produced by soaking wood ashes in water for roughly a day or more, discarding the undissolved ash, and then evaporating the remaining liquid.
Modern observation shows that this process produces greater yields when done with the ashes of banana peels, owing to their higher potassium carbonate content.
As previously mentioned, Samuel Hopkins created an improved method of making pearl ash. One of those procedures was putting the lye/potash in a kiln to remove impurities.
Today, potassium carbonate is prepared commercially by the reaction of potassium hydroxide with carbon dioxide:
2 KOH + CO2 → K2CO3 + H2O
From the solution crystallizes the sesquihydrate K2CO3·1.5H2O ("potash hydrate"). Heating this solid above 200 °C gives the anhydrous salt. In an alternative method, potassium chloride is treated with carbon dioxide in the presence of an organic amine to give potassium bicarbonate, which is then calcined:
2 KHCO3 → K2CO3 + H2O + CO2
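The mass balance of the hydroxide route above can be checked with a short stoichiometry sketch. The molar masses are rounded standard atomic weights and the function name is illustrative, not from any chemistry library:

```python
# Theoretical K2CO3 yield from 2 KOH + CO2 -> K2CO3 + H2O.
M_KOH = 39.10 + 16.00 + 1.01             # ~56.11 g/mol
M_K2CO3 = 2 * 39.10 + 12.01 + 3 * 16.00  # ~138.21 g/mol

def k2co3_yield(mass_koh_g: float) -> float:
    """Mass of K2CO3 (g) produced from a given mass of KOH with excess CO2."""
    mol_koh = mass_koh_g / M_KOH
    mol_k2co3 = mol_koh / 2              # 2 mol KOH per mol K2CO3
    return mol_k2co3 * M_K2CO3

# 2 mol of KOH (112.22 g) should give about 1 mol (138.21 g) of K2CO3.
print(round(k2co3_yield(112.22), 2))
```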
Applications
(historically) for soap, glass, and dishware production
as a mild drying agent where other drying agents, such as calc |
https://en.wikipedia.org/wiki/Roundness | Roundness is the measure of how closely the shape of an object approaches that of a mathematically perfect circle. Roundness applies in two dimensions, such as the cross sectional circles along a cylindrical object such as a shaft or a cylindrical roller for a bearing. In geometric dimensioning and tolerancing, control of a cylinder can also include its fidelity to the longitudinal axis, yielding cylindricity. The analogue of roundness in three dimensions (that is, for spheres) is sphericity.
Roundness is dominated by the shape's gross features rather than the definition of its edges and corners, or the surface roughness of a manufactured object. A smooth ellipse can have low roundness, if its eccentricity is large. Regular polygons increase their roundness with increasing numbers of sides, even though they are still sharp-edged.
In geology and the study of sediments (where three-dimensional particles are most important), roundness is considered to be the measurement of surface roughness and the overall shape is described by sphericity.
Simple definitions
The ISO definition of roundness is the ratio of the radii of the inscribed and circumscribed circles, i.e. the maximum and minimum sizes for circles that are just sufficient to fit inside and to enclose the shape, respectively.
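This ratio can be computed directly from sampled boundary points. The sketch below uses a fixed centre for simplicity (reference-circle fitting in real metrology is more involved) and shows that regular polygons approach a roundness of 1 as the number of sides grows, matching the theoretical value cos(π/n):

```python
import math

def roundness(points, center=(0.0, 0.0)):
    """ISO-style roundness about a given centre: ratio of the inscribed-circle
    radius to the circumscribed-circle radius (1.0 = perfect circle)."""
    cx, cy = center
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    return min(radii) / max(radii)

def regular_polygon(n, r=1.0, samples_per_edge=50):
    """Points sampled densely along the edges of a regular n-gon."""
    verts = [(r * math.cos(2 * math.pi * k / n),
              r * math.sin(2 * math.pi * k / n)) for k in range(n)]
    pts = []
    for i in range(n):
        (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % n]
        for t in range(samples_per_edge):
            f = t / samples_per_edge
            pts.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return pts

# Roundness of a regular n-gon tends to 1 with increasing n.
for n in (3, 6, 12, 60):
    print(n, round(roundness(regular_polygon(n)), 4))
```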
Diameter
Having a constant diameter, measured at varying angles around the shape, is often considered to be a simple measurement of roundness. This is misleading.
Although constant diameter is a necessary condition for roundness, it is not a sufficient condition for roundness: shapes exist that have constant diameter but are far from round. Mathematical shapes such as the Reuleaux triangle and, an everyday example, the British 50p coin demonstrate this.
Radial displacements
Roundness does not describe radial displacements of a shape from some notional centre point, merely the overall shape.
This is important in manufacturing, such as for crankshafts and similar objects, where not only the roundnes |
https://en.wikipedia.org/wiki/Langton%27s%20loops | Langton's loops are a particular "species" of artificial life in a cellular automaton created in 1984 by Christopher Langton. They consist of a loop of cells containing genetic information, which flows continuously around the loop and out along an "arm" (or pseudopod), which will become the daughter loop. The "genes" instruct it to make three left turns, completing the loop, which then disconnects from its parent.
History
In 1952 John von Neumann created the first cellular automaton (CA) with the goal of creating a self-replicating machine. This automaton was necessarily very complex due to its computation- and construction-universality. In 1968 Edgar F. Codd reduced the number of states from 29 in von Neumann's CA to 8 in his. When Christopher Langton did away with the universality condition, he was able to significantly reduce the automaton's complexity. Its self-replicating loops are based on one of the simplest elements in Codd's automaton, the periodic emitter.
Specification
Langton's Loops run in a CA that has 8 states, and uses the von Neumann neighborhood with rotational symmetry.
As with Codd's CA, Langton's Loops consist of sheathed wires. The signals travel passively along the wires until they reach the open ends, when the command they carry is executed.
Colonies
Because of a particular property of the loops' "pseudopodia", they are unable to reproduce into the space occupied by another loop. Thus, once a loop is surrounded, it is incapable of reproducing, resulting in a coral-like colony with a thin layer of reproducing organisms surrounding a core of inactive "dead" organisms. Unless provided unbounded space, the colony's size will be limited. The maximum population will be asymptotic to , where A is the total area of the space in cells.
Encoding of the genome
The loops' genetic code is stored as a series of nonzero-zero state pairs. The standard loop's genome is illustrated in the picture at the to |
https://en.wikipedia.org/wiki/Shneider-Miles%20scattering | Shneider-Miles scattering (also referred to as collisional scattering or quasi-Rayleigh scattering) is the quasi-elastic scattering of electromagnetic radiation by charged particles in a small-scale medium with frequent particle collisions. Collisional scattering typically occurs in coherent microwave scattering of high neutral density, low ionization degree microplasmas such as atmospheric pressure laser-induced plasmas. Shneider-Miles scattering is characterized by a 90° phase shift between the incident and scattered waves and a scattering cross section proportional to the square of the incident driving frequency. Scattered waves are emitted in a short dipole radiation pattern. The variable phase shift present in semi-collisional scattering regimes allows for determination of a plasma's collisional frequency through coherent microwave scattering.
History
Mikhail Shneider and Richard Miles first described the phenomenon mathematically in their 2005 work on microwave diagnostics of small plasma objects. The scattering regime was experimentally demonstrated and formally named by Adam R. Patel and Alexey Shashurin and has been applied in the coherent microwave scattering diagnosis of small laser-induced plasma objects.
Physical description
A plasma, consisting of neutral particles, ions, and unbound electrons, responds to the oscillating electric field of incident electromagnetic radiation primarily through the motion of electrons (ions and neutral particles can often be regarded as stationary due to their larger mass). If the frequency of the incident radiation is sufficiently low and the plasma frequency is sufficiently high (corresponding to the Rayleigh scattering regime), the electrons will travel until the plasma object becomes polarized, counteracting the incident electric field and preventing further movement until the incident field reverses direction. If the frequency of the incident radiation is sufficiently high and the plasma frequency is sufficie |
https://en.wikipedia.org/wiki/ISO%2031-8 | ISO 31-8 is the part of international standard ISO 31 that defines names and symbols for quantities and units related to physical chemistry and molecular physics.
Quantities and units
Notes
In the tables of quantities and their units, the ISO 31-8 standard shows symbols for substances as subscripts (e.g., cB, wB, pB). It also notes that it is generally advisable to put symbols for substances and their states in parentheses on the same line, as in c(H2SO4).
Normative annexes
Annex A: Names and symbols of the chemical elements
This annex contains a list of elements by atomic number, giving the names and standard symbols of the chemical elements from atomic number 1 (hydrogen, H) to 109 (unnilennium, Une).
The list given in ISO 31-8:1992 was quoted from the 1998 IUPAC "Green Book" Quantities, Units and Symbols in Physical Chemistry and adds in some cases in parentheses the Latin name for information, where the standard symbol has no relation to the English name of the element. Since the 1992 edition of the standard was published, some elements with atomic number above 103 have been discovered and renamed.
Annex B: Symbols for chemical elements and nuclides
Symbols for chemical elements shall be written in roman (upright) type. The symbol is not followed by a full-stop.
Examples:
H He C Ca
Attached subscripts or superscripts specifying a nuclide or molecule have the following meanings and positions:
The nucleon number (mass number) is shown in the left superscript position (e.g., 14N)
The number of atoms of a nuclide is shown in the right subscript position (e.g., 14N2)
The proton number (atomic number) may be indicated in the left subscript position (e.g., 64Gd)
If necessary, a state of ionization or an excited state may be indicated in the right superscript position (e.g., state of ionization Na+)
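The four attachment positions above can be captured in a small formatter. This helper (illustrative, not part of the standard) renders a nuclide as a LaTeX-style string with left superscript for mass number, left subscript for atomic number, right subscript for atom count, and right superscript for ionization state:

```python
def nuclide(symbol, mass=None, protons=None, count=None, charge=None):
    """Render a nuclide per the ISO 31-8 Annex B positioning rules as a
    LaTeX-style string, e.g. nuclide("N", mass=14, count=2) -> "^{14}N_{2}"."""
    left = ""
    if mass is not None:
        left += f"^{{{mass}}}"       # nucleon (mass) number: left superscript
    if protons is not None:
        left += f"_{{{protons}}}"    # proton (atomic) number: left subscript
    right = ""
    if count is not None:
        right += f"_{{{count}}}"     # number of atoms: right subscript
    if charge is not None:
        right += f"^{{{charge}}}"    # ionization/excited state: right superscript
    return left + symbol + right

print(nuclide("N", mass=14))           # ^{14}N
print(nuclide("N", mass=14, count=2))  # ^{14}N_{2}
print(nuclide("Gd", protons=64))       # _{64}Gd
print(nuclide("Na", charge="+"))       # Na^{+}
```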
Annex C: pH
pH is defined operationally as follows. For a solution X, first measure the electromotive force EX of the galvanic cell
reference electrode |
https://en.wikipedia.org/wiki/Virtual%20appliance | A virtual appliance is a pre-configured virtual machine image, ready to run on a hypervisor; virtual appliances are a subset of the broader class of software appliances. Installation of a software appliance on a virtual machine and packaging that into an image creates a virtual appliance. Like software appliances, virtual appliances are intended to eliminate the installation, configuration and maintenance costs associated with running complex stacks of software.
A virtual appliance is not a complete virtual machine platform, but rather a software image containing a software stack designed to run on a virtual machine platform, which may be a Type 1 or Type 2 hypervisor. Like a physical computer, a hypervisor is merely a platform for running an operating system environment and does not provide application software itself.
Many virtual appliances provide a Web page user interface to permit their configuration. A virtual appliance is usually built to host a single application; it therefore represents a new way to deploy applications on a network.
File formats
Virtual appliances are provided to the user or customer as files, via either electronic downloads or physical distribution. The file format most commonly used is the Open Virtualization Format (OVF). It may also be distributed as an Open Virtual Appliance (OVA); the .ova file format is interchangeable with .ovf. The Distributed Management Task Force (DMTF) publishes the OVF specification documentation. Most virtualization platforms, including those from VMware, Microsoft, Oracle, and Citrix, can install virtual appliances from an OVF file.
Grid computing
Virtualization solves a key problem in the grid computing arena – namely, the reality that any sufficiently large grid will inevitably consist of a wide variety of heterogeneous hardware and operating system configurations. Adding virtual appliances into the picture allows for extremely rapid provisioning of grid nodes and importantly, cleanly decouples the grid |
https://en.wikipedia.org/wiki/Overdrive%20voltage | Overdrive voltage, usually abbreviated as VOV, is typically referred to in the context of MOSFET transistors. The overdrive voltage is defined as the voltage between transistor gate and source (VGS) in excess of the threshold voltage (VTH) where VTH is defined as the minimum voltage required between gate and source to turn the transistor on (allow it to conduct electricity). Due to this definition, overdrive voltage is also known as "excess gate voltage" or "effective voltage." Overdrive voltage can be found using the simple equation: VOV = VGS − VTH.
Technology
VOV is important as it directly affects the output drain terminal current (ID) of the transistor, an important property of amplifier circuits. By increasing VOV, ID can be increased until saturation is reached.
Overdrive voltage is also important because of its relationship to VDS, the drain voltage relative to the source, which can be used to determine the region of operation of the MOSFET. The table below shows how to use overdrive voltage to understand what region of operation the MOSFET is in:
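Using the overdrive voltage, the usual first-order conditions for an enhancement-mode NMOS device can be sketched as follows (a simplified model that ignores channel-length modulation and subthreshold conduction; the function name is illustrative):

```python
def mosfet_region(vgs, vds, vth):
    """Operating region of an NMOS transistor from the overdrive voltage
    Vov = Vgs - Vth, using the standard first-order conditions."""
    vov = vgs - vth
    if vov <= 0:
        return "cutoff"      # channel not inverted: Vgs <= Vth
    if vds < vov:
        return "triode"      # Vds below overdrive: resistive operation
    return "saturation"      # Vds >= Vov: Id roughly independent of Vds

print(mosfet_region(vgs=0.3, vds=1.0, vth=0.5))  # cutoff
print(mosfet_region(vgs=1.2, vds=0.3, vth=0.5))  # triode
print(mosfet_region(vgs=1.2, vds=1.0, vth=0.5))  # saturation
```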
A more physics-related explanation follows:
In an NMOS transistor, the channel region under zero bias has an abundance of holes (i.e., it is p-type silicon). By applying a negative gate bias (VGS < 0) we attract more holes, and this is called accumulation. A positive gate voltage (VGS > 0) will attract electrons and repel holes, and this is called depletion because we are depleting the number of holes. At a critical voltage called the threshold voltage (VTH) the channel will actually be so depleted of holes and rich in electrons that it will INVERT to being n-type silicon, and this is called the inversion region.
As we increase this voltage, VGS, beyond VTH, we are said to be then overdriving the gate by creating a stronger channel, hence the overdrive (often called Vov, Vod, or Von) is defined as (VGS − VTH).
See also
MOSFET
Threshold voltage
Electronic amplifier
Short-channel effect
Biasing |
https://en.wikipedia.org/wiki/Obturator%20membrane | The obturator membrane is a thin fibrous sheet, which almost completely closes the obturator foramen.
Its fibers are arranged in interlacing bundles mainly transverse in direction; the uppermost bundle is attached to the obturator tubercles and completes the obturator canal for the passage of the obturator vessels and nerve.
The membrane is attached to the sharp margin of the obturator foramen except at its lower lateral angle, where it is fixed to the pelvic surface of the inferior ramus of the ischium, i.e., within the margin.
Both obturator muscles are connected with this membrane.
Additional images |
https://en.wikipedia.org/wiki/GUVI | GUVI (an acronym for Grab Your Vernacular Imprint) is an IIT Madras and IIM Ahmedabad incubated company based in Chennai, India. It was founded by ex-PayPal engineers Arun Prakash, Sridevi Arun Prakash, and SP Balamurugan in 2014. GUVI offers free and paid learning courses on various IT and tech domains in Indian vernacular languages such as Tamil, Telugu, Hindi, Kannada, Bengali, and English.
GUVI's mission is "to make technical education available to all in their native languages". In 2022, Indian Information Technology and Consulting Service giant HCL acquired a majority stake in the vernacular edtech company that provides diverse professional and certificate tech courses to students, graduates, and working professionals who wish to upskill themselves.
History
The founder trio comprising Arun Prakash, Sridevi Arun Prakash, and SP Balamurugan, started GUVI as a volunteering initiative in the form of a YouTube channel back in 2011 while they were working for PayPal. They used to post videos, tutorials, and practice material explaining technical terminologies and concepts in vernacular languages like Tamil, Telugu, and Marathi. The founders’ primary goal was to bring tech closer to the learners not fluent in English.
It all started when GUVI’s CEO and founder, Arun Prakash went to attend an alumni meet at his college where he got the chance to meet and interact with the current batch of students. During the interaction, the students seemed to lack basic technical knowledge. So, Arun started teaching and explaining those tech concepts in the student’s native language. They seemed to grasp the concepts and understand them very well.
This made Arun realize the gaping skill gap in college students because of the high dependency on English for tech education. He then discussed his concern with Sridevi and Balamurugan. Then they decided to put out and teach technical skills to students in vernacular language to bridge the gap in tech education caused by the language |
https://en.wikipedia.org/wiki/Inverse%20search | Inverse search (also called "reverse search") is a feature of some non-interactive typesetting programs, such as LaTeX and GNU LilyPond. These programs read an abstract, textual, definition of a document as input, and convert this into a graphical format such as DVI or PDF. In a windowing system, this typically means that the source code is entered in one editor window, and the resulting output is viewed in a different output window. Inverse search means that a graphical object in the output window works as a hyperlink, which brings you back to the line and column in the editor, where the clicked object was defined. The inverse search feature is particularly useful during proofreading.
Implementations
In TeX and LaTeX, the package srcltx provides an inverse search feature through DVI output files (e.g., with yap or Xdvi), while vpe, pdfsync and SyncTeX provide similar functionality for PDF output, among other techniques. The Comparison of TeX editors has a column on support of inverse search; most of them provide it nowadays.
GNU LilyPond provides an inverse search feature through PDF output files, since version 2.6. The program calls this feature point-and-click.
Many integrated development environments for programming use inverse search to display compilation error messages, and during debugging when a breakpoint is hit. |
https://en.wikipedia.org/wiki/Manta%20Matcher | Manta Matcher is a global online database for manta rays.
Creation
It is one of the Wildbook Web applications developed by Wild Me, a 501(c)(3) not-for-profit organization in the United States, and was created in partnership with Andrea Marshall of the Marine Megafauna Foundation.
Manta rays have unique spot patterning on their undersides, which allows for individual identification. Scuba divers around the world can photograph mantas and upload their manta identification photographs to the Manta Matcher website, supporting global research and conservation efforts.
Identification of rays
Manta Matcher is pattern-matching software that eases researcher workload; key spot pattern features are extracted using a scale-invariant feature transform (SIFT) algorithm, which can cope with complications presented by highly variable spot patterns and low contrast photographs.
Purpose and research supported
This citizen science tool is free to use by researchers worldwide. Manta Matcher represents a global initiative to centralize manta ray sightings and facilitate research on these vulnerable species through collaborative studies, including the cross-referencing of regional databases.
Manta Matcher has already supported research that contributed to the listing of reef mantas (Manta alfredi) on Appendix 1 of the Convention on Migratory Species in November 2014. |
https://en.wikipedia.org/wiki/Juan%20Pablo%20Duarte | Juan Pablo Duarte y Díez (26 January 1813 – 15 July 1876) was a Dominican military leader, writer, activist, and nationalist politician who was the foremost of the founding fathers of the Dominican Republic and bears the title of Father of the Nation. As one of the most celebrated figures in Dominican history, Duarte is considered a national hero and revolutionary visionary in the modern Dominican Republic, who along with military general Ramón Matías Mella and Francisco del Rosario Sánchez, organized and promoted La Trinitaria, a secret society that eventually led to the Dominican revolt and independence from Haitian rule in 1844 and the start of the Dominican War of Independence.
Born in a humble working class family in 1813, his desire for knowledge and his dreams of improvement led him to Europe, where he strengthened his liberal ideas. These ideas formulated the outline for establishing an independent Dominican state. Upon returning, he voluntarily dedicated himself to teaching in the streets, improvising a school in his father's business, determined that the people of his era assimilate his ideals of revolutionary enlightenment.
Duarte became an officer in the National Guard and a year later in 1843 he participated in the "Reformist Revolution" against the dictatorship of Jean-Pierre Boyer, who threatened to invade the western part of the island with the intention of unifying it. After the defeat of the Haitian President and the proclamation of the Dominican Republic in 1844, the Board formed to designate the first ruler of the nation and elected Duarte by a strong majority vote to preside over the nation but he declined the proposal, while Tomás Bobadilla took office instead.
Duarte helped inspire and finance the Dominican War of Independence, paying a heavy toll which would eventually ruin him financially. Duarte also disagreed strongly with royalist and pro-annexation sectors in the nation, especially with the wealthy caudillo and military strongman Ped |
https://en.wikipedia.org/wiki/Apple%20II%20character%20set | Apple II text mode uses the 7-bit ASCII (us-ascii) character set. The high-bit is set to display in normal mode on the 40x24 text screen.
Character sets
Apple II / Apple II plus
The original Signetics 2513 character generator chip has 64 glyphs for upper case, numbers, symbols, and punctuation characters. Each 5x7 pixel bitmap matrix is displayed in a 7x8 character cell on the text screen. The 64 characters can be displayed in INVERSE in the range $00 to $3F, FLASHing in the range $40 to $7F, and NORMAL mode in the range $80 to $FF. Normal mode characters are repeated in the $80 to $FF range.
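The three display ranges can be expressed as a small lookup over the screen-code byte (an illustrative helper, not Apple firmware):

```python
def display_mode(screen_code):
    """Display mode of an Apple II / II plus text-screen byte, per the
    ranges above: $00-$3F INVERSE, $40-$7F FLASH, $80-$FF NORMAL."""
    if not 0 <= screen_code <= 0xFF:
        raise ValueError("screen code must fit in one byte")
    if screen_code <= 0x3F:
        return "INVERSE"
    if screen_code <= 0x7F:
        return "FLASH"
    return "NORMAL"

# The same 64 glyphs repeat in each range; the high bits select the mode.
print(display_mode(0x01))  # INVERSE
print(display_mode(0x41))  # FLASH
print(display_mode(0xC1))  # NORMAL
```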
To display lowercase letters, applications can run in the graphics modes and use custom fonts, rather than running in text mode using the font in ROM.
Apple IIe
Apple IIc
Apple IIc alternate
Apple IIGS
Apple II MouseText character set |
https://en.wikipedia.org/wiki/Total%20complement%20activity | Total complement activity (TCA) refers to a series of tests that determine the functioning of the complement system in an individual.
Tests
A variety of tests can be used to measure TCA, but the most commonly used one is the CH50 test. Other tests include the liposome immunoassay (LIA), single tube titration method, and the plate-hemolysis method.
CH50 Procedure
The test is based on the capacity of an individual's serum to lyse sheep erythrocytes coated with anti-sheep antibodies (preferably rabbit IgG). The individual's serum is serially diluted, and the dilution at which 50% of the sheep red blood cells are lysed is recorded as the CH50.
The CH50 is testing the classical complement pathway in an individual thus requiring functioning C1-C9 factors.
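The dilution-to-50%-lysis step can be sketched numerically. The helper below estimates the CH50 titre by linear interpolation between measured lysis fractions at serial dilutions; this is illustrative only, as laboratory CH50 assays typically use the von Krogh transformation rather than simple interpolation:

```python
def ch50(dilutions, lysis_fractions):
    """Reciprocal serum dilution at which 50% haemolysis occurs, found by
    linear interpolation. `dilutions` are reciprocal dilution factors in
    increasing order; `lysis_fractions` are measured fractions lysed."""
    pts = list(zip(dilutions, lysis_fractions))
    for (d0, f0), (d1, f1) in zip(pts, pts[1:]):
        lo, hi = sorted((f0, f1))
        if lo <= 0.5 <= hi:
            # Interpolate the dilution giving exactly 50% lysis.
            return d0 + (0.5 - f0) * (d1 - d0) / (f1 - f0)
    raise ValueError("50% lysis not bracketed by the measurements")

# Lysis falls as serum is diluted further; 50% lies between 1:40 and 1:80.
print(ch50([10, 20, 40, 80, 160], [0.95, 0.85, 0.60, 0.30, 0.10]))
```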
CH50 Interpretation
If an individual has deficient or malfunctioning complement factors, then at baseline they have a decreased capacity to lyse the erythrocytes. Any dilution of their serum further impairs this function, so 50% lysis is reached at a lower dilution. In contrast, an individual with increased complement levels or activity has an elevated CH50, since greater dilution is needed to reach the 50% lysis mark.
Decreased CH50 values may be seen in cirrhosis or hepatitis as a result of impaired complement production in the liver. It can also be seen in systemic lupus erythematosus as a result of increased usage of complement factors due to the pathology of the autoimmune condition. It is decreased during attacks of hereditary angioedema (but those with the disease have a normal value in between attacks).
Increased CH50 values means that their complement is hyperfunctional relative to normal, and this may be seen in cancer or ulcerative colitis.
One can interpret the CH50 value along with the individual's complement factor values to help determine the etiology. For example, if an individual has norm |
https://en.wikipedia.org/wiki/The%20Emoji%20Code | The Emoji Code is a 2017 book by linguist Vyvyan Evans, analyzing emoji as a form of digital communication in the evolution of language and writing systems. The book argues that emoji constitutes a missing element in digital communication, vis-à-vis face-to-face spoken communication, by providing the "new body language of the digital age". As such, Evans claims that "emojis actually enhance our language [in digital communication] and our ability to wield it."
Thesis
The Emoji Code claims that Emoji fulfils a similar function in digital communication to gesture, body language and intonation in spoken interactions, helping to provide the emotional cues so often missing in textspeak. By clarifying our digital conversations, emojis can be seen as empowering, a force for good in twenty-first-century communication. As such, the argument is that Emoji is a paralanguage, facilitating better emotional resonance in digital communication, making us more effective communicators. In essence, "emojis are a visual representation that offer non-verbal cues in text, much in the same way that body language and vocal tone is a conduit of meaning in everyday face-to-face conversations." Evans also argues that Emoji has a vital function in educational contexts, especially among children.
A notable argument of the book is that Emoji fulfils a number of communicative functions that mirror those fulfilled by gesture, eye gaze, facial expression and tone of voice in face-to-face spoken interaction. Evans enumerates six functions of Emoji in digital communication: substitution, reinforcement, contradiction, metacommentary (or complementing), emphasis and discourse management. This analysis has been hailed as influential in how Emoji functions as a system of communication.
Reception
The Emoji Code has been criticized as ardently advocating Emoji as a system of communication when in fact, Emoji is a "gimmick" and something of a backward step, in terms of how we communicate. Evans respond |
https://en.wikipedia.org/wiki/Zero%20matrix | In mathematics, particularly linear algebra, a zero matrix or null matrix is a matrix all of whose entries are zero. It also serves as the additive identity of the additive group of matrices, and is denoted by the symbol or followed by subscripts corresponding to the dimension of the matrix as the context sees fit. Some examples of zero matrices are
Properties
The set of m × n matrices with entries in a ring K forms a ring. The zero matrix in this ring is the matrix with all entries equal to 0, where 0 is the additive identity in K.
The zero matrix is the additive identity in this ring. That is, for all m × n matrices A it satisfies the equation 0 + A = A + 0 = A.
There is exactly one zero matrix of any given dimension m×n (with entries from a given ring), so when the context is clear, one often refers to the zero matrix. In general, the zero element of a ring is unique, and is typically denoted by 0 without any subscript indicating the parent ring. Hence the examples above represent zero matrices over any ring.
The zero matrix also represents the linear transformation which sends all the vectors to the zero vector. It is idempotent, meaning that when it is multiplied by itself, the result is itself.
The zero matrix is the only matrix whose rank is 0.
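These properties can be verified with a few lines of matrix arithmetic on nested lists (the helper names are illustrative):

```python
def zeros(m, n):
    """The m×n zero matrix as nested lists."""
    return [[0] * n for _ in range(m)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
Z = zeros(2, 2)
print(mat_add(A, Z) == A)  # additive identity: A + 0 = A
print(mat_mul(Z, Z) == Z)  # idempotent: 0·0 = 0
print(mat_mul(Z, A) == Z)  # sends every vector/matrix to zero
```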
Occurrences
The mortal matrix problem is the problem of determining, given a finite set of n × n matrices with integer entries, whether they can be multiplied in some order, possibly with repetition, to yield the zero matrix. This is known to be undecidable for a set of six or more 3 × 3 matrices, or a set of two 15 × 15 matrices.
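Since the problem is undecidable in general, only a bounded search is possible: trying all products up to a fixed number of factors can confirm mortality but never refute it. A brute-force sketch (illustrative, exponential in the bound):

```python
from itertools import product as words

def zeros(n):
    return tuple(tuple(0 for _ in range(n)) for _ in range(n))

def mul(A, B):
    return tuple(tuple(sum(a * b for a, b in zip(row, col))
                       for col in zip(*B)) for row in A)

def mortal(matrices, max_len=6):
    """Semi-decision for the mortal matrix problem: search all products of
    up to max_len factors (with repetition) for the zero matrix."""
    n = len(matrices[0])
    target = zeros(n)
    for length in range(1, max_len + 1):
        for word in words(matrices, repeat=length):
            P = word[0]
            for M in word[1:]:
                P = mul(P, M)
            if P == target:
                return True
    return False  # inconclusive: no zero product within the bound

A = ((0, 1), (0, 0))  # nilpotent: A·A = 0
print(mortal([A]))    # True
```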
In ordinary least squares regression, if there is a perfect fit to the data, the annihilator matrix is the zero matrix.
See also
Identity matrix, the multiplicative identity for matrices
Matrix of ones, a matrix where all elements are one
Nilpotent matrix
Single-entry matrix, a matrix where all but one element is zero |
https://en.wikipedia.org/wiki/Triangle%20of%20auscultation | The triangle of auscultation is a relative thinning of the musculature of the back, situated along the medial border of the scapula which allows for improved listening to the lungs.
Boundaries
It has the following boundaries:
medially, by the inferior portion of the trapezius
inferiorly, by the latissimus dorsi
laterally, by the medial border of the scapula
The superficial floor of the triangle is formed by the lateral portion of the erector spinae muscles. Deep to these muscles are the osseous portions of the 6th and 7th ribs and the internal and external intercostal muscles.
Clinical significance
The triangle of auscultation is useful for pulmonary auscultation and thoracic procedures. Due to the relative thinning of the musculature of the back in the triangle, the posterior thoracic wall is closer to the skin surface, making respiratory sounds audible more clearly with a stethoscope. On the left side, the cardiac orifice of the stomach lies deep to the triangle. In the days before X-rays were discovered, the sound of swallowed liquids was auscultated over this triangle to confirm an oesophageal tumour.
To better expose the floor of the triangle, made up of the posterior thoracic wall at the 6th and 7th intercostal space, a patient is asked to fold their arms across their chest, laterally rotating the scapulae, while bending forward at the trunk, somewhat resembling the fetal position.
The triangle of auscultation can be used as a surgical approach path. It can also be used for applying a nerve block known as the rhomboid intercostal block, which can be used to relieve pain after rib fractures and thoracotomy. This nerve block is usually achieved by injection of the local anesthetic agent into the fascial plane between the rhomboid muscles and the intercostal muscles.
https://en.wikipedia.org/wiki/Aerobic%20organism | An aerobic organism or aerobe is an organism that can survive and grow in an oxygenated environment. The ability to exhibit aerobic respiration may yield benefits to the aerobic organism, as aerobic respiration yields more energy than anaerobic respiration. Energy production of the cell involves the synthesis of ATP by an enzyme called ATP synthase. In aerobic respiration, ATP synthase is coupled with an electron transport chain in which oxygen acts as a terminal electron acceptor. In July 2020, marine biologists reported that aerobic microorganisms (mainly), in "quasi-suspended animation", were found in organically poor sediments, up to 101.5 million years old, 250 feet below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found.
Types
Obligate aerobes need oxygen to grow. In a process known as cellular respiration, these organisms use oxygen to oxidize substrates (for example sugars and fats) and generate energy.
Facultative anaerobes use oxygen if it is available, but also have anaerobic methods of energy production.
Microaerophiles require oxygen for energy production, but are harmed by atmospheric concentrations of oxygen (21% O2).
Aerotolerant anaerobes do not use oxygen but are not harmed by it.
When an organism is able to survive in both oxygen and anaerobic environments, the use of the Pasteur effect can distinguish between facultative anaerobes and aerotolerant organisms. If the organism is using fermentation in an anaerobic environment, the addition of oxygen will cause facultative anaerobes to suspend fermentation and begin using oxygen for respiration. Aerotolerant organisms must continue fermentation in the presence of oxygen.
Facultative organisms grow in both oxygen rich media and oxygen free media.
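The four categories above amount to a small decision procedure over an organism's responses to oxygen. A minimal sketch, assuming hypothetical boolean inputs (the function name and decision order are illustrative, not a standard classification API):

```python
# Hypothetical classifier for the oxygen-tolerance categories described above.
# All parameter names are illustrative assumptions.

def classify(grows_with_o2, grows_without_o2, uses_o2, harmed_by_atmospheric_o2):
    """Return the oxygen-tolerance category of an organism."""
    if uses_o2 and not grows_without_o2:
        # Needs oxygen; microaerophiles are harmed by atmospheric (21%) O2.
        return "microaerophile" if harmed_by_atmospheric_o2 else "obligate aerobe"
    if uses_o2 and grows_without_o2:
        # Uses O2 when available, falls back to anaerobic energy production.
        return "facultative anaerobe"
    if grows_with_o2 and not uses_o2:
        # Does not use O2 but tolerates it (continues fermenting).
        return "aerotolerant anaerobe"
    return "obligate anaerobe"

print(classify(True, False, True, False))  # obligate aerobe
print(classify(True, True, True, False))   # facultative anaerobe
print(classify(True, True, False, False))  # aerotolerant anaerobe
```

The Pasteur-effect test in the previous paragraph corresponds to probing the `uses_o2` branch: a facultative anaerobe switches to respiration when oxygen is added, while an aerotolerant anaerobe does not.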
Aerobic respiration
Aerobic organisms use a process called aerobic respiration to create ATP from ADP and a phosphate. Glucose (a monosaccharide) is oxidized to power the |
https://en.wikipedia.org/wiki/Eusporangiate%20fern | Eusporangiate ferns are vascular spore plants, whose sporangia arise from several epidermal cells and not from a single cell as in leptosporangiate ferns. Typically these ferns have reduced root systems and sporangia that produce large amounts of spores (up to 7000 spores per sporangium in Christensenia).
There are four extant eusporangiate fern families, distributed among three classes. Each family is assigned to its own order.
Class Psilotopsida
Order Psilotales, family Psilotaceae – Whisk ferns (2 genera, about 17 species)
Order Ophioglossales, family Ophioglossaceae – Adder's-tongues (5 genera, about 80 species)
Class Equisetopsida
Order Equisetales, family Equisetaceae – Horsetails (1 genus, about 15 species)
Class Marattiopsida
Order Marattiales, family Marattiaceae – Marattoid ferns (6 genera, about 500 species)
The following diagram shows a likely phylogenetic placement of eusporangiate fern classes within the vascular plants.
Cladistics
While it is generally accepted that the leptosporangiate ferns are monophyletic, the eusporangiate ferns, as a group, are likely paraphyletic. In each of the three recently published studies shown in the following table, the four eusporangiate fern families together do not form a single clade. |
https://en.wikipedia.org/wiki/Harold%20V.%20McIntosh | Harold Varner McIntosh (1929–2015) was an American computational physicist who worked for many years in Mexico. Beyond physics, his research interests included quantum chemistry, programming language design, cellular automata, and flexagons.
Early life and education
McIntosh was born on March 11, 1929, in Colorado, and was an undergraduate at the Colorado State College of Agriculture and Mechanic Arts (now Colorado State University), where he graduated in 1949 with a degree in physics. He went on to graduate study at Cornell University, earning a master's degree in 1952. He began doctoral study at Brandeis University, but stopped before completing the program.
Much later in his career, he completed a doctorate in quantum chemistry at Uppsala University in Sweden in 1972.
Career and later life
After leaving Brandeis, McIntosh worked at the Aberdeen Proving Ground, and then in the Research Institute for Advanced Studies in Baltimore. In 1962 he moved to the University of Florida to work on quantum theory in the department of physics and astronomy there.
In 1964, McIntosh moved to Mexico, where he would work for the rest of his career. He started in the center for research and advanced studies of the Instituto Politécnico Nacional, which eventually became CINVESTAV; he worked there on the design of the CONVERT programming language. After another year as director of programming at the computer center of the National Autonomous University of Mexico (again working on programming language design) he returned in 1966 to the Instituto Politécnico Nacional as a professor in the School of Physics and Mathematics and coordinator for applied mathematics. Here, as well as the development of programming languages and software for scientific visualization, his interests returned to physics, including issues of degeneracy in the solution of physical equations, and quantum two-body problems involving a magnetic monopole (the so-called MICZ Kepler system, in which the M stands for |
https://en.wikipedia.org/wiki/History%20of%20computing%20in%20the%20Soviet%20Union | The history of computing in the Soviet Union began in the late 1940s, when the country began to develop its Small Electronic Calculating Machine (MESM) at the Kiev Institute of Electrotechnology in Feofaniya. Initial ideological opposition to cybernetics in the Soviet Union was overcome by a Khrushchev era policy that encouraged computer production.
By the early 1970s, the uncoordinated work of competing government ministries had left the Soviet computer industry in disarray. Due to a lack of common standards for peripherals and insufficient digital storage capacity, the Soviet Union's technology significantly lagged behind the West's semiconductor industry. The Soviet government decided to abandon development of original computer designs and encouraged cloning of existing Western systems (e.g. the 1801 CPU series was scrapped in favor of the PDP-11 ISA by the early 1980s).
Soviet industry was unable to mass-produce computers to acceptable quality standards and locally manufactured copies of Western hardware were unreliable. As personal computers spread to industries and offices in the West, the Soviet Union's technological lag increased.
Nearly all Soviet computer manufacturers ceased operations after the breakup of the Soviet Union. The few companies that survived into the 1990s used foreign components and never achieved large production volumes.
History
Early history
In 1936, an analog computer known as a water integrator was designed by Vladimir Lukyanov. It was the world's first computer for solving partial differential equations.
The Soviet Union began to develop digital computers after World War II. A universally programmable electronic computer was created by a team of scientists directed by Sergey Lebedev at the Kiev Institute of Electrotechnology in Feofaniya. The computer, known as MESM, became operational in 1950. Some authors have also described it as the first such computer in continental Europe, even though the Zuse Z4 and the Swedish BARK preceded it. Th