https://en.wikipedia.org/wiki/Momentum%20operator
In quantum mechanics, the momentum operator is the operator associated with the linear momentum. The momentum operator is, in the position representation, an example of a differential operator. For the case of one particle in one spatial dimension, the definition is

p̂ = −iħ ∂/∂x,

where ħ is Planck's reduced constant, i is the imaginary unit, x is the spatial coordinate, and a partial derivative (denoted by ∂/∂x) is used instead of a total derivative (d/dx) since the wave function is also a function of time. The "hat" indicates an operator. The "application" of the operator on a differentiable wave function ψ is as follows:

p̂ψ = −iħ ∂ψ/∂x.

In a basis of Hilbert space consisting of momentum eigenstates expressed in the momentum representation, the action of the operator is simply multiplication by p, i.e. it is a multiplication operator, just as the position operator is a multiplication operator in the position representation. Note that the definition above is the canonical momentum, which is not gauge invariant and not a measurable physical quantity for charged particles in an electromagnetic field. In that case, the canonical momentum is not equal to the kinetic momentum.

At the time quantum mechanics was developed in the 1920s, the momentum operator was found by many theoretical physicists, including Niels Bohr, Arnold Sommerfeld, Erwin Schrödinger, and Eugene Wigner. Its existence and form is sometimes taken as one of the foundational postulates of quantum mechanics.

Origin from De Broglie plane waves

The momentum and energy operators can be constructed in the following way.

One dimension

Starting in one dimension, using the plane wave solution to Schrödinger's equation of a single free particle,

ψ(x, t) = e^{i(px − Et)/ħ},

where p is interpreted as momentum in the x-direction and E is the particle energy. The first-order partial derivative with respect to space is

∂ψ/∂x = (ip/ħ) e^{i(px − Et)/ħ} = (ip/ħ) ψ.

This suggests the operator equivalence

p̂ = −iħ ∂/∂x,

so the momentum of the particle, and the value that is measured when a particle is in a plane wave state, is the eigenvalue of the above operator.
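The eigenvalue relation for the plane wave can be checked symbolically (a quick sketch using SymPy, not from the article itself):

```python
import sympy as sp

x, t, p, E, hbar = sp.symbols("x t p E hbar", real=True, positive=True)

# Plane wave psi(x, t) = exp(i(px - Et)/hbar)
psi = sp.exp(sp.I * (p * x - E * t) / hbar)

# Momentum operator in the position representation: -i*hbar * d/dx
p_psi = -sp.I * hbar * sp.diff(psi, x)

# The plane wave is an eigenfunction of p-hat with eigenvalue p
assert sp.simplify(p_psi - p * psi) == 0
print(sp.simplify(p_psi / psi))  # p
```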
https://en.wikipedia.org/wiki/Crystal%20earpiece
A crystal earpiece is a type of piezoelectric earphone, producing sound by using a piezoelectric crystal, a material that changes its shape when electricity is applied to it. It is usually designed to plug into the ear canal of the user. Operation A crystal earpiece typically consists of a piezoelectric crystal with metal electrodes attached to either side, glued to a conical plastic or metal foil diaphragm, enclosed in a plastic case. The piezoelectric material used in early crystal earphones was Rochelle salt, but modern earphones use barium titanate, or less often quartz. When the audio signal is applied to the electrodes, the crystal bends back and forth a little with the signal, vibrating the diaphragm. The diaphragm pushes on the air, creating sound waves. The plastic earpiece casing confines the sound waves and conducts them efficiently into the ear canal, to the eardrum. The diaphragm is generally fixed at its outer edge, relying on bending to operate. The air path in the earpiece is generally a horn shape, with a narrowing column of air which increases the air displacement at the eardrum, increasing the volume. Application Crystal earpieces are usually monaural devices with very low sound fidelity, but high sensitivity and impedance. Their peak use was probably with 1960s era transistor radios and hearing aids. They are not used with modern portable media players due to unacceptable sound quality. The main causes of poor performance with these earpieces are low diaphragm excursion, nonlinearity, in-band resonance and the very short horn shape of the earpiece casing. The resulting sound is very tinny and lacking in bass. Modern headphones use electromagnetic drivers that work similarly to speakers, with moving coils or moving iron cores in a magnetic field. One remaining use for crystal earpieces is in crystal radios. Their very high sensitivity enables them to use the very weak signals produced by crystal radios, and their high impedance (on the or
https://en.wikipedia.org/wiki/Concrete%20security
In cryptography, concrete security or exact security is a practice-oriented approach that aims to give more precise estimates of the computational complexities of adversarial tasks than polynomial equivalence would allow. It quantifies the security of a cryptosystem by bounding the probability of success for an adversary running for a fixed amount of time. Security proofs with precise analyses are referred to as concrete. Traditionally, provable security is asymptotic: it classifies the hardness of computational problems using polynomial-time reducibility. Secure schemes are defined to be those in which the advantage of any computationally bounded adversary is negligible. While such a theoretical guarantee is important, in practice one needs to know exactly how efficient a reduction is because of the need to instantiate the security parameter - it is not enough to know that "sufficiently large" security parameters will do. An inefficient reduction results either in the success probability for the adversary or the resource requirement of the scheme being greater than desired. Concrete security parametrizes all the resources available to the adversary, such as running time and memory, and other resources specific to the system in question, such as the number of plaintexts it can obtain or the number of queries it can make to any oracles available. Then the advantage of the adversary is upper bounded as a function of these resources and of the problem size. It is often possible to give a lower bound (i.e. an adversarial strategy) matching the upper bound, hence the name exact security. Examples Concrete security estimates have been applied to cryptographic algorithms: In 1996, schemes for digital signatures based on the RSA and Rabin cryptosystems were proposed, which were shown to be approximately as difficult to break as the original cryptosystems. In 1997, some notions of concrete security (left-or-right indistinguishability, real-or-random indistinguishabil
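A concrete bound of this kind is the standard PRF/PRP switching lemma (a well-known result cited here for illustration, not stated in the excerpt above): the advantage gap of any adversary making q queries against an n-bit block cipher is at most q(q−1)/2^(n+1).

```python
from fractions import Fraction

def switching_bound(q: int, n: int) -> Fraction:
    """PRF/PRP switching lemma: advantage gap <= q*(q-1) / 2^(n+1)."""
    return Fraction(q * (q - 1), 2 ** (n + 1))

# An adversary making 2^32 queries against a 128-bit block cipher
adv = switching_bound(2 ** 32, 128)
print(float(adv))  # roughly 2**-65, concretely negligible
```

This is exactly the style of statement concrete security aims for: the adversary's resources (q) appear explicitly, and the advantage is a computable number rather than an asymptotic class.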
https://en.wikipedia.org/wiki/Negative%20thermal%20expansion
Negative thermal expansion (NTE) is an unusual physicochemical process in which some materials contract upon heating, rather than expand as most other materials do. The most well-known material with NTE is water at 0 to 3.98 °C. Also, the density of water ice is smaller than the density of liquid water. Water's NTE is the reason why water ice floats, rather than sinks, in liquid water. Materials which undergo NTE have a range of potential engineering, photonic, electronic, and structural applications. For example, if one were to mix a negative thermal expansion material with a "normal" material which expands on heating, it could be possible to use it as a thermal expansion compensator that might allow for forming composites with tailored or even close to zero thermal expansion. Origin of negative thermal expansion There are a number of physical processes which may cause contraction with increasing temperature, including transverse vibrational modes, rigid unit modes and phase transitions. In 2011, Liu et al. showed that the NTE phenomenon originates from the existence of high pressure, small volume configurations with higher entropy, with their configurations present in the stable phase matrix through thermal fluctuations. They were able to predict both the colossal positive thermal expansion (in cerium) and zero and infinite negative thermal expansion (in ). Alternatively, large negative and positive thermal expansion may result from the design of internal microstructure. Negative thermal expansion in close-packed systems Negative thermal expansion is usually observed in non-close-packed systems with directional interactions (e.g. ice, graphene, etc.) and complex compounds (e.g. , , beta-quartz, some zeolites, etc.). However, in a paper, it was shown that negative thermal expansion (NTE) is also realized in single-component close-packed lattices with pair central force interactions. The following sufficient condition for potential giving rise to NTE behavio
https://en.wikipedia.org/wiki/Magic%20angle
The magic angle is a precisely defined angle, the value of which is approximately 54.7356°. The magic angle is a root of a second-order Legendre polynomial, P₂(cos θ) = (3cos²θ − 1)/2, and so any interaction which depends on this second-order Legendre polynomial vanishes at the magic angle. This property makes the magic angle of particular importance in magic angle spinning solid-state NMR spectroscopy. In magnetic resonance imaging, structures with ordered collagen, such as tendons and ligaments, oriented at the magic angle may appear hyperintense in some sequences; this is called the magic angle artifact or effect.

Mathematical definition

The magic angle θm is

θm = arccos(1/√3) = arctan(√2) ≈ 54.7356°,

where arccos and arctan are the inverse cosine and tangent functions respectively. θm is the angle between the space diagonal of a cube and any of its three connecting edges, see image. Another representation of the magic angle is half of the opening angle formed when a cube is rotated from its space diagonal axis, which may be represented as arccos(−1/3) or 2 arctan(√2) radians ≈ 109.4712°. This double magic angle is directly related to tetrahedral molecular geometry and is the angle between two vertices and the exact center of a tetrahedron (i.e., the edge central angle also known as the tetrahedral angle).

Magic angle and nuclear magnetic resonance

In nuclear magnetic resonance (NMR) spectroscopy, three prominent nuclear magnetic interactions, dipolar coupling, chemical shift anisotropy (CSA), and first-order quadrupolar coupling, depend on the orientation of the interaction tensor with the external magnetic field. By spinning the sample around a given axis, their average angular dependence becomes

⟨3cos²θ − 1⟩ = ½ (3cos²θr − 1)(3cos²β − 1),

where θ is the angle between the principal axis of the interaction and the magnetic field, θr is the angle of the axis of rotation relative to the magnetic field and β is the (arbitrary) angle between the axis of rotation and principal axis of the interaction. For dipolar couplings, the principal axis corresponds to the internucl
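These identities are easy to verify numerically (a quick check using Python's math module):

```python
import math

theta_m = math.acos(1 / math.sqrt(3))   # the magic angle
print(math.degrees(theta_m))            # ≈ 54.7356

# It is a root of the second-order Legendre polynomial P2(cos θ)
p2 = 0.5 * (3 * math.cos(theta_m) ** 2 - 1)
assert abs(p2) < 1e-12

# Equivalent representation via arctan
assert math.isclose(theta_m, math.atan(math.sqrt(2)))

# The tetrahedral angle is its double: arccos(-1/3) ≈ 109.4712°
assert math.isclose(2 * theta_m, math.acos(-1 / 3))
```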
https://en.wikipedia.org/wiki/Hexol
In chemistry, hexol is a cation with formula {[Co(NH3)4(OH)2]3Co}6+ — a coordination complex consisting of four cobalt cations in oxidation state +3, twelve ammonia molecules (NH3), and six hydroxy anions (OH−), with a net charge of +6. The hydroxy groups act as bridges between the central cobalt atom and the other three, which carry the ammonia ligands. Salts of hexol, such as the sulfate {[Co(NH3)4(OH)2]3Co}(SO4)3(H2O)x, are of historical significance as the first synthetic non-carbon-containing chiral compounds.

Preparation

Salts of hexol were first described by Jørgensen, although it was Werner who recognized its structure. The cation is prepared by heating a solution containing the cis-diaquotetramminecobalt(III) cation [Co(NH3)4(H2O)2]3+ with a dilute base:

4 [Co(NH3)4(H2O)2]3+ + 2 HO− → {[Co(NH3)4(OH)2]3Co}6+ + 4 NH4+ + 4 H2O

Hexol sulfate

Starting with the sulfate and using ammonium hydroxide as the base, depending on the conditions, one obtains the 9-hydrate, the 6-hydrate, or the 4-hydrate of hexol sulfate. These salts form dark brownish-violet or black tabular crystals, with low solubility in water. When treated with concentrated hydrochloric acid, hexol sulfate converts to cis-diaquotetramminecobalt(III) sulfate. In boiling dilute sulfuric acid, hexol sulfate further degrades with evolution of oxygen and nitrogen.

Optical properties

The hexol cation exists as two optical isomers that are mirror images of each other, depending on the arrangement of the bonds between the central cobalt atom and the three bidentate peripheral units [Co(NH3)4(OH)2]. It belongs to the D3 point group. The nature of its chirality can be compared to that of the ferrioxalate anion [Fe(C2O4)3]3−. In a historic set of experiments, a salt of hexol with an optically active anion — specifically, its D-(+)-bromocamphorsulfonate — was resolved into separate salts of the two cation isomers by fractional crystallisation. A more efficient resolution involves the bis(tartrato)diantimonate(III) anion.
https://en.wikipedia.org/wiki/Analytic%20element%20method
The analytic element method (AEM) is a numerical method used for the solution of partial differential equations. It was initially developed by O.D.L. Strack at the University of Minnesota. It is similar in nature to the boundary element method (BEM), as it does not rely upon the discretization of volumes or areas in the modeled system; only internal and external boundaries are discretized. One of the primary distinctions between AEM and the BEM is that the boundary integrals are calculated analytically. Although originally developed to model groundwater flow, AEM has subsequently been applied to other fields of study including studies of heat flow and conduction, periodic waves, and deformation by force.

Mathematical basis

The basic premise of the analytic element method is that, for linear differential equations, elementary solutions may be superimposed to obtain more complex solutions. A suite of 2D and 3D analytic solutions ("elements") is available for different governing equations. These elements typically correspond to a discontinuity in the dependent variable or its gradient along a geometric boundary (e.g., point, line, ellipse, circle, sphere, etc.). This discontinuity has a specific functional form (usually a polynomial in 2D) and may be manipulated to satisfy Dirichlet, Neumann, or Robin (mixed) boundary conditions. Each analytic solution is infinite in space and/or time. Commonly, each analytic solution contains degrees of freedom (coefficients) that may be calculated to meet prescribed boundary conditions along the element's border. To obtain a global solution (i.e., the correct element coefficients), a system of equations is solved such that the boundary conditions are satisfied along all of the elements (using collocation, least-squares minimization, or a similar approach). Notably, the global solution provides a spatially continuous description of the dependent variable everywhere in the infinite domain, and the governing equation is satisfied everyw
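The superposition idea can be sketched with two classic 2D groundwater elements, a uniform flow and a single pumping well (a minimal illustration; `omega`, the discharge and flow strength values are hypothetical, not part of any AEM package):

```python
import cmath

def omega(z, Q_well=100.0, z_well=0 + 0j, U=1.0):
    """Complex potential: uniform flow of strength U superimposed
    with a well of discharge Q_well located at z_well."""
    return -U * z + Q_well / (2 * cmath.pi) * cmath.log(z - z_well)

# The discharge potential is the real part; evaluate at a few points
for z in (1 + 1j, 2 + 0.5j, -1 - 1j):
    print(z, omega(z).real)
```

Because the governing equation is linear, the sum of the two exact solutions is itself an exact solution everywhere except on the elements' boundaries, which is the premise the text describes.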
https://en.wikipedia.org/wiki/Wirehead%20%28science%20fiction%29
Wireheading is a term associated with fictional or futuristic applications of brain stimulation reward, the act of directly triggering the brain's reward center by electrical stimulation of an inserted wire, for the purpose of 'short-circuiting' the brain's normal reward process and artificially inducing pleasure. Scientists have successfully performed brain stimulation reward on rats (1950s) and humans (1960s). This stimulation does not appear to lead to tolerance or satiation in the way that sex or drugs do. The term is sometimes associated with science fiction writer Larry Niven, who used the term in his Known Space series. In the philosophy of artificial intelligence, the term is used to refer to AI systems that hack their own reward channel. More broadly, the term can also refer to various kinds of interaction between human beings and technology. In fiction Literature Wireheading, like other forms of brain alteration, is often treated as dystopian in science fiction literature. In Larry Niven's Known Space stories, a "wirehead" is someone who has been fitted with an electronic brain implant known as a "droud" in order to stimulate the pleasure centers of their brain. Wireheading is the most addictive habit known (Louis Wu is the only given example of a recovered addict), and wireheads usually die from neglecting their basic needs in favour of the ceaseless pleasure. Wireheading is so powerful and easy that it becomes an evolutionary pressure, selecting against that portion of humanity without self-control. A wirehead's death is central to Niven's story "Death by Ecstasy", published in 1969 under the title The Organleggers, and a main character in the book Ringworld Engineers is a former wirehead trying to quit. Also in the Known Space universe, a device called a "tasp" which does not need a surgical implant (similar to transcranial magnetic stimulation) can be used to achieve similar goals: the pleasure center of a person's brain is found and remotely stim
https://en.wikipedia.org/wiki/Logarithmic%20mean%20temperature%20difference
In thermal engineering, the logarithmic mean temperature difference (LMTD) is used to determine the temperature driving force for heat transfer in flow systems, most notably in heat exchangers. The LMTD is a logarithmic average of the temperature difference between the hot and cold feeds at each end of the double pipe exchanger. For a given heat exchanger with constant area and heat transfer coefficient, the larger the LMTD, the more heat is transferred. The use of the LMTD arises straightforwardly from the analysis of a heat exchanger with constant flow rate and fluid thermal properties.

Definition

We assume that a generic heat exchanger has two ends (which we call "A" and "B") at which the hot and cold streams enter or exit on either side; then, the LMTD is defined by the logarithmic mean as follows:

LMTD = (ΔT_A − ΔT_B) / ln(ΔT_A / ΔT_B),

where ΔT_A is the temperature difference between the two streams at end A, and ΔT_B is the temperature difference between the two streams at end B. When the two temperature differences are equal, this formula does not directly resolve, so the LMTD is conventionally taken to equal its limit value, which is in this case trivially equal to the two differences.

With this definition, the LMTD can be used to find the exchanged heat in a heat exchanger:

Q = U × A × LMTD,

where (in SI units): Q is the exchanged heat duty (watts), U is the heat transfer coefficient (watts per kelvin per square meter), A is the exchange area (square meters). Note that estimating the heat transfer coefficient may be quite complicated.

This holds both for cocurrent flow, where the streams enter from the same end, and for countercurrent flow, where they enter from different ends. In a cross-flow, in which one system, usually the heat sink, has the same nominal temperature at all points on the heat transfer surface, a similar relation between exchanged heat and LMTD holds, but with a correction factor. A correction factor is also required for other more complex geometries, such as a shell and tube exchanger with baffles.

Derivation
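The definition above translates directly into code (a minimal sketch; the stream temperatures, U, and A below are illustrative values, not from the text):

```python
import math

def lmtd(dt_a: float, dt_b: float) -> float:
    """Logarithmic mean of the end temperature differences."""
    if math.isclose(dt_a, dt_b):
        return dt_a  # limit value when the two differences are equal
    return (dt_a - dt_b) / math.log(dt_a / dt_b)

# Countercurrent example: hot stream 80 -> 50 °C against cold 20 -> 40 °C
dt_a, dt_b = 80 - 40, 50 - 20     # temperature differences at ends A and B
print(lmtd(dt_a, dt_b))           # ≈ 34.76 K

# Heat duty Q = U * A * LMTD, with illustrative U and A
U, A = 500.0, 2.0                 # W/(m²·K), m²
print(U * A * lmtd(dt_a, dt_b))   # watts
```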
https://en.wikipedia.org/wiki/El%20Bulli
El Bulli () was a restaurant near the town of Roses, Spain, run by chef Ferran Adrià, later joined by Albert Adrià, and renowned for its modernist cuisine. Established in 1964, the restaurant overlooked Cala Montjoi, a bay on the Costa Brava of Catalonia. El Bulli held three Michelin stars and was described as "the most imaginative generator of haute cuisine on the planet" in 2006. The restaurant closed 30 July 2011 and relaunched as El Bulli Foundation, a center for culinary creativity.

Restaurant

The restaurant had a limited season: one season, for example, ran from 15 June to 20 December. Bookings for the next year were taken on a single day after the closing of the current season. It accommodated only 8,000 diners a season, but got more than two million requests. The average cost of a meal was €250 (US$325). The restaurant itself had operated at a loss since 2000, with operating profit coming from El Bulli-related books and lectures by Adrià. As of April 2008, the restaurant employed 42 chefs. Restaurant magazine judged El Bulli to be No. 1 on its Top 50 list of the world's best restaurants a record five times—in 2002, 2006, 2007, 2008 and 2009—and No. 2 in 2010.

History

The restaurant's location was selected in 1961 by Hans Schilling, a German, and his Czech wife Marketa, who wanted a piece of land for a planned holiday resort. By 1963, the resulting holiday resort included a small makeshift bar of the type known in Spanish as a "chiringuito", which was variously called "El Bulli-bar" and "Hacienda El Bulli"; this little bar was the nucleus of the future restaurant El Bulli. The name "El Bulli" came from a colloquial term used to describe the French bulldogs the Schillings owned. The first restaurant was opened in 1964. The restaurant won its first Michelin star in 1976 while under French chef Jean-Louis Neichel. Ferran Adrià joined the staff in 1984, and was put in sole charge of the kitchen in 1987. In 1990 the restaurant gained i
https://en.wikipedia.org/wiki/Windows%20Vista
Windows Vista is a major release of Microsoft's Windows NT operating system. It was released to manufacturing on November 8, 2006, and became generally available on January 30, 2007, on the Windows Marketplace, the first release of Windows to be made available through a digital distribution platform. Vista succeeded Windows XP (2001); at the time, the five-year gap between the two was the longest time span between successive Windows releases. Microsoft began developing Vista under the codename "Longhorn" in 2001, shortly before the release of XP. It was intended as a small upgrade to bridge the gap between XP and the next major Windows version, codenamed Blackcomb. As development progressed, it assimilated many of Blackcomb's features and was repositioned as a major Windows release. Vista introduced the updated graphical user interface and visual style Aero, Windows Search, redesigned networking, audio, print, and display sub-systems, and new multimedia tools such as Windows DVD Maker among other changes. Vista aimed to increase the level of communication between machines on a home network, using peer-to-peer technology to simplify sharing files and media between computers and devices. Vista included version 3.0 of the .NET Framework, allowing software developers to write applications without traditional Windows APIs. It removed support for Itanium and devices without ACPI. While its new features and security improvements garnered praise, Vista was the target of significant criticism, such as its high system requirements, more restrictive licensing terms, lack of compatibility, longer boot time, and excessive authorization prompts from User Account Control. It saw lower adoption and satisfaction rates than XP, and it is generally considered a market failure. However, Vista usage did exceed Microsoft's pre-launch two-year-out expectations of achieving 200 million users, with an estimated 330 million internet users in January 2009. On October 22, 2010, Microsoft ce
https://en.wikipedia.org/wiki/Terror%20management%20theory
Terror management theory (TMT) is both a social and evolutionary psychology theory originally proposed by Jeff Greenberg, Sheldon Solomon, and Tom Pyszczynski and codified in their book The Worm at the Core: On the Role of Death in Life (2015). It proposes that a basic psychological conflict results from having a self-preservation instinct while realizing that death is inevitable and to some extent unpredictable. This conflict produces terror, which is managed through a combination of escapism and cultural beliefs that act to counter biological reality with more significant and enduring forms of meaning and value. The most obvious examples of cultural values that assuage death anxiety are those that purport to offer literal immortality (e.g. belief in the afterlife through religion). However, TMT also argues that other cultural values – including those that are seemingly unrelated to death – offer symbolic immortality. For example, values of national identity, posterity, cultural perspectives on sex, and human superiority over animals have been linked to calming death concerns. In many cases these values are thought to offer symbolic immortality, by either a) providing the sense that one is part of something greater that will ultimately outlive the individual (e.g. country, lineage, species), or b) making one's symbolic identity superior to biological nature (i.e. you are a personality, which makes you more than a glob of cells). Because cultural values influence what is meaningful, they are foundational for self-esteem. TMT describes self-esteem as being the personal, subjective measure of how well an individual is living up to their cultural values. Terror management theory was developed by social psychologists Greenberg, Solomon, and Pyszczynski. However, the idea of TMT originated from anthropologist Ernest Becker's 1973 Pulitzer Prize-winning work of nonfiction The Denial of Death. Becker argues most human action is taken to ignore or avoid the inevitabil
https://en.wikipedia.org/wiki/Taylor%20cone
A Taylor cone refers to the cone observed in electrospinning, electrospraying and hydrodynamic spray processes from which a jet of charged particles emanates above a threshold voltage. Aside from electrospray ionization in mass spectrometry, the Taylor cone is important in field-emission electric propulsion (FEEP) and colloid thrusters used in fine control and high efficiency (low power) thrust of spacecraft.

History

This cone was described by Sir Geoffrey Ingram Taylor in 1964, before electrospray was "discovered". This work followed on from the work of Zeleny, who photographed a cone-jet of glycerine in a strong electric field, and the work of several others: Wilson and Taylor (1925), Nolan (1926) and Macky (1931). Taylor was primarily interested in the behavior of water droplets in strong electric fields, such as in thunderstorms.

Formation

When a small volume of electrically conductive liquid is exposed to an electric field, the shape of the liquid starts to deform from the shape caused by surface tension alone. As the voltage is increased, the effect of the electric field becomes more prominent. As the electric field begins to exert a force on the droplet comparable in magnitude to that of surface tension, a cone shape begins to form with convex sides and a rounded tip. This approaches the shape of a cone with a whole angle (width) of 98.6°. When a certain threshold voltage has been reached, the slightly rounded tip inverts and emits a jet of liquid. This is called a cone-jet and is the beginning of the electrospraying process in which ions may be transferred to the gas phase. It is generally found that in order to achieve a stable cone-jet a slightly higher than threshold voltage must be used. As the voltage is increased even more, other modes of droplet disintegration are found. The term Taylor cone can specifically refer to the theoretical limit of a perfect cone of exactly the predicted angle, or generally refer to the approximately conical portion of
https://en.wikipedia.org/wiki/Plume%20tectonics
Plume tectonics is a geoscientific theory that finds its roots in the mantle doming concept, which was especially popular during the 1930s and initially did not accept major plate movements and continental drifting. It has survived from the 1970s until today in various forms and presentations. It has slowly evolved into a concept that recognises and accepts large-scale plate motions such as envisaged by plate tectonics, but places them in a framework where large mantle plumes are the major driving force of the system. The initial followers of the concept during the first half of the 20th century were scientists such as Beloussov and van Bemmelen; more recently the concept has gained interest especially in Japan, through newly compiled work on palaeomagnetism, and it is still advocated by the group of scientists elaborating upon Earth expansion. It is nowadays generally not accepted as the main theory to explain the driving forces of tectonic plate movements, although numerous modulations on the concept have been proposed. The theory focuses on the movements of mantle plumes under tectonic plates, viewing them as the major driving force of movements of (parts of) the Earth's crust. In its more modern form, conceived in the 1970s, it tries to reconcile in one single geodynamic model the horizontalistic concept of plate tectonics and the verticalistic concept of mantle plumes, by the gravitational movement of plates away from major domes of the Earth's crust. The existence of various supercontinents in Earth history and their break-up has recently been associated with major upwellings of the mantle. It is classified together with mantle convection as one of the mechanisms used to explain the movements of tectonic plates. It also shows affinity with the concept of hot spots, which is used in modern-day plate tectonics to generate a framework of specific mantle upwelling points that are relatively stable throughout time and are used to calibrate the plate movements usin
https://en.wikipedia.org/wiki/179%20%28number%29
179 (one hundred [and] seventy-nine) is the natural number following 178 and preceding 180.

In mathematics

179 is part of the Cunningham chain of prime numbers 89, 179, 359, 719, 1439, 2879, in which each successive number is two times the previous number, plus one. Among Cunningham chains of this length, this one has the smallest numbers. Because 179 is neither the start nor the end of this chain, it is both a safe prime and a Sophie Germain prime. It is also a super-prime number, because it is the 41st smallest prime and 41 is also prime. Since 971 (the digits of 179 reversed) is prime, 179 is an emirp.

In other fields

Astronomers have suggested that sunspot frequency undergoes a cycle of approximately 179 years in length.

See also

AD 179 and 179 BC
List of highways numbered 179
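The chain and primality claims above are easy to verify (a quick check using SymPy):

```python
from sympy import isprime

# Cunningham chain starting at 89: each term is 2p + 1
chain = [89]
while isprime(2 * chain[-1] + 1):
    chain.append(2 * chain[-1] + 1)
print(chain)  # [89, 179, 359, 719, 1439, 2879]

# 179 is interior to the chain, hence both safe and Sophie Germain
assert isprime(179) and isprime((179 - 1) // 2) and isprime(2 * 179 + 1)

# Emirp: the digit reversal 971 is also prime
assert isprime(int("179"[::-1]))
```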
https://en.wikipedia.org/wiki/Gofio
Gofio is a sort of Canarian flour made from roasted grains (typically wheat or certain varieties of maize) or other starchy plants (e.g. beans and, historically, fern root), some varieties containing a little added salt. Gofio has been an important ingredient in Canarian cooking for some time, and Canarian emigrants have spread its use to the Caribbean (notably in Cuba, Dominican Republic, Puerto Rico, and Venezuela) and the Western Sahara. There are various ways to use it, such as kneading, dissolving in soup, and baking. It can also be used as a thickener. It is also found in Argentina, Uruguay, and Chile, where it is known as harina tostada and is employed in a wide variety of recipes. The gofio commercially available in the Canary Islands is always finely ground, like ordinary flour, despite the definition given in the Spanish Dictionary of the Royal Academy. It can't be seen at shops other than the Canary Islands. In 2014, the name Gofio Canario was added to the register of Protected designation of origin and Protected geographical indication by the European Commission. Elements Gofio is thought to have been the main staple of the diet of the Guanches, the original inhabitants of the Canary Islands, who produced it from barley and the rhizome of certain ferns. The latter is also known to have been used in historical times, especially in famine, even up until the 20th century. Gofio derives from the name for the product in the aboriginal language of Gran Canaria, while in neighbouring Tenerife it was known as ahoren. Among the Berbers of North Africa, from whom the Guanche population largely derived, there existed a toasted barley flour with similar usage as a food, called arkul. In Morocco, toasted flour is also mixed with, among other ingredients, almond paste, honey, argan oil, anise, fennel, and sesame seeds to make "sellou" (also called "zamita" or "slilou" in some regions), a sweet paste known for its long shelf life and high nutritive value. It was
https://en.wikipedia.org/wiki/181%20%28number%29
181 (one hundred [and] eighty-one) is the natural number following 180 and preceding 182.

In mathematics

181 is an odd number
181 is a centered number: a centered pentagonal number, a centered 12-gonal number, a centered 18-gonal number, a centered 30-gonal number, and a centered square number
181 is a star number that represents a centered hexagram (as in the game of Chinese checkers)
181 is a deficient number, as the sum of its proper divisors (1) is less than 181
181 is an odious number
181 is a prime number: a Chen prime, a dihedral prime, a full reptend prime, a palindromic prime, and a strobogrammatic prime (the same when viewed upside down)
181 is a twin prime with 179
181 is a square-free number
181 is an undulating number, if written in the ternary, the negaternary, or the nonary numeral systems
181 is the difference of 2 square numbers: 91² − 90²
181 is the sum of 2 consecutive square numbers: 9² + 10²
181 is the sum of 5 consecutive prime numbers: 29 + 31 + 37 + 41 + 43

In geography

Langenburg No. 181, a rural municipality in Saskatchewan, Canada
181 Fremont Street, a proposed skyscraper in San Francisco, California
181 West Madison Street, Chicago

In the military

181st (Brandon) Battalion, CEF, a unit in the Canadian Expeditionary Force during World War I
181st Airlift Squadron, a unit of the Texas Air National Guard
181st Infantry Brigade of the United States Army, based at Fort McCoy, Wisconsin
181st Intelligence Wing, a unit of the United States Air Force located at Hulman Field, Terre Haute, Indiana
AN/APQ-181, an all-weather, low probability of intercept (LPI) radar system for the U.S. Air Force B-2A Spirit bomber aircraft
Bücker Bü 181 Bestmann, a single-engine trainer aircraft during World War II
was a ship scheduled to be acquired by the United States Navy, however, the program was cancelled
was a United States Navy troop transport during World War II
was a United States Navy
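Several of the listed properties can be verified directly (a quick check with SymPy; `rot` encodes the strobogrammatic digit rotation):

```python
from sympy import isprime

n = 181
# Sum of two consecutive squares, and difference of two squares
assert n == 9**2 + 10**2 == 91**2 - 90**2
# Star number S_m = 6m(m - 1) + 1 with m = 6
assert n == 6 * 6 * (6 - 1) + 1
# Palindromic prime, and twin prime with 179
assert isprime(n) and str(n) == str(n)[::-1] and isprime(n - 2)
# Strobogrammatic: rotating the digits 180 degrees gives the same number
rot = {"0": "0", "1": "1", "8": "8", "6": "9", "9": "6"}
assert "".join(rot[d] for d in reversed(str(n))) == str(n)
# Sum of five consecutive primes
assert n == 29 + 31 + 37 + 41 + 43
print("all checks passed")
```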
https://en.wikipedia.org/wiki/191%20%28number%29
191 (one hundred [and] ninety-one) is the natural number following 190 and preceding 192. In mathematics 191 is a prime number, part of a prime quadruplet of four primes: 191, 193, 197, and 199. Because doubling and adding one produces another prime number (383), 191 is a Sophie Germain prime. It is the smallest prime that is not a full reptend prime in any base from 2 to 10; in fact, the smallest base for which 191 is a full period prime is base 19. See also 191 (disambiguation)
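The full-reptend claims above can be checked computationally: a prime p is a full reptend (full period) prime in base b exactly when b is a primitive root modulo p, i.e. when the multiplicative order of b mod p equals p − 1. A minimal Python sketch (function names are ours, not from any library):

```python
def multiplicative_order(b, p):
    """Smallest k >= 1 with b**k congruent to 1 (mod p); assumes gcd(b, p) == 1."""
    k, x = 1, b % p
    while x != 1:
        x = (x * b) % p
        k += 1
    return k

def is_full_reptend(p, b):
    """p is a full reptend prime in base b iff b is a primitive root mod p,
    i.e. the base-b expansion of 1/p has the maximal period p - 1."""
    return multiplicative_order(b, p) == p - 1
```

Running this for p = 191 confirms the statement in the article: no base from 2 to 10 works, and the smallest base in which 191 is full reptend is 19.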
https://en.wikipedia.org/wiki/193%20%28number%29
193 (one hundred [and] ninety-three) is the natural number following 192 and preceding 194. In mathematics 193 is the number of compositions of 14 into distinct parts. In decimal, it is the seventeenth full repetend prime, or long prime. It is the only odd prime known for which 2 is not a primitive root of . It is the thirteenth Pierpont prime, which implies that a regular 193-gon can be constructed using a compass, straightedge, and angle trisector. It is part of the fourteenth pair of twin primes (191, 193), the seventh trio of prime triplets (193, 197, 199), and the fourth set of prime quadruplets (191, 193, 197, 199). Aside from itself, the friendly giant (the largest sporadic group) holds a total of 193 conjugacy classes. It also holds at least 44 maximal subgroups aside from the double cover of (the forty-fourth prime number is 193). 193 is also the eighth numerator of convergents to Euler's number; correct to three decimal places: 193/71 = 2.71830... The denominator is 71, which is the largest supersingular prime that uniquely divides the order of the friendly giant. In other fields 193 is the emergency telephone number of the Military Firefighters Corps across the 27 Brazilian federative units. 193 is the number of member states of the United Nations. See also 193 (disambiguation)
https://en.wikipedia.org/wiki/Security%20parameter
In cryptography, a security parameter is a way of measuring how "hard" it is for an adversary to break a cryptographic scheme. There are two main types of security parameter: computational and statistical, often denoted by κ and σ, respectively. Roughly speaking, the computational security parameter is a measure for the input size of the computational problem on which the cryptographic scheme is based, which determines its computational complexity, whereas the statistical security parameter is a measure of the probability with which an adversary can break the scheme (whatever that means for the protocol). Security parameters are usually expressed in unary representation - i.e. κ is expressed as a string of κ 1s, conventionally written as 1^κ - so that the time complexity of the cryptographic algorithm is polynomial in the size of the input. Computational security The security of cryptographic primitives relies on the hardness of some hard problems. One sets the computational security parameter κ such that 2^κ computation is considered intractable. Examples If the security of a scheme depends on the secrecy of a key for a pseudorandom function (PRF), then we may specify that the PRF key should be sampled from the space {0,1}^κ so that a brute-force search requires 2^κ computational power. In the RSA cryptosystem, the security parameter κ denotes the length in bits of the modulus n; the positive integer n must therefore be a number in the set {0, ..., 2^κ - 1}. Statistical security Security in cryptography often relies on the fact that the statistical distance between a distribution predicated on a secret, and a simulated distribution produced by an entity that does not know the secret, is small. We formalise this using the statistical security parameter σ by saying that the distributions are statistically close if the statistical distance between them can be expressed as a negligible function in the security parameter. One sets the statistical security parameter σ such t
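The notion of statistical closeness can be made concrete. A minimal Python sketch (illustrative only; function names are ours), treating a distribution as a dict mapping values to probabilities, and calling two distributions close at level σ when their statistical (total variation) distance is at most 2^−σ:

```python
def statistical_distance(p, q):
    """Total variation distance: half the L1 distance between the
    probability mass functions p and q (dicts value -> probability)."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in support)

def statistically_close(p, q, sigma):
    """True if the distance is below 2**-sigma, i.e. negligible at
    statistical security level sigma."""
    return statistical_distance(p, q) <= 2.0 ** -sigma
```

For example, a coin biased by 2^−41 is statistically close to a fair coin at σ = 40 but not at σ = 42.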
https://en.wikipedia.org/wiki/197%20%28number%29
197 (one hundred [and] ninety-seven) is the natural number following 196 and preceding 198. In mathematics 197 is a prime number, the third of a prime quadruplet: 191, 193, 197, 199 197 is the smallest prime number that is the sum of 7 consecutive primes: 17 + 19 + 23 + 29 + 31 + 37 + 41, and is the sum of the first twelve prime numbers: 2 + 3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 197 is a centered heptagonal number, a centered figurate number that represents a heptagon with a dot in the center and all other dots surrounding the center dot in successive heptagonal layers 197 is a Schröder–Hipparchus number, counting for instance the number of ways of subdividing a heptagon by a non-crossing set of its diagonals. In other fields 197 is also: A police emergency telephone number in Tunisia Number enquiry telephone number in Nepal a song by Norwegian alternative rock group Major Parkinson from their self-titled debut album See also The year AD 197 or 197 BC List of highways numbered 197
https://en.wikipedia.org/wiki/199%20%28number%29
199 (one hundred [and] ninety-nine) is the natural number following 198 and preceding 200. In mathematics 199 is a centered triangular number. It is a prime number and the fourth part of a prime quadruplet: 191, 193, 197, 199. 199 is the smallest natural number that takes more than two iterations to compute its digital root as a repeated digit sum: 199 → 1 + 9 + 9 = 19 → 1 + 9 = 10 → 1 + 0 = 1. Thus, its additive persistence is three, and it is the smallest number of persistence three. See also The year AD 199 or 199 BC List of highways numbered 199
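The additive-persistence computation described above is straightforward to sketch in Python (illustrative; the function names are ours):

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

def additive_persistence(n):
    """Number of digit-sum iterations needed to reach a single digit
    (the final single digit is the digital root)."""
    steps = 0
    while n >= 10:
        n = digit_sum(n)
        steps += 1
    return steps
```

Checking every smaller number confirms that 199 is indeed the least number of additive persistence three.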
https://en.wikipedia.org/wiki/Island%20restoration
The ecological restoration of islands, or island restoration, is the application of the principles of ecological restoration to islands and island groups. Islands, due to their isolation, are home to many of the world's endemic species, as well as important breeding grounds for seabirds and some marine mammals. Their ecosystems are also very vulnerable to human disturbance and particularly to introduced species, due to their small size. Island groups, such as New Zealand and Hawaii, have undergone substantial extinctions and losses of habitat. Since the 1950s several organisations and government agencies around the world have worked to restore islands to their original states; New Zealand has used them to hold natural populations of species that would otherwise be unable to survive in the wild. The principal components of island restoration are the removal of introduced species and the reintroduction of native species. Islands, endemism and extinction Isolated islands have been known to have greater levels of endemism since the 1970s when the theory of island biogeography, formulated by Robert MacArthur and E.O. Wilson, was developed. This higher occurrence of endemism is because isolation limits immigration of new species to the island, allowing new species to evolve separately from others on the mainland. For example, 71% of New Zealand's bird species (prior to human arrival) were endemic. As well as displaying greater levels of endemism, island species have characteristics that make them particularly vulnerable to human disturbance. Many island species evolved on small islands, or even restricted habitats on small islands. Small populations are vulnerable to even modest hunting, and restricted habitats are vulnerable to loss or modification of said habitat. More importantly, island species are often ecologically naive, that is they have not evolved alongside a predator, or have lost appropriate behavioural responses to predators. This often resulted in flightle
https://en.wikipedia.org/wiki/Dannie%20Heineman%20Prize%20for%20Astrophysics
The Dannie Heineman Prize for Astrophysics is jointly awarded each year by the American Astronomical Society and American Institute of Physics for outstanding work in astrophysics. It is funded by the Heineman Foundation in honour of Dannie Heineman. Recipients Source: AAS See also Dannie Heineman Prize for Mathematical Physics List of astronomy awards List of physics awards Prizes named after people
https://en.wikipedia.org/wiki/Interface%20Message%20Processor
The Interface Message Processor (IMP) was the packet switching node used to interconnect participant networks to the ARPANET from the late 1960s to 1989. It was the first generation of gateways, which are known today as routers. An IMP was a ruggedized Honeywell DDP-516 minicomputer with special-purpose interfaces and software. In later years the IMPs were made from the non-ruggedized Honeywell 316, which could handle two-thirds of the communication traffic at approximately one-half the cost. An IMP connected to a host computer via a special bit-serial interface, defined in BBN Report 1822. The IMP software and the ARPA network communications protocol running on the IMPs were discussed in the first of a series of standardization documents published by what later became the Internet Engineering Task Force (IETF). History The concept of an "Interface computer" was first proposed in 1966 by Donald Davies for the NPL network in England. The same idea was independently developed in early 1967 at a meeting of principal investigators for the Department of Defense's Advanced Research Projects Agency (ARPA) to discuss interconnecting machines across the country. Larry Roberts, who led the ARPANET implementation, initially proposed a network of host computers. Wes Clark suggested inserting "a small computer between each host computer and the network of transmission lines", i.e. making the IMP a separate computer. The IMPs were built by the Massachusetts-based company Bolt Beranek and Newman (BBN) in 1969. BBN was contracted to build four IMPs, the first being due at UCLA by Labor Day; the remaining three were to be delivered in one-month intervals thereafter, completing the entire network in a total of twelve months. When Massachusetts Senator Edward Kennedy learned of BBN's accomplishment in signing this million-dollar agreement, he sent a telegram congratulating the company for being contracted to build the "Interfaith Message Processor". The team worki
https://en.wikipedia.org/wiki/John%20R.%20Steel
John Robert Steel (born October 30, 1948) is an American set theorist at University of California, Berkeley (formerly at UCLA). He has made many contributions to the theory of inner models and determinacy. With Donald A. Martin, he proved projective determinacy, assuming the existence of sufficient large cardinals. He earned his Ph.D. in Logic & the Methodology of Science at Berkeley in 1977 under the joint supervision of John West Addison Jr. and Stephen G. Simpson. Awards In 1988, the Association for Symbolic Logic awarded him, Donald A. Martin and W. Hugh Woodin the Karp Prize for their work on the consistency of determinacy relative to large cardinals. In 2015, the European Set Theory Society awarded him and Ronald Jensen the Hausdorff Medal for their paper "K without the measurable". In 2012, Steel held the Gödel Lecture titled The hereditarily ordinal definable sets in models of determinacy.
https://en.wikipedia.org/wiki/Yiannis%20N.%20Moschovakis
Yiannis Nicholas Moschovakis (born January 18, 1938) is a set theorist, descriptive set theorist, and recursion (computability) theorist, at UCLA. His book Descriptive Set Theory (North-Holland) is the primary reference for the subject. He is especially associated with the development of the effective, or lightface, version of descriptive set theory, and he is known for the Moschovakis coding lemma that is named after him. Biography Moschovakis earned his Ph.D. from the University of Wisconsin–Madison in 1963 under the direction of Stephen Kleene, with a dissertation entitled Recursive Analysis. In 2015, he was elected as a fellow of the American Mathematical Society "for contributions to mathematical logic, especially set theory and computability theory, and for exposition". For many years, he has split his time between UCLA and the University of Athens (he retired from the latter in July 2005). Moschovakis is married to Joan Moschovakis, with whom he gave the 2014 Lindström Lectures at the University of Gothenburg. Publications Second edition available online
https://en.wikipedia.org/wiki/Francis%20Heylighen
Francis Paul Heylighen (born 27 September 1960) is a Belgian cyberneticist investigating the emergence and evolution of intelligent organization. He presently works as a research professor at the Vrije Universiteit Brussel (the Dutch-speaking Free University of Brussels), where he directs the transdisciplinary "Center Leo Apostel" and the research group on "Evolution, Complexity and Cognition". He is best known for his work on the Principia Cybernetica Project, his model of the Internet as a global brain, and his contributions to the theories of memetics and self-organization. He is also known, albeit to a lesser extent, for his work on gifted people and their problems. Biography Heylighen was born on September 27, 1960, in Vilvoorde, Belgium. He received his high school education from the "Koninklijk Atheneum Pitzemburg" in Mechelen, in the section Latin-Mathematics. He received his MSc in mathematical physics in 1982 from the Vrije Universiteit Brussel (VUB), where he also received his PhD Summa cum Laude in Sciences in 1987 for his thesis, published in 1990, as "Representation and Change. A Metarepresentational Framework for the Foundations of Physical and Cognitive Science." In 1983 he started working as a researcher for the Belgian National Fund for Scientific Research (NFWO). In 1994 he became a tenured researcher at the NFWO and in 2001 a research professor at the VUB. Since 1995 he has been affiliated with the VUB's Center Leo Apostel for interdisciplinary studies. In 2004 he created the ECCO research group which he presently directs. Thanks to a grant from a private sponsor, in 2012 he additionally founded the Global Brain Institute at the Vrije Universiteit Brussel, becoming its first director. In 1989 Valentin Turchin and Cliff Joslyn founded the Principia Cybernetica Project, and Heylighen joined a year later. In 1993 he created the project's encyclopedic site, one of the first complex websites in the world. In 1996, Heylighen founded the "Global Brai
https://en.wikipedia.org/wiki/Longwood%20Medical%20and%20Academic%20Area
The Longwood Medical and Academic Area (also known as Longwood Medical Area, LMA, or simply Longwood) is a medical campus in Boston, Massachusetts. Flanking Longwood Avenue, LMA is adjacent to the Fenway–Kenmore, Audubon Circle, and Mission Hill neighborhoods, as well as the town of Brookline. It is most strongly associated with Harvard Medical School, the Harvard T.H. Chan School of Public Health, the Harvard School of Dental Medicine, and other medical facilities such as Harvard's teaching hospitals, but prominent non-Harvard institutions are located there as well. Long known as a global center of research, institutions in the Longwood Medical Area secured over $1.2 billion in NIH funding in FY 2018 alone, which exceeds the funding received by 44 individual states. Hospitals and research institutions Beth Israel Deaconess Medical Center Boston Children's Hospital Brigham and Women's Hospital Dana–Farber Cancer Institute Joslin Diabetes Center Massachusetts Mental Health Center New England Baptist Hospital Wyss Institute for Biologically Inspired Engineering Schools and colleges Boston Latin School Emmanuel College Harvard Medical School Harvard School of Dental Medicine Harvard T.H. Chan School of Public Health Massachusetts College of Art and Design Massachusetts College of Pharmacy and Health Sciences Simmons University Wentworth Institute of Technology Boston University Wheelock College of Education & Human Development Winsor School Transportation LMA is served by two subway stations at opposite ends of Longwood Avenue: "Longwood" (on the MBTA Green Line's "D" branch) and "Longwood Medical Area" (on the "E" branch). Several public bus routes serve the area and commuter rail service is available at nearby Ruggles Station. MASCO offers shuttle buses (generally for affiliated personnel only) around the Longwood Medical Area and between Harvard's Cambridge Campus and the Medical Campus (M2). The M2 shuttle is free for passengers holding a Harvard ID. Energ
https://en.wikipedia.org/wiki/Common%20iliac%20vein
In human anatomy, the common iliac veins are formed by the external iliac veins and internal iliac veins. The left and right common iliac veins come together in the abdomen at the level of the fifth lumbar vertebra, forming the inferior vena cava. They drain blood from the pelvis and lower limbs. Both common iliac veins are accompanied along their course by common iliac arteries. Structure The external iliac vein and internal iliac vein unite in front of the sacroiliac joint to form the common iliac veins. Both common iliac veins ascend to form the inferior vena cava behind the right common iliac artery at the level of the fifth lumbar vertebra. The vena cava is to the right of the midline and therefore the left common iliac vein is longer than the right. The left common iliac vein occasionally travels upwards to the left of the aorta to the level of the kidney, where it receives the left renal vein and crosses in front of the aorta to join the inferior vena cava. The right common iliac vein is virtually vertical and lies behind and then lateral to its artery. Each common iliac vein receives iliolumbar veins, while the left also receives the median sacral vein which lies on the right of the corresponding artery. Clinical significance Overlying arterial structures may cause compression of the upper part of the left common iliac vein. Compression of the left common iliac vein against the fifth lumbar vertebral body by the right common iliac artery, as the artery crosses in front of it, occurs in May–Thurner syndrome. Continuous pulsation of the common iliac artery may trigger an inflammatory response within the common iliac vein. The resulting intraluminal elastin and collagen deposition can cause intimal fibrosis and the formation of venous spurs and webs. This can lead to narrowing of the vein and cause persistent unilateral leg swelling, contributing to venous thromboembolism.
https://en.wikipedia.org/wiki/Prochirality
In stereochemistry, prochiral molecules are those that can be converted from achiral to chiral in a single step. An achiral species which can be converted to a chiral one in two steps is called proprochiral. If two identical substituents are attached to a sp3-hybridized atom, the descriptors pro-R and pro-S are used to distinguish between the two. Promoting the pro-R substituent to higher priority than the other identical substituent results in an R chirality center at the original sp3-hybridized atom, and analogously for the pro-S substituent. A trigonal planar sp2-hybridized atom can be converted to a chiral center when a substituent is added to the re or si face of the molecule. A face is labeled re if, when looking at that face, the substituents at the trigonal atom are arranged in increasing Cahn-Ingold-Prelog priority order (1 to 2 to 3) in a clockwise order, and si if the priorities increase in anti-clockwise order; note that the designation of the resulting chiral center as S or R depends on the priority of the incoming group. The concept of prochirality is necessary for understanding some aspects of enzyme stereospecificity. Alexander Ogston pointed out that when a symmetrical molecule is placed in an asymmetric environment, such as the surface of an enzyme, supposedly identically placed groups become distinguishable. In this way he showed that the earlier exclusion of non-chiral citrate as a possible intermediate in the tricarboxylic acid cycle was mistaken.
https://en.wikipedia.org/wiki/Genetically%20modified%20crops
Genetically modified crops (GM crops) are plants used in agriculture, the DNA of which has been modified using genetic engineering methods. Plant genomes can be engineered by physical methods or by use of Agrobacterium for the delivery of sequences hosted in T-DNA binary vectors. In most cases, the aim is to introduce a new trait to the plant which does not occur naturally in the species. Examples in food crops include resistance to certain pests, diseases, environmental conditions, reduction of spoilage, resistance to chemical treatments (e.g. resistance to a herbicide), or improving the nutrient profile of the crop. Examples in non-food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation. Farmers have widely adopted GM technology. Acreage increased from 1.7 million hectares in 1996 to 185.1 million hectares in 2016, some 12% of global cropland. As of 2016, major crop (soybean, maize, canola and cotton) traits consist of herbicide tolerance (95.9 million hectares), insect resistance (25.2 million hectares), or both (58.5 million hectares). In 2015, 53.6 million ha of genetically modified maize were under cultivation (almost 1/3 of the maize crop). GM maize outperformed its predecessors: yield was 5.6 to 24.5% higher with fewer mycotoxins (−28.8%), fumonisin (−30.6%) and trichothecenes (−36.5%). Non-target organisms were unaffected, except for Braconidae, represented by a parasitoid of European corn borer, the target of Lepidoptera-active Bt maize. Biogeochemical parameters such as lignin content did not vary, while biomass decomposition was higher. A 2014 meta-analysis concluded that GM technology adoption had reduced chemical pesticide use by 37%, increased crop yields by 22%, and increased farmer profits by 68%. This reduction in pesticide use has been ecologically beneficial, but benefits may be reduced by overuse. Yield gains and pesticide reductions are larger for insect-resistant crops
https://en.wikipedia.org/wiki/Alexander%20S.%20Kechris
Alexander Sotirios Kechris (born March 23, 1946) is a set theorist and logician at the California Institute of Technology. Contributions Kechris has made contributions to the theory of Borel equivalence relations and the theory of automorphism groups of uncountable structures. His research interests cover foundations of mathematics, mathematical logic and set theory and their interactions with analysis and dynamical systems. Kechris earned his Ph.D. at UCLA in 1972 under the direction of Yiannis N. Moschovakis, with a dissertation titled Projective Ordinals and Countable Analytic Sets. During his academic career he advised 23 PhD students and sponsored 20 postdoctoral researchers. In 2012, he became an Inaugural Fellow of the American Mathematical Society. Honors 1986 - Invited Speaker at the International Congress of Mathematicians in Berkeley (Mathematical Logic & Foundations) 1998 - Gödel Lecturer (Current Trends in Descriptive Set Theory). 2003 - Received the Karp Prize, along with Gregory Hjorth, for joint work on Borel equivalence relations, in particular for their results on turbulence and countable Borel equivalence relations 2004 - Tarski Lecturer (New Connections Between Logic, Ramsey Theory and Topological Dynamics) Selected publications A. S. Kechris, "Classical Descriptive Set Theory", Springer-Verlag, 1995. H. Becker, A. S. Kechris, "The descriptive set theory of Polish group actions" (London Mathematical Society Lecture Note Series), University of Cambridge, 1996. A. S. Kechris, V. G. Pestov and S. Todorcevic, "Fraïssé limits, Ramsey theory and topological dynamics of automorphism groups", Geometric and Functional Analysis 15 (1) (2005), 106-189. A. S. Kechris, "Global Aspects of Ergodic Group Actions", Mathematical Surveys and Monographs, 160, American Mathematical Society, 2010.
https://en.wikipedia.org/wiki/Agaric
An agaric () is a type of fungus fruiting body characterized by the presence of a pileus (cap) that is clearly differentiated from the stipe (stalk), with lamellae (gills) on the underside of the pileus. In the UK, agarics are called "mushrooms" or "toadstools". In North America they are typically called "gilled mushrooms". "Agaric" can also refer to a basidiomycete species characterized by an agaric-type fruiting body. Archaically, agaric meant 'tree-fungus' (after Latin agaricum); however, that changed with the Linnaean interpretation in 1753 when Linnaeus used the generic name Agaricus for gilled mushrooms. Most agaric species belong to the order Agaricales in the subphylum Agaricomycotina. The exceptions, where agarics have evolved independently, feature largely in the orders Russulales, Boletales, Hymenochaetales, and several other groups of basidiomycetes. Old systems of classification placed all agarics in the Agaricales and some (mostly older) sources use "agarics" as the colloquial collective noun for the Agaricales. Contemporary sources now tend to use the term euagarics to refer to all agaric members of the Agaricales. "Agaric" is also sometimes used as a common name for members of the genus Agaricus, as well as for members of other genera; for example, Amanita muscaria is known by its common name "fly agaric".
https://en.wikipedia.org/wiki/Predictive%20learning
Predictive learning is a technique of machine learning in which an agent tries to build a model of its environment by trying out different actions in various circumstances. It uses knowledge of the effects its actions appear to have, turning them into planning operators. These allow the agent to act purposefully in its world. Predictive learning is one attempt to learn with a minimum of pre-existing mental structure. It may have been inspired by Piaget's account of how children construct knowledge of the world by interacting with it. Gary Drescher's book 'Made-up Minds' was seminal for the area. The idea that predictions and unconscious inference are used by the brain to construct a model of the world, in which it can identify causes of percepts, is however even older and goes at least back to Hermann von Helmholtz. Those ideas were later picked up in the field of predictive coding. Another related predictive learning theory is Jeff Hawkins' memory-prediction framework, which is laid out in his On Intelligence. See also Reinforcement learning Predictive coding
https://en.wikipedia.org/wiki/Global%20dimension
In ring theory and homological algebra, the global dimension (or global homological dimension; sometimes just called homological dimension) of a ring A denoted gl dim A, is a non-negative integer or infinity which is a homological invariant of the ring. It is defined to be the supremum of the set of projective dimensions of all A-modules. Global dimension is an important technical notion in the dimension theory of Noetherian rings. By a theorem of Jean-Pierre Serre, global dimension can be used to characterize within the class of commutative Noetherian local rings those rings which are regular. Their global dimension coincides with the Krull dimension, whose definition is module-theoretic. When the ring A is noncommutative, one initially has to consider two versions of this notion, right global dimension that arises from consideration of the right A-modules, and left global dimension that arises from consideration of the left A-modules. For an arbitrary ring A the right and left global dimensions may differ. However, if A is a Noetherian ring, both of these dimensions turn out to be equal to the weak global dimension, whose definition is left-right symmetric. Therefore, for noncommutative Noetherian rings, these two versions coincide and one is justified in talking about the global dimension. Examples Let A = K[x1,...,xn] be the ring of polynomials in n variables over a field K. Then the global dimension of A is equal to n. This statement goes back to David Hilbert's foundational work on homological properties of polynomial rings; see Hilbert's syzygy theorem. More generally, if R is a Noetherian ring of finite global dimension k and A = R[x] is a ring of polynomials in one variable over R then the global dimension of A is equal to k + 1. A ring has global dimension zero if and only if it is semisimple. The global dimension of a ring A is less than or equal to one if and only if A is hereditary. In particular, a commutative principal ideal domain which is not a field has global
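For n = 2 the statement about polynomial rings can be seen explicitly. Over A = K[x, y], the residue field K admits the Koszul resolution (a sketch for illustration; the sign in the first map is one standard convention):

```latex
0 \longrightarrow A
  \xrightarrow{\;\binom{\,y\,}{-x}\;} A^{2}
  \xrightarrow{\;(x \;\; y)\;} A
  \longrightarrow K \longrightarrow 0,
\qquad A = K[x, y].
```

The composite of the two middle maps is x·y + y·(−x) = 0, and the sequence is exact, so the projective dimension of K over A is 2; the supremum over all A-modules is attained here, giving gl dim A = 2 as the syzygy theorem predicts.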
https://en.wikipedia.org/wiki/Jesse%20Sullivan
Jesse Sullivan (born c. 1966) is an American electrician best known for operating a fully robotic limb through a nerve-muscle graft, making him one of the first non-fictional cyborgs. His bionic arm, a prototype developed by the Rehabilitation Institute of Chicago, differs from most other prostheses, in that it does not use pull cables or nub switches to function and instead uses micro-computers to perform a much wider range of complex motions. It is also the first prototype which enables him to sense pressure. History As an electrician, Jesse Sullivan accidentally touched an active cable that contained 7,000-7,500 volts of electricity. In May 2001, he had to have both his arms amputated at the shoulder. Seven weeks after the amputation, Jesse Sullivan received matching bionic prostheses from Dr. Todd Kuiken of the Rehabilitation Institute of Chicago. Originally, they were operated from neural signals at the amputation sites, but Jesse Sullivan developed hyper-sensitivity from his skin grafts, causing great discomfort in those areas. Jesse Sullivan underwent neural surgery to graft nerves, which originally led to his arm, to his chest. The sensors for his bionic arms have been moved to the left side of his chest to receive signals from the newly grafted nerve endings. While the prototype is being strengthened, Jesse Sullivan does day-to-day tasks using an older model. See also Claudia Mitchell
https://en.wikipedia.org/wiki/Light-dragging%20effects
In 19th century physics, there were several situations in which the motion of matter might be said to drag light. This aether drag hypothesis was an attempt by classical physics to explain stellar aberration and the Fizeau experiment, but was discarded when Albert Einstein introduced his theory of relativity. Despite this, the expression light-dragging has remained in use somewhat, as discussed on this page. Under special relativity's simplified model Einstein assumes that light-dragging effects do not occur, and that the speed of light is independent of the speed of the emitting body's motion. However, the special theory of relativity does not deal with particulate matter effects or gravitational effects, nor does it provide a complete relativistic description of acceleration. When more realistic assumptions are made (that real objects are composed of particulate matter, and have gravitational properties), under general relativity's more sophisticated model the resulting descriptions include light-dragging effects. Einstein's theory of special relativity provides the solution to the Fizeau Experiment, which demonstrates the effect termed Fresnel drag whereby the velocity of light is modified by travelling through a moving medium. Einstein showed how the velocity of light in a moving medium is calculated, in the velocity-addition formula of special relativity. Einstein's theory of general relativity provides the solution to the other light-dragging effects, whereby the velocity of light is modified by the motion or the rotation of nearby masses. These effects all have one property in common: they are all velocity-dependent effects, whether that velocity be straight-line motion (causing frame-dragging) or rotational motion (causing rotation-dragging). Velocity-dependent effects Special relativity predicts that the velocity of light is modified by travelling through a moving medium. For a moving particulate body, light moving through the body's structure is know
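The Fresnel drag result mentioned above follows directly from the relativistic velocity-addition formula. For light travelling at c/n through a medium of refractive index n, with the medium itself moving at velocity v in the same direction, the lab-frame speed of the light is:

```latex
w \;=\; \frac{\dfrac{c}{n} + v}{\,1 + \dfrac{v}{nc}\,}
  \;\approx\; \frac{c}{n} + v\left(1 - \frac{1}{n^{2}}\right)
  \qquad (v \ll c),
```

which recovers Fresnel's drag coefficient f = 1 − 1/n² to first order in v/c, in agreement with the Fizeau experiment.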
https://en.wikipedia.org/wiki/Principal%20variation%20search
Principal variation search (sometimes equated with the practically identical NegaScout) is a negamax algorithm that can be faster than alpha–beta pruning. Like alpha–beta pruning, NegaScout is a directional search algorithm for computing the minimax value of a node in a tree. It dominates alpha–beta pruning in the sense that it will never examine a node that can be pruned by alpha–beta; however, it relies on accurate node ordering to capitalize on this advantage. NegaScout works best when there is a good move ordering. In practice, the move ordering is often determined by previous shallower searches. It produces more cutoffs than alpha–beta by assuming that the first explored node is the best. In other words, it supposes the first node is in the principal variation. Then, it can check whether that is true by searching the remaining nodes with a null window (also known as a scout window; when alpha and beta are equal), which is faster than searching with the regular alpha–beta window. If the proof fails, then the first node was not in the principal variation, and the search continues as normal alpha–beta. Hence, NegaScout works best when the move ordering is good. With a random move ordering, NegaScout will take more time than regular alpha–beta; although it will not explore any nodes alpha–beta did not, it will have to re-search many nodes. Alexander Reinefeld invented NegaScout several decades after the invention of alpha–beta pruning. He gives a proof of correctness of NegaScout in his book. Another search algorithm called SSS* can theoretically result in fewer nodes searched. However, its original formulation has practical issues (in particular, it relies heavily on an OPEN list for storage) and nowadays most chess engines still use a form of NegaScout in their search. Most chess engines use a transposition table in which the relevant part of the search tree is stored. This part of the tree has the same size as SSS*'s OPEN list would have. A reformula
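The null-window scheme described above can be sketched in a few lines of Python. This is an illustrative negamax formulation over a toy game tree (nested lists, with leaf values scored from the root player's perspective), not code from any actual engine:

```python
import math

def pvs(node, alpha, beta, color):
    """Principal variation search (NegaScout) in negamax form.

    `node` is either a leaf value (int) or a list of child nodes;
    `color` is +1 for the maximizing player, -1 for the minimizing one.
    """
    if isinstance(node, int):                 # leaf: score from root's view
        return color * node
    for i, child in enumerate(node):
        if i == 0:
            # First child: assumed to lie on the principal variation,
            # so search it with the full (alpha, beta) window.
            score = -pvs(child, -beta, -alpha, -color)
        else:
            # Remaining children: try to prove inferiority cheaply
            # with a null (zero-width) window around alpha.
            score = -pvs(child, -alpha - 1, -alpha, -color)
            if alpha < score < beta:          # proof failed: re-search fully
                score = -pvs(child, -beta, -score, -color)
        alpha = max(alpha, score)
        if alpha >= beta:                     # beta cutoff, as in alpha-beta
            break
    return alpha
```

Calling `pvs(tree, -math.inf, math.inf, 1)` returns the minimax value of `tree`. With good move ordering (best child first) most siblings fail the null-window test immediately, which is where the speedup over plain alpha-beta comes from; with bad ordering the re-searches make it slower, as noted above.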
https://en.wikipedia.org/wiki/Degenerate%20energy%20levels
In quantum mechanics, an energy level is degenerate if it corresponds to two or more different measurable states of a quantum system. Conversely, two or more different states of a quantum mechanical system are said to be degenerate if they give the same value of energy upon measurement. The number of different states corresponding to a particular energy level is known as the degree of degeneracy (or simply the degeneracy) of the level. It is represented mathematically by the Hamiltonian for the system having more than one linearly independent eigenstate with the same energy eigenvalue. When this is the case, energy alone is not enough to characterize what state the system is in, and other quantum numbers are needed to characterize the exact state when distinction is desired. In classical mechanics, this can be understood in terms of different possible trajectories corresponding to the same energy. Degeneracy plays a fundamental role in quantum statistical mechanics. For an N-particle system in three dimensions, a single energy level may correspond to several different wave functions or energy states. These degenerate states at the same level all have an equal probability of being filled. The number of such states gives the degeneracy of a particular energy level. Mathematics The possible states of a quantum mechanical system may be treated mathematically as abstract vectors in a separable, complex Hilbert space, while the observables may be represented by linear Hermitian operators acting upon them. By selecting a suitable basis, the components of these vectors and the matrix elements of the operators in that basis may be determined. If A is an n × n matrix, X a non-zero vector, and λ a scalar such that AX = λX, then the scalar λ is said to be an eigenvalue of A and the vector X is said to be the eigenvector corresponding to λ. Together with the zero vector, the set of all eigenvectors corresponding to a given eigenvalue λ forms a subspace of ℂⁿ, which is called the eigenspace of λ.
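The notion of a degenerate level can be illustrated numerically (a hedged sketch; the 3×3 diagonal "Hamiltonian" below is an arbitrary toy example, not drawn from any physical system):

```python
import numpy as np

# Toy "Hamiltonian": a Hermitian matrix whose spectrum has a repeated eigenvalue.
H = np.diag([1.0, 1.0, 2.0])

eigenvalues, eigenvectors = np.linalg.eigh(H)

# Count how many linearly independent eigenstates share each energy;
# that count is the degree of degeneracy of the level.
degeneracy = {}
for ev in eigenvalues:
    key = float(round(ev, 9))
    degeneracy[key] = degeneracy.get(key, 0) + 1

print(degeneracy)   # {1.0: 2, 2.0: 1} -- the level at energy 1.0 is doubly degenerate
```

The energy value 1.0 alone does not pick out a unique state here: any linear combination of the two eigenvectors for that eigenvalue is again an eigenstate with the same energy, which is why additional quantum numbers are needed to distinguish states within a degenerate level.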
https://en.wikipedia.org/wiki/Docking%20%28molecular%29
In the field of molecular modeling, docking is a method which predicts the preferred orientation of one molecule to a second when a ligand and a target are bound to each other to form a stable complex. Knowledge of the preferred orientation in turn may be used to predict the strength of association or binding affinity between two molecules using, for example, scoring functions. The associations between biologically relevant molecules such as proteins, peptides, nucleic acids, carbohydrates, and lipids play a central role in signal transduction. Furthermore, the relative orientation of the two interacting partners may affect the type of signal produced (e.g., agonism vs antagonism). Therefore, docking is useful for predicting both the strength and type of signal produced. Molecular docking is one of the most frequently used methods in structure-based drug design, due to its ability to predict the binding-conformation of small molecule ligands to the appropriate target binding site. Characterisation of the binding behaviour plays an important role in rational design of drugs as well as to elucidate fundamental biochemical processes. Definition of problem One can think of molecular docking as a problem of “lock-and-key”, in which one wants to find the correct relative orientation of the “key” which will open up the “lock” (where on the surface of the lock is the key hole, which direction to turn the key after it is inserted, etc.). Here, the protein can be thought of as the “lock” and the ligand can be thought of as a “key”. Molecular docking may be defined as an optimization problem, which would describe the “best-fit” orientation of a ligand that binds to a particular protein of interest. However, since both the ligand and the protein are flexible, a “hand-in-glove” analogy is more appropriate than “lock-and-key”. During the course of the docking process, the ligand and the protein adjust their conformation to achieve an overall "best-fit" and this kind of confor
https://en.wikipedia.org/wiki/Square%20principle
In mathematical set theory, a square principle is a combinatorial principle asserting the existence of a cohering sequence of short closed unbounded (club) sets so that no one (long) club set coheres with them all. As such, square principles may be viewed as a kind of incompactness phenomenon. They were introduced by Ronald Jensen in his analysis of the fine structure of the constructible universe L. Definition Define Sing to be the class of all limit ordinals which are not regular. Global square states that there is a system ⟨C_β : β ∈ Sing⟩ satisfying: C_β is a club set of β. otp(C_β) < β. If γ is a limit point of C_β, then γ ∈ Sing and C_γ = C_β ∩ γ. Variant relative to a cardinal Jensen also introduced a local version of the principle. If κ is an uncountable cardinal, then □_κ asserts that there is a sequence ⟨C_β : β a limit ordinal below κ⁺⟩ satisfying: C_β is a club set of β. If cf(β) < κ, then otp(C_β) < κ. If γ is a limit point of C_β, then C_γ = C_β ∩ γ. Jensen proved that this principle holds in the constructible universe for any uncountable cardinal κ.
https://en.wikipedia.org/wiki/Lime%20sulfur
In horticulture, lime sulphur (American spelling lime sulfur) is mainly a mixture of calcium polysulfides and thiosulfate (plus other reaction by-products as sulfite and sulfate) formed by reacting calcium hydroxide with elemental sulfur, used in pest control. It can be prepared by boiling in water a suspension of poorly soluble calcium hydroxide (lime) and solid sulfur together with a small amount of surfactant to facilitate the dispersion of these solids in water. After elimination of any residual solids (flocculation, decantation and filtration), it is normally used as an aqueous solution, which is reddish-yellow in colour and has a distinctive offensive odour of hydrogen sulfide (H2S, rotten eggs). Synthesis reaction The exact chemical reaction leading to the synthesis of lime sulfur is poorly known and is generally written as: as reported in a document of the US Department of Agriculture (USDA). This vague reaction is puzzling because it involves the reduction of elemental sulfur and no reductant appears in the above mentioned equation while sulfur oxidation products are also mentioned. The initial pH of the solution imposed by poorly soluble hydrated lime is alkaline (pH = 12.5) while the final pH is in range 11–12, typical for sulfides which are also strong bases. When the hydrolysis of calcium sulfide is accounted for, the individual reactions for each of the by-products are: However, elemental sulfur can undergo a disproportionation reaction, also called dismutation. The first reaction resembles a disproportionation reaction. The inverse comproportionation reaction is the reaction occurring in the Claus process used for desulfurisation of oil and gas crude products in the refining industry: By rewriting the last reaction in the inverse direction one obtains a reaction consistent with what is observed in the lime sulfur global reaction: In alkaline conditions, it gives: and after simplification, or more exactly recycling, of water molecules in the a
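The overall synthesis reaction alluded to above can be written out as follows (a reconstruction of the commonly cited overall stoichiometry; treat it as indicative rather than definitive, since commercial preparations use an excess of sulfur and the polysulfide chain length varies):

```latex
3\,\mathrm{Ca(OH)_2} + 12\,\mathrm{S} \;\longrightarrow\; 2\,\mathrm{CaS_5} + \mathrm{CaS_2O_3} + 3\,\mathrm{H_2O}
```

Written this way, ten sulfur atoms are reduced into the two pentasulfide chains while two are oxidized to thiosulfate, which is exactly the disproportionation (dismutation) of elemental sulfur that the section goes on to discuss.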
https://en.wikipedia.org/wiki/Atlases%20of%20the%20flora%20and%20fauna%20of%20Britain%20and%20Ireland
The biodiversity of Great Britain and Ireland is one of the most well-studied geographical areas of its size in the world. This biota work has resulted in the publication of distribution atlases for many taxonomic groups. This page lists these publications. A full atlas is generally regarded as a definitive work on distribution, whereas a provisional atlas is typically produced as an interim stage to show survey progress. One of the bodies responsible for publishing a great number of distribution atlases is the Institute of Terrestrial Ecology. Each atlas presents 10 km2 distribution maps for the species within its scope. Maps typically use different symbols to signify records from differing time-periods - solid symbols for 10-km squares (hectads) that have recent records, and unfilled symbols for 10-km squares for which only older records exist, according to a defined cut-off date. The atlases are produced by the Biological Records Centre (BRC), which is run by the Institute of Terrestrial Ecology, part of the Centre for Ecology and Hydrology based at CEH Wallingford, Crowmarsh Gifford, Oxfordshire. The data used to produce the maps is gathered by volunteer biological recorders and collated by the BRC Recording Schemes. The atlases fall into two groups: Main Atlases are commercially published books, presenting the current state of knowledge for well-recorded groups. They typically include text information about the species, and other supporting material such as analyses of trends. They are usually produced only where a well-established recording scheme has been in operation for a significant period of time, and the scheme organisers believe that the data represent a comprehensive picture of the distribution of each species. Provisional Atlases give recorders an indication of progress and illustrate early results. Some of the later ones are quite detailed and less "provisional" - for example the Hoverfly Atlas, which provides charts of flight-period as well
https://en.wikipedia.org/wiki/ACARS
In aviation, ACARS (; an acronym for Aircraft Communications Addressing and Reporting System) is a digital datalink system for transmission of short messages between aircraft and ground stations via airband radio or satellite. The protocol was designed by ARINC and deployed in 1978, using the Telex format. More ACARS radio stations were added subsequently by SITA. History of ACARS Prior to the introduction of datalink in aviation, all communication between the aircraft and ground personnel was performed by the flight crew using voice communication, using either VHF or HF voice radios. In many cases, the voice-relayed information involved dedicated radio operators and digital messages sent to an airline teletype system or successor systems. Further, the hourly rates for flight and cabin crew salaries depended on whether the aircraft was airborne or not, and if on the ground whether it was at the gate or not. The flight crews reported these times by voice to geographically dispersed radio operators. Airlines wanted to eliminate self-reported times to preclude inaccuracies, whether accidental or deliberate. Doing so also reduced the need for human radio operators to receive the reports. In an effort to reduce crew workload and improve data integrity, the engineering department at ARINC introduced the ACARS system in July 1978, as an automated time clock system. Teledyne Controls produced the avionics and the launch customer was Piedmont Airlines. The original expansion of the abbreviation was "Arinc Communications Addressing and Reporting System". Later, it was changed to "Aircraft Communications, Addressing and Reporting System". The original avionics standard was ARINC 597, which defined an ACARS Management Unit consisting of discrete inputs for the doors, parking brake and weight on wheels sensors to automatically determine the flight phase and generate and send as telex messages. It also contained a MSK modem, which was used to transmit the reports over existing
https://en.wikipedia.org/wiki/List%20of%20Canadian%20flags
The Department of Canadian Heritage lays out protocol guidelines for the display of flags, including an order of precedence; these instructions are only conventional, however, and are generally intended to show respect for what are considered important symbols of the state or institutions. The sovereign's personal standard is supreme in the order of precedence, followed by those for the monarch's representatives (depending on jurisdiction), the personal flags of other members of the Royal Family, and then the national flag and provincial flags. Many museums across Canada display historic flags in their exhibits. The Canadian Museum of History, in Hull, Quebec has many culturally important flags in their collections. Settlers, Rails & Trails Inc., in Argyle, Manitoba holds the second largest exhibit - known as the Canadian Flag Collection. State National Ceremonial Provincial Territorial Royal Viceregal and administrative Governor general Lieutenant governors and commissioners Supreme Court of Canada Military and civilian law enforcement organizations Canadian Armed Forces Canadian Army Royal Canadian Navy Royal Canadian Air Force Canadian Special Operations Forces Command Canada Border Services Agency Canadian Coast Guard Police services Youth cadets organizations Civil Corporations Crown corporations Hudson's Bay Company Religious Ethnic groups Indigenous nations Blackfoot Cree Inuit Francophone peoples Other ethnic groups Municipal Historical National Royal Coronation standards Viceregal Civil ensigns Newfoundland Rebellions Other Proposed The following is a list of flags proposed for the Canadian state. Regional Official Unofficial House flags of Canadian freight companies Yacht clubs of Canada See also Canadian Heraldic Authority Canadian heraldry Canadian royal symbols Great Canadian Flag Debate List of Canadian provincial and territorial symbols National symbols of Canada
https://en.wikipedia.org/wiki/Sequence%20transformation
In mathematics, a sequence transformation is an operator acting on a given space of sequences (a sequence space). Sequence transformations include linear mappings such as convolution with another sequence, and resummation of a sequence and, more generally, are commonly used for series acceleration, that is, for improving the rate of convergence of a slowly convergent sequence or series. Sequence transformations are also commonly used to compute the antilimit of a divergent series numerically, and are used in conjunction with extrapolation methods. Overview Classical examples of sequence transformations include the binomial transform, Möbius transform, Stirling transform and others. Definitions For a given sequence (s_n), the transformed sequence is (t_n) = T((s_n)), where the members of the transformed sequence are usually computed from some finite number of members of the original sequence, i.e. t_n = T(s_n, s_{n+1}, ..., s_{n+k}) for some k which often depends on n (cf. e.g. binomial transform). In the simplest case, the s_n and the t_n are real or complex numbers. More generally, they may be elements of some vector space or algebra. In the context of acceleration of convergence, the transformed sequence (t_n) is said to converge faster than the original sequence (s_n) if lim_{n→∞} (t_n − ℓ)/(s_n − ℓ) = 0, where ℓ is the limit of (s_n), assumed to be convergent. In this case, convergence acceleration is obtained. If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit ℓ. If the mapping T is linear in each of its arguments, i.e., t_n = Σ_{m=0}^{k} c_{n,m} s_{n+m} for some constants c_{n,0}, ..., c_{n,k} (which may depend on n), the sequence transformation is called a linear sequence transformation. Sequence transformations that are not linear are called nonlinear sequence transformations. Examples The simplest examples of (linear) sequence transformations include shifting all elements, t_n = s_{n+k} (resp. t_n = 0 if n + k < 0) for a fixed k, and scalar multiplication of the sequence. A less trivial example would be the discrete convolution with a fixed sequence. A particular
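The binomial transform mentioned above is a concrete linear sequence transformation; a minimal sketch (the sign convention t_n = Σ_k (−1)^k C(n,k) s_k is one of several in use, chosen here because it makes the transform an involution):

```python
from math import comb

def binomial_transform(s):
    """Binomial transform: t_n = sum_{k=0}^{n} (-1)^k * C(n, k) * s_k.

    Each output term is a finite linear combination of input terms,
    so this is a linear sequence transformation in the sense above.
    """
    return [sum((-1) ** k * comb(n, k) * s[k] for k in range(n + 1))
            for n in range(len(s))]

# The constant sequence 1, 1, 1, ... transforms to 1, 0, 0, ...
print(binomial_transform([1, 1, 1, 1]))   # [1, 0, 0, 0]

# Under this sign convention the transform is an involution:
# applying it twice recovers the original sequence.
print(binomial_transform(binomial_transform([1, 2, 4, 8])))   # [1, 2, 4, 8]
```

Note that t_n here depends on the leading members s_0, ..., s_n rather than on a window starting at s_n; both shapes fit the general definition of computing each t_n from finitely many members of the original sequence.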
https://en.wikipedia.org/wiki/Agaricus%20subrufescens
Agaricus subrufescens (syn. Agaricus blazei, Agaricus brasiliensis or Agaricus rufotegulis) is a species of mushroom, commonly known as almond mushroom, almond agaricus, mushroom of the sun, God's mushroom, mushroom of life, royal sun agaricus, jisongrong, or himematsutake (Chinese: , Japanese: , "princess matsutake") and by a number of other names. Agaricus subrufescens is edible, with a somewhat sweet taste and a fragrance of almonds. Taxonomy Agaricus subrufescens was first described by the American botanist Charles Horton Peck in 1893. During the late nineteenth and early twentieth centuries, it was cultivated for the table in the eastern United States. It was discovered again in Brazil during the 1970s, and misidentified as Agaricus blazei Murrill, a species originally described from Florida. It was soon marketed for its purported medicinal properties under various names, including ABM (for Agaricus blazei Murrill), cogumelo do sol (mushroom of the sun), cogumelo de Deus (mushroom of God), cogumelo de vida (mushroom of life), himematsutake, royal sun agaricus, Mandelpilz, and almond mushroom. In 2002, Didukh and Wasser correctly rejected the name A. blazei for this species, but unfortunately called the Brazilian fungus A. brasiliensis, a name that had already been used for a different species, Agaricus brasiliensis Fr. (1830). Richard Kerrigan undertook genetic and interfertility testing on several fungal strains, and showed that samples of the Brazilian strains called A. blazei and A. brasiliensis were genetically similar to, and interfertile with, North American populations of Agaricus subrufescens. These tests also found European samples called A. rufotegulis to be of the same species. Because A. subrufescens is the oldest name, it has taxonomical priority. Description Initially, the cap is hemispherical, later becoming convex, with a diameter of . The cap surface is covered with silk-like fibers, although in maturity it develops small scales (squamulose
https://en.wikipedia.org/wiki/211%20%28number%29
211 (two hundred [and] eleven) is the natural number following 210 and preceding 212. It is also a prime number. In mathematics 211 is an odd number. 211 is a primorial prime, the sum of three consecutive primes (67 + 71 + 73), a Chen prime, a centered decagonal prime, and a self prime. 211 is the smallest prime separated by eight or more from the nearest primes (199 and 223). It is thus a balanced prime and an isolated prime. 211 is a repdigit in tetradecimal (111). In decimal, multiplying 211's digits results in a prime (2 × 1 × 1 = 2); adding its digits results in a square (2 + 1 + 1 = 4 = 2²). Rearranging its digits, 211 becomes 121, which is also a square (121 = 11²). Adding any two of 211's digits will result in a prime (2 or 3). 211 is a super-prime. In science and technology 2-1-1 is a special abbreviated telephone number reserved in Canada and the United States as an easy-to-remember three-digit telephone number. It is meant to provide quick information and referrals to health and human service organizations for both services from charities and from governmental agencies. In chemistry, 211 is also associated with E211, the preservative sodium benzoate. In religions In Islam, Sermon 211 is about the strength and greatness of Allah. In other fields 211 is also the California Penal Code section defining robbery. It is sometimes paired with 187, the California PC section for murder. 211 is also an EDI (Electronic data interchange) document known as an Electronic Bill of Lading. 211 is also a nickname for Steel Reserve, a malt liquor alcoholic beverage. 211 is also the SMTP status code for system status. +211 is the code for international direct-dial phone calls to South Sudan. See also 211 Crew
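Several of the arithmetic claims above can be checked mechanically (a quick sketch; `is_prime` is a naive trial-division helper written only for this example):

```python
def is_prime(n):
    """Naive trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

assert is_prime(211)
assert 67 + 71 + 73 == 211             # sum of three consecutive primes
assert is_prime(2 * 1 * 1)             # product of the digits is prime
assert (2 + 1 + 1) == 2 ** 2           # sum of the digits is a square
assert 121 == 11 ** 2                  # the digit rearrangement 121 is a square
assert int("111", 14) == 211           # repdigit in tetradecimal

# The nearest primes really are 199 and 223, both at distance >= 8:
assert is_prime(199) and all(not is_prime(n) for n in range(200, 211))
assert is_prime(223) and all(not is_prime(n) for n in range(212, 223))
```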
https://en.wikipedia.org/wiki/Phospholamban
Phospholamban, also known as PLN or PLB, is a micropeptide protein that in humans is encoded by the PLN gene. Phospholamban is a 52-amino acid integral membrane protein that regulates the calcium (Ca2+) pump in cardiac muscle cells. Function This protein is found as a pentamer and is a major substrate for the cAMP-dependent protein kinase (PKA) in cardiac muscle. In the unphosphorylated state, phospholamban is an inhibitor of cardiac muscle sarcoplasmic reticulum Ca2+-ATPase (SERCA2) which transports calcium from cytosol into the sarcoplasmic reticulum. When phosphorylated (by PKA) - disinhibition of Ca2+-ATPase of SR leads to faster Ca2+ uptake into the sarcoplasmic reticulum, thereby contributing to the lusitropic response elicited in heart by beta-agonists. The protein is a key regulator of cardiac diastolic function. Mutations in this gene are a cause of inherited human dilated cardiomyopathy with refractory congestive heart failure. When phospholamban is phosphorylated by PKA, its ability to inhibit SERCA2 is lost. Thus, activators of PKA, such as the beta-adrenergic agonist epinephrine (released by sympathetic stimulation), may enhance the rate of cardiac myocyte relaxation. In addition, since SERCA2 is more active, the next action potential will cause an increased release of calcium, resulting in increased contraction (positive inotropic effect). When phospholamban is not phosphorylated, such as when PKA is inactive, it can interact with and inhibit SERCA. Thus, the overall effect of unphosphorylated phospholamban is to decrease contractility and the rate of muscle relaxation, thereby decreasing stroke volume and heart rate, respectively. Clinical significance Gene knockout of phospholamban results in animals with hyperdynamic hearts, with little apparent negative consequence. Mutations in this gene are a cause of inherited human dilated cardiomyopathy with refractory congestive heart failure. Discovery Phospholamban was discovered by Arnold Mart
https://en.wikipedia.org/wiki/Pseudoholomorphic%20curve
In mathematics, specifically in topology and geometry, a pseudoholomorphic curve (or J-holomorphic curve) is a smooth map from a Riemann surface into an almost complex manifold that satisfies the Cauchy–Riemann equation. Introduced in 1985 by Mikhail Gromov, pseudoholomorphic curves have since revolutionized the study of symplectic manifolds. In particular, they lead to the Gromov–Witten invariants and Floer homology, and play a prominent role in string theory. Definition Let X be an almost complex manifold with almost complex structure J. Let C be a smooth Riemann surface (also called a complex curve) with complex structure j. A pseudoholomorphic curve in X is a map f : C → X that satisfies the Cauchy–Riemann equation (1/2)(df + J ∘ df ∘ j) = 0. Since J² = −1, this condition is equivalent to J ∘ df = df ∘ j, which simply means that the differential df is complex-linear on each tangent space. For technical reasons, it is often preferable to introduce some sort of inhomogeneous term ν and to study maps satisfying the perturbed Cauchy–Riemann equation (1/2)(df + J ∘ df ∘ j) = ν. A pseudoholomorphic curve satisfying this equation can be called, more specifically, a (j, J, ν)-holomorphic curve. The perturbation ν is sometimes assumed to be generated by a Hamiltonian (particularly in Floer theory), but in general it need not be. A pseudoholomorphic curve is, by its definition, always parametrized. In applications one is often truly interested in unparametrized curves, meaning embedded (or immersed) two-submanifolds of X, so one mods out by reparametrizations of the domain that preserve the relevant structure. In the case of Gromov–Witten invariants, for example, we consider only closed domains C of fixed genus g and we introduce n marked points (or punctures) on C. As soon as the punctured Euler characteristic 2 − 2g − n is negative, there are only finitely many holomorphic reparametrizations of C that preserve the marked points. The domain curve C is an element of the Deligne–Mumford moduli space of curves. Analogy with the classical Cauchy–Riemann equations The
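The equivalence between the Cauchy–Riemann equation and complex-linearity of the differential follows in one line from J² = −1 (a routine verification, spelled out here for completeness; f denotes the curve, J the almost complex structure on the target, and j the complex structure on the domain):

```latex
\tfrac{1}{2}\left(df + J \circ df \circ j\right) = 0
\;\Longleftrightarrow\;
df = -\,J \circ df \circ j
\;\Longleftrightarrow\;
J \circ df = -\,J^{2} \circ df \circ j = df \circ j .
```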
https://en.wikipedia.org/wiki/Case%20presentation
A case presentation is a formal communication between health care professionals such as doctors and nurses regarding a patient's clinical information. Essential parts of a case presentation include: Identification Reason for consultation/admission Chief complaints (CC) - what made patients seek medical attention. History of present illness (HPI) - circumstances relating to chief complaints. Past medical history (PMHx) Past surgical history Current medications Allergies Family history (FHx) Social history (SocHx) Physical examination (PE) Laboratory results (Lab) Other investigations (imaging, biopsy etc.) Case summary and impression Management plans follow up in clinic or hospital Adherence of the patient to treatment success of the treatment or failure. causes of success or failure.
https://en.wikipedia.org/wiki/Self-shrinking%20generator
A self-shrinking generator is a pseudorandom generator that is based on the shrinking generator concept. Variants of the self-shrinking generator based on a linear-feedback shift register (LFSR) are studied for use in cryptography. Algorithm In contrast to the shrinking generator, which uses a second feedback shift register to control the output of the first, the self-shrinking generator uses alternating output bits of a single register to control its final output. The procedure for clocking this kind of generator is as follows: Clock the LFSR twice to obtain a pair of bits as LFSR output. If the pair is 10, output a zero. If the pair is 11, output a one. Otherwise, output nothing. Return to step one. Example This example will use the connection polynomial x⁸ + x⁴ + x³ + x² + 1, and an initial register fill of 1 0 1 1 0 1 1 0. The table below lists, for each iteration of the LFSR, its intermediate output before self-shrinking, as well as the final generator output. The tap positions defined by the connection polynomial are marked with blue headings. The state of the zeroth iteration represents the initial input. At the end of four iterations, the following sequence of intermediate bits is produced: 0110. The first pair of bits, 01, is discarded since it does not match either 10 or 11. The second pair of bits, 10, matches the second step of the algorithm, so a zero is output. More bits are created by continuing to clock the LFSR and shrinking its output as described above. Cryptanalysis In their paper, Meier and Staffelbach prove that an LFSR-based self-shrinking generator with a connection polynomial of length L results in an output sequence period of at least 2^(L/2), and a linear complexity of at least 2^(L/2−1). Furthermore, they show that any self-shrinking generator can be represented as a shrinking generator. The inverse is also true: any shrinking generator can be implemented as a self-shrinking generator, although the resultant generator may not be of
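The clocking rule above can be sketched as follows (a minimal illustration; the Fibonacci-style LFSR bit-ordering and tap conventions are assumptions, so the raw keystream may be indexed differently from the worked table in the text, but the shrinking step itself is exactly the pair rule described above):

```python
def lfsr_stream(state, taps, n):
    """Fibonacci-style LFSR sketch. `state` is a list of bits and `taps`
    are 1-based positions XORed together to form the feedback bit.
    Returns the first n output bits (assumed convention: the last
    register bit is emitted on each clock)."""
    state = list(state)
    out = []
    for _ in range(n):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return out

def self_shrink(bits):
    """Self-shrinking rule: read bits in pairs; a pair 10 outputs 0,
    a pair 11 outputs 1, and pairs 00 or 01 output nothing."""
    out = []
    for first, second in zip(bits[0::2], bits[1::2]):
        if first == 1:
            out.append(second)
    return out

# Pairs (1, b) contribute b; pairs starting with 0 are discarded,
# so at most half of the raw LFSR bits survive into the keystream.
raw = lfsr_stream([1, 0, 1, 1, 0, 1, 1, 0], taps=[8, 4, 3, 2], n=16)
key = self_shrink(raw)
```

On the intermediate sequence 0110 from the worked example, `self_shrink([0, 1, 1, 0])` discards the pair 01 and maps the pair 10 to a single 0, matching the text.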
https://en.wikipedia.org/wiki/Miracle%20Whip
Miracle Whip is a condiment manufactured by Kraft Heinz and sold throughout the United States and Canada. It is also sold by Mondelēz International (formerly also Kraft Foods) as "Miracel Whip" throughout Germany. It was developed as a less expensive alternative to mayonnaise in 1933. History Premiering at the Century of Progress World's Fair in Chicago in 1933, Miracle Whip soon became a success as a condiment for fruits, vegetables, and salads. Its success was bolstered by Kraft's advertising campaign, which included sponsorship of a series of two-hour radio programs. At the end of its introductory period, Miracle Whip was outselling all mayonnaise brands. According to Kraft archivist Becky Haglund Tousey, Kraft developed the product in house, using a patented "emulsifying machine", invented by Charles Chapman, to create a product that blended mayonnaise and less expensive salad dressing, sometimes called "boiled dressing" and "salad dressing spread". The machine, dubbed "Miracle Whip" by Chapman, ensured that the ingredients, including more than 20 spices, were thoroughly blended. Another story claims that Miracle Whip was invented in Salem, Illinois, at Max Crosset's Cafe, where it was called "Max Crossett's X-tra Fine Salad Dressing", and that Crosset sold it to Kraft Foods in 1931 for $300 (). While stating that Kraft did buy many salad dressings, Tousey disputes the claim that X-tra Fine was Miracle Whip. Since 1972, Miracle Whip has been sold as Miracel Whip in Germany. It was formerly produced by Kraft Foods, and is now made by Mondelēz International, in Bad Fallingbostel. Ingredients and nutrition Miracle Whip is made from water, soybean oil, high-fructose corn syrup, vinegar, modified corn starch, eggs, salt, natural flavor, mustard flour, potassium sorbate, spice, and dried garlic. The original Miracle Whip is produced using less oil compared to traditional mayonnaise, thus has around half of the calories. Due to added corn syrup it is also sweeter
https://en.wikipedia.org/wiki/Odonata%20Records%20Committee
The Odonata Records Committee is the recognised national body which verifies records of rare vagrant dragonflies in Britain. It was set up in 1998 and consists of six members. Its chairman is Adrian Parr. Decisions on records are published in Atropos and the Journal of the British Dragonfly Society.
https://en.wikipedia.org/wiki/Zebra%20%28medicine%29
Zebra is the American medical slang for arriving at a surprising, often exotic, medical diagnosis when a more commonplace explanation is more likely. It is shorthand for the aphorism coined in the late 1940s by Theodore Woodward, professor at the University of Maryland School of Medicine, who instructed his medical interns: "When you hear hoofbeats behind you, don't expect to see a zebra." (Since zebras are much rarer than horses in the United States, the sound of hoofbeats would almost certainly be from a horse.) By 1960, the aphorism was widely known in medical circles. The saying is a warning against the statistical base rate fallacy, in which the likelihood of something like a disease among the population is not taken into consideration for an individual. Medical novices are predisposed to make rare diagnoses because of (a) the availability heuristic ("events more easily remembered are judged more probable") and (b) the phenomenon first enunciated in Rhetorica ad Herennium (), "the striking and the novel stay longer in the mind." Thus, the aphorism is an important caution against these biases when teaching medical students to weigh medical evidence. Diagnosticians have noted, however, that "zebra"-type diagnoses must nonetheless be held in mind until the evidence conclusively rules them out. This advice, however, itself falls into the base rate fallacy, as the odds of a rare disease in a single patient are indeed affected by the likelihood of a person in the population having that disease. Comparable slang for an obscure and rare diagnosis in medicine is fascinoma. Examples Necrotic skin lesions in the United States are often diagnosed as loxoscelism (recluse spider bites), even in areas where Loxosceles species are rare or not present. This is a matter of concern because such misdiagnoses can delay correct diagnosis and treatment. Usage Ehlers-Danlos syndrome is considered a rare condition and those with it are known as medical zebras. The zebra was adopted across the w
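The base-rate point can be made quantitative with Bayes' theorem (a hedged toy calculation; the prevalence of 1 in 1,000, the 99% sensitivity, and the 5% false-positive rate are invented round numbers used only to illustrate the fallacy, not data about any real disease or test):

```python
prevalence = 0.001       # P(disease): 1 in 1,000 people have the "zebra" disease
sensitivity = 0.99       # P(positive test | disease)
false_positive = 0.05    # P(positive test | no disease)

# Bayes' theorem: P(disease | positive test)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(round(posterior, 3))   # about 0.019: even after a positive test,
                             # the rare diagnosis remains unlikely
```

Despite a highly accurate test, the posterior probability stays under 2%, because the horde of healthy patients generating false positives vastly outnumbers the rare true cases; this is exactly what the hoofbeats aphorism encodes.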
https://en.wikipedia.org/wiki/Myelokathexis
Myelokathexis is a congenital disorder of the white blood cells that causes severe, chronic leukopenia (a reduction of circulating white blood cells) and neutropenia (a reduction of neutrophil granulocytes). The disorder is believed to be inherited in an autosomal dominant manner. Myelokathexis refers to retention (kathexis) of neutrophils in the bone marrow (myelo). The disorder shows prominent neutrophil morphologic abnormalities. Myelokathexis is amongst the diseases treated with bone marrow transplantation and cord blood stem cells. WHIM syndrome is a very rare variant of severe congenital neutropenia that presents with warts, hypogammaglobunemia, infections, and myelokathexis. A gain-of-function mutation resulting in a truncated form of CXCR4 is believed to be its cause. The truncated form of the receptor has a 2-fold increase in G-protein coupled intracellular signalling, and this mutation of the receptor can be identified by DNA sequencing. See also WHIM syndrome
https://en.wikipedia.org/wiki/Missing%20dollar%20riddle
The missing dollar riddle is a famous riddle that involves an informal fallacy. It dates to at least the 1930s, although similar puzzles are much older. Statement Although the wording and specifics can vary, the puzzle runs along these lines: Three guests check into a hotel room. The manager says the bill is $30, so each guest pays $10. Later the manager realizes the bill should only have been $25. To rectify this, he gives the bellhop $5 as five one-dollar bills to return to the guests. On the way to the guests' room to refund the money, the bellhop realizes that he cannot equally divide the five one-dollar bills among the three guests. As the guests are not aware of the total of the revised bill, the bellhop decides to just give each guest $1 back and keep $2 as a tip for himself, and proceeds to do so. As each guest got $1 back, each guest only paid $9, bringing the total paid to $27. The bellhop kept $2, which when added to the $27, comes to $29. So if the guests originally handed over $30, what happened to the remaining $1? There seems to be a discrepancy, as there cannot be two answers ($29 and $30) to the math problem. On the one hand it is true that the $25 in the register, the $3 returned to the guests, and the $2 kept by the bellhop add up to $30, but on the other hand, the $27 paid by the guests and the $2 kept by the bellhop add up to only $29. Solution The misdirection in this riddle is in the second half of the description, where unrelated amounts are added together and the person to whom the riddle is posed assumes those amounts should add up to 30, and is then surprised when they do not; there is, in fact, no reason why the (10 − 1) × 3 + 2 = 29 sum should add up to 30. The exact sum mentioned in the riddle is computed as: SUM = $9 (payment by Guest 1) + $9 (payment by Guest 2) + $9 (payment by Guest 3) + $2 (money in bellhop's pocket) The trick here is to realize that this is not a sum of t
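The correct bookkeeping is easy to verify mechanically (a small sketch of the sums discussed above):

```python
till = 25        # what the hotel finally keeps
refunded = 3     # $1 returned to each of the three guests
bellhop = 2      # kept by the bellhop as a "tip"

guests_paid = 3 * (10 - 1)   # each guest is out $9 in the end: $27 total

# The original $30 decomposes cleanly as till + refund + tip...
assert till + refunded + bellhop == 30

# ...while the guests' net $27 already CONTAINS the bellhop's $2:
assert guests_paid == till + bellhop        # 27 = 25 + 2

# Adding the $2 to the $27 double-counts it; the resulting "29"
# is not the total of anything meaningful.
assert guests_paid + bellhop == 29
```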
https://en.wikipedia.org/wiki/Glazed%20architectural%20terra-cotta
Glazed architectural terra cotta is a ceramic masonry building material used as a decorative skin. It featured widely in the 'terracotta revival' from the 1880s until the 1930s. It was used in the UK, United States, Canada and Australia and is still one of the most common building materials found in U.S. urban environments. It is the glazed version of architectural terracotta; the material in both its glazed and unglazed versions is sturdy and relatively inexpensive, and can be molded into richly ornamented detail. Glazed terra-cotta played a significant role in architectural styles such as the Chicago School and Beaux-Arts architecture.

History

The material, also known in Great Britain as faience and sometimes referred to as "architectural ceramics", was in the USA closely associated with the work of Cass Gilbert, Louis Sullivan, and Daniel H. Burnham, among other architects. Buildings incorporating glazed terra-cotta include the Woolworth Building in New York City and the Wrigley Building in Chicago. Glazed architectural terra-cotta offered a modular, varied and relatively inexpensive approach to wall and floor construction. It was particularly adaptable to vigorous and rich ornamental detailing. It was created by Luca della Robbia (1400–1482), and was used in most of his works. Terra-cotta is an enriched molded clay brick or block. It was usually hollow cast in blocks which were open in the back, with internal stiffeners called webbing, substantially strengthening the hollow blocks with minimal weight increase. The blocks were finished with a glaze, with a clay wash or an aqueous solution of metal salts, before firing. Late 19th century advertising for the material promoted the durable, impervious and adaptable nature of glazed architectural terra-cotta. It could accommodate subtle nuances of modeling, texture and color. Compared with stone, it was easier to handle, quicker to set, and lower in cost. The cost of producing the blocks, when compared to carving stone,
https://en.wikipedia.org/wiki/The%20Eidolon
The Eidolon was one of two games that were part of Lucasfilm Games' second wave in December 1985. The other was Koronis Rift. Both took advantage of the fractal technology developed for Rescue on Fractalus!, further enhancing it. In The Eidolon, Rescue's fractal mountains were turned upside down and became the inside of a cave. In addition to common cassette formats, the Atari and Commodore 64 versions were supplied on a floppy disk. One side had the Atari version, and the other had the Commodore 64 version. The Atari version required an XL/XE with 64kB or more memory.

Plot

The player discovers the Eidolon, a strange 19th-century vehicle, in an abandoned laboratory. As the player investigates this device, he is accidentally transported to another dimension and is trapped in a vast, maze-like cave. The creatures in this cave, sensing the energy emanating from the Eidolon, are woken from a long slumber, and the player soon finds that his only chance of survival lies in this mysterious vehicle and its powerful energy weapon.

Gameplay

The objective of The Eidolon is to successfully navigate through all of the game's levels, defeating the dragon guardian at the end of each level. The player navigates through each maze and collects energy orbs, which come in four different colors (red, yellow, green and blue). Along the way, various enemies wake up and attack the Eidolon, attempting to absorb its energy. Some enemies also fire orbs at the Eidolon, of which all but the red orbs can be absorbed by pressing the space bar at the right time to replenish the Eidolon's limited energy. Green orbs also have the power to transform other enemies into different kinds of enemies. Blue orbs can freeze enemies temporarily, giving the player a momentary advantage in a fight. Each level contains three diamonds, each guarded by an enemy of a specific color. After defeating each of these enemies and collecting the diamonds, the player can proceed to the dragon that guards the exit.
https://en.wikipedia.org/wiki/Gular%20skin
Gular skin (throat skin), in ornithology, is an area of featherless skin on birds that joins the lower mandible of the beak (or bill) to the bird's neck. Other vertebrate taxa may have a comparable anatomical structure that is referred to as either a gular sac, throat sac, vocal sac or gular fold. In birds Gular skin can be very prominent, for example in members of the order Phalacrocoraciformes as well as in pelicans (which likely share a common ancestor). In many species, the gular skin forms a flap, or gular pouch, which is generally used to store fish and other prey while hunting. In cormorants, the gular skin is often colored, contrasting with the otherwise plain black or black-and-white appearance of the bird. This presumably serves some function in social signalling, since the colors become more pronounced in breeding adults. In frigatebirds, the gular skin (or gular sac or throat sac) is used dramatically. During courtship display, the male forces air into the sac, causing it to inflate over a period of 20 minutes into a startling huge red balloon. Because cormorants are closer relatives of gannets and anhingas (which have no prominent gular pouch) than of frigatebirds or pelicans, it can be seen that the gular pouch is either plesiomorphic or was acquired by parallel evolution. In other vertebrates The orangutan is the only known great ape to have this characteristic, where it is only present in males. In addition, the walrus and some species of gibbon, such as the siamang, have a throat sac. Many amphibians will inflate their vocal sac to create certain vocalizations in order to communicate, scare off rivals (to proclaim territory or dominance), and to locate and attract a mate. The gular sac in this instance amplifies their voice to be heard louder and seemingly closer. Some species of lizard also have a gular fold and consequently, gular scales. The theropod dinosaur Pelecanimimus, which lived in the early Cretaceous Period 130 million years ago
https://en.wikipedia.org/wiki/Koronis%20Rift
Koronis Rift is a 1985 computer game from Lucasfilm Games. It was produced and designed by Noah Falstein. Originally developed for the Atari 8-bit family and the Commodore 64, Koronis Rift was ported to the Amstrad CPC, Apple II, MSX2, Tandy Color Computer 3, and ZX Spectrum. The Atari and C64 version shipped on a flippy disk, with one version of the game on each side. A cassette version was also released for the Commodore 64. The Atari version required computers with the GTIA chip installed in order to display properly. Koronis Rift was one of two games in Lucasfilm Games' second wave (December 1985). The other was The Eidolon. Both enhanced the fractal technology developed for Rescue on Fractalus!. In Koronis Rift, the Atari 8-bit family's additional colors (over those of the Commodore 64) allowed the programmers to gradually fade in the background rather than it suddenly popping in as in Rescue, an early example of depth cueing in a computer game. Gameplay The player controls a surface rover vehicle to enter several "rifts" on an alien planet which are effectively fractal mazes. A lost civilisation known as the Ancients has left strange machinery, so-called "hulks", within these rifts which are guarded by armed flying saucers of different design and color. Depending on their respective color, shields and gunshots of both the rover and the saucers are of varying effectiveness against each other; part of the game is figuring out which shield and weapon modules work best where. By means of a drone robot, the rover can retrieve modules with various functions (which are not immediately obvious) from nearby hulks. It can only be deployed when all attacking Guardian Saucers have been destroyed. The modules can then be installed in the rover, analyzed aboard the player's space ship, or sold; the rover can carry up to six different modules at a time which can be activated and de-activated as the player sees fit. A large variety of modules is available: Different we
https://en.wikipedia.org/wiki/Karel%20deLeeuw
Karel deLeeuw, or de Leeuw ( – ), was a mathematics professor at Stanford University, specializing in harmonic analysis and functional analysis. Life and career Born in Chicago, Illinois, he attended the Illinois Institute of Technology and the University of Chicago, earning a B.S. degree in 1950. He stayed at Chicago to earn an M.S. degree in mathematics in 1951, then went to Princeton University, where he obtained a Ph.D. degree in 1954. His thesis, titled "The relative cohomology structure of formations", was written under the direction of Emil Artin. After first teaching mathematics at Dartmouth College and the University of Wisconsin–Madison, he joined the Stanford University faculty in 1957, becoming a full professor in 1966. During sabbaticals and leaves he also spent time at the Institute for Advanced Study and at Churchill College, Cambridge (where he was a Fulbright Fellow). He was also a Member-at-Large of the Council of the American Mathematical Society. Death and legacy DeLeeuw was murdered by Theodore Streleski, a Stanford doctoral student for 19 years, whom he briefly advised. DeLeeuw's widow Sita deLeeuw was critical of media coverage of the crime, saying, "The media, in their eagerness to give Streleski a forum, become themselves accomplices in the murder—giving Streleski what he wanted in the first place." A memorial lecture series was established in 1978 by the Stanford Department of Mathematics to honor deLeeuw's memory. Selected publications
https://en.wikipedia.org/wiki/Rietveld%20refinement
Rietveld refinement is a technique described by Hugo Rietveld for use in the characterisation of crystalline materials. The neutron and X-ray diffraction of powder samples results in a pattern characterised by reflections (peaks in intensity) at certain positions. The height, width and position of these reflections can be used to determine many aspects of the material's structure. The Rietveld method uses a least squares approach to refine a theoretical line profile until it matches the measured profile. The introduction of this technique was a significant step forward in the diffraction analysis of powder samples as, unlike other techniques at that time, it was able to deal reliably with strongly overlapping reflections. The method was first implemented in 1967, and reported in 1969 for the diffraction of monochromatic neutrons where the reflection-position is reported in terms of the Bragg angle, 2θ. This terminology will be used here, although the technique is equally applicable to alternative scales such as x-ray energy or neutron time-of-flight. The only wavelength- and technique-independent scale is in reciprocal space units or momentum transfer Q, which is historically rarely used in powder diffraction but very common in all other diffraction and optics techniques. The relation is Q = 4π sin θ / λ, where λ is the wavelength and 2θ is the scattering angle.

Introduction

The most common powder X-ray diffraction (XRD) refinement technique used today is based on the method proposed in the 1960s by Hugo Rietveld. The Rietveld method fits a calculated profile (including all structural and instrumental parameters) to experimental data. It employs the non-linear least squares method, and requires a reasonable initial approximation of many free parameters, including peak shape, unit cell dimensions and coordinates of all atoms in the crystal structure. Other parameters can be guessed while still being reasonably refined. In this way one can refine the crystal structure of a powder material from PXRD data. The successful outcome of the refi
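The conversion between the Bragg-angle scale and the momentum-transfer scale Q = 4π sin θ / λ is easy to compute. The sketch below assumes Cu Kα radiation (λ ≈ 1.5406 Å) and a reflection at 2θ = 30°; these example values are illustrative, not taken from the text:

```python
import math

def q_from_two_theta(two_theta_deg, wavelength):
    """Momentum transfer Q (inverse angstroms) from the scattering angle
    2-theta (degrees) and wavelength lambda (angstroms):
    Q = 4*pi*sin(theta)/lambda."""
    theta = math.radians(two_theta_deg) / 2.0
    return 4.0 * math.pi * math.sin(theta) / wavelength

# Example: Cu K-alpha radiation, reflection observed at 2-theta = 30 degrees.
q = q_from_two_theta(30.0, 1.5406)   # about 2.11 inverse angstroms
d = 2.0 * math.pi / q                # Bragg d-spacing, about 2.98 angstroms
```

Since Q depends only on d-spacing, patterns measured with different wavelengths or techniques coincide on this scale.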
https://en.wikipedia.org/wiki/223%20%28number%29
223 (two hundred [and] twenty-three) is the natural number following 222 and preceding 224. In mathematics 223 is a prime number. Among the 720 permutations of the numbers from 1 to 6, exactly 223 of them have the property that at least one of the numbers is fixed in place by the permutation and the numbers less than it and greater than it are separately permuted among themselves. In connection with Waring's problem, 223 requires the maximum number of terms (37 terms) when expressed as a sum of positive fifth powers, and is the only number that requires that many terms. In other fields .223 (disambiguation), the caliber of several firearm cartridges The years 223 and 223 BC The number of synodic months of a Saros
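The Waring-problem claim can be checked with a short dynamic program over fifth powers (a coin-change style computation, added here as an illustration):

```python
def min_fifth_powers(limit):
    """best[n] = fewest positive fifth powers summing to n (DP over 'coins')."""
    INF = float('inf')
    best = [0] + [INF] * limit
    powers = [k ** 5 for k in range(1, limit) if k ** 5 <= limit]
    for p in powers:
        for n in range(p, limit + 1):
            if best[n - p] + 1 < best[n]:
                best[n] = best[n - p] + 1
    return best

best = min_fifth_powers(300)
# 223 = 6*(2**5) + 31*(1**5) needs 37 fifth powers, the most of any integer:
assert best[223] == 37
assert 6 * 2 ** 5 + 31 * 1 ** 5 == 223

# 223 is prime: trial division up to 14 suffices, since 15**2 = 225 > 223.
assert all(223 % d != 0 for d in range(2, 15))
```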
https://en.wikipedia.org/wiki/227%20%28number%29
227 (two hundred [and] twenty-seven) is the natural number between 226 and 228. It is also a prime number. In mathematics 227 is a twin prime and the start of a prime triplet (with 229 and 233). It is a safe prime, as dividing it by two and rounding down produces the Sophie Germain prime 113. It is also a regular prime, a Pillai prime, a Stern prime, and a Ramanujan prime. 227 and 229 form the first twin prime pair for which neither is a cluster prime. The 227th harmonic number is the first to exceed six. There are 227 different connected graphs with eight edges, and 227 independent sets in a 3 × 4 grid graph.
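Several of these claims are small enough to verify directly; the sketch below checks the harmonic-number fact with exact rational arithmetic and the safe/Sophie Germain prime relationship with trial division:

```python
from fractions import Fraction

# The first harmonic number H_n = 1 + 1/2 + ... + 1/n to exceed six:
h, n = Fraction(0), 0
while h <= 6:
    n += 1
    h += Fraction(1, n)
assert n == 227

def is_prime(m):
    """Trial division; adequate for numbers this small."""
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

# 227 is a safe prime: (227 - 1) // 2 = 113 is the Sophie Germain prime.
assert is_prime(227) and is_prime(113) and is_prime(2 * 113 + 1)

# 227 and 229 are twin primes, starting the prime triplet (227, 229, 233):
assert all(is_prime(p) for p in (227, 229, 233))
```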
https://en.wikipedia.org/wiki/Binary%20combinatory%20logic
Binary combinatory logic (BCL) is a computer programming language that provides a complete formulation of combinatory logic using only the symbols 0 and 1. Using the S and K combinators, complex boolean algebra functions can be made. BCL has applications in the theory of program-size complexity (Kolmogorov complexity).

Definition

S-K Basis

Utilizing the K and S combinators of combinatory logic, logical functions can be represented as functions of combinators:

Syntax

Backus–Naur form: <term> ::= 00 | 01 | 1 <term> <term>

Semantics

The denotational semantics of BCL may be specified as follows:

[ 00 ] == K
[ 01 ] == S
[ 1 <term1> <term2> ] == ( [<term1>] [<term2>] )

where "[...]" abbreviates "the meaning of ...". Here K and S are the KS-basis combinators, and ( ) is the application operation, of combinatory logic. (The prefix 1 corresponds to a left parenthesis, right parentheses being unnecessary for disambiguation.) Thus there are four equivalent formulations of BCL, depending on the manner of encoding the triplet (K, S, left parenthesis). These are (00, 01, 1) (as in the present version), (01, 00, 1), (10, 11, 0), and (11, 10, 0). The operational semantics of BCL, apart from eta-reduction (which is not required for Turing completeness), may be very compactly specified by the following rewriting rules for subterms of a given term, parsing from the left:

1100xy → x
11101xyz → 11xz1yz

where x, y, and z are arbitrary subterms. (Note, for example, that because parsing is from the left, 10000 is not a subterm of 11010000.) Because BCL can replicate computational models such as Turing machines and cellular automata, it is Turing complete.

See also

Iota and Jot
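The rewrite rules above can be implemented in a small interpreter sketch (assuming well-formed input strings). Rather than rewriting the bitstring directly — which risks matching patterns that are not subterm-aligned, as the parsing note warns — this version parses to a tree, reduces K- and S-redexes, and unparses:

```python
def parse(s, i=0):
    """Parse the BCL subterm starting at s[i]; return (tree, next_index)."""
    if s[i] == '1':                      # 1 <term> <term> : application
        f, i = parse(s, i + 1)
        a, i = parse(s, i)
        return (f, a), i
    return ('K' if s[i + 1] == '0' else 'S'), i + 2   # leaf: 00 or 01

def unparse(t):
    if t == 'K':
        return '00'
    if t == 'S':
        return '01'
    return '1' + unparse(t[0]) + unparse(t[1])

def step(t):
    """One leftmost-outermost reduction step; returns (term, changed)."""
    if isinstance(t, tuple):
        f, a = t
        if isinstance(f, tuple) and f[0] == 'K':           # K x y -> x
            return f[1], True
        if (isinstance(f, tuple) and isinstance(f[0], tuple)
                and f[0][0] == 'S'):                       # S x y z -> x z (y z)
            x, y, z = f[0][1], f[1], a
            return ((x, z), (y, z)), True
        f2, ch = step(f)
        if ch:
            return (f2, a), True
        a2, ch = step(a)
        if ch:
            return (f, a2), True
    return t, False

def normalize(s, max_steps=1000):
    t, _ = parse(s)
    for _ in range(max_steps):           # bounded: some terms never terminate
        t, changed = step(t)
        if not changed:
            break
    return unparse(t)

# ((S K) K) is the identity combinator; applied to K (00) it reduces to K:
assert normalize('1' + '11010000' + '00') == '00'
# (K S) K reduces to S:
assert normalize('11000100') == '01'
```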
https://en.wikipedia.org/wiki/Grand%20Challenges
Grand Challenges are difficult but important problems set by various institutions or professions to encourage solutions or advocate for the application of government or philanthropic funds, especially in the most highly developed economies. Grand challenges are more than ordinary research questions or priorities: they are end results or outcomes that are global in scale; very difficult to accomplish, yet offer hope of being ultimately tractable; demand an extensive number of research projects across many technical and non-technical disciplines; and are accompanied by well-defined metrics. Lastly, Grand challenges "require coordinated, collaborative, and collective efforts" and must capture "the popular imagination, and thus political support."

In engineering

Grand Challenges: A Strategic Plan for Bridge Engineering, initiative sponsored by the Highway Subcommittee on Bridges and Structures (HSCOBS) of the American Association of State Highway and Transportation Officials (AASHTO), started in 2000.
Grand Challenges for Engineering, initiative sponsored by the National Academy of Engineering (NAE) for engineering problems in the next century.
Global Grand Challenges, summit meetings sponsored by The National Academy of Engineering of the United States, The Royal Academy of Engineering of the United Kingdom, and the Chinese Academy of Engineering.
ASCE Grand Challenge for Civil Engineering, initiative by the American Society of Civil Engineers (ASCE) to enhance significantly the performance and life-cycle value of infrastructure by 2025.
Grand Challenges for Disaster Reduction, initiative sponsored by the National Science and Technology Council, Committee on Environment and Natural Resources.

In government and military

DARPA Grand Challenge, initiative to develop technologies needed to create fully autonomous ground vehicles, capable of completing a substantial off-road course within a limited time.
DARPA Urban Challenge, part of DARPA's Grand Chall
https://en.wikipedia.org/wiki/Organic%20peroxides
In organic chemistry, organic peroxides are organic compounds containing the peroxide functional group (). If R′ is hydrogen, the compounds are called hydroperoxides, which are discussed in that article. The O−O bond of peroxides easily breaks, producing free radicals of the form (the dot represents an unpaired electron). Thus, organic peroxides are useful as initiators for some types of polymerization, such as the acrylic, unsaturated polyester, and vinyl ester resins used in glass-reinforced plastics. MEKP and benzoyl peroxide are commonly used for this purpose. However, the same property also means that organic peroxides can combust explosively. Organic peroxides, like their inorganic counterparts, are often powerful bleaching agents.

Types of organic peroxides

Organic peroxides are classified (i) by the presence or absence of a hydroxyl (-OH) terminus and (ii) by the presence of alkyl vs acyl substituents. One gap in the classes of organic peroxides is diphenyl peroxide. Quantum chemical calculations predict that it undergoes a nearly barrierless reaction akin to the benzidine rearrangement.

Properties

The O−O bond length in peroxides is about 1.45 Å, and the R−O−O angles (R = H, C) are about 110° (water-like). Characteristically, the C−O−O−R (R = H, C) dihedral angles are about 120°. The O−O bond is relatively weak, with a bond dissociation energy of , less than half the strengths of C−C, C−H, and C−O bonds.

Biology

Peroxides play important roles in biology. Many aspects of biodegradation or aging are attributed to the formation and decay of peroxides formed from oxygen in air. Countering these effects is an array of biological and artificial antioxidants. Hundreds of peroxides and hydroperoxides are known, being derived from fatty acids, steroids, and terpenes. Fatty acids form a number of 1,2-dioxenes. The biosynthesis of prostaglandins proceeds via an endoperoxide, a class of bicyclic peroxides. In fireflies, oxidation of luciferins, which is cata
https://en.wikipedia.org/wiki/Basilic%20vein
The basilic vein is a large superficial vein of the upper limb that helps drain parts of the hand and forearm. It originates on the medial (ulnar) side of the dorsal venous network of the hand and travels up the base of the forearm, where its course is generally visible through the skin as it travels in the subcutaneous fat and fascia lying superficial to the muscles. The basilic vein terminates by uniting with the brachial veins to form the axillary vein. Anatomy Course As it ascends the medial side of the biceps in the arm proper (between the elbow and shoulder), the basilic vein normally perforates the brachial fascia (deep fascia) superior to the medial epicondyle, or even as high as mid-arm. Tributaries and anastomoses Near the region anterior to the cubital fossa (in the bend of the elbow joint), the basilic vein usually communicates with the cephalic vein (the other large superficial vein of the upper extremity) via the median cubital vein. The layout of superficial veins in the forearm is highly variable from person to person, and there is a profuse network of unnamed superficial veins that the basilic vein communicates with. Around the inferior border of the teres major muscle and just proximal to the basilic vein's termination, the anterior and posterior circumflex humeral veins drain into it. Clinical significance Venipuncture Along with other superficial veins in the forearm, the basilic vein is an acceptable site for venipuncture. Nevertheless, IV nurses sometimes refer to the basilic vein as the "virgin vein", since with the arm typically supinated during phlebotomy the basilic vein below the elbow becomes awkward to access, and is therefore infrequently used. Venous grafts Vascular surgeons sometimes utilize the basilic vein to create an AV (arteriovenous) fistula or AV graft for hemodialysis access in patients with kidney failure. Additional images See also Cephalic vein Median cubital vein External links Illustration
https://en.wikipedia.org/wiki/Coolgardie%20safe
The Coolgardie safe is a low-tech food storage unit, using evaporative cooling to prolong the life of whatever edibles are kept in it. It applies the basic principle of heat transfer which occurs during evaporation of water (see latent heat and heat of evaporation). It was named after the place where it was invented – the small mining town of Coolgardie, Western Australia, near Kalgoorlie-Boulder. History Coolgardie was the site of a gold rush in the early 1890s, before the Kalgoorlie-Boulder gold rush. For the prospectors who had rushed here to find their fortune, one challenge was to extend the life of their perishable foods – hence the invention of the Coolgardie safe. The safe was invented in the late 1890s by Arthur Patrick McCormick, who used the same principle as explorers and travelers in the Outback used to cool their canvas water bags: when the canvas bag is wet the fibers expand and it holds water. Some water seeps out and evaporates. It is most effective when air continually moves past it, such as when in a moving vehicle or when exposed to a breeze. This technology is commonly thought to have been adopted by explorer and scientist Thomas Mitchell, who had observed the way some Indigenous Australians used kangaroo skins to carry water. Principles of operation The Coolgardie safe was made of wire mesh, hessian, a wooden frame and had a hot dip galvanised iron tray on top. The galvanised iron tray was filled with water. The hessian bag was hung over the side with one of the ends in the tray to soak up the water. Gradually the hessian bag, acting as a wick, would draw water from the tray by the process of capillary action. When a breeze came it would pass through the wet bag and evaporate the water. This would cool the air inside the safe, and in turn cool the food stored in the safe. This cooling is due to the water in the hessian needing energy to change state and evaporate. This energy is taken from the interior of the safe (metal mesh), thus m
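The size of the cooling effect can be estimated from the latent heat of evaporation. The figures below are assumptions for illustration (latent heat of vaporization of water near room temperature of roughly 2.45 MJ/kg, food approximated as mostly water with a specific heat of about 4186 J/(kg·K)), and heat leaking back in from outside is ignored:

```python
# Idealized estimate of the cooling delivered by evaporation from the hessian.
latent_heat = 2.45e6          # J absorbed per kg of water evaporated (approx.)
specific_heat_food = 4186.0   # J/(kg*K), treating food as mostly water

evaporated = 0.100            # kg of water wicked up the hessian and evaporated
energy = evaporated * latent_heat            # ~245 kJ drawn from the safe

food_mass = 5.0               # kg of food inside the safe
delta_T = energy / (food_mass * specific_heat_food)
# In this idealization, evaporating 100 g of water could cool 5 kg of
# food by roughly 12 kelvin; in practice, heat leakage limits the drop.
```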
https://en.wikipedia.org/wiki/Eisbein
Eisbein (literally: 'ice bone') is a German culinary dish of ham hock, usually cured and slightly boiled. The German-language name has associations with the practice of using a pig's leg-bone for ice skating. In southern parts of Germany, the common preparation is known as Schweinshaxe, and it is usually roasted. The Polish dish or and the Swedish dish fläsklägg med rotmos are very similar, alternatively grilled on a barbecue; other similar dishes include the Swiss and the Austrian Stelze. Eisbein is usually sold already cured and sometimes smoked, and then used in simple hearty dishes. Numerous regional variations exist, for example in Berlin it is served with pease pudding. In Franconia, Eisbein is commonly served with mashed potatoes or sauerkraut, in Austria with horseradish and mustard instead. See also – also includes ham hock dishes
https://en.wikipedia.org/wiki/Covering%20system
In mathematics, a covering system (also called a complete residue system) is a collection of finitely many residue classes whose union contains every integer. Examples and definitions The notion of covering system was introduced by Paul Erdős in the early 1930s. The following are examples of covering systems: A covering system is called disjoint (or exact) if no two members overlap. A covering system is called distinct (or incongruent) if all the moduli are different (and bigger than 1). Hough and Nielsen (2019) proved that any distinct covering system has a modulus that is divisible by either 2 or 3. A covering system is called irredundant (or minimal) if all the residue classes are required to cover the integers. The first two examples are disjoint. The third example is distinct. A system (i.e., an unordered multi-set) of finitely many residue classes is called an -cover if it covers every integer at least times, and an exact -cover if it covers each integer exactly times. It is known that for each there are exact -covers which cannot be written as a union of two covers. For example, is an exact 2-cover which is not a union of two covers. The first example above is an exact 1-cover (also called an exact cover). Another exact cover in common use is that of odd and even numbers, or This is just one case of the following fact: For every positive integer modulus , there is an exact cover: Mirsky–Newman theorem The Mirsky–Newman theorem, a special case of the Herzog–Schönheim conjecture, states that there is no disjoint distinct covering system. This result was conjectured in 1950 by Paul Erdős and proved soon thereafter by Leon Mirsky and Donald J. Newman. However, Mirsky and Newman never published their proof. The same proof was also found independently by Harold Davenport and Richard Rado. Primefree sequences Covering systems can be used to find primefree sequences, sequences of integers satisfying the same recurrence relation as the
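Whether a finite set of residue classes covers the integers can be checked by testing a single period, since coverage is periodic with the least common multiple of the moduli. The system below is a classic distinct covering system (a standard textbook example, not quoted from the text above):

```python
from functools import reduce
from math import lcm

def is_covering(system):
    """A list of (residue, modulus) classes covers all integers iff it
    covers one full period 0 .. lcm(moduli) - 1."""
    period = reduce(lcm, (m for _, m in system))
    return all(any(n % m == r % m for r, m in system) for n in range(period))

# A classic distinct covering system:
system = [(0, 2), (0, 3), (1, 4), (5, 6), (7, 12)]
assert is_covering(system)

# Dropping a class breaks the cover (7 mod 12 is then missed):
assert not is_covering([(0, 2), (0, 3), (1, 4), (5, 6)])
```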
https://en.wikipedia.org/wiki/Mueller%20calculus
Mueller calculus is a matrix method for manipulating Stokes vectors, which represent the polarization of light. It was developed in 1943 by Hans Mueller. In this technique, the effect of a particular optical element is represented by a Mueller matrix—a 4×4 matrix that is an overlapping generalization of the Jones matrix. Introduction Disregarding coherent wave superposition, any fully polarized, partially polarized, or unpolarized state of light can be represented by a Stokes vector ; and any optical element can be represented by a Mueller matrix (M). If a beam of light is initially in the state and then passes through an optical element M and comes out in a state , then it is written If a beam of light passes through optical element M1 followed by M2 then M3 it is written given that matrix multiplication is associative it can be written Matrix multiplication is not commutative, so in general Mueller vs. Jones calculi With disregard for coherence, light which is unpolarized or partially polarized must be treated using the Mueller calculus, while fully polarized light can be treated with either the Mueller calculus or the simpler Jones calculus. Many problems involving coherent light (such as from a laser) must be treated with Jones calculus, however, because it works directly with the electric field of the light rather than with its intensity or power, and thereby retains information about the phase of the waves. More specifically, the following can be said about Mueller matrices and Jones matrices: Stokes vectors and Mueller matrices operate on intensities and their differences, i.e. incoherent superpositions of light; they are not adequate to describe either interference or diffraction effects. (...) Any Jones matrix [J] can be transformed into the corresponding Mueller–Jones matrix, M, using the following relation: , where * indicates the complex conjugate [sic], [A is:] and ⊗ is the tensor (Kronecker) product. (...) While the Jones matrix has eight i
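The calculus amounts to multiplying 4×4 matrices onto 4-component Stokes vectors, applied right to left. The sketch below uses the standard textbook Mueller matrix for an ideal linear polarizer (an assumption of this example, not quoted from the text) to reproduce two familiar results — an ideal polarizer halves unpolarized intensity, and crossed polarizers extinguish the beam:

```python
import math

def matvec(M, v):
    """Apply a 4x4 Mueller matrix to a 4-component Stokes vector."""
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

def linear_polarizer(theta=0.0):
    """Mueller matrix of an ideal linear polarizer whose transmission
    axis makes angle theta (radians) with the horizontal."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[0.5 * v for v in row] for row in
            [[1,     c,     s, 0],
             [c, c * c, c * s, 0],
             [s, c * s, s * s, 0],
             [0,     0,     0, 0]]]

unpolarized = [1.0, 0.0, 0.0, 0.0]   # unit-intensity unpolarized light

# An ideal polarizer transmits half the intensity of unpolarized light:
out = matvec(linear_polarizer(0.0), unpolarized)
assert abs(out[0] - 0.5) < 1e-12

# Crossed polarizers extinguish the beam; note the right-to-left order,
# as in S' = M2 M1 S:
crossed = matvec(linear_polarizer(math.pi / 2),
                 matvec(linear_polarizer(0.0), unpolarized))
assert abs(crossed[0]) < 1e-9
```

Swapping the two matrices here happens to give the same intensity, but in general Mueller matrices do not commute, which is why the order of optical elements matters.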
https://en.wikipedia.org/wiki/Closure%20with%20a%20twist
Closure with a twist is a property of subsets of an algebraic structure. A subset of an algebraic structure is said to exhibit closure with a twist if for every two elements there exists an automorphism of and an element such that where "" is notation for an operation on preserved by . Two examples of algebraic structures which exhibit closure with a twist are the cwatset and the generalized cwatset, or GC-set. Cwatset In mathematics, a cwatset is a set of bitstrings, all of the same length, which is closed with a twist. If each string in a cwatset, C, say, is of length n, then C will be a subset of . Thus, two strings in C are added by adding the bits in the strings modulo 2 (that is, addition without carry, or exclusive disjunction). The symmetric group on n letters, , acts on by bit permutation: where is an element of and p is an element of . Closure with a twist now means that for each element c in C, there exists some permutation such that, when you add c to an arbitrary element e in the cwatset and then apply the permutation, the result will also be an element of C. That is, denoting addition without carry by , C will be a cwatset if and only if This condition can also be written as Examples All subgroups of — that is, nonempty subsets of which are closed under addition-without-carry — are trivially cwatsets, since we can choose each permutation pc to be the identity permutation. An example of a cwatset which is not a group is F = {000,110,101}. To demonstrate that F is a cwatset, observe that F + 000 = F. F + 110 = {110,000,011}, which is F with the first two bits of each string transposed. F + 101 = {101,011,000}, which is the same as F after exchanging the first and third bits in each string. A matrix representation of a cwatset is formed by writing its words as the rows of a 0-1 matrix. For instance a matrix representation of F is given by To see that F is a cwatset using this notation, note that where and denote permut
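The cwatset condition for the example F can be verified exhaustively: for each c in C there must be some bit permutation p carrying the translated set C + c back onto C. The helper names below are this example's own (a brute-force sketch, feasible only for small n):

```python
from itertools import permutations

def xor(a, b):
    """Addition without carry (bitwise XOR) on equal-length bitstrings."""
    return ''.join('1' if x != y else '0' for x, y in zip(a, b))

def is_cwatset(C):
    """C is a cwatset if for every c in C some permutation of bit
    positions maps the translated set C + c back onto C."""
    C = set(C)
    n = len(next(iter(C)))
    for c in C:
        translated = {xor(e, c) for e in C}
        if not any({''.join(w[i] for i in p) for w in translated} == C
                   for p in permutations(range(n))):
            return False
    return True

F = {'000', '110', '101'}
assert is_cwatset(F)   # the non-group example from the text

# A set that fails: translating {000, 100, 010} by 100 gives {000, 100, 110},
# whose weight multiset {0, 1, 2} no bit permutation can match to {0, 1, 1}.
assert not is_cwatset({'000', '100', '010'})
```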
https://en.wikipedia.org/wiki/Leaf%20area%20index
Leaf area index (LAI) is a dimensionless quantity that characterizes plant canopies. It is defined as the one-sided green leaf area per unit ground surface area (LAI = leaf area / ground area, m2 / m2) in broadleaf canopies. In conifers, three definitions for LAI have been used:

Half of the total needle surface area per unit ground surface area
Projected (or one-sided, in accordance with the definition for broadleaf canopies) needle area per unit ground area
Total needle surface area per unit ground area

The definition “half the total leaf area” pertains to biological processes, such as gas exchange, whereas the definition “projected leaf area” was disregarded because the projection of a given area in one direction may differ from that in another direction when leaves are not flat but thick or 3D-shaped. Moreover, “ground surface area” is specifically defined as “horizontal ground surface area” to clarify LAI on a sloping surface. The definition “half the total leaf area per unit horizontal ground surface area” is suitable for all kinds of leaves and flat or sloping surfaces. A leaf area index (LAI) expresses the leaf area per unit ground or trunk surface area of a plant and is commonly used as an indicator of the growth rate of a plant. LAI is a complex variable that relates not only to the size of the canopy, but also to its density, and the angle at which leaves are oriented in relation to one another and to light sources. In addition, LAI varies with seasonal changes in plant activity, and is typically highest in the spring when new leaves are being produced and lowest in late summer or early fall when leaves senesce (and may be shed). The study of LAI is called "phyllometry."

Interpretation and application

LAI is a measure of the total area of leaves per unit ground area and is directly related to the amount of light that can be intercepted by plants. It is an important variable used to predict photosynthetic primary production and evapotranspiration, and as a reference tool f
https://en.wikipedia.org/wiki/Differential-linear%20attack
Introduced by Martin Hellman and Susan K. Langford in 1994, the differential-linear attack is a mix of both linear cryptanalysis and differential cryptanalysis. The attack utilises a differential characteristic over part of the cipher with a probability of 1 (for a few rounds—this probability would be much lower for the whole cipher). The rounds immediately following the differential characteristic have a linear approximation defined, and we expect that for each chosen plaintext pair, the probability of the linear approximation holding for one chosen plaintext but not the other will be lower for the correct key. Hellman and Langford have shown that this attack can recover 10 key bits of an 8-round DES with only 512 chosen plaintexts and an 80% chance of success. The attack was generalised by Eli Biham et al. to use differential characteristics with probability less than 1. Besides DES, it has been applied to FEAL, IDEA, Serpent, Camellia, and even the stream cipher Phelix.
https://en.wikipedia.org/wiki/Anti-diagonal%20matrix
In mathematics, an anti-diagonal matrix is a square matrix where all the entries are zero except those on the diagonal going from the lower left corner to the upper right corner (↗), known as the anti-diagonal (sometimes Harrison diagonal, secondary diagonal, trailing diagonal, minor diagonal, off diagonal or bad diagonal). Formal definition An n-by-n matrix A = (a_ij) is an anti-diagonal matrix if the (i, j) element a_ij is zero whenever i + j ≠ n + 1. Example An example of an anti-diagonal matrix is the 3-by-3 matrix [[0, 0, 1], [0, 2, 0], [3, 0, 0]]. Properties All anti-diagonal matrices are also persymmetric. The product of two anti-diagonal matrices is a diagonal matrix. Furthermore, the product of an anti-diagonal matrix with a diagonal matrix is anti-diagonal, as is the product of a diagonal matrix with an anti-diagonal matrix. An anti-diagonal matrix is invertible if and only if the entries on the diagonal from the lower left corner to the upper right corner are nonzero. The inverse of any invertible anti-diagonal matrix is also anti-diagonal, as can be seen from the paragraph above. The determinant of an anti-diagonal matrix has absolute value given by the product of the entries on the diagonal from the lower left corner to the upper right corner. However, the sign of this determinant will vary because the one nonzero signed elementary product from an anti-diagonal matrix will have a different sign depending on whether the permutation related to it is odd or even: More precisely, the sign of the elementary product needed to calculate the determinant of an anti-diagonal matrix is related to whether the corresponding triangular number is even or odd. This is because the number of inversions in the permutation for the only nonzero signed elementary product of any n × n anti-diagonal matrix is the (n − 1)th triangular number, n(n − 1)/2. See also Main diagonal, all off-diagonal elements are zero in a diagonal matrix. Exchange matrix, an anti-diagonal matrix with 1s along the counter-diagonal.
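The closure property above (the product of two anti-diagonal matrices is diagonal) is easy to verify numerically; a small sketch in plain Python, with helper names of my own choosing:

```python
def antidiag(entries):
    """Square matrix with the given entries on the anti-diagonal
    (top-right to bottom-left), zeros elsewhere."""
    n = len(entries)
    return [[entries[i] if i + j == n - 1 else 0 for j in range(n)]
            for i in range(n)]

def matmul(a, b):
    """Plain n x n matrix product."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = antidiag([1, 2, 3])   # [[0, 0, 1], [0, 2, 0], [3, 0, 0]]
B = antidiag([4, 5, 6])
P = matmul(A, B)

# Off-diagonal entries vanish: the product of two anti-diagonal
# matrices is diagonal, as stated above.
assert all(P[i][j] == 0 for i in range(3) for j in range(3) if i != j)
print([P[i][i] for i in range(3)])  # [6, 10, 12]
```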
https://en.wikipedia.org/wiki/Console%20%28computer%20games%29
A console is a command line interface where the personal computer game's settings and variables can be edited while the game is running. Consoles also usually display a log of warnings, errors, and other messages produced during the program's execution. Typically it can be toggled on or off and appears over the normal game view. The console is normally accessed by pressing the backtick key ` (frequently also called the ~ key; normally located below the ESC key) on QWERTY keyboards or the ² on AZERTY keyboards, and is usually hidden by default. In most cases it cannot be accessed unless enabled by either specifying a command-line argument when launching the game or by changing one of the game's configuration files. History A classic console is a box that scrolls down from the top of the screen. This style was made popular with Quake (1996). There are other forms of console: Quake III Arena has one or two consoles, depending on the platform the game was released for. The first is the internal console, which exists on all platforms. The second is an external console, created via the Windows API. The console printing function writes to both; likewise, both consoles accept text input. The external console is used for dedicated servers and to log startup of the engine. Finally, the external console is also used to show errors and display debugging output should the game crash. Dark Engine's console shows output up to 4 lines in length and is accessed by pressing 3 particular keys at the same time. Lithtech's console has no output and is used mainly for entering cheat codes. ARK: Survival Evolved, an open-world action-adventure survival game by Studio Wildcard, requires many of its console commands to be preceded by the 'Enable Cheats' command (and, in some cases, the 'Enable Cheats for Player' command); these must be issued before any other command will work. A single-line variant can be seen in games from The Sims serie
https://en.wikipedia.org/wiki/Trip-a-Tron
Trip-a-Tron is a light synthesizer written by Jeff Minter and published through his Llamasoft company in 1988. It was originally written for the Atari ST and later ported to the Amiga in 1990 by Andy Fowler. Description Trip-A-Tron was released as shareware, but also came in a commercial package with a 3-ring-bound manual and 2 game disks. The trial version contained no limitations, but registration was necessary to obtain the manual, which in turn was necessary to learn the script language ("KML" - supposedly "Keyboard Macro Language" and only coincidentally the phonetic equivalent of "camel") which drove the system. The software has a usable but quirky user interface, filled with in-jokes and references to Llamasoft mascots. For example, the button to exit from the MIDI editor is labelled "naff off", while the button to exit the file display is labelled with a sheep saying "Baa!"; the waveform editor colour cycles the words "Dead cool" above the waveform display, and the event sequencer displays an icon of a camel smoking a cigarette; and the image manipulation tool has a series of icons used to indicate how long the current operation is going to take: "Make the tea", "Have a fag", "Go to bed", "Go to sleep", "Go on holiday", "Go to Peru for six months", and "RIP"; and the scripting language command to set the length of drawn lines is "LLAMA". (The manual states: "I could have called the command LINELENGTH I suppose, but I like llamas so what the heck".) The manual is also written in a similar light, conversational style, but has been praised for nonetheless achieving a high degree of technical clarity. In spite of this the software is extremely usable and was recommended as one of the best light synthesizers available at the time. See also Psychedelia (light synthesizer) Virtual Light Machine Neon (light synthesizer)
https://en.wikipedia.org/wiki/Lorentz%20ether%20theory
What is now often called Lorentz ether theory (LET) has its roots in Hendrik Lorentz's "theory of electrons", which marked the end of the development of the classical aether theories at the end of the 19th and at the beginning of the 20th century. Lorentz's initial theory was created between 1892 and 1895 and was based on removing assumptions about aether motion. It explained the failure of the negative aether drift experiments to first order in v/c by introducing an auxiliary variable called "local time" for connecting systems at rest and in motion in the aether. In addition, the negative result of the Michelson–Morley experiment led to the introduction of the hypothesis of length contraction in 1892. However, other experiments also produced negative results and (guided by Henri Poincaré's principle of relativity) Lorentz tried in 1899 and 1904 to expand his theory to all orders in v/c by introducing the Lorentz transformation. In addition, he assumed that non-electromagnetic forces (if they exist) transform like electric forces. However, Lorentz's expressions for charge density and current were incorrect, so his theory did not fully exclude the possibility of detecting the aether. Eventually, it was Henri Poincaré who in 1905 corrected the errors in Lorentz's paper and actually incorporated non-electromagnetic forces (including gravitation) within the theory, which he called "The New Mechanics". Many aspects of Lorentz's theory were incorporated into special relativity (SR) with the works of Albert Einstein and Hermann Minkowski. Today LET is often treated as some sort of "Lorentzian" or "neo-Lorentzian" interpretation of special relativity. The introduction of length contraction and time dilation for all phenomena in a "preferred" frame of reference, which plays the role of Lorentz's immobile aether, leads to the complete Lorentz transformation (see the Robertson–Mansouri–Sexl test theory as an example), so Lorentz covariance doesn't provide any experimentall
https://en.wikipedia.org/wiki/Steinhart%E2%80%93Hart%20equation
The Steinhart–Hart equation is a model of the resistance of a semiconductor at different temperatures. The equation is 1/T = A + B·ln(R) + C·(ln(R))^3, where T is the temperature (in kelvins), R is the resistance at T (in ohms), and A, B, and C are the Steinhart–Hart coefficients, which vary depending on the type and model of thermistor and the temperature range of interest. Uses of the equation The equation is often used to derive a precise temperature of a thermistor, since it provides a closer approximation to actual temperature than simpler equations, and is useful over the entire working temperature range of the sensor. Steinhart–Hart coefficients are usually published by thermistor manufacturers. Where Steinhart–Hart coefficients are not available, they can be derived. Three accurate measures of resistance are made at precise temperatures, then the coefficients are derived by solving three simultaneous equations. Inverse of the equation To find the resistance of a semiconductor at a given temperature, the inverse of the Steinhart–Hart equation must be used. See the Application Note, "A, B, C Coefficients for Steinhart–Hart Equation". Solving the cubic in ln(R) gives R = exp( (x − y)^(1/3) − (x + y)^(1/3) ), where y = (A − 1/T) / (2C) and x = sqrt( (B / (3C))^3 + y^2 ). Steinhart–Hart coefficients To find the coefficients of Steinhart–Hart, at least three operating points are needed. For this, we use three values of resistance data for three known temperatures. With resistances R1, R2 and R3 at the temperatures T1, T2 and T3, write L_i = ln(R_i) and Y_i = 1/T_i, and set γ2 = (Y2 − Y1) / (L2 − L1) and γ3 = (Y3 − Y1) / (L3 − L1). One can then express C, B and A as: C = ( (γ3 − γ2) / (L3 − L2) ) · (L1 + L2 + L3)^(−1), B = γ2 − C·(L1^2 + L1·L2 + L2^2), A = Y1 − (B + L1^2·C)·L1. Developers of the equation The equation is named after John S. Steinhart and Stanley R. Hart who first published the relationship in 1968. Professor Steinhart (1929–2003), a fellow of the American Geophysical Union and of the American Association for the Advancement of Science, was a member of the faculty of University of Wisconsin–Madison from 1969 to 1991. Dr. Hart, a Senior Scientist at Woods Hole Oceanographic Institution since 1989 and fellow of the Geological Society of America, the American Geophysical Union, the Geochemical Society and the
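The forward equation, its inverse, and the three-point coefficient solution can be sketched in a few lines. The coefficient values used in the demonstration are typical of a generic 10 kΩ NTC thermistor and are illustrative, not taken from any particular datasheet:

```python
import math

def temperature(R, A, B, C):
    """Steinhart-Hart: temperature in kelvins from resistance in ohms."""
    ln_r = math.log(R)
    return 1.0 / (A + B * ln_r + C * ln_r ** 3)

def resistance(T, A, B, C):
    """Inverse Steinhart-Hart: resistance in ohms from temperature in kelvins."""
    y = (A - 1.0 / T) / (2.0 * C)
    x = math.sqrt((B / (3.0 * C)) ** 3 + y ** 2)
    cbrt = lambda v: math.copysign(abs(v) ** (1.0 / 3.0), v)  # sign-safe cube root
    return math.exp(cbrt(x - y) - cbrt(x + y))

def fit_coefficients(points):
    """Solve for (A, B, C) exactly from three (T, R) calibration points."""
    (T1, R1), (T2, R2), (T3, R3) = points
    L1, L2, L3 = math.log(R1), math.log(R2), math.log(R3)
    Y1, Y2, Y3 = 1.0 / T1, 1.0 / T2, 1.0 / T3
    g2 = (Y2 - Y1) / (L2 - L1)
    g3 = (Y3 - Y1) / (L3 - L1)
    C = (g3 - g2) / (L3 - L2) / (L1 + L2 + L3)
    B = g2 - C * (L1 ** 2 + L1 * L2 + L2 ** 2)
    A = Y1 - (B + L1 ** 2 * C) * L1
    return A, B, C

# Illustrative coefficients for a generic 10 kOhm NTC thermistor.
A0, B0, C0 = 1.129148e-3, 2.34125e-4, 8.76741e-8
points = [(temperature(R, A0, B0, C0), R) for R in (25000.0, 10000.0, 4000.0)]
A, B, C = fit_coefficients(points)               # recovers A0, B0, C0
print(round(temperature(10000.0, A, B, C), 2))   # about 298.15 K (25 C)
```

Since the model is linear in (A, B, C), the three-point fit is an exact solve of a 3×3 system, so the recovered coefficients match the originals to floating-point precision.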
https://en.wikipedia.org/wiki/Acoustic%20interferometer
An acoustic interferometer is an instrument that uses interferometry to measure the physical characteristics of sound waves in a gas or liquid. It may be used to measure velocity, wavelength, absorption, or impedance of the sound waves. The principle of operation is that a vibrating crystal creates ultrasonic waves that are radiated into the medium being analyzed. The waves strike a reflector placed parallel to the crystal. The waves are then reflected back to the source and measured. See also Acoustic microscopy Acoustic emission
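As a worked sketch of the principle: if the reflector is translated along the beam axis, successive standing-wave resonance positions are half a wavelength apart, so the sound speed follows from the crystal frequency and that spacing. The numbers below (a 1 MHz crystal in water) are illustrative assumptions:

```python
def sound_speed(frequency_hz: float, resonance_spacing_m: float) -> float:
    """Speed of sound from the drive frequency and the distance between
    successive standing-wave resonance positions (lambda / 2)."""
    wavelength = 2.0 * resonance_spacing_m
    return frequency_hz * wavelength

# 1 MHz crystal, resonances 0.74 mm apart -> roughly the speed of sound in water
print(sound_speed(1.0e6, 0.74e-3))  # about 1480 m/s
```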
https://en.wikipedia.org/wiki/Date-plum
Diospyros lotus, with common names date-plum, Caucasian persimmon, or lilac persimmon, is a widely cultivated species of the genus Diospyros, native to subtropical southwest Asia and southeast Europe. Its English name derives from the small fruit, which have a taste reminiscent of both plums and dates. It is among the oldest plants in cultivation. Distribution and ecology The species area extends from East Asia to the west of the Mediterranean, down to Spain. The date-plum is native to southwest Asia and southeast Europe. It was known to the ancient Greeks as "God's fruit" (, Diós pŷrós), hence the scientific name of the genus. Its English name probably derives from Persian Khormaloo خرمالو literally "date-plum", referring to the taste of this fruit which is reminiscent of both plums and dates. The fruit is called Amlok املوک in Pakistan and consumed dried. This species is one candidate for the "lotus tree" mentioned in The Odyssey: it was so delicious that those who ate it forgot about returning home and wanted to stay and eat lotus with the lotus-eaters. The tree grows in the lower and middle mountain zones in the Caucasus. They usually grow up to 600 m above sea level. In Central Asia, it rises higher—up to 2000 m. They rarely grow in stands but often grow with hackberry, ash, maple and other deciduous species. It is not demanding on the soil and can grow on rocky slopes but requires a well lit environment. It is cultivated at the limits of its range, as well as in the U.S. and North Africa. Biological description This is a tree height of 15–30 m with sloughing of aging bark. The leaves are shiny, leathery, oval-shaped with pointed ends, 5–15 cm long and 3–6 cm in width. The flowers are small, greenish, appearing in June to July. Fruits are berries with juicy flesh, yellow when ripe, 1–2 cm in diameter. Seeds with thin skin and a very hard endosperm. Usage Caucasian persimmon fruits are edible and contain much sugar, malic acid, and vitamins. They a
https://en.wikipedia.org/wiki/Smith%27s%20Prize
Smith's Prize was the name of each of two prizes awarded annually to two research students in mathematics and theoretical physics at the University of Cambridge from 1769. Following the reorganization in 1998, they are now awarded under the names Smith-Knight Prize and Rayleigh-Knight Prize. History The Smith Prize fund was founded by bequest of Robert Smith upon his death in 1768, having by his will left £3,500 of South Sea Company stock to the University. Every year two or more junior Bachelor of Arts students who had made the greatest progress in mathematics and natural philosophy were to be awarded a prize from the fund. The prize was awarded every year from 1769 to 1998 except 1917. From 1769 to 1885, the prize was awarded for the best performance in a series of examinations. In 1854 George Stokes included an examination question on a particular theorem that William Thomson had written to him about, which is now known as Stokes' theorem. T. W. Körner notes Only a small number of students took the Smith's prize examination in the nineteenth century. When Karl Pearson took the examination in 1879, the examiners were Stokes, Maxwell, Cayley, and Todhunter and the examinees went on each occasion to an examiner's dwelling, did a morning paper, had lunch there and continued their work on the paper in the afternoon. In 1885, the examination was renamed Part III, (now known as the Master of Advanced Study in Mathematics for students who studied outside of Cambridge before taking it) and the prize was awarded for the best submitted essay rather than examination performance. According to Barrow-Green By fostering an interest in the study of applied mathematics, the competition contributed towards the success in mathematical physics that was to become the hallmark of Cambridge mathematics during the second half of the nineteenth century. In the twentieth century, the competition stimulated postgraduate research in mathematics in Cambridge and the competition has play
https://en.wikipedia.org/wiki/Mickey%20Mouse%20Clubhouse
Mickey Mouse Clubhouse is an American interactive animated television series which was the first Mickey Mouse and computer-animated program for preschoolers. Produced by Disney Television Animation, the series was created by Disney veteran Bobs Gannaway. The series originally aired 125 episodes from May 5, 2006, to November 6, 2016, on the Disney Channel. It received generally positive reviews from critics. On August 18, 2023, a revival was revealed to be in production, and is set to be released in 2025. Premise Mickey, Minnie, Donald, Daisy, Goofy, and Pluto interact with the viewer to stimulate problem solving during a self contained story. Once the problem of the episode has been explained, Mickey invites the viewers to join him at the Mousekedoer, a giant Mickey-head-shaped computer whose main function is to distribute the day's Mouseketools, a collection of tools needed to solve the day's problem, to Mickey. One of them is a "Mystery Mouseketool" represented by a Question Mark, in which, when the words "Mystery Mouseketool" are said, the question mark changes into the Mouseketool the viewer gets to use. Another one is a "Mouseke-Think-About-It Tool" represented by a silhouette of Mickey's head with rotating gears, in which characters must think of what to use before telling the Tool "Mouseke-Think-About-It-Tool, we pick the (object)". Once the tools have been shown to Mickey on the Mousekedoer screen, they are quickly downloaded to Toodles, a small, Mickey-head-shaped flying extension of the Mousekedoer. By calling "Oh, Toodles!" Mickey summons him to pop up from where he is hiding and fly up to the screen so the viewer can pick which tool Mickey needs for the current situation. Rhymes are used throughout the show. For example, in "Mickey's Silly Problem", when the "Silly switch" turned on, Mickey for some reason, spoke in rhymes for half of the episode. The show features two original songs performed by American alternative rock band They Might Be Giants,
https://en.wikipedia.org/wiki/Viscosity%20solution
In mathematics, the viscosity solution concept was introduced in the early 1980s by Pierre-Louis Lions and Michael G. Crandall as a generalization of the classical concept of what is meant by a 'solution' to a partial differential equation (PDE). It has been found that the viscosity solution is the natural solution concept to use in many applications of PDEs, including for example first order equations arising in dynamic programming (the Hamilton–Jacobi–Bellman equation), differential games (the Hamilton–Jacobi–Isaacs equation) or front evolution problems, as well as second-order equations such as the ones arising in stochastic optimal control or stochastic differential games. The classical concept was that a PDE F(x, u, Du, D^2u) = 0 over a domain has a solution if we can find a function u(x) continuous and differentiable over the entire domain such that x, u, Du, D^2u satisfy the above equation at every point. If a scalar equation is degenerate elliptic (defined below), one can define a type of weak solution called viscosity solution. Under the viscosity solution concept, u does not need to be everywhere differentiable. There may be points where either Du or D^2u does not exist and yet u satisfies the equation in an appropriate generalized sense. The definition allows only for certain kind of singularities, so that existence, uniqueness, and stability under uniform limits, hold for a large class of equations. Definition There are several equivalent ways to phrase the definition of viscosity solutions. See for example the section II.4 of Fleming and Soner's book or the definition using semi-jets in the Users Guide. Degenerate elliptic An equation F(x, u, Du, D^2u) = 0 in a domain is defined to be degenerate elliptic if for any two symmetric matrices X and Y such that Y − X is positive definite, and any values of x, u and Du, we have the inequality F(x, u, Du, X) ≥ F(x, u, Du, Y). For example, the equation −Δu = 0 (where Δ denotes the Laplacian) is degenerate elliptic since in this case F(x, u, Du, D^2u) = −tr(D^2u), and the trace of D^2u is the sum of its eigenvalues. Any real first-order equation is degenerate elliptic.
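A standard concrete instance is the one-dimensional eikonal equation |u'(x)| = 1 on (0, 1) with u(0) = u(1) = 0, whose viscosity solution is the distance function u(x) = min(x, 1 − x); differentiability fails only at the kink x = 1/2. Monotone upwind schemes converge to exactly this solution. A minimal fast-sweeping sketch, with grid size and sweep count as illustrative choices:

```python
def eikonal_1d(n: int, sweeps: int = 4) -> list:
    """Solve |u'| = 1 on (0, 1) with u(0) = u(1) = 0 by fast sweeping:
    the monotone update u[i] = min(u[i-1], u[i+1]) + h converges to the
    viscosity solution min(x, 1 - x)."""
    h = 1.0 / n
    u = [0.0] + [float("inf")] * (n - 1) + [0.0]
    for _ in range(sweeps):
        for i in range(1, n):            # forward sweep
            u[i] = min(u[i], min(u[i - 1], u[i + 1]) + h)
        for i in range(n - 1, 0, -1):    # backward sweep
            u[i] = min(u[i], min(u[i - 1], u[i + 1]) + h)
    return u

u = eikonal_1d(10)
# Maximum deviation from min(x, 1 - x); only rounding error remains.
print(max(abs(u[i] - min(i / 10, 1 - i / 10)) for i in range(11)))
```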
https://en.wikipedia.org/wiki/Paravector
The name paravector is used for the combination of a scalar and a vector in any Clifford algebra, known as geometric algebra among physicists. This name was given by J. G. Maks in a doctoral dissertation at Technische Universiteit Delft, Netherlands, in 1989. The complete algebra of paravectors along with corresponding higher grade generalizations, all in the context of the Euclidean space of three dimensions, is an alternative approach to the spacetime algebra (STA) introduced by David Hestenes. This alternative algebra is called algebra of physical space (APS). Fundamental axiom For Euclidean spaces, the fundamental axiom indicates that the product of a vector with itself is the scalar value of the length squared (positive): v v = v · v = |v|^2. Writing v = u + w and introducing this into the expression of the fundamental axiom, (u + w)^2 = u u + u w + w u + w w, we get the following expression after appealing to the fundamental axiom again: u w + w u = 2 u · w, which allows to identify the scalar product of two vectors as u · w = (u w + w u)/2. As an important consequence we conclude that two orthogonal vectors (with zero scalar product) anticommute: u w + w u = 0. The three-dimensional Euclidean space The following list represents an instance of a complete basis for the space: {1, e1, e2, e3, e12, e23, e31, e123}, which forms an eight-dimensional space, where the multiple indices indicate the product of the respective basis vectors, for example e23 = e2 e3. The grade of a basis element is defined in terms of the vector multiplicity, such that 1 has grade 0, the vectors e1, e2, e3 have grade 1, the bivectors e12, e23, e31 have grade 2, and the volume element e123 has grade 3. According to the fundamental axiom, two different basis vectors anticommute, or in other words, e_i e_j = −e_j e_i for i ≠ j. This means that the volume element squares to −1: (e123)^2 = e1 e2 e3 e1 e2 e3 = −1. Moreover, the volume element commutes with any other element of the algebra, so that it can be identified with the complex number i, whenever there is no danger of confusion. In fact, the volume element along with the real scalar forms an algebra isomorphic to the standard complex algebra. The volume element can be used to rewrite an equivalent form of the basis as {1, e1, e2, e3, i e1, i e2, i e3, i}, since e23 = i e1, e31 = i e2 and e12 = i e3. Paravectors The corresponding paravector basis that combines a real scal
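These relations can be checked numerically in the standard 2 × 2 matrix representation of APS, where the basis vectors e1, e2, e3 are represented by the Pauli matrices; this is one convenient representation chosen for illustration, not the only possible one:

```python
# Pauli-matrix representation of the APS basis vectors e1, e2, e3.
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

def mul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

# Fundamental axiom: each basis vector squares to +1.
assert mul(s1, s1) == I2 and mul(s3, s3) == I2
# Distinct basis vectors anticommute: e1 e2 + e2 e1 = 0.
assert add(mul(s1, s2), mul(s2, s1)) == [[0, 0], [0, 0]]
# The volume element e1 e2 e3 commutes with the generators and squares to -1.
vol = mul(mul(s1, s2), s3)
assert mul(vol, s1) == mul(s1, vol)
assert mul(vol, vol) == [[-1, 0], [0, -1]]
print("APS basis identities verified")
```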
https://en.wikipedia.org/wiki/PCB%20NC%20formats
PCB NC drill files convey PCB drilling and routing information. The NC formats were originally designed by CNC drill and route machine vendors as proprietary input formats for their equipment, and are known under their company name: Excellon, Hitachi, Sieb & Meyer, Posalux, etc. These formats are similar as they are based on RS-274-C and related to G-code. In 1985 IPC published a generic standard NC format, IPC-NC-349. Later XNC was designed, a simple strict subset of IPC-NC-349, designed not for driving machines but for exchanging drill information between CAD and CAM. They are collectively referred to as (PCB) NC files. The NC files are primarily used to drive CNC machines, and they are adequate for that task. They are also used to exchange design information between CAD and CAM, for which they are not adequate: essential information such as plating and drill span is missing. Furthermore, the NC output in CAD systems is often poorly implemented, resulting in poor registration between drill holes and copper layers and other problems. To exchange data between CAD and CAM it is preferable to use the Gerber format. The quality of Gerber file output software is typically good, and Gerber supports attributes to transfer meta-information such as plating and span. IPC-NC-349 format The IPC-NC-349 format is the only IPC standard governing drill and routing formats. XNC is a strict subset of IPC-NC-349, Excellon a large superset. Many loosely specified NC files adopt some elements of the IPC standard. A digital rights managed copy of the specification is available from the IPC website, for a fee. It is targeted at input for drill/rout machines, not CAD to CAM data exchange. XNC format The XNC format is a strict subset of the IPC-NC-349 specification targeted at data exchange between CAD and CAM. The name XNC stands for Exchange NC format. As a strict subset, it is highly compatible with existing software. Its purpose is to address the current chaos of different
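For orientation, a minimal drill file in this family of formats looks roughly like the sketch below. The tool sizes and hole coordinates are invented for illustration, and the exact header keywords and coordinate notation vary between Excellon dialects and XNC:

```
M48
; header: metric units, two tool (drill) definitions
METRIC
T1C0.80
T2C1.20
%
T1
X10000Y12000
X15000Y12000
T2
X5000Y5000
M30
```

The header (up to `%`) declares units and the tool table; the body selects a tool with `Tn` and then lists hole coordinates; `M30` ends the file. Note that nothing in the file says whether a hole is plated or spans only some layers, which is exactly the missing information discussed above.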
https://en.wikipedia.org/wiki/KSTR-DT
KSTR-DT (channel 49) is a television station licensed to Irving, Texas, United States, broadcasting the Spanish-language UniMás network to the Dallas–Fort Worth metroplex. It is owned and operated by TelevisaUnivision alongside Garland-licensed Univision owned-and-operated station KUVN-DT (channel 23). Both stations share studios on Bryan Street in downtown Dallas, while KSTR-DT's transmitter is located in Cedar Hill, Texas. History Early history The station first signed on the air on April 17, 1984 as KLTJ-TV (the call letters stood for "Keep Looking to Jesus"). Founded by Eldred Thomas, owner of radio station KVTT-FM (91.7, now KKXT), it originally maintained a religious programming format as an affiliate of the Trinity Broadcasting Network (TBN). In early 1986, Thomas sold the station to Silver King Broadcasting, the broadcasting arm of the Home Shopping Network (HSN). As a result of the sale, the station became an affiliate of HSN in September of that year; this left TBN without an outlet in the Dallas–Fort Worth metroplex for the next five months, until it launched owned-and-operated station KDTX-TV (channel 58) in February 1987. On June 1, 1987, the station changed its call letters to KHSX (standing for "Home Shopping in Texas"). On November 27, 1995, veteran television executive Barry Diller announced that he would acquire the Home Shopping Network and Silver King Communications, which owned HSN-affiliated stations in several other larger media markets. The purchase was finalized on December 19, 1996, ten months after the transaction received approval by the Federal Communications Commission (FCC) on March 11. Two years later in 1997, HSN purchased the USA Network, and renamed its broadcast television subsidiary as USA Broadcasting, as part of a corporate rebranding borrowing from the identity of its new cable channel property. That year, KHSX began carrying a one-hour block of programming from business news channel Bloomberg Information Television (now si
https://en.wikipedia.org/wiki/Murderous%20Maths
Murderous Maths is a series of British educational books by author Kjartan Poskitt. Most of the books in the series are illustrated by illustrator Philip Reeve, with the exception of "The Secret Life of Codes", which is illustrated by Ian Baker, "Awesome Arithmetricks", illustrated by Daniel Postgate and Rob Davis, and "The Murderous Maths of Everything", also illustrated by Rob Davis. The Murderous Maths books have been published in over 25 countries. The books, which are aimed at children aged 8 and above, teach maths, spanning from basic arithmetic to relatively complex concepts such as the quadratic formula and trigonometry. The books are written in an informal style similar to that of the Horrible Histories, Horrible Science and Horrible Geography series, involving evil geniuses, gangsters, and a generally comedic tone. Development The first two books of the series were originally part of "The Knowledge" (now "Totally") series, itself a spin-off of Horrible Histories. However, these books were eventually redesigned and they, as well as the rest of the titles in the series, now use the Murderous Maths banner. According to Poskitt, "these books have even found their way into schools and proved to be a boost to GCSE studies". The books are also available in foreign editions, including: German, Spanish, Polish, Czech, Greek, Dutch, Norwegian, Turkish, Croatian, Italian, Lithuanian, Korean, Danish, Hungarian, Finnish, Thai and Portuguese (Latin America). In 2009, the books were redesigned again, changing the cover art style and the titles of most of the books in the series. Poskitt's goal, according to the Murderous Maths website, is to write books that are "something funny to read", have "good amusing illustrations", include "tricks", and "explaining the maths involved as clearly as possible". He adds that although he doesn't "work to any government imposed curriculum or any stage achievement levels", he has "been delighted to receive many messages of support and thanks
https://en.wikipedia.org/wiki/Gromov%E2%80%93Witten%20invariant
In mathematics, specifically in symplectic topology and algebraic geometry, Gromov–Witten (GW) invariants are rational numbers that, in certain situations, count pseudoholomorphic curves meeting prescribed conditions in a given symplectic manifold. The GW invariants may be packaged as a homology or cohomology class in an appropriate space, or as the deformed cup product of quantum cohomology. These invariants have been used to distinguish symplectic manifolds that were previously indistinguishable. They also play a crucial role in closed type IIA string theory. They are named after Mikhail Gromov and Edward Witten. The rigorous mathematical definition of Gromov–Witten invariants is lengthy and difficult, so it is treated separately in the stable map article. This article attempts a more intuitive explanation of what the invariants mean, how they are computed, and why they are important. Definition Consider the following: X: a closed symplectic manifold of dimension 2k, A: a 2-dimensional homology class in X, g: a non-negative integer, n: a non-negative integer. Now we define the Gromov–Witten invariants associated to the 4-tuple: (X, A, g, n). Let M̄_{g,n} be the Deligne–Mumford moduli space of curves of genus g with n marked points and let M̄_{g,n}(X, A) denote the moduli space of stable maps into X of class A, for some chosen almost complex structure J on X compatible with its symplectic form. The elements of M̄_{g,n}(X, A) are of the form: (C, x1, ..., xn, f), where C is a (not necessarily stable) curve with n marked points x1, ..., xn and f : C → X is pseudoholomorphic. The moduli space has real dimension d = 2 c_1^X(A) + (2k − 6)(1 − g) + 2n. Let st(C, x1, ..., xn) ∈ M̄_{g,n} denote the stabilization of the curve. Let Y = M̄_{g,n} × X^n, which has real dimension 6g − 6 + 2(k + 1)n. There is an evaluation map ev : M̄_{g,n}(X, A) → Y. The evaluation map sends the fundamental class of M̄_{g,n}(X, A) to a d-dimensional rational homology class in Y, denoted GW^{X,A}_{g,n} ∈ H_d(Y, Q). In a sense, this homology class is the Gromov–Witten invariant of X for the data g, n, and A. It is an invariant of the symplectic isotopy class of the symplectic manifold X. To interpret the Gromov–W
https://en.wikipedia.org/wiki/Superfecundation
Superfecundation is the fertilization of two or more ova from the same cycle by sperm from separate acts of sexual intercourse, which can lead to twin babies from two separate biological fathers. The term superfecundation is derived from fecund, meaning able to produce offspring. Homopaternal superfecundation is fertilization of two separate ova from the same father, leading to fraternal twins, while heteropaternal superfecundation is a form of atypical twinning where, genetically, the twins are half siblings – sharing the same mother, but with different fathers. Conception Sperm cells can live inside a female's body for up to five days, and once ovulation occurs, the egg remains viable for 12–48 hours before it begins to disintegrate. Superfecundation most commonly happens within hours or days of the first instance of fertilization with ova released during the same cycle. Ovulation is normally suspended during pregnancy to prevent further ova becoming fertilized and to help increase the chances of a full-term pregnancy. However, if an ovum is atypically released after the female was already impregnated when previously ovulating, a chance of a second pregnancy occurs, albeit at a different stage of development. This is known as superfetation. Heteropaternal superfecundation Heteropaternal superfecundation is common in animals such as cats and dogs. Stray dogs can produce litters in which every puppy has a different sire. Though rare in humans, cases have been documented. In one study on humans, the frequency was 2.4% among dizygotic twins whose parents had been involved in paternity suits. Cases in Greek mythology Greek mythology holds many cases of superfecundation: Leda lies with both her husband Tyndareus and with the god Zeus, the latter in the guise of a swan. Nine months later, she bears two daughters: Clytemnestra by Tyndareus and Helen by Zeus. This happens again; this time Leda bears two sons: Castor by Tyndareus and Pollux by Zeus. Alcmene lies with
https://en.wikipedia.org/wiki/IBM%20Parallel%20Sysplex
In computing, a Parallel Sysplex is a cluster of IBM mainframes acting together as a single system image with z/OS. Used for disaster recovery, Parallel Sysplex combines data sharing and parallel computing to allow a cluster of up to 32 systems to share a workload for high performance and high availability. Sysplex In 1990, IBM mainframe computers introduced the concept of a Systems Complex, commonly called a Sysplex, with MVS/ESA SP V4.1. This allows authorized components in up to eight logical partitions (LPARs) to communicate and cooperate with each other using the XCF protocol. Components of a Sysplex include: A common time source to synchronize all member systems' clocks. This can involve either a Sysplex timer (Model 9037) or the Server Time Protocol (STP) Global Resource Serialization (GRS), which allows multiple systems to access the same resources concurrently, serializing where necessary to ensure exclusive access Cross System Coupling Facility (XCF), which allows systems to communicate peer-to-peer Couple Data Sets (CDS) Users of a (base) Sysplex include: Console services – allowing one to merge multiple MCS consoles from the different members of the Sysplex, providing a single system image for Operations Automatic Restart Manager (ARM) – Policy to direct automatic restart of failed jobs or started tasks on the same system if it is available or on another LPAR in the Sysplex Sysplex Failure Manager (SFM) – Policy that specifies automated actions to take when certain failures occur, such as loss of a member of a Sysplex or when reconfiguring systems Workload Manager (WLM) – Policy-based performance management of heterogeneous workloads across one or more z/OS images or even on AIX Global Resource Serialization (GRS) – Communication – allows use of XCF links instead of dedicated channels for GRS, and Dynamic RNLs Tivoli OPC – Hot standby support for the controller RACF (IBM's mainframe security software product) – Sysplex-wide
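The serialization role that GRS plays can be illustrated with a small sketch. This is not IBM code and the class and method names are invented for illustration; it only models the core rule that shared (read) requests for a named resource can be granted concurrently, while an exclusive request must wait until all other holders release it:

```python
class ResourceSerializer:
    """Toy model of GRS-style ENQ/DEQ serialization (illustrative only)."""

    def __init__(self):
        # resource name -> (mode, set of holding system names)
        self.holders = {}

    def enq(self, system, resource, mode):
        """Request access; returns True if granted immediately, False if it would wait."""
        held = self.holders.get(resource)
        if held is None:
            self.holders[resource] = (mode, {system})
            return True
        held_mode, owners = held
        if mode == "SHR" and held_mode == "SHR":
            owners.add(system)  # shared requests are compatible with each other
            return True
        return False  # exclusive vs. anything, or anything vs. exclusive: must wait

    def deq(self, system, resource):
        """Release a previously granted request."""
        mode, owners = self.holders[resource]
        owners.discard(system)
        if not owners:
            del self.holders[resource]


grs = ResourceSerializer()
grs.enq("SYSA", "DATASET.A", "SHR")   # granted
grs.enq("SYSB", "DATASET.A", "SHR")   # granted: shared with SYSA
grs.enq("SYSC", "DATASET.A", "EXC")   # would wait until both readers DEQ
```

A real GRS implementation additionally queues waiters, propagates requests across the Sysplex over XCF or a coupling facility, and honors scope (STEP/SYSTEM/SYSTEMS); the sketch shows only the grant/deny decision.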
https://en.wikipedia.org/wiki/Logical%20partition
A logical partition (LPAR) is a subset of a computer's hardware resources, virtualized as a separate computer. In effect, a physical machine can be partitioned into multiple logical partitions, each hosting a separate instance of an operating system. PR/SM Formally, LPAR designates the mode of operation or an individual logical partition, whereas PR/SM is the commercial designation of the feature. PR/SM (Processor Resource/System Manager) is a type-1 hypervisor (a virtual machine monitor) that allows multiple logical partitions to share physical resources such as CPUs, I/O channels and LAN interfaces; when sharing channels, the LPARs can share I/O devices such as direct access storage devices (DASD). PR/SM is integrated with all IBM System z machines. Similar facilities exist on the IBM Power Systems family and its predecessors. IBM introduced PR/SM in 1988 with the IBM 3090 processors. IBM developed the concept of hypervisors in their CP-40 and CP-67, and in 1972 provided it for the S/370 as Virtual Machine Facility/370. IBM introduced the Start Interpretive Execution (SIE) instruction as part of 370-XA on the 3081, and VM/XA versions of VM to exploit it. PR/SM is a type-1 hypervisor based on the CP component of VM/XA that runs directly on the machine level and allocates system resources across LPARs to share physical resources. It is a standard feature on IBM Z and IBM LinuxONE machines. IBM introduced a related, simplified, optional feature called Dynamic Partition Manager (DPM) on its IBM z13 and first-generation IBM LinuxONE machines. DPM provides Web-based user interfaces for many LPAR-related configuration and monitoring tasks. History IBM developed the concept of hypervisors (virtual machines in CP-40 and CP-67) and in 1972 provided it for the S/370 as Virtual Machine Facility/370. IBM introduced the Start Interpretive Execution (SIE) instruction (designed specifically for the execution of virtual machines) as part of 370-XA architecture on the 3081,
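The basic idea of sharing physical CPU capacity among LPARs can be sketched as proportional allocation by weight. This is a deliberately simplified illustration (the function name and numbers are invented); real PR/SM dispatching is far more sophisticated, with features such as capping and HiperDispatch:

```python
def allocate_shares(total_cpu, weights):
    """Split total CPU capacity among LPARs in proportion to their weights.

    A minimal sketch of weight-based sharing, assuming all partitions are
    uncapped and always demanding CPU; it ignores capping, dedicated
    processors, and dynamic redistribution of unused capacity.
    """
    total_weight = sum(weights.values())
    return {lpar: total_cpu * w / total_weight for lpar, w in weights.items()}


# Hypothetical machine with 16 shared processors and three partitions:
shares = allocate_shares(16, {"LPAR1": 50, "LPAR2": 30, "LPAR3": 20})
# LPAR1 is entitled to half the capacity, LPAR2 to 30%, LPAR3 to 20%.
```

The key property the sketch captures is that weights are relative, not absolute: doubling every weight changes nothing, and an LPAR's entitlement shifts when partitions are added or removed.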
https://en.wikipedia.org/wiki/Daylight
Daylight is the combination of all direct and indirect sunlight during the daytime. This includes direct sunlight, diffuse sky radiation, and (often) both of these reflected by Earth and terrestrial objects, like landforms and buildings. Sunlight scattered or reflected by astronomical objects is generally not considered daylight. Therefore, daylight excludes moonlight, despite it being reflected indirect sunlight. Definition Daylight is present at a particular location, to some degree, whenever the Sun is above the local horizon. (This is true for slightly more than 50% of the Earth at any given time; it is not exactly half because the Sun's disc has an apparent size and atmospheric refraction bends sunlight over the horizon, so slightly more than half of the Earth's surface is lit at once.) However, the outdoor illuminance can vary from 120,000 lux for direct sunlight at noon, which may cause eye pain, to less than 5 lux for thick storm clouds with the Sun at the horizon (even <1 lux for the most extreme case), which may make shadows from distant street lights visible. It may be darker under unusual circumstances like a solar eclipse or very high levels of atmospheric particulates, which include smoke (see New England's Dark Day), dust, and volcanic ash. Intensity in different conditions For comparison, nighttime illuminance levels are several orders of magnitude lower (comparison table not reproduced here). For a table of approximate daylight intensity in the Solar System, see sunlight.
https://en.wikipedia.org/wiki/T-tubule
T-tubules (transverse tubules) are extensions of the cell membrane that penetrate into the center of skeletal and cardiac muscle cells. With membranes that contain large concentrations of ion channels, transporters, and pumps, T-tubules permit rapid transmission of the action potential into the cell, and also play an important role in regulating cellular calcium concentration. Through these mechanisms, T-tubules allow heart muscle cells to contract more forcefully by synchronising calcium release from the sarcoplasmic reticulum throughout the cell. T-tubule structure and function are affected beat-by-beat by cardiomyocyte contraction, as well as by diseases, potentially contributing to heart failure and arrhythmias. Although these structures were first seen in 1897, research into T-tubule biology is ongoing. Structure T-tubules are tubules formed from the same phospholipid bilayer as the surface membrane or sarcolemma of skeletal or cardiac muscle cells. They connect directly with the sarcolemma at one end before travelling deep within the cell, forming a network of tubules with sections running both perpendicular (transverse) to and parallel (axially) to the sarcolemma. Due to this complex orientation, some refer to T-tubules as the transverse-axial tubular system. The inside or lumen of the T-tubule is open at the cell surface, meaning that the T-tubule is filled with fluid containing the same constituents as the solution that surrounds the cell (the extracellular fluid). Rather than being just a passive connecting tube, the membrane that forms T-tubules is highly active, being studded with proteins including L-type calcium channels, sodium-calcium exchangers, calcium ATPases and Beta adrenoceptors. T-tubules are found in both atrial and ventricular cardiac muscle cells (cardiomyocytes), in which they develop in the first few weeks of life. They are found in ventricular muscle cells in most species, and in atrial muscle cells from large mammals. In card
https://en.wikipedia.org/wiki/Kalpana%20%28supercomputer%29
Kalpana was a supercomputer at NASA Ames Research Center operated by the NASA Advanced Supercomputing (NAS) Division and named in honor of astronaut Kalpana Chawla, who was killed in the Space Shuttle Columbia disaster and had worked as an engineer at Ames Research Center prior to joining the Space Shuttle program. It was built in late 2003 and dedicated on May 12, 2004. Kalpana was the world's first single-system image (SSI) Linux supercomputer, based on SGI's Altix 3000 architecture and 512 Intel Itanium 2 processors. It was originally built in a joint effort by the NASA Jet Propulsion Laboratory, Ames Research Center (ARC), and Goddard Space Flight Center to perform high-resolution ocean analysis with the ECCO (Estimating the Circulation and Climate of the Ocean) Consortium model. The supercomputer was "used to develop substantially more capable simulation models to better assess the evolution and behavior of the Earth's climate system," said NASA's Deputy Associate Administrator for Earth Science, Ghassem Asrar, in 2004. It served as one of several testbed systems NASA purchased to determine what architecture to proceed with for new supercomputing projects and led to the purchase and construction of the Columbia supercomputer, named in honor of the STS-107 crew lost in 2003. In July 2004 the Kalpana system was integrated, as the first node, into the 20-node Columbia supercomputer.
https://en.wikipedia.org/wiki/Gross%20anatomy
Gross anatomy is the study of anatomy at the visible or macroscopic level. The counterpart to gross anatomy is the field of histology, which studies microscopic anatomy. Gross anatomy of the human body or other animals seeks to understand the relationship between components of an organism in order to gain a greater appreciation of the roles of those components and their relationships in maintaining the functions of life. The study of gross anatomy can be performed on deceased organisms using dissection or on living organisms using medical imaging. Education in the gross anatomy of humans is included in the training of most health professionals. Techniques of study Gross anatomy is studied using both invasive and noninvasive methods with the goal of obtaining information about the macroscopic structure and organisation of organs and organ systems. Among the most common methods of study is dissection, in which the corpse of an animal or a human cadaver is surgically opened and its organs studied. Endoscopy, in which a video camera-equipped instrument is inserted through a small incision in the subject, may be used to explore the internal organs and other structures of living animals. The anatomy of the circulatory system in a living animal may be studied noninvasively via angiography, a technique in which blood vessels are visualised after being injected with an opaque dye. Other means of study include radiological techniques of imaging, such as X-ray and MRI. In medical and healthcare professional education Most health profession schools, such as medical, physician assistant, and dental schools, require that students complete a practical (dissection) course in gross human anatomy. Such courses aim to educate students in basic human anatomy and seek to establish anatomical landmarks that may later be used to aid medical diagnosis. Many schools provide students with cadavers for investigation by dissection, aided by dissection manuals, as well as cadaveric atlases (e.g. Ne
https://en.wikipedia.org/wiki/Kalpana%2C%20Inc.
Kalpana, Inc., was a computer-networking equipment manufacturer located in Silicon Valley which operated during the 1980s and 1990s. Its co-founders, Vinod Bhardwaj, an entrepreneur of Indian origin, and Larry Blair named the company after Bhardwaj's wife, Kalpana, whose name means "imagination" in Sanskrit. Charles Giancarlo was Kalpana's vice president of products and corporate development, became its General Manager, and went on to roles at Cisco Systems and Silver Lake Partners. In 1989 and 1990, Kalpana introduced the first multiport Ethernet switch, its seven-port EtherSwitch. The invention of Ethernet switching made Ethernet networks faster, cheaper, and easier to manage. Multi-port network switches became common, gradually replacing Ethernet hubs for almost all applications, and enabled an easy transition to 100-megabit Fast Ethernet and later Gigabit Ethernet. Kalpana also invented EtherChannel, which provides higher inter-switch bandwidth by running several links in parallel. This innovation, more generally called link aggregation, was also widely adopted throughout the industry. Kalpana also invented the Virtual LAN concept as closed broadcast domains, which was later replaced by 802.1Q. Cisco Systems acquired Kalpana in 1994. Product Kalpana produced two models of Ethernet switch, the EPS-700 and the EPS-1500. See also List of acquisitions by Cisco
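The load-sharing idea behind EtherChannel-style link aggregation can be sketched in a few lines. This is not Kalpana's or Cisco's algorithm (the function name and the CRC-32 hash are choices made for illustration); it shows the common flow-based scheme in which each frame's address pair is hashed to pick one of the parallel links, so frames of the same flow always take the same link and per-flow ordering is preserved:

```python
import zlib


def pick_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Choose an aggregated link for a frame by hashing its MAC address pair.

    Illustrative sketch of flow-based load sharing: the hash is
    deterministic, so a given conversation is pinned to one link, and
    different conversations spread across the bundle statistically.
    """
    key = (src_mac + dst_mac).encode()
    return zlib.crc32(key) % n_links


# A hypothetical four-link bundle: every frame between this pair of
# stations takes the same member link.
link = pick_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4)
```

The trade-off this scheme makes is that a single large flow cannot exceed the speed of one member link; it gains bandwidth only in aggregate, which is why link aggregation complements rather than replaces faster Ethernet generations.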