https://en.wikipedia.org/wiki/VideoCrypt
VideoCrypt is a cryptographic, smartcard-based conditional access television encryption system that scrambles analogue pay-TV signals. It was introduced in 1989 by News Datacom and was used initially by Sky TV and subsequently by several other broadcasters on SES' Astra satellites at 19.2° east. Versions Three variants of the VideoCrypt system were deployed in Europe: VideoCrypt I for the UK and Irish market and VideoCrypt II for continental Europe. The third variant, VideoCrypt-S, was used on the short-lived BBC Select service. The VideoCrypt-S system differed from the typical VideoCrypt implementation in that it used line-shuffle scrambling. Sky NZ and Sky Fiji may use different versions of the VideoCrypt standard. Sky NZ used NICAM stereo for many years until abandoning it when the Sky DTH technology started replacing Sky UHF. Operating principle The system scrambles the picture using a technique known as "line cut-and-rotate". Each line that makes up each picture (video frame) is cut at one of 256 possible "cut points", and the two halves of each line are swapped around for transmission. The series of cut points is determined by a pseudo-random sequence. Channels are decoded using a pseudorandom number generator (PRNG) sequence stored on a smart card (also called a Viewing Card). To decode a channel, the decoder reads the smart card to check whether the card is authorised for the specific channel. If not, a message appears on screen. Otherwise the decoder seeds the card's PRNG with a seed transmitted with the video signal to generate the correct sequence of cut points. The system also included a cryptographic element called the Fiat–Shamir zero-knowledge test. This element was a routine in the smartcard that would prove to the decoder that the card was indeed a genuine card. The basic model was that the decoder would present the card with a packet of data (the question or challenge), which the card would process and effectively return the result (the answer) to the decoder.
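The cut-and-rotate idea is easy to demonstrate. Below is a minimal, illustrative Python sketch (not the broadcast implementation; the seeded random.Random merely stands in for the card's PRNG, and the 256 cut points come from the description above): scrambling rotates each line at a PRNG-chosen cut point, and descrambling repeats the same sequence of cuts in reverse.

```python
import random

def rotate(line, cut):
    # cut the line at `cut` and swap the two halves
    return line[cut:] + line[:cut]

def scramble(lines, seed):
    rng = random.Random(seed)            # stand-in for the card's PRNG
    return [rotate(l, rng.randrange(256)) for l in lines]

def descramble(lines, seed):
    rng = random.Random(seed)            # same seed => same cut sequence
    return [rotate(l, len(l) - rng.randrange(256)) for l in lines]

lines = [bytes(range(256)) for _ in range(3)]   # three fake 256-sample lines
assert descramble(scramble(lines, seed=42), seed=42) == lines
```

Without the correct seed, the descrambler applies the wrong cut sequence and the picture stays garbled, which is exactly the conditional-access property the smart card enforces.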
https://en.wikipedia.org/wiki/Loren%20Kohnfelder
Loren Kohnfelder invented what is today called public key infrastructure (PKI) in his May 1978 MIT S.B. (BSCSE) thesis, which described a practical means of using public key cryptography to secure network communications. The Kohnfelder thesis introduced the terms 'certificate' and 'certificate revocation list' as well as introducing numerous other concepts now established as important parts of PKI. The X.509 certificate specification that provides the basis for SSL, S/MIME and most modern PKI implementations is based on the Kohnfelder thesis. He was also the co-creator, with Praerit Garg, of the STRIDE model of security threats, widely used in threat modeling. In 2021 he published Designing Secure Software with No Starch Press. He maintains a Medium blog.
https://en.wikipedia.org/wiki/Library%20sort
Library sort or gapped insertion sort is a sorting algorithm that uses an insertion sort, but with gaps in the array to accelerate subsequent insertions. The name comes from an analogy: Suppose a librarian were to store their books alphabetically on a long shelf, starting with the As at the left end, and continuing to the right along the shelf with no spaces between the books until the end of the Zs. If the librarian acquired a new book that belongs to the B section, once they find the correct space in the B section, they will have to move every book over, from the middle of the Bs all the way down to the Zs in order to make room for the new book. This is an insertion sort. However, if they were to leave a space after every letter, as long as there was still space after B, they would only have to move a few books to make room for the new one. This is the basic principle of library sort. The algorithm was proposed by Michael A. Bender, Martín Farach-Colton, and Miguel Mosteiro in 2004 and was published in 2006. Like the insertion sort it is based on, library sort is a comparison sort; however, it was shown to have a high probability of running in O(n log n) time (comparable to quicksort), rather than insertion sort's O(n²). There is no full implementation given in the paper, nor the exact algorithms of important parts, such as insertion and rebalancing. Further information would be needed to discuss how the efficiency of library sort compares to that of other sorting methods in reality. Compared to basic insertion sort, the drawback of library sort is that it requires extra space for the gaps. The amount and distribution of that space would depend on implementation. In the paper the size of the needed array is (1 + ε)n, but with no further recommendations on how to choose ε. Moreover, it is neither adaptive nor stable. In order to warrant the high-probability time bounds, it must randomly permute the input, which changes the relative order of equal elements.
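Since the paper leaves the insertion and rebalancing details open, the following Python sketch makes pragmatic choices of its own (ε = 1, a binary search that probes past gaps, and an even re-spread of the elements after each doubling round); it is meant to illustrate the gap idea, not to reproduce the published bounds.

```python
def _insert(arr, x):
    lo, hi = 0, len(arr)
    while lo < hi:                       # binary search that skips over gaps
        mid = (lo + hi) // 2
        j = mid
        while j < hi and arr[j] is None:
            j += 1                       # probe right past gaps
        if j == hi or arr[j] >= x:
            hi = mid
        else:
            lo = j + 1
    g = lo                               # open a slot at lo via a nearby gap
    while g < len(arr) and arr[g] is not None:
        g += 1
    if g < len(arr):                     # shift right into the gap
        while g > lo:
            arr[g] = arr[g - 1]
            g -= 1
        arr[lo] = x
    else:                                # no gap on the right: shift left
        g = lo - 1
        while arr[g] is not None:
            g -= 1
        while g < lo - 1:
            arr[g] = arr[g + 1]
            g += 1
        arr[lo - 1] = x

def _rebalance(arr, count):
    vals = [v for v in arr if v is not None]
    for k in range(len(arr)):
        arr[k] = None
    for k, v in enumerate(vals):         # spread evenly, gaps in between
        arr[(k * len(arr)) // count] = v

def library_sort(a, eps=1.0):
    n = len(a)
    if n <= 1:
        return list(a)
    arr = [None] * (int((1 + eps) * n) + 2)   # None marks a gap
    done = 0
    while done < n:
        batch = a[done:done + max(1, done)]   # doubling rounds: 1, 1, 2, 4, ...
        for x in batch:
            _insert(arr, x)
        done += len(batch)
        _rebalance(arr, done)
    return [v for v in arr if v is not None]

print(library_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]
```

Because each round re-spreads the elements, a typical insertion only shifts a few entries before hitting a gap; that is the shelf-with-spaces intuition in code.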
https://en.wikipedia.org/wiki/Inhibitor%20of%20DNA-binding%20protein
Inhibitor of DNA-binding/differentiation proteins, also known as ID proteins, comprise a family of proteins that heterodimerize with basic helix-loop-helix (bHLH) transcription factors to inhibit DNA binding of bHLH proteins. ID proteins also contain the HLH-dimerization domain but lack the basic DNA-binding domain, and thus regulate bHLH transcription factors when they heterodimerize with bHLH proteins. The first helix-loop-helix proteins identified were named E-proteins because they bind to Ephrussi-box (E-box) sequences. In normal development, E proteins form dimers with other bHLH transcription factors, allowing transcription to occur. However, in cancerous phenotypes, ID proteins can regulate transcription by binding E proteins, so no dimers can be formed and transcription is inactive. E proteins are members of the class I bHLH family and form dimers with bHLH proteins from class II to regulate transcription. Four ID proteins exist in humans: ID1, ID2, ID3, and ID4. The ID homologue gene in Drosophila is called extramacrochaetae (EMC) and encodes a transcription factor of the helix-loop-helix family that lacks a DNA binding domain. EMC regulates cell proliferation, formation of organs like the midgut, and wing development. ID proteins could be potential targets for systemic cancer therapies without inhibiting the functioning of most normal cells because they are highly expressed in embryonic stem cells, but not in differentiated adult cells. Evidence suggests that ID proteins are overexpressed in many types of cancer. For example, ID1 is overexpressed in pancreatic, breast, and prostate cancers. ID2 is upregulated in neuroblastoma, Ewing’s sarcoma, and squamous cell carcinoma of the head and neck. Function ID proteins are key regulators of development, where they function to prevent premature differentiation of stem cells. By inhibiting the formation of E-protein dimers that promote differentiation, ID proteins can regulate the timing of differentiation of stem cells.
https://en.wikipedia.org/wiki/Phantasy%20Star%20Universe
Phantasy Star Universe (PSU) is an action role-playing video game developed by Sega's Sonic Team for the Microsoft Windows, PlayStation 2 and Xbox 360 platforms. It was released in Japan for the PC and PlayStation 2 on August 31, 2006; the Xbox 360 version was released there on December 14, 2006. Its North American release was in October 2006, in all formats. The European release date was November 24 the same year, while the Australian release date was November 30. Phantasy Star Universe is similar to the Phantasy Star Online (PSO) games, but takes place in a different time period and location, and has many new features. Like most of the PSO titles, PSU was playable in both a persistent online network mode and a fully featured, single-player story mode. Plot Ethan Waber, the main character, and his younger sister, Lumia Waber, are at the celebration of the 100th anniversary of the Alliance Space Fleet on the GUARDIANS space station. The celebration is interrupted when a mysterious meteor shower almost destroys the entire fleet. During an evacuation, Ethan and Lumia divert from the main evacuation route; collapsing rubble separates Lumia from Ethan. Ethan then meets up with a GUARDIAN named Leo, but they are attacked by a strange creature that paralyzes Leo. Ethan kills the creature. After killing multiple creatures and saving people, Ethan finds Lumia and they leave the station. Ethan reveals that he dislikes the GUARDIANS organization because his father died on a mission. Leo, impressed with Ethan's abilities, persuades Ethan to join the GUARDIANS. Ethan and his classmate Hyuga Ryght are trained by a GUARDIAN named Karen Erra, who leads them against the SEED, the monsters that came from the meteors. After being hired to accompany a scientist to a RELICS site of an ancient, long-dead civilization, they find out that the SEED are attracted to a power source called A-Photons, which the ancients used and which the solar system has only just rediscovered. Karen discovers she is the sister of
https://en.wikipedia.org/wiki/Velocity%20potential
A velocity potential is a scalar potential used in potential flow theory. It was introduced by Joseph-Louis Lagrange in 1788. It is used in continuum mechanics, when a continuum occupies a simply-connected region and is irrotational. In such a case, ∇ × u = 0, where u denotes the flow velocity. As a result, u can be represented as the gradient of a scalar function Φ: u = ∇Φ. Φ is known as a velocity potential for u. A velocity potential is not unique. If Φ is a velocity potential, then Φ + f(t) is also a velocity potential for u, where f(t) is a scalar function of time and can be constant. In other words, velocity potentials are unique up to a constant, or a function solely of the temporal variable. The Laplacian of a velocity potential is equal to the divergence of the corresponding flow: ∇²Φ = ∇ · u. Hence if a velocity potential satisfies Laplace's equation, the flow is incompressible. Unlike a stream function, a velocity potential can exist in three-dimensional flow. Usage in acoustics In theoretical acoustics, it is often desirable to work with the acoustic wave equation of the velocity potential Φ instead of pressure p and/or particle velocity u. Solving the wave equation for either the p field or the u field does not necessarily provide a simple answer for the other field. On the other hand, when Φ is solved for, not only is u found as given above, but p is also easily found—from the (linearised) Bernoulli equation for irrotational and unsteady flow—as p = −ρ ∂Φ/∂t. See also Vorticity Hamiltonian fluid mechanics Potential flow Potential flow around a circular cylinder Notes External links Joukowski Transform Interactive WebApp
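As a quick sanity check of these identities, here is a small sympy sketch (the potential Φ = x² − y² is an arbitrary illustrative choice, not from the article): the velocity is recovered as the gradient of Φ, and because this Φ is harmonic the resulting flow has zero divergence, i.e. it is incompressible.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
phi = x**2 - y**2                       # candidate velocity potential

u = [sp.diff(phi, v) for v in (x, y, z)]        # flow velocity u = grad(phi)
divergence = sum(sp.diff(ui, v) for ui, v in zip(u, (x, y, z)))

print(u)                                # [2*x, -2*y, 0]
print(sp.simplify(divergence))          # 0 -> Laplacian of phi vanishes
```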
https://en.wikipedia.org/wiki/Upward%20continuation
Upward continuation is a method used in oil exploration and geophysics to estimate the values of a gravitational or magnetic field by using measurements at a lower elevation and extrapolating upward, assuming continuity. This technique is commonly used to merge different measurements to a common level so as to reduce scatter and allow for easier analysis. See also Petroleum geology
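In practice this extrapolation is often done in the wavenumber domain, where continuing a potential field upward by a height Δz multiplies its spectrum by exp(−|k|Δz). The Python sketch below applies that standard filter to a synthetic 1-D gravity profile; the grid spacing, continuation height, and profile shape are made-up illustrative values.

```python
import numpy as np

def upward_continue(field, dx, dz):
    """Continue a 1-D potential-field profile upward by dz (same units as dx)."""
    n = field.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)      # angular wavenumbers
    spectrum = np.fft.fft(field)
    return np.real(np.fft.ifft(spectrum * np.exp(-np.abs(k) * dz)))

# Example: a noisy anomaly profile smoothed by continuing it 100 m upward.
x = np.linspace(0, 10_000, 512)                  # metres along the profile
profile = np.exp(-((x - 5000) / 800) ** 2) + 0.05 * np.random.randn(x.size)
smoothed = upward_continue(profile, dx=x[1] - x[0], dz=100.0)
```

The exponential damping of high wavenumbers is what suppresses the short-wavelength scatter mentioned above when merging surveys to a common level.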
https://en.wikipedia.org/wiki/Xsupplicant
Xsupplicant is a supplicant that allows a workstation to authenticate with a RADIUS server using 802.1X and the Extensible Authentication Protocol (EAP). It can be used by computers with wired or wireless LAN connections to complete a strong authentication before joining the network, and it supports the dynamic assignment of WEP keys. Overview Xsupplicant up to version 1.2.8 was designed to run on Linux clients as a command line utility. Versions 1.3.X and greater are designed to run on Windows XP and are currently being ported to Linux/BSD systems; they include a robust graphical user interface as well as Network Access Control (NAC) functionality from the Trusted Computing Group's Trusted Network Connect NAC. Xsupplicant was chosen by the OpenSea Alliance, dedicated to developing, promoting, and distributing an open source 802.1X supplicant. Xsupplicant supports the following EAP types: EAP-MD5, LEAP, EAP-MSCHAPv2, EAP-OTP, EAP-PEAP (v0 and v1), EAP-SIM, EAP-TLS, EAP-TNC, EAP-TTLSv0 (PAP/CHAP/MS-CHAP/MS-CHAPv2/EAP), EAP-AKA, EAP-GTC, and EAP-FAST (partial). Xsupplicant is primarily maintained by Chris Hessing. See also wpa_supplicant
https://en.wikipedia.org/wiki/Index%20ellipsoid
In crystal optics, the index ellipsoid (also known as the optical indicatrix or sometimes as the dielectric ellipsoid) is a geometric construction which concisely represents the refractive indices and associated polarizations of light, as functions of the orientation of the wavefront, in a doubly-refractive crystal (provided that the crystal does not exhibit optical rotation). When this ellipsoid is cut through its center by a plane parallel to the wavefront, the resulting intersection (called a central section or diametral section) is an ellipse whose major and minor semiaxes have lengths equal to the two refractive indices for that orientation of the wavefront, and have the directions of the respective polarizations as expressed by the electric displacement vector D. The principal semiaxes of the index ellipsoid are called the principal refractive indices. It follows from the sectioning procedure that each principal semiaxis of the ellipsoid is generally not the refractive index for propagation in the direction of that semiaxis, but rather the refractive index for wavefronts tangential to that direction, with the D vector parallel to that direction, propagating perpendicular to that direction. Thus the direction of propagation (normal to the wavefront) to which each principal refractive index applies is in the plane perpendicular to the associated principal semiaxis. Terminology The index ellipsoid is not to be confused with the index surface, whose radius vector (from the origin) in any direction is indeed the refractive index for propagation in that direction; for a birefringent medium, the index surface is the two-sheeted surface whose two radius vectors in any direction have lengths equal to the major and minor semiaxes of the diametral section of the index ellipsoid by a plane normal to that direction. If we let n₁, n₂, n₃ denote the principal semiaxes of the index ellipsoid, and choose a Cartesian coordinate system in which these semiaxes are respectively in the x, y,
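The sectioning procedure can be carried out numerically. The sketch below (illustrative values; numpy assumed) builds the ellipsoid matrix B = diag(1/n₁², 1/n₂², 1/n₃²), restricts it to the plane perpendicular to the wave normal s, and reads the two refractive indices off the eigenvalues of that restriction, since the section's semiaxes are 1/√λ.

```python
import numpy as np

def section_indices(n_principal, s):
    """Refractive indices for a wavefront with unit normal s (made-up inputs)."""
    s = np.asarray(s, float)
    s /= np.linalg.norm(s)
    B = np.diag(1.0 / np.asarray(n_principal, float) ** 2)  # ellipsoid matrix
    # orthonormal basis {e1, e2} of the plane perpendicular to s
    a = np.array([1.0, 0, 0]) if abs(s[0]) < 0.9 else np.array([0, 1.0, 0])
    e1 = np.cross(s, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(s, e1)
    E = np.column_stack([e1, e2])
    lam = np.linalg.eigvalsh(E.T @ B @ E)    # restriction of B to the plane
    return 1.0 / np.sqrt(lam)                # semiaxis lengths = the indices

print(section_indices([1.5, 1.6, 1.7], s=[0, 0, 1]))  # -> [1.6, 1.5]
```

For s along the z axis the section is the x-y ellipse, so the code returns the first two principal indices, matching the geometric description above.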
https://en.wikipedia.org/wiki/Guillermo%20Owen
Guillermo Owen (born 1938) is a Colombian mathematician, and professor of applied mathematics at the Naval Postgraduate School in Monterey, California, known for his work in game theory. He is also the son of the Mexican poet and diplomat Gilberto Owen. Biography Guillermo Owen was born May 4, 1938, in Bogotá, Colombia, and obtained a B.S. degree from Fordham University in 1958, and a Ph.D. degree from Princeton University under the guidance of Harold Kuhn in 1962. Owen has taught at Fordham University (1961–1969), Rice University (1969–1977) and Los Andes University in Colombia (1978–1982, 2008), apart from having given lectures at many universities in Europe and Latin America. He currently holds the position of Distinguished Professor of applied mathematics at the Naval Postgraduate School in Monterey, California. Owen is a member of the Colombian Academy of Sciences, the Royal Academy of Arts and Sciences of Barcelona, and the Third World Academy of Sciences. He is an associate editor of the International Journal of Game Theory, and a fellow of the International Game Theory Society. Honors and awards The Escuela Naval Almirante Padilla of Cartagena gave him an honorary degree of Naval Science Professional in June 2004. Owen was named Honorary President of the XIV Latin Ibero-American Congress on Operations Research (CLAIO 2008), Cartagena, Colombia, September 2008. The University of Lower Normandy, in Caen, France, gave him an honorary doctorate in October 2017. Publications Owen has authored, translated and/or edited thirteen books, and approximately one hundred and forty papers published in journals such as Management Science, Operations Research, International Journal of Game Theory, American Political Science Review, and Mathematical Programming, among others. Owen's books include: 1968. Game theory. Academic Press 1970. Finite mathematics and calculus; mathematics for the social and management sciences. With M. Evans Munroe. 1983. Information p
https://en.wikipedia.org/wiki/Security%20Administrator%20Tool%20for%20Analyzing%20Networks
Security Administrator Tool for Analyzing Networks (SATAN) was a free software vulnerability scanner for analyzing networked computers. SATAN captured the attention of a broad technical audience, appearing in PC Magazine and drawing threats from the United States Department of Justice. It featured a web interface, complete with forms to enter targets, tables to display results, and context-sensitive tutorials that appeared when a vulnerability had been found. Naming For those offended by the name SATAN, the software contained a special command called repent, which rearranged the letters in the program's acronym from "SATAN" to "SANTA". Description The tool was developed by Dan Farmer and Wietse Venema. Neil Gaiman drew the artwork for the SATAN documentation. SATAN was designed to help systems administrators automate the process of testing their systems for known vulnerabilities that can be exploited via the network. This was particularly useful for networked systems with multiple hosts. Like most security tools, it could be used for good or malicious purposes – it was also useful to would-be intruders looking for systems with security holes. SATAN was written mostly in Perl and utilized a web browser such as Netscape, Mosaic or Lynx to provide the user interface. This easy-to-use interface drove the scanning process and presented the results in summary format. As well as reporting the presence of vulnerabilities, SATAN also gathered large amounts of general network information, such as which hosts were connected to subnets, what types of machines they were, and which services they offered. Status SATAN's popularity diminished after the 1990s. It was released in 1995 and development has ceased. In 2006, SecTools.Org conducted a security popularity poll and developed a list of 100 network security analysis tools in order of popularity based on the responses of 3,243 people. Results suggest that SATAN has been replaced by nmap, Nessus and, to a lesser degree, SARA (Securi
https://en.wikipedia.org/wiki/Range%20of%20motion
Range of motion (or ROM) is the linear or angular distance that a moving object may normally travel while properly attached to another. In biomechanics and strength training, ROM refers to the angular distance and direction a joint can move between the flexed position and the extended position. The act of attempting to increase this distance through therapeutic exercises (range of motion therapy—stretching from flexion to extension for physiological gain) is also sometimes called range of motion. In mechanical engineering, the term (also called range of travel or ROT) is used particularly when talking about mechanical devices, such as a sound volume control knob. In biomechanics Measuring range of motion Each specific joint has a normal range of motion that is expressed in degrees. The reference values for the normal ROM in individuals differ slightly depending on age and gender. For example, as an individual ages, they typically lose a small amount of ROM. Analog and traditional devices to measure range of motion in the joints of the body include the goniometer and inclinometer, which use a stationary arm, protractor, fulcrum, and movement arm to measure the angle from the axis of the joint. As measurement results will vary by the degree of resistance, two levels of range of motion results are recorded in most cases. Recent technological advances in 3D motion capture technology allow for the measurement of multiple joints concurrently, which can be used to measure a patient's active range of motion. Limited range of motion Limited range of motion refers to a joint that has a reduction in its ability to move. The reduced motion may be a problem with the specific joint or it may be caused by injury or diseases such as osteoarthritis, rheumatoid arthritis, or other types of arthritis. Pain, swelling, and stiffness associated with arthritis can limit the range of motion of a particular joint and impair function and the ability to perform usual daily activities. Limited range of moti
https://en.wikipedia.org/wiki/Rasch%20model%20estimation
Estimation of a Rasch model is used to estimate the parameters of the Rasch model. Various techniques are employed to estimate the parameters from matrices of response data. The most common approaches are types of maximum likelihood estimation, such as joint and conditional maximum likelihood estimation. Joint maximum likelihood (JML) equations are efficient, but inconsistent for a finite number of items, whereas conditional maximum likelihood (CML) equations give consistent and unbiased item estimates. Person estimates are generally thought to have bias associated with them, although weighted likelihood estimation methods for the estimation of person parameters reduce the bias. Rasch model The Rasch model for dichotomous data takes the form: P(X_ni = 1) = exp(β_n − δ_i) / (1 + exp(β_n − δ_i)), where β_n is the ability of person n and δ_i is the difficulty of item i. Joint maximum likelihood Let x_ni denote the observed response for person n on item i. The probability of the observed data matrix, which is the product of the probabilities of the individual responses, is given by the likelihood function Λ = ∏_n ∏_i P(X_ni = x_ni). The log-likelihood function is then log Λ = ∑_n β_n r_n − ∑_i δ_i s_i − ∑_n ∑_i log(1 + exp(β_n − δ_i)), where r_n = ∑_i x_ni is the total raw score for person n, s_i = ∑_n x_ni is the total raw score for item i, N is the total number of persons and I is the total number of items. Solution equations are obtained by taking partial derivatives with respect to β_n and δ_i and setting the result equal to 0. The JML solution equations are: r_n = ∑_i π_ni and s_i = ∑_n π_ni, where π_ni = exp(β_n − δ_i) / (1 + exp(β_n − δ_i)). The resulting estimates are biased, and no finite estimates exist for persons with score 0 (no correct responses) or with 100% correct responses (perfect score). The same holds for items with extreme scores; no estimates exist for these as well. This bias is due to a well-known effect described by Kiefer & Wolfowitz (1956). It is of the order I/(I − 1), and a more accurate (less biased) estimate of each δ_i is obtained by multiplying the estimates by (I − 1)/I. Conditional maximum likelihood The conditional likelihood function is defined as the probability of the responses conditional on the raw scores, in which γ_r is the elementary symmetric function of order r, wh
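A minimal numeric sketch of the JML iteration described above (assuming numpy, a complete response matrix with no extreme rows or columns, and made-up data): it alternates Newton steps on the two solution equations until the expected scores match the observed raw scores.

```python
import numpy as np

def rasch_jml(X, iters=100):
    """Joint ML for the dichotomous Rasch model; X is persons x items."""
    X = np.asarray(X, float)
    beta = np.zeros(X.shape[0])              # person abilities
    delta = np.zeros(X.shape[1])             # item difficulties
    r, s = X.sum(axis=1), X.sum(axis=0)      # raw scores
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(beta[:, None] - delta[None, :])))
        info = p * (1 - p)
        beta += (r - p.sum(axis=1)) / info.sum(axis=1)    # Newton step on r_n
        p = 1 / (1 + np.exp(-(beta[:, None] - delta[None, :])))
        info = p * (1 - p)
        delta -= (s - p.sum(axis=0)) / info.sum(axis=0)   # Newton step on s_i
        delta -= delta.mean()                # fix the scale's origin
    return beta, delta

X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]])  # toy data
beta, delta = rasch_jml(X)
```

The (I − 1)/I correction mentioned above can then be applied to the returned difficulties to reduce the JML bias.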
https://en.wikipedia.org/wiki/Ensemble%20forecasting
Ensemble forecasting is a method used in numerical weather prediction. Instead of making a single forecast of the most likely weather, a set (or ensemble) of forecasts is produced. This set of forecasts aims to give an indication of the range of possible future states of the atmosphere. Ensemble forecasting is a form of Monte Carlo analysis. The multiple simulations are conducted to account for the two usual sources of uncertainty in forecast models: (1) the errors introduced by the use of imperfect initial conditions, amplified by the chaotic nature of the evolution equations of the atmosphere, which is often referred to as sensitive dependence on initial conditions; and (2) errors introduced because of imperfections in the model formulation, such as the approximate mathematical methods to solve the equations. Ideally, the verified future atmospheric state should fall within the predicted ensemble spread, and the amount of spread should be related to the uncertainty (error) of the forecast. In general, this approach can be used to make probabilistic forecasts of any dynamical system, and not just for weather prediction. Instances Today ensemble predictions are commonly made at most of the major operational weather prediction facilities worldwide, including: National Centers for Environmental Prediction (NCEP of the US) European Centre for Medium-Range Weather Forecasts (ECMWF) United Kingdom Met Office Météo-France Environment Canada Japan Meteorological Agency Bureau of Meteorology (Australia) China Meteorological Administration (CMA) Korea Meteorological Administration CPTEC (Brazil) Ministry of Earth Sciences (IMD, IITM & NCMRWF) (India) Experimental ensemble forecasts are made at a number of universities, such as the University of Washington, and ensemble forecasts in the US are also generated by the US Navy and Air Force. There are various ways of viewing the data, such as spaghetti plots, ensemble means, or postage stamps, where a number o
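The Monte Carlo idea can be shown with a toy dynamical system rather than a full atmosphere model. In the hedged Python sketch below (crude forward-Euler steps of the Lorenz-63 equations; all numbers are illustrative), twenty ensemble members start from almost identical initial conditions, and the ensemble spread grows with lead time, exactly as sensitive dependence on initial conditions would suggest.

```python
import numpy as np

def lorenz_step(v, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    x, y, z = v
    return v + dt * np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

rng = np.random.default_rng(0)
# 20 members, each perturbed slightly around the same initial state
ensemble = np.array([8.0, 1.0, 1.0]) + 1e-3 * rng.standard_normal((20, 3))
for step in range(1, 1501):
    ensemble = np.array([lorenz_step(m) for m in ensemble])
    if step % 500 == 0:
        print(step, ensemble.std(axis=0))   # spread grows as errors amplify
```

In an operational setting the same principle holds: a well-calibrated ensemble's spread at a given lead time should track the actual forecast error at that lead time.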
https://en.wikipedia.org/wiki/Simple%20set
In computability theory, a subset of the natural numbers is called simple if it is computably enumerable (c.e.) and co-infinite (i.e. its complement is infinite), but every infinite subset of its complement fails to be c.e. Simple sets are examples of c.e. sets that are not computable. Relation to Post's problem Simple sets were devised by Emil Leon Post in the search for a non-Turing-complete c.e. set. Whether such sets exist is known as Post's problem. Post had to prove two things in order to obtain his result: that the simple set A is not computable, and that K, the halting problem, does not Turing-reduce to A. He succeeded in the first part (which is obvious by definition), but for the other part, he managed only to prove a many-one reduction. Post's idea was validated by Friedberg and Muchnik in the 1950s using a novel technique called the priority method. They gave a construction for a set that is simple (and thus non-computable), but fails to compute the halting problem. Formal definitions and some properties In what follows, W_e denotes a standard uniformly c.e. listing of all the c.e. sets. A set A is called immune if A is infinite, but for every index e with W_e infinite, we have W_e ⊄ A. Or equivalently: there is no infinite subset of A that is c.e. A set is called simple if it is c.e. and its complement is immune. A set A is called effectively immune if A is infinite, but there exists a recursive function f such that for every index e with W_e ⊆ A, we have that |W_e| ≤ f(e). A set is called effectively simple if it is c.e. and its complement is effectively immune. Every effectively simple set is simple and Turing-complete. A set A is called hyperimmune if A is infinite, but p_A is not computably dominated, where p_A(n) is the list of members of A in ascending order. A set is called hypersimple if it is simple and its complement is hyperimmune. Notes
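For concreteness, here is the standard construction of a simple set due to Post, stated as a sketch in the notation above (W_e is the e-th c.e. set).

```latex
% Post's simple set: enumerate all W_e simultaneously and, for each e, put
% into A the first element x found with x \in W_e and x > 2e.
A = \{\, x \mid \exists e\; (x \in W_e \;\wedge\; x > 2e \;\wedge\;
      x \text{ is the first such element enumerated for } e) \,\}
```

A is c.e. by construction; its complement is infinite because at most n elements below 2n ever enter A; and every infinite W_e contains some x > 2e, so W_e meets A. Hence the complement of A is immune and A is simple.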
https://en.wikipedia.org/wiki/History%20of%20the%20Teller%E2%80%93Ulam%20design
The Teller–Ulam design is a technical concept behind modern thermonuclear weapons, also known as hydrogen bombs. The design – the details of which are military secrets and known to only a handful of major nations – is History Teller's "Super" The idea of using the energy from a fission device to begin a fusion reaction was first proposed by the Italian physicist Enrico Fermi to his colleague Edward Teller in the fall of 1941 during what would soon become the Manhattan Project, the World War II effort by the United States and United Kingdom to develop the first nuclear weapons. Teller soon was a participant at Robert Oppenheimer's 1942 summer conference on the development of a fission bomb held at the University of California, Berkeley, where he guided discussion towards the idea of creating his "Super" bomb, which would hypothetically be many times more powerful than the yet-undeveloped fission weapon. Teller assumed creating the fission bomb would be nothing more than an engineering problem, and that the "Super" provided a much more interesting theoretical challenge. For the remainder of the war the effort was focused on first developing fission weapons. Nevertheless, Teller continued to pursue the "Super", to the point of neglecting work assigned to him for the fission weapon at the secret Los Alamos lab where he worked. (Much of the work Teller declined to do was given instead to Klaus Fuchs, who was later discovered to be a spy for the Soviet Union.) Teller was given some resources with which to study the "Super", and contacted his friend Maria Göppert-Mayer to help with laborious calculations relating to opacity. The "Super", however, proved elusive, and the calculations were incredibly difficult to perform, especially since there was no existing way to run small-scale tests of the principles involved (in comparison, the properties of fission could be more easily probed with cyclotrons, newly created nuclear reactors, and various other tests). Even though
https://en.wikipedia.org/wiki/Prone%20position
Prone position is a body position in which the person lies flat with the chest down and the back up. In anatomical terms of location, the dorsal side is up, and the ventral side is down. The supine position is the 180° contrast. Etymology The word prone, meaning "naturally inclined to something, apt, liable," has been recorded in English since 1382; the meaning "lying face-down" was first recorded in 1578, but is also referred to as "lying down" or "going prone." Prone derives from the Latin pronus, meaning "bent forward, inclined to," from the adverbial form of the prefix pro- "forward." Both the original, literal sense and the derived figurative sense were used in Latin, but the figurative is older in English. Anatomy In anatomy, the prone position is a position of the body lying face down. It is opposed to the supine position, which is face up. Using the terms defined in the anatomical position, the ventral side is down, and the dorsal side is up. Concerning the forearm, prone refers to that configuration where the palm of the hand is directed posteriorly, and the radius and ulna are crossed. Researchers observed that the expiratory reserve volume measured at relaxation volume increased from supine to prone by a factor of 0.15. Shooting In competitive shooting, the prone position is the position of a shooter lying face down on the ground. It is considered the easiest and most accurate position, as the ground provides extra stability. It is one of the positions in three-position events. For many years (1932–2016), the only purely prone Olympic event was the 50 meter rifle prone; however, this has since been dropped from the Olympic program. Both men and women still have the 50 meter rifle three positions as an Olympic shooting event. The prone position is often used in military combat as, like in competitive shooting, the prone position provides the best accuracy and stability. Many first-person shooter video games also allow the player character to go into the
https://en.wikipedia.org/wiki/Intergluteal%20cleft
The intergluteal cleft or just gluteal cleft, also known by a number of synonyms, including natal cleft, butt crack, and cluneal cleft, is the groove between the buttocks that runs from just below the sacrum to the perineum, so named because it forms the visible border between the external rounded protrusions of the gluteus maximus muscles. Other names are the anal cleft, crena analis, crena interglutealis, and rima ani. Colloquially the intergluteal cleft is known as bum crack (UK) or butt crack (US). The intergluteal cleft is located superior to the anus. There are several disorders that can affect the intergluteal cleft including inverse psoriasis, caudal regression syndrome, and pilonidal disease. See also Anal canal Anatomical terms of location Buttock cleavage Rectum
https://en.wikipedia.org/wiki/Cesare%20Burali-Forti
Cesare Burali-Forti (13 August 1861 – 21 January 1931) was an Italian mathematician, after whom the Burali-Forti paradox is named. Biography Burali-Forti was born in Arezzo, and was an assistant of Giuseppe Peano in Turin from 1894 to 1896, during which time he discovered a theorem which Bertrand Russell later realised contradicted a previously proved result by Georg Cantor. The contradiction came to be called the Burali-Forti paradox of Cantorian set theory. He died in Turin. Books by C. Burali-Forti Analyse vectorielle générale: Applications à la mécanique et à la physique. with Roberto Marcolongo (Mattéi & co., Pavia, 1913). Corso di geometria analitico-proiettiva per gli allievi della R. Accademia Militare (G. B. Petrini di G. Gallizio, Torino, 1912). Geometria descrittiva (S. Lattes & c., Torino, 1921). Introduction à la géométrie différentielle, suivant la méthode de H. Grassmann (Gauthier-Villars, 1897). Lezioni Di Geometria Metrico-Proiettiva (Fratelli Bocca, Torino, 1904). Meccanica razionale with Tommaso Boggio (S. Lattes & c., Torino, 1921). Logica Matematica (Hoepli, Milano, 1894). Complete listing of publications and bibliography, 8 pages. Bibliography Primary literature in English translation: Jean van Heijenoort, 1967. A Source Book in Mathematical Logic, 1879-1931. Harvard Univ. Press. 1897. "A question on transfinite numbers," 104-11. 1897. "On well-ordered classes," 111-12. Secondary literature: Ivor Grattan-Guinness, 2000. The Search for Mathematical Roots 1870-1940. Princeton Uni. Press.
https://en.wikipedia.org/wiki/Semi-Thue%20system
In theoretical computer science and mathematical logic a string rewriting system (SRS), historically called a semi-Thue system, is a rewriting system over strings from a (usually finite) alphabet. Given a binary relation R between fixed strings over the alphabet, called rewrite rules, denoted by s → t, an SRS extends the rewriting relation to all strings in which the left- and right-hand side of the rules appear as substrings, that is usv → utv, where s → t is a rule and u and v are arbitrary strings. The notion of a semi-Thue system essentially coincides with the presentation of a monoid. Thus they constitute a natural framework for solving the word problem for monoids and groups. An SRS can be defined directly as an abstract rewriting system. It can also be seen as a restricted kind of a term rewriting system. As a formalism, string rewriting systems are Turing complete. The semi-Thue name comes from the Norwegian mathematician Axel Thue, who introduced systematic treatment of string rewriting systems in a 1914 paper. Thue introduced this notion hoping to solve the word problem for finitely presented semigroups. Only in 1947 was the problem shown to be undecidable—this result was obtained independently by Emil Post and A. A. Markov Jr. Definition A string rewriting system or semi-Thue system is a tuple (Σ, R) where Σ is an alphabet, usually assumed finite. The elements of the set Σ* (* is the Kleene star here) are finite (possibly empty) strings on Σ, sometimes called words in formal languages; we will simply call them strings here. R is a binary relation on strings from Σ*, i.e., R ⊆ Σ* × Σ*. Each element (s, t) ∈ R is called a (rewriting) rule and is usually written s → t. If the relation R is symmetric, then the system is called a Thue system. The rewriting rules in R can be naturally extended to other strings in Σ* by allowing substrings to be rewritten according to R. More formally, the one-step rewriting relation →_R induced by R on Σ* is defined, for any strings s, t ∈ Σ*, by: s →_R t if and only if there exist x, y, u, v ∈ Σ* such that s = xuy, t = xvy, and u → v. Since →_R is a relation on Σ*,
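A one-step rewriting relation of this kind is straightforward to compute. The Python sketch below (an illustration, with a made-up rule set) returns all strings reachable from a given string by one application of some rule.

```python
def successors(string, rules):
    """All one-step rewrites of `string` under rules given as (s, t) pairs."""
    out = set()
    for s, t in rules:
        start = 0
        while (i := string.find(s, start)) != -1:
            out.add(string[:i] + t + string[i + len(s):])  # rewrite xsy -> xty
            start = i + 1
    return out

rules = [("ab", "ba")]                 # toy rule: move an 'a' past a 'b'
print(successors("abab", rules))       # {'baab', 'abba'}
```

Iterating successors to a fixed point computes the strings derivable in any number of steps; the undecidability result above says that, in general, no algorithm can decide whether one given string is derivable from another.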
https://en.wikipedia.org/wiki/List%20of%20anatomical%20isthmi
In anatomy, isthmus refers to a constriction between organs. This is a list of anatomical isthmi:
Aortic isthmus, section of the aortic arch
Cavo-tricuspid isthmus of the right atrium of the heart, a body of fibrous tissue in the lower atrium between the inferior vena cava and the tricuspid valve
Isthmus, the ear side of the eustachian tube
Isthmus, narrowed part between the trunk and the splenium of the corpus callosum
Isthmus, site of shell-membrane formation in birds' oviducts
Isthmus lobe, lobe in the prostate
Isthmus of cingulate gyrus
Isthmus of fauces, opening at the back of the mouth into the throat
Isthmus organizer, secondary organizer region at the junction of the midbrain and metencephalon
Isthmus tubae uterinae, links the fallopian tube to the uterus
Kronig isthmus, band of resonance representing the apex of the lung
Thyroid isthmus, thin band of tissue connecting some of the lobes that make up the thyroid
Uterine isthmus, inferior-posterior part of the uterus
https://en.wikipedia.org/wiki/Evolution%20of%20human%20intelligence
The evolution of human intelligence is closely tied to the evolution of the human brain and to the origin of language. The timeline of human evolution spans approximately seven million years, from the separation of the genus Pan until the emergence of behavioral modernity by 50,000 years ago. The first three million years of this timeline concern Sahelanthropus, the following two million concern Australopithecus and the final two million span the history of the genus Homo in the Paleolithic era. Many traits of human intelligence, such as empathy, theory of mind, mourning, ritual, and the use of symbols and tools, are somewhat apparent in great apes, although in much less sophisticated forms than those found in humans; great ape language is one example. History Hominidae The great apes (hominidae) show some cognitive and empathic abilities. Chimpanzees can make tools and use them to acquire foods and for social displays; they have mildly complex hunting strategies requiring cooperation, influence and rank; they are status conscious, manipulative and capable of deception; they can learn to use symbols and understand aspects of human language including some relational syntax, concepts of number and numerical sequence. One common characteristic present in species of "high degree intelligence" (i.e. dolphins, great apes, and humans, Homo sapiens) is a brain of enlarged size. Along with this, there is a more developed neocortex, a folding of the cerebral cortex, and von Economo neurons. These neurons are linked to social intelligence and the ability to gauge what another is thinking or feeling, and are also present in bottlenose dolphins. Homininae Around 10 million years ago, the Earth's climate entered a cooler and drier phase, which led eventually to the Quaternary glaciation beginning some 2.6 million years ago. One consequence of this was that the north African tropical forest began to retreat, being replaced first by open grasslands and eventually by
https://en.wikipedia.org/wiki/Alessandro%20Padoa
Alessandro Padoa (14 October 1868 – 25 November 1937) was an Italian mathematician and logician, a contributor to the school of Giuseppe Peano. He is remembered for a method for deciding whether, given some formal theory, a new primitive notion is truly independent of the other primitive notions. There is an analogous problem in axiomatic theories, namely deciding whether a given axiom is independent of the other axioms. The following description of Padoa's career is included in a biography of Peano: He attended secondary school in Venice, engineering school in Padua, and the University of Turin, from which he received a degree in mathematics in 1895. Although he was never a student of Peano, he was an ardent disciple and, from 1896 on, a collaborator and friend. He taught in secondary schools in Pinerolo, Rome, Cagliari, and (from 1909) at the Technical Institute in Genoa. He also held positions at the Normal School in Aquila and the Naval School in Genoa, and, beginning in 1898, he gave a series of lectures at the Universities of Brussels, Pavia, Berne, Padua, Cagliari, and Geneva. He gave papers at congresses of philosophy and mathematics in Paris, Cambridge, Livorno, Parma, Padua, and Bologna. In 1934 he was awarded the ministerial prize in mathematics by the Accademia dei Lincei (Rome). The congresses in Paris in 1900 were particularly notable. Padoa's addresses at these congresses have been well remembered for their clear and unconfused exposition of the modern axiomatic method in mathematics. In fact, he is said to be "the first … to get all the ideas concerning defined and undefined concepts completely straight". Congressional addresses Philosophers' congress At the International Congress of Philosophy Padoa spoke on "Logical Introduction to Any Deductive Theory". He says during the period of elaboration of any deductive theory we choose the ideas to be represented by the undefined symbols and the facts to be stated by the unproved propositions; but, wh
https://en.wikipedia.org/wiki/Mario%20Pieri
Mario Pieri (22 June 1860 – 1 March 1913) was an Italian mathematician who is known for his work on foundations of geometry. Biography Pieri was born in Lucca, Italy, the son of Pellegrino Pieri and Ermina Luporini. Pellegrino was a lawyer. Pieri began his higher education at University of Bologna where he drew the attention of Salvatore Pincherle. Obtaining a scholarship, Pieri transferred to Scuola Normale Superiore in Pisa. There he took his degree in 1884 and worked first at a technical secondary school in Pisa. When an opportunity arose at the military academy in Turin to teach projective geometry, Pieri moved there and, by 1888, he was also an assistant instructor in the same subject at the University of Turin. By 1891, he had become libero docente at the university, teaching elective courses. Pieri continued to teach in Turin until 1900 when, through competition, he was awarded the position of extraordinary professor at University of Catania on the island of Sicily. Von Staudt's Geometrie der Lage (1847) was a much admired text on projective geometry. In 1889 Pieri translated it as Geometria di Posizione, a publication that included a study of the life and work of von Staudt written by Corrado Segre, the initiator of the project. Pieri also came under the influence of Giuseppe Peano at Turin. He contributed to the Formulario mathematico, and Peano placed nine of Pieri's papers for publication with the Academy of Sciences of Turin between 1895 and 1912. They shared a passion for reducing geometric ideas to their logical form and expressing these ideas symbolically. In 1898 Pieri wrote I principii della geometria di posizione composti in un sistema logico-deduttivo. It was based on nineteen sequentially independent axioms – each independent of the preceding ones – which are introduced one by one as they are needed in the development, thus allowing the reader to determine on which axioms a given theorem depends. Pie
https://en.wikipedia.org/wiki/Gino%20Fano
Gino Fano (5 January 1871 – 8 November 1952) was an Italian mathematician, best known as the founder of finite geometry. He was born to a wealthy Jewish family in Mantua, Italy, and died in Verona, also in Italy. Fano made various contributions to projective and algebraic geometry. His work in the foundations of geometry predates the similar, but more popular, work of David Hilbert by about a decade. He was the father of physicist Ugo Fano and electrical engineer Robert Fano and uncle to physicist and mathematician Giulio Racah. Mathematical work Fano was an early writer in the area of finite projective spaces. In his article on proving the independence of his set of axioms for projective n-space, among other things, he considered the consequences of having a fourth harmonic point be equal to its conjugate. This leads to a configuration of seven points and seven lines contained in a finite three-dimensional space with 15 points, 35 lines and 15 planes, in which each line contains only three points. All the planes in this space consist of seven points and seven lines and are now known as Fano planes: Fano went on to describe finite projective spaces of arbitrary dimension and prime orders. In 1907 Gino Fano contributed two articles to Part III of Klein's encyclopedia. The first (SS. 221–88) was a comparison of analytic geometry and synthetic geometry through their historic development in the 19th century. The second (SS. 282–388) was on continuous groups in geometry and group theory as a unifying principle in geometry. Notes
https://en.wikipedia.org/wiki/Vagrancy%20%28biology%29
Vagrancy is a phenomenon in biology whereby an individual animal (usually a bird) appears well outside its normal range; such individuals are known as vagrants. The term accidental is sometimes also used. There are a number of poorly understood factors which might cause an animal to become a vagrant, including internal causes such as navigatory errors (endogenous vagrancy) and external causes such as severe weather (exogenous vagrancy). Vagrancy events may lead to colonisation and eventually to speciation. Birds In the Northern Hemisphere, adult birds (possibly inexperienced younger adults) of many species are known to continue past their normal breeding range during their spring migration and end up in areas further north (such birds are termed spring overshoots). In autumn, some young birds, instead of heading to their usual wintering grounds, take "incorrect" courses and migrate through areas which are not on their normal migration path. For example, Siberian passerines which normally winter in Southeast Asia are commonly found in Northwest Europe, e.g. Arctic warblers in Britain. This is reverse migration, where the birds migrate in the opposite direction to that expected (say, flying north-west instead of south-east). The causes of this are unknown, but genetic mutation or other anomalies relating to the bird's magnetic sensibilities are suspected. Other birds are sent off course by storms, such as some North American birds blown across the Atlantic Ocean to Europe. Birds can also be blown out to sea, become physically exhausted, land on a ship and end up being carried to the ship's destination. While many vagrant birds do not survive, if sufficient numbers wander to a new area they can establish new populations. Many isolated oceanic islands are home to species that are descended from landbirds blown out to sea, Hawaiian honeycreepers and Darwin's finches being prominent examples. Insects Vagrancy in insects is recorded from many groups—it is particularly well-stu
https://en.wikipedia.org/wiki/Tunnel%20broker
In the context of computer networking, a tunnel broker is a service which provides a network tunnel. These tunnels can provide encapsulated connectivity over existing infrastructure to another infrastructure. There are a variety of tunnel brokers, including IPv4 tunnel brokers, though most commonly the term is used to refer to an IPv6 tunnel broker as defined in . IPv6 tunnel brokers typically provide IPv6 to sites or end users over IPv4. In general, IPv6 tunnel brokers offer so-called 'protocol 41' or proto-41 tunnels. These are tunnels where IPv6 is tunneled directly inside IPv4 packets by having the protocol field set to '41' (IPv6) in the IPv4 packet. In the case of IPv4 tunnel brokers, IPv4 tunnels are provided to users by encapsulating IPv4 inside IPv6 as defined in . Automated configuration Configuration of IPv6 tunnels is usually done using the Tunnel Setup Protocol (TSP) or the Tunnel Information Control protocol (TIC). A client capable of this is AICCU (Automatic IPv6 Connectivity Client Utility). In addition to IPv6 tunnels, TSP can also be used to set up IPv4 tunnels. NAT issues Proto-41 tunnels (direct IPv6 in IPv4) may not operate well when situated behind NATs. One way around this is to configure the actual endpoint of the tunnel to be the DMZ on the NAT-utilizing equipment. Another method is to use either AYIYA or TSP, both of which send IPv6 inside UDP packets, which can cross most NAT setups and even firewalls. A problem that still might occur is the timing-out of the state in the NAT machine. As a NAT remembers that a packet went outside to the Internet, it allows another packet to come back in from the Internet that is related to the initial proto-41 packet. When this state expires, no other packets from the Internet will be accepted. This therefore breaks the connectivity of the tunnel until the user's host again sends out a packet to the tunnel broker. Dynamic endpoints When the endpoint isn't a static IP address, the use
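To make the encapsulation concrete, here is a hedged sketch using scapy (assumed available; the addresses are documentation examples, not real endpoints): a proto-41 packet is simply an ordinary IPv4 packet whose protocol field is 41 and whose payload is a complete IPv6 packet.

```python
# Illustrative only: build (not send) an IPv6-in-IPv4 "proto-41" packet.
from scapy.all import IP, IPv6, ICMPv6EchoRequest

outer = IP(dst="192.0.2.1", proto=41)              # IPv4 carrier, protocol 41
inner = IPv6(dst="2001:db8::1") / ICMPv6EchoRequest()
packet = outer / inner
packet.show()                                      # inspect the nested headers
```

This nesting is also why NATs struggle with proto-41: there are no port numbers in the outer IPv4 header for the NAT to track, which is the motivation for the UDP-based AYIYA and TSP alternatives above.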
https://en.wikipedia.org/wiki/Susana%20Urbina
Susana Urbina (born 1946) is a Peruvian-American psychologist. She received her Ph.D. in Psychometrics from Fordham University in 1972 and was licensed in Florida in 1976. She currently teaches at the University of North Florida, where her principal areas of teaching and research are psychological testing and assessment. Urbina is a fellow of Division 5 (Evaluation, Measurement, and Statistics) of the American Psychological Association (APA) and of the Society for Personality Assessment. She has chaired the Committee on Psychological Tests and Assessment and the Committee on Professional Practice and Standards of the APA. In 1995, Urbina was part of an 11-member APA task force led by Ulric Neisser which published "Intelligence: Knowns and Unknowns," a report written in response to The Bell Curve. Publications Urbina S. Psychological Testing: Seventh Edition, Study Guide. Macmillan Pub Co; 6th edition (June 1, 1989) Urbina S. Psychological Testing: Study Guide. Prentice Hall; 7th edition (September 1, 1997) Urbina S. (2014) Essentials of Psychological Testing. Wiley; 2nd edition External links Susana Urbina website via University of North Florida
https://en.wikipedia.org/wiki/Stromquist%20moving-knives%20procedure
The Stromquist moving-knives procedure is a procedure for envy-free cake-cutting among three players. It is named after Walter Stromquist, who presented it in 1980. This procedure was the first envy-free moving-knife procedure devised for three players. It requires four knives but only two cuts, so each player receives a single connected piece. There is no natural generalization to more than three players which divides the cake without extra cuts. The resulting partition is not necessarily efficient. Procedure A referee moves a sword from left to right over the cake, hypothetically dividing it into a small left piece and a large right piece. Each player moves a knife over the right piece, always keeping it parallel to the sword. The players must move their knives in a continuous manner, without making any "jumps". When any player shouts "cut", the cake is cut by the sword and by whichever of the players' knives happens to be the central one of the three (that is, the second in order from the sword). Then the cake is divided in the following way: The piece to the left of the sword, which we denote Left, is given to the player who first shouted "cut". We call this player the "shouter" and the other two players the "quieters". The piece between the sword and the central knife, which we denote Middle, is given to the remaining player whose knife is closest to the sword. The remaining piece, Right, is given to the third player. Strategy Each player can act in a way that guarantees that—according to their own measure—no other player receives more than them: Always hold your knife such that it divides the part to the right of the sword into two pieces that are equal in your eyes (hence, your knife initially divides the entire cake into two equal parts and then moves rightwards as the sword moves rightwards). Shout 'cut' when Left becomes equal to the piece you are about to receive if you remain quiet (i.e. if your knife is leftmost, shout 'cut' if Left=Middle; if your
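The strategy's knife placement is easy to compute numerically. In the Python sketch below (illustrative valuation densities; numpy assumed), the cake is the interval [0, 1], each player's measure is given by a density, and knife(F, sword_x) returns the point that bisects, by that player's measure, the part to the right of the sword; the shout condition then compares Left against the piece the player stands to receive.

```python
import numpy as np

xs = np.linspace(0, 1, 1001)                 # cake as the interval [0, 1]

def cdf_from_density(density):
    """Normalized measure F(x) of [0, x] from a positive density (toy input)."""
    f = density(xs)
    F = np.concatenate([[0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(xs))])
    return F / F[-1]

def knife(F, sword_x):
    """Point m with F(m) = (F(sword_x) + 1) / 2, bisecting [sword_x, 1]."""
    target = (np.interp(sword_x, xs, F) + 1) / 2
    return np.interp(target, F, xs)

F_alice = cdf_from_density(lambda x: np.ones_like(x))   # uniform valuation
F_bob   = cdf_from_density(lambda x: 1 + x)             # prefers the right end
for sword in (0.0, 0.2, 0.4):
    print(sword, knife(F_alice, sword), knife(F_bob, sword))
```

As the sword sweeps right, each knife position moves right continuously, which is what lets the intermediate-value argument behind the procedure go through.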
https://en.wikipedia.org/wiki/Austin%20moving-knife%20procedures
The Austin moving-knife procedures are procedures for equitable division of a cake. To each of n partners, they allocate a piece of the cake which this partner values as exactly 1/n of the cake. This is in contrast to proportional division procedures, which give each partner at least 1/n of the cake, but may give more to some of the partners. When n = 2, the division generated by Austin's procedure is an exact division and it is also envy-free. Moreover, it is possible to divide the cake into any number k of pieces which both partners value as exactly 1/k. Hence, it is possible to divide the cake between the partners in any fraction (e.g. give 1/3 to Alice and 2/3 to George). When n > 2, the division is neither exact nor envy-free, since each partner only values his own piece as 1/n, but may value other pieces differently. The main mathematical tool used by Austin's procedure is the intermediate value theorem (IVT). Two partners and half-cakes The basic procedures involve two partners who want to divide a cake such that each of them gets exactly one half. Two knives procedure For the sake of description, call the two players Alice and George, and assume the cake is rectangular. Alice places one knife on the left of the cake and a second parallel to it on the right where she judges it splits the cake in two. Alice moves both knives to the right in a way that the part between the two knives always contains half of the cake's value in her eyes (while the physical distance between the knives may change). George says "stop!" when he thinks that half the cake is between the knives. How can we be sure that George can say "stop" at some point? Because if Alice reaches the end, she must have her left knife positioned where the right knife started. The IVT establishes that George must be satisfied the cake is halved at some point. A coin is tossed to select between two options: either George receives the piece between the knives and Alice receives the two pieces at the flanks, or vice ver
https://en.wikipedia.org/wiki/Adjusted%20winner%20procedure
Adjusted Winner (AW) is a procedure for envy-free item allocation. Given two agents and some goods, it returns a partition of the goods between the two agents with the following properties: Envy-freeness: Each agent believes that his share of the goods is at least as good as the other share; Equitability: The "relative happiness levels" of both agents from their shares are equal; Pareto-optimality: no other allocation is better for one agent and at least as good for the other agent; At most one good has to be shared between the agents. For two agents, Adjusted Winner is the only Pareto-optimal and equitable procedure that divides at most a single good. The procedure can be used in divorce settlements and partnership dissolutions, as well as international conflicts. The procedure was designed by Steven Brams and Alan D. Taylor. It was first published in their book on fair division and later in a stand-alone book. The algorithm has been commercialized through the FairOutcomes website. AW was patented in the United States but that patent has expired. Method Each partner is given the list of goods and an equal number of points (e.g. 100 points) to distribute among them. He or she assigns a value to each good and submits it sealed to an arbiter. The arbiter, or a computer program, assigns each item to the high bidder. If both partners have the same number of points, then we are done. Otherwise, call the partner who has more points "winner" and the other partner "loser". Order the goods in increasing order of the ratio value-for-winner / value-for-loser. Start moving goods in this order from the winner to the loser, until the point-totals become "almost" equal, i.e., moving one more good from the winner to the loser will make the winner have fewer points than the loser. At this point, divide the next good between the winner and the loser such that their totals are the same. Use cases While there is no account of AW actually being used to resolve disputes,
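The steps above translate directly into code. The following Python sketch (made-up valuations; each side's bids sum to 100) assigns every good to its high bidder and then transfers goods, fractionally for at most one good, in increasing order of the winner/loser value ratio until the point totals are equal.

```python
def adjusted_winner(val1, val2):
    """AW for two agents; valX maps each good to agent X's point bid."""
    goods = list(val1)
    owner = {g: 1 if val1[g] >= val2[g] else 2 for g in goods}  # high bidder
    pts = {1: sum(val1[g] for g in goods if owner[g] == 1),
           2: sum(val2[g] for g in goods if owner[g] == 2)}
    w, l = (1, 2) if pts[1] >= pts[2] else (2, 1)               # winner, loser
    vw, vl = (val1, val2) if w == 1 else (val2, val1)
    share = {g: {1: 0.0, 2: 0.0} for g in goods}
    for g in goods:
        share[g][owner[g]] = 1.0
    # move goods from winner to loser, cheapest ratio first
    for g in sorted((g for g in goods if owner[g] == w),
                    key=lambda g: vw[g] / vl[g] if vl[g] else float("inf")):
        if pts[w] <= pts[l]:
            break
        x = min(1.0, (pts[w] - pts[l]) / (vw[g] + vl[g]))  # fraction to move
        pts[w] -= x * vw[g]
        pts[l] += x * vl[g]
        share[g][w] -= x
        share[g][l] += x
    return share, pts

share, pts = adjusted_winner({"car": 60, "piano": 40},
                             {"car": 30, "piano": 70})
print(pts)   # both sides end with the same point total
```

The equalizing fraction x solves pts_w − x·v_w = pts_l + x·v_l, so at most one good (the last one touched) ends up split, matching the procedure's guarantee.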
https://en.wikipedia.org/wiki/Chore%20division
Chore division is a fair division problem in which the divided resource is undesirable, so that each participant wants to get as little as possible. It is the mirror-image of the fair cake-cutting problem, in which the divided resource is desirable so that each participant wants to get as much as possible. Both problems have heterogeneous resources, meaning that the resources are nonuniform. In cake division, cakes can have edge, corner, and middle pieces along with different amounts of frosting. Whereas in chore division, there are different chore types and different amounts of time needed to finish each chore. Similarly, both problems assume that the resources are divisible. Chores can be infinitely divisible, because the finite set of chores can be partitioned by chore or by time. For example, a load of laundry could be partitioned by the number of articles of clothing and/or by the amount of time spent loading the machine. The problems differ, however, in the desirability of the resources. The chore division problem was introduced by Martin Gardner in 1978. Chore division is often called fair division of bads, in contrast to the more common problem called "fair division of goods" (an economic bad is the opposite of an economic good). Another name is dirty work problem. The same resource can be either good or bad, depending on the situation. For example, suppose the resource to be divided is the back-yard of a house. In a situation of dividing inheritance, this yard would be considered good, since each heir would like to have as much land as possible, so it is a cake-cutting problem. But in a situation of dividing house-chores such as lawn-mowing, this yard would be considered bad, since each child would probably like to have as little land as possible to mow, so it is a chore-cutting problem. Some results from fair cake-cutting can be easily translated to the chore-cutting scenario. For example, the divide and choose procedure works equally well in both probl
https://en.wikipedia.org/wiki/Charles%20River%20Laboratories
Charles River Laboratories International, Inc., is an American pharmaceutical company specializing in a variety of preclinical and clinical laboratory, gene therapy, and cell therapy services for the pharmaceutical, medical device, and biotechnology industries. It also supplies assorted biomedical products, outsourcing services, and animals for research and development in the pharmaceutical industry (for example, contract research organization services) and offers support in the fields of basic research, drug discovery, safety and efficacy, clinical support, and manufacturing. According to the company, it supported the development of approximately 85% of novel FDA-approved drugs in 2021. Its customers include leading pharmaceutical, biotechnology, agrochemical, government, and academic organizations around the globe. The company has over 90 facilities, operates in 20 countries, and employs approximately 18,400 people worldwide.
Charles River Laboratories is often criticized by animal rights activists, who condemn the company's use of dogs and non-human primates for pharmaceutical purposes. The company is also a major harvester of horseshoe crab blood.
History
Charles River was founded in 1947 by Henry Foster, a young veterinarian who purchased one thousand rat cages from a Virginia farm and set up a one-person laboratory in Boston overlooking the Charles River. To fulfill the regional need for laboratory animal models, he bred, fed, and cared for the animals and personally delivered them to local researchers. In 1955, the company's headquarters were relocated to their current home in Wilmington, Massachusetts. The organization became an international entity in 1966 by opening a new animal production facility in France. The first commercial, comprehensive genetic monitoring program was implemented by Charles River in 1981. Three years later, the company was acquired by Bausch & Lomb. In 1988, the organization expanded its portfolio to include the creation of transgen
https://en.wikipedia.org/wiki/NBName
NBName (note capitalization) is a computer program that can be used to carry out denial-of-service attacks that can disable NetBIOS services on Windows machines. It was written by Sir Dystic of CULT OF THE DEAD COW (cDc) and released July 29, 2000 at the DEF CON 8 convention in Las Vegas. The program decodes and provides the user with all NetBIOS name packets it receives on UDP port 137. Its many command line options can effectively disable a NetBIOS network and prevent computers from rejoining it. According to Sir Dystic, "NBName can disable entire LANs and prevent machines from rejoining them...nodes on a NetBIOS network infected by the tool will think that their names already are being used by other machines. 'It should be impossible for everyone to figure out what is going on,' he added."
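For context on the traffic involved, the sketch below shows the passive half of what such a tool does: binding UDP port 137 and decoding the NetBIOS names in incoming name-service packets (the first-level encoding sends each name byte as two bytes, its high and low nibbles each added to ord('A')). This is an illustrative reconstruction, not NBName's source code, and it only listens; it sends nothing.

```python
# Passive NetBIOS name-service observer on UDP port 137.
import socket

def decode_nb_name(encoded: bytes) -> str:
    """Reverse NetBIOS first-level encoding: each name byte arrives as
    two bytes, its high and low nibbles each added to ord('A')."""
    nibbles = [b - ord('A') for b in encoded]
    chars = [chr((hi << 4) | lo) for hi, lo in zip(nibbles[0::2], nibbles[1::2])]
    return ''.join(chars).rstrip()      # names are space-padded to 16 bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 137))                    # requires privileges on most systems
while True:
    data, addr = sock.recvfrom(2048)
    # A name-service question section starts after the 12-byte header;
    # byte 12 is the encoded-name length (0x20 for the 32-byte encoding).
    if len(data) > 45 and data[12] == 0x20:
        print(addr[0], decode_nb_name(data[13:45]))
```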
https://en.wikipedia.org/wiki/Halo%20%28religious%20iconography%29
A halo (also called a nimbus, aureole, glory, or gloriole) is a crown of light rays, or a circle or disk of light, that surrounds a person in art. It has been used in the iconography of many religions to indicate holy or sacred figures, and has at various periods also been used in images of rulers and heroes. In the religious art of Ancient Greece, Ancient Rome, Christianity, Hinduism, and Buddhism, among other religions, sacred persons may be depicted with a halo in the form of a circular glow, or flames in Asian art, around the head or around the whole body; this last is often called a mandorla. Halos may be shown as almost any colour or combination of colours, but are most often depicted as golden, yellow, or white when representing light, or red when representing flames.
Ancient Mesopotamia
Sumerian religious literature frequently speaks of melam (melammu in Akkadian), a "brilliant, visible glamour which is exuded by gods, heroes, sometimes by kings, and also by temples of great holiness and by gods' symbols and emblems."
Ancient Greek world
Homer describes a more-than-natural light around the heads of heroes in battle. Depictions of Perseus in the act of slaying Medusa, with lines radiating from his head, appear on a white-ground toiletry box and on a slightly later red-figured vase in the style of Polygnotos. On painted wares from south Italy, radiant lines or simple haloes appear on a range of mythic figures: Lyssa, a personification of madness; a sphinx; a sea demon; and Thetis, the sea-nymph who was mother to Achilles. The Colossus of Rhodes was a statue of the sun-god Helios and had his usual radiate crown (copied for the Statue of Liberty). Hellenistic rulers are often shown wearing radiate crowns that seem clearly to imitate this effect.
Asian art
In India, use of the halo might date back to the second half of the second millennium BC. Two figures appliquéd on a pottery vase fragment from Daimabad's Malwa phase (1600–1400 BC) have been interpreted as a holy f
https://en.wikipedia.org/wiki/Super%2035
Super 35 (originally known as Superscope 235) is a motion picture film format that uses exactly the same film stock as standard 35 mm film, but puts a larger image frame on that stock by using the space normally reserved for the optical analog sound track.
History
Super 35 was revived from a similar Superscope variant known as Superscope 235, which was originally developed by the Tushinsky Brothers (who founded Superscope Inc. in 1954) for RKO in 1954. The first film to be shot in Superscope was Vera Cruz, a western film produced by Hecht-Lancaster Productions and distributed through United Artists.
When cameraman Joe Dunton was preparing to shoot Dance Craze in 1980, he chose to revive the Superscope format by using a full silent-standard gate and slightly optically recentering the lens port (to adjust for the inclusion of the area normally reserved for the optical soundtrack). These two characteristics are central to the format. It was adopted by Hollywood starting with Greystoke in 1984, under the format name Super Techniscope. It also received much early publicity for making the cockpit shots in Top Gun possible, since it was otherwise impossible to fit 35 mm cameras with large anamorphic lenses into the small free space in the cockpit. Later, as other camera rental houses and labs started to embrace the format, Super 35 became popular in the mid-1990s, and is now considered a ubiquitous production process, with usage on well over a thousand feature films. It is often the standard production format for television shows, music videos, and commercials. Since none of these require a release print, it is unnecessary to reserve space for an optical soundtrack.
When composing for 1.85:1, it was known as Super 1.85, since it was larger than standard 1.85. When composing for 2.39:1, it was typical to employ either a "common center", which keeps the 2.39 extraction area at the center of the film and results in extra headroom if o
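The arithmetic behind such extractions is simple. The following worked example is an addition here, not from the article, and the aperture dimensions used are approximate, commonly quoted values (assumptions of the sketch): the full frame width is kept, and the extraction height is the width divided by the target aspect ratio.

```python
# Extracting wider aspect ratios from the full Super 35 frame.
FRAME_W, FRAME_H = 24.89, 18.66          # mm, full aperture (approximate)

def extraction(aspect, frame_w=FRAME_W, frame_h=FRAME_H):
    """Height of the horizontal strip used for a given aspect ratio,
    and the fraction of the frame's height that is cropped away."""
    h = frame_w / aspect                  # the full width is always kept
    return h, 1 - h / frame_h

for aspect in (1.85, 2.39):
    h, cropped = extraction(aspect)
    print(f"{aspect}:1 -> {h:.2f} mm of {FRAME_H} mm used, {cropped:.0%} cropped")
```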
https://en.wikipedia.org/wiki/Shared%20web%20hosting%20service
A shared web hosting service is a web hosting service where many websites reside on one web server connected to the Internet. The overall cost of server maintenance is spread over many customers. By using shared hosting, the website will share a physical server with one or more other websites.
Description
The service usually includes system administration, since the server is shared by many users. This is a benefit for users who do not want to deal with it, but a hindrance to power users who want more control. In general, shared hosting will be inappropriate for users who require extensive software development outside what the hosting provider supports. Almost all applications intended to run on a standard web server work fine with a shared web hosting service. On the other hand, shared hosting is cheaper than other types of hosting, such as dedicated server hosting. Shared hosting usually has usage limits, and hosting providers should have extensive reliability features in place. Shared hosting services typically offer basic web statistics, email and webmail services, auto script installations, updated PHP and MySQL, and basic after-sale technical support, all included with a monthly subscription. Shared hosting also typically uses a web-based control panel system. Most of the large hosting companies use their own custom-developed control panel or cPanel. Control panels and web interfaces can cause controversy, however, since web hosting companies sometimes sell the right to use their control panel system to others. Attempting to recreate the functionality of a specific control panel is common, which leads to many lawsuits over patent infringement.
Shared web hosting services
In shared hosting, the provider is generally responsible for managing servers, installing server software, applying security updates, providing technical support, and other aspects of the service. Most servers are based on the Linux operating system and the LAMP software bundle. Some providers offer Microsoft Windows-based or Fr
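The core mechanism that lets many sites share one server is name-based dispatch on the HTTP Host header. The sketch below is a toy illustration of that idea only (not any provider's actual stack); the domain names and directories are invented, and a real server would also need to sanitize paths and isolate tenants.

```python
# Toy name-based shared hosting: one process, one port, many sites.
from http.server import HTTPServer, SimpleHTTPRequestHandler
import os

DOCROOTS = {                                  # hypothetical tenants
    "alice.example": "/srv/www/alice",
    "bob.example": "/srv/www/bob",
}

class SharedHostHandler(SimpleHTTPRequestHandler):
    def translate_path(self, path):
        # Pick the document root from the Host header of each request.
        host = (self.headers.get("Host") or "").split(":")[0]
        root = DOCROOTS.get(host, "/srv/www/default")
        # NOTE: a real server must reject ".." traversal and symlinks.
        return os.path.join(root, path.lstrip("/"))

if __name__ == "__main__":
    HTTPServer(("", 8080), SharedHostHandler).serve_forever()
```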
https://en.wikipedia.org/wiki/Taegeuk
Taegeuk is a Korean term meaning "supreme ultimate", although it can also be translated as "great polarity / duality". The term and its overall concept are related to the Chinese Taiji (Wade-Giles: T'ai-chi). The symbol was chosen for the design of the Korean national flag in the 1880s. It substitutes the black-and-white color scheme seen in most taijitu illustrations with blue and red, respectively, along with a horizontal separator, as opposed to a vertical one. South Koreans commonly refer to their national flag as the Taegeukgi, where gi means "flag" or "banner".
This particular color-themed symbol is typically associated with Korean traditions and represents balance in the universe; the red half represents positive cosmic forces, and the blue half represents the complementary, or opposing, negative cosmic forces. It is also used in Korean shamanism, Confucianism, Taoism, and Buddhism.
History
The design has existed through most of written Korean history. The origins of the interlocking-sinusoid design in Korea can be traced to as early as the Goguryeo or Silla period, e.g. in the decoration of a sword, dated to the 5th or 6th century, recovered from the grave of Michu of Silla, or an artifact with the same pattern, of similar age, found in the Bogam-ri tombs of Baekje at Naju, South Jeolla Province, in 2008. In the compound of Gameunsa, a temple built in AD 628 during the reign of King Jinpyeong of Silla, a stone object, perhaps the foundation of a pagoda, is carved with the design. In Gojoseon, the ancient kingdom of Joseon, the design was used to express the hope for harmony of yin and yang. This likely reflects the early spread of ancient Chinese culture into Gojoseon, especially during the early Zhou dynasty.
Today the taegeuk is usually associated with Korean tradition and represents balance in the universe, as mentioned in the previous section (red is yang, or positive cosmic forces, and blue is um, or negative cosmic forces). Among its many religious connotati
https://en.wikipedia.org/wiki/Seiberg%20duality
In quantum field theory, Seiberg duality, conjectured by Nathan Seiberg in 1994, is an S-duality relating two different supersymmetric QCDs. The two theories are not identical, but they agree at low energies. More precisely, under a renormalization group flow they flow to the same IR fixed point, and so are in the same universality class. It is an extension to nonabelian gauge theories with N=1 supersymmetry of Montonen–Olive duality in N=4 theories and electromagnetic duality in abelian theories.
The statement of Seiberg duality
Seiberg duality is an equivalence of the IR fixed points in an N=1 theory with SU(Nc) as the gauge group and Nf flavors of fundamental chiral multiplets and Nf flavors of antifundamental chiral multiplets in the chiral limit (no bare masses) and an N=1 chiral QCD with Nf − Nc colors and Nf flavors, where Nc and Nf are positive integers satisfying Nf > Nc + 1. A stronger version of the duality relates not only the chiral limit but also the full deformation space of the theory. In the special case 3Nc/2 < Nf < 3Nc, the IR fixed point is a nontrivial interacting superconformal field theory. For a superconformal field theory, the anomalous scaling dimension of a chiral superfield is Δ = 3R/2, where R is the R-charge. This is an exact result.
The dual theory contains a fundamental "meson" chiral superfield M which is color neutral but transforms as a bifundamental under the flavor symmetries. The dual theory contains the superpotential W = M q q̃.
Relations between the original and dual theories
Being an S-duality, Seiberg duality relates the strong coupling regime with the weak coupling regime, and interchanges chromoelectric fields (gluons) with chromomagnetic fields (gluons of the dual gauge group), and chromoelectric charges (quarks) with nonabelian 't Hooft–Polyakov monopoles. In particular, the Higgs phase is dual to the confinement phase, as in the dual superconducting model.
The mesons and baryons are preserved by the duality. However, in the electric theory the meson
https://en.wikipedia.org/wiki/Heirloom%20tomato
An heirloom tomato (also called heritage tomato in the UK) is an open-pollinated, non-hybrid heirloom cultivar of tomato. They are classified as family heirlooms, commercial heirlooms, mystery heirlooms, or created heirlooms. They usually have a shorter shelf life and are less disease-resistant than hybrids. They are grown for various reasons: for food, for historical interest, for access to wider varieties, by people who wish to save seeds from year to year, and for their taste.
Taste
Many heirloom tomatoes are sweeter and lack a genetic mutation that gives tomatoes a uniform red color at the cost of the fruit's taste. Varieties bearing that mutation, which have been favored by industry since the 1940s (that is, tomatoes which are not heirlooms), feature fruits with lower levels of carotenoids and a decreased ability to make sugar within the fruit.
Cultivars
Heirloom tomato cultivars can be found in a wide variety of colors, shapes, flavors, and sizes. Some heirloom cultivars can be prone to cracking or lack disease resistance. As with most garden plants, cultivars can be acclimated over several gardening seasons to thrive in a geographical location through careful selection and seed saving. Some of the most famous examples include Aunt Ruby's German Green, Banana Legs, Big Rainbow, Black Krim, Brandywine, Cherokee Purple, Chocolate Cherry, Costoluto Genovese, Garden Peach, Gardener's Delight, Green Zebra, Hawaiian Pineapple, Hillbilly, Lollypop, Marglobe, Matt's Wild Cherry, Mortgage Lifter, Mr. Stripey, Neville Tomatoes, Paul Robeson, Pruden's Purple, Red Currant, San Marzano, Silvery Fir Tree, Three Sisters, and Yellow Pear.
Seed collecting
Heirloom seeds "breed true," unlike the seeds of hybridized plants: both sides of the DNA in an heirloom variety come from a common stable cultivar. Heirloom tomato varieties are open-pollinating, so cross-pollination can occur. Generally, the tomatoes most likely to cross are those with potato leaves, double flowers (found o
https://en.wikipedia.org/wiki/Neural%20gas
Neural gas is an artificial neural network, inspired by the self-organizing map and introduced in 1991 by Thomas Martinetz and Klaus Schulten. The neural gas is a simple algorithm for finding optimal data representations based on feature vectors. The algorithm was coined "neural gas" because of the dynamics of the feature vectors during the adaptation process, which distribute themselves like a gas within the data space. It is applied where data compression or vector quantization is an issue, for example in speech recognition, image processing or pattern recognition. As a robustly converging alternative to k-means clustering it is also used for cluster analysis.
Algorithm
Suppose we are given a probability distribution P(x) of data vectors x and a finite number N of feature vectors w_i, i = 1, ..., N. At each time step t, a data vector x randomly chosen from P(x) is presented. Subsequently, the distance order of the feature vectors to the given data vector x is determined. Let i_0 denote the index of the closest feature vector, i_1 the index of the second-closest feature vector, and i_{N−1} the index of the feature vector most distant to x. Then each feature vector is adapted according to
w_{i_k}(t+1) = w_{i_k}(t) + ε · e^(−k/λ) · (x − w_{i_k}(t)),  k = 0, ..., N−1,
with ε as the adaptation step size and λ as the so-called neighborhood range. ε and λ are reduced with increasing t. After sufficiently many adaptation steps the feature vectors cover the data space with minimum representation error.
The adaptation step of the neural gas can be interpreted as gradient descent on a cost function. By adapting not only the closest feature vector but all of them with a step size decreasing with increasing distance order, compared to (online) k-means clustering a much more robust convergence of the algorithm can be achieved. The neural gas model does not delete a node and also does not create new nodes.
Variants
A number of variants of the neural gas algorithm exist in the literature so as to mitigate some of its shortcomings. More notable is perhaps Bernd Fritzke's growing neural gas, but also one should m
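A direct implementation of the adaptation loop above is short. In this sketch the exponential annealing schedules for ε and λ, and the initialization from randomly chosen data points, are common choices assumed here rather than prescriptions from the original algorithm description.

```python
# Minimal neural gas: rank-based adaptation of feature vectors.
import numpy as np

def neural_gas(data, n_units=10, t_max=10_000,
               eps=(0.5, 0.005), lam=(10.0, 0.01), seed=0):
    rng = np.random.default_rng(seed)
    # Initialize feature vectors on randomly chosen data points.
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(t_max):
        frac = t / t_max
        eps_t = eps[0] * (eps[1] / eps[0]) ** frac     # decaying step size
        lam_t = lam[0] * (lam[1] / lam[0]) ** frac     # shrinking range
        x = data[rng.integers(len(data))]              # draw a data vector
        # Rank units by distance to x: rank k = 0 for the closest.
        ranks = np.argsort(np.argsort(np.linalg.norm(w - x, axis=1)))
        # Adapt every unit, weighted by exp(-k / lambda).
        w += eps_t * np.exp(-ranks / lam_t)[:, None] * (x - w)
    return w

# Example: w = neural_gas(np.random.rand(1000, 2))
```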
https://en.wikipedia.org/wiki/Shlaer%E2%80%93Mellor%20method
The Shlaer–Mellor method, also known as object-oriented systems analysis (OOSA) or object-oriented analysis (OOA) is an object-oriented software development methodology introduced by Sally Shlaer and Stephen Mellor in 1988. The method makes the documented analysis so precise that it is possible to implement the analysis model directly by translation to the target architecture, rather than by elaborating model changes through a series of more platform-specific models. In the new millennium the Shlaer–Mellor method has migrated to the UML notation, becoming Executable UML. Overview The Shlaer–Mellor method is one of a number of software development methodologies which arrived in the late 1980s. Most familiar were object-oriented analysis and design (OOAD) by Grady Booch, object modeling technique (OMT) by James Rumbaugh, object-oriented software engineering by Ivar Jacobson and object-oriented analysis (OOA) by Shlaer and Mellor. These methods had adopted a new object-oriented paradigm to overcome the established weaknesses in the existing structured analysis and structured design (SASD) methods of the 1960s and 1970s. Of these well-known problems, Shlaer and Mellor chose to address: The complexity of designs generated through the use of structured analysis and structured design (SASD) methods. The problem of maintaining analysis and design documentation over time. Before publication of their second book in 1991 Shlaer and Mellor had stopped naming their method "Object-Oriented Systems Analysis" in favor of just "Object-Oriented Analysis". The method started focusing on the concept of Recursive Design (RD), which enabled the automated translation aspect of the method. What makes Shlaer–Mellor unique among the object-oriented methods is: the degree to which object-oriented semantic decomposition is taken, the precision of the Shlaer–Mellor Notation used to express the analysis, and the defined behavior of that analysis model at run-time. The general solut
https://en.wikipedia.org/wiki/Thinking%20in%20Java
Thinking in Java is a book about the Java programming language, written by Bruce Eckel and first published in 1998. Prentice Hall published the 4th edition of the work in 2006. The book represents a print version of Eckel's "Hands-on Java" seminar. Bruce Eckel wrote "On Java 8" as a sequel to Thinking in Java; it is available on Google Play as an ebook.
Publishing history
Eckel has made various versions of the book publicly available online.
Reception
Tech Republic says: "The particularly cool thing about Thinking in Java is that even though a large amount of information is covered at a rapid pace, it is somehow all easily absorbed and understood. This is a testament to both Eckel's obvious mastery of the subject and his skilled writing style." Linux Weekly News praised the book in its review. CodeSpot says: "Thinking in Java is a must-read book, especially if you want to do programming in the Java programming language or learn Object-Oriented Programming (OOP)."
Awards
Thinking in Java has won multiple awards from professional journals:
1998 Java Developers Journal Editors' Choice Award for Best Book
1999 Jolt Productivity Award
2000 JavaWorld Readers' Choice Award for Best Book
2001 JavaWorld Editors' Choice Award for Best Book
2003 Software Development Magazine Jolt Award for Best Book
2003 Java Developers Journal Readers' Choice Award for Best Book
2007 Java Developer's Journal Readers' Choice Best Book
External links
Official site
https://en.wikipedia.org/wiki/Aiming%20point
In field artillery, the accuracy of indirect fire depends on the use of aiming points. In air force terminology, the aiming point (or A.P.) is the point on a specific target at which the intersection of the crosshairs of a bombsight is held. An indirect-fire aiming point provides a point of angular reference for aiming a gun in the required horizontal direction, or azimuth. Until the 1980s aiming points were essential for indirect-fire artillery. They are also used by mortars and machine guns firing indirectly.
Description
An essential requirement of an aiming point is that it be at a sufficient distance from the gun using it. The reason for this is that, while firing, guns, particularly towed guns, move back a short distance, perhaps a foot, as their spades embed, and may move more in soft ground. When they traverse their barrels, their sights also move, because the sights are not at the point of pivot. All this means that if the aiming point is too close, then the angle to the aiming point changes. This aims the guns off-target, possibly by up to several hundred meters. For gun-laying purposes a distance of a few kilometers from gun to aiming point is sufficient. An ideal aiming point would be a sharply defined and easily distinguished feature, such as the edge of an obvious building. However, this presents problems in featureless areas, in bad visibility, or at night, and putting lights on distant aiming points is seldom practical. Therefore, methods of simulating a distant aiming point are required.
History
The earliest form of aiming point was a pair of aiming posts for each gun, almost in line with one another when viewed through the gun's sight, and placed about from the gun. There were at least two ways of using these, but the simplest is to aim the sight midway between them. Before the First World War the French introduced the collimateur. During that war the British introduced their first paralloscope, which was a horizontal mirror placed a few feet from the gun; the l
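The effect of an aiming point that is too close can be quantified with small-angle geometry: a sideways gun displacement of d metres against an aiming point R metres away shifts the apparent bearing by roughly d/R radians, which translates into a proportional miss at the target. The numbers below are an illustrative addition, not figures from the article.

```python
# Off-line error at the target caused by a small sideways gun shift,
# for aiming points at different distances.
import math

def line_error(gun_shift_m, aiming_point_m, target_range_m):
    angle = math.atan2(gun_shift_m, aiming_point_m)   # bearing change, rad
    return target_range_m * math.tan(angle)           # miss distance, m

for ap in (100, 1000, 4000):                          # aiming point distance
    miss = line_error(0.3, ap, 10_000)                # 0.3 m shift, 10 km shoot
    print(f"aiming point at {ap} m -> about {miss:.1f} m off-line")
```

With the aiming point a few kilometres away, the error from each small displacement stays below a metre, consistent with the guidance above.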
https://en.wikipedia.org/wiki/Gnotobiosis
Gnotobiosis (from Greek roots gnostos "known" and bios "life") refers to an engineered state of an organism in which all forms of life (i.e., microorganisms) in or on it, including its microbiota, have been identified. The term gnotobiotic organism, or gnotobiote, can refer to a model organism that is colonized with a specific community of known microorganisms (an isobiotic or defined-flora animal) or that contains no microorganisms (germ-free), often for experimental purposes. The study of gnotobiosis and the generation of various types of gnotobiotic model organisms as tools for studying interactions between host organisms and microorganisms is referred to as gnotobiology.
History
The concept and field of gnotobiology was born of a debate between Louis Pasteur and Marceli Nencki in the late 19th century, in which Pasteur argued that animal life needed bacteria to succeed while Nencki argued that animals would be healthier without bacteria; but it was not until 1960 that the Association for Gnotobiotics was formed. Early attempts in gnotobiology were limited by inadequate equipment and nutritional knowledge; however, advancements in nutritional sciences, animal anatomy and physiology, and immunology have allowed for the improvement of gnotobiotic technologies.
Methods
Guinea pigs were the first germ-free animal model, described in 1896 by George Nuttall and Hans Thierfelder, establishing techniques still used today in gnotobiology. Early methods for maintaining sterile environments involved sterile glass jars and gloveboxes, which developed into a conversation surrounding uniformity of the methods in the field at the 1939 symposium on Micrurgical and Germ-free Methods at the University of Notre Dame. Many early (1930s–1950s) accomplishments in gnotobiology came from Notre Dame University, the University of Lund, and Nagoya University. The Laboratories of Bacteriology at the University of Notre Dame (known as LOBUND) was founded by John J. Cavanaugh and is cited for ma
https://en.wikipedia.org/wiki/Route%20reestablishment%20notification
Route Reestablishment Notification (RRN) is a notification used in some communications protocols that employ time-division multiplexing.
https://en.wikipedia.org/wiki/Conserved%20quantity
A conserved quantity is a property or value that remains constant over time in a system even when changes occur in the system. In mathematics, a conserved quantity of a dynamical system is formally defined as a function of the dependent variables, the value of which remains constant along each trajectory of the system.
Not all systems have conserved quantities, and conserved quantities are not unique, since one can always produce another such quantity by applying a suitable function, such as adding a constant, to a conserved quantity.
Since many laws of physics express some kind of conservation, conserved quantities commonly exist in mathematical models of physical systems. For example, any classical mechanics model will have mechanical energy as a conserved quantity as long as the forces involved are conservative.
Differential equations
For a first order system of differential equations dr/dt = f(r, t), where bold indicates vector quantities, a scalar-valued function H(r) is a conserved quantity of the system if, for all time and initial conditions in some specific domain, dH/dt = 0. Note that by using the multivariate chain rule, dH/dt = ∇H · dr/dt = ∇H · f(r, t), so that the definition may be written as ∇H · f(r, t) = 0, which contains information specific to the system and can be helpful in finding conserved quantities, or establishing whether or not a conserved quantity exists.
Hamiltonian mechanics
For a system defined by the Hamiltonian H, a function f of the generalized coordinates q and generalized momenta p has time evolution df/dt = {f, H} + ∂f/∂t and hence is conserved if and only if {f, H} + ∂f/∂t = 0. Here {·, ·} denotes the Poisson bracket.
Lagrangian mechanics
Suppose a system is defined by the Lagrangian L with generalized coordinates q. If L has no explicit time dependence (so ∂L/∂t = 0), then the energy E defined by E = Σ_i q̇_i (∂L/∂q̇_i) − L is conserved. Furthermore, if ∂L/∂q = 0, then q is said to be a cyclic coordinate and the generalized momentum p defined by p = ∂L/∂q̇ is conserved. This may be derived by using the Euler–Lagrange equations.
See also
Conservative system
Lyapunov function
Hamiltonian sy
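As a concrete check (an added example, not part of the original article), the one-dimensional harmonic oscillator shows how the Poisson-bracket criterion is applied:

```latex
% Energy conservation for the harmonic oscillator, H = p^2/(2m) + k q^2/2.
% Taking f = H in the criterion above:
\[
\frac{\mathrm{d}H}{\mathrm{d}t} \;=\; \{H, H\} + \frac{\partial H}{\partial t} \;=\; 0,
\]
% since the Poisson bracket of any function with itself vanishes and H
% carries no explicit time dependence; hence the energy H is conserved.
```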
https://en.wikipedia.org/wiki/Nightshade%20%281985%20video%20game%29
Nightshade is an action video game developed and published by Ultimate Play the Game. It was first released for the ZX Spectrum in 1985, and was then ported to the Amstrad CPC and BBC Micro later that year. It was also ported to the MSX exclusively in Japan in 1986. In the game, the player assumes the role of a knight who sets out to destroy four demons in a plague-infested village. The game features scrolling isometric gameplay, an improvement over its flip-screen-driven predecessors Knight Lore and Alien 8, thanks to an enhanced version of Ultimate Play the Game's Filmation game engine, branded Filmation II.
The game received positive reviews upon release; critics praised its gameplay, graphics, and colours, although one critic was divided over its perceived similarities to its predecessors.
Gameplay
The game is presented in an isometric format. The player assumes the role of a knight who enters the plague-infested village of Nightshade to vanquish the four demons who reside within. Additionally, all residents of the village have been transformed into vampires and other supernatural creatures. Contact with these monsters infects the knight, with repeated contact turning the character from white to yellow and then to green, which will lead to the character's death. The knight may be hit up to three times by an enemy; a fourth hit results in the loss of a life.
The objective of the game is to locate and destroy four specific demons. Each demon is vulnerable to a particular object which must be collected by the player: a hammer, a Bible, a crucifix, and an hourglass. Once the four items have been collected, the player must track down a specific demon and cast the correct item at it in order to destroy it. Once all four demons have been destroyed, the game will end. In order to defend against other enemies such as vampires and monsters, the player can arm themselves with "antibodies", which can then be thrown at enemies. Antibodies can
https://en.wikipedia.org/wiki/Modified%20Huffman%20coding
Modified Huffman coding is used in fax machines to encode black-on-white images (bitmaps). It combines the variable-length codes of Huffman coding with the coding of repetitive data in run-length encoding.
Basic Huffman coding provides a way to compress files that have much repeating data, like a file containing text, where the alphabet letters are the repeating objects. However, a single scan line contains only two kinds of elements, white pixels and black pixels, which can be represented directly as a 0 and 1. This "alphabet" of only two symbols is too small for Huffman coding to be applied directly. But if we first use run-length encoding, we have more objects to encode.
Here is an example taken from the article on run-length encoding: a hypothetical scan line, with B representing a black pixel and W representing white, might read as follows:
WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW
With a run-length encoding (RLE) data compression algorithm applied to the above hypothetical scan line, it can be rendered as follows:
12W1B12W3B24W1B14W
Here we see that, in addition to the two items "white" and "black", we have several different numbers. These numbers provide plenty of additional items to use, so Huffman coding can be applied to the sequence above to reduce the size even more.
See also
Fax compression
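The run-length step is easy to sketch in code. The function below is an added illustration: it reproduces the RLE string from the example above, while real Modified Huffman would then replace each run length with a codeword from the standard's fixed Huffman tables (one table for white runs, one for black), which are omitted here.

```python
# Run-length encode a scan line of W/B pixels into the "12W1B..." form.
def run_length_encode(scanline: str) -> str:
    runs, count = [], 1
    for prev, cur in zip(scanline, scanline[1:]):
        if cur == prev:
            count += 1                      # extend the current run
        else:
            runs.append(f"{count}{prev}")   # close the run, start a new one
            count = 1
    runs.append(f"{count}{scanline[-1]}")   # close the final run
    return "".join(runs)

line = "W" * 12 + "B" + "W" * 12 + "BBB" + "W" * 24 + "B" + "W" * 14
print(run_length_encode(line))              # -> 12W1B12W3B24W1B14W
```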
https://en.wikipedia.org/wiki/IBM%20System%20z9
IBM System z9 is a line of IBM mainframe computers. The first models were available on September 16, 2005. The System z9 also marks the end of the previously used eServer zSeries naming convention. It was also the last mainframe computer that NASA ever used.
Background
System z9 is a mainframe using the z/Architecture, previously known as ESAME. z/Architecture is a 64-bit architecture which replaces the previous 31-bit-addressing/32-bit-data ESA/390 architecture while remaining completely compatible with it, as well as with the older 24-bit-addressing/32-bit-data System/360 architecture. The primary advantage of this arrangement is that memory-intensive applications like DB2 are no longer bound by 31-bit memory restrictions, while older applications can run without modifications.
Name change
With the announcement of the System z9 Business Class server, IBM renamed the System z9 109 the System z9 Enterprise Class server. IBM documentation abbreviates them as the z9 BC and z9 EC, respectively.
Notable differences
There are several functional enhancements in the System z9 compared to its zSeries predecessors. Some of the differences include:
Support Element & HMC
The Support Element is the most direct and lowest-level way to access a mainframe. It circumvents even the Hardware Management Console and the operating system running on the mainframe. The HMC is a PC connected to the mainframe that emulates the Support Element. All preceding zSeries mainframes used a modified version of OS/2 with custom software to provide the interface. The System z9's HMC no longer uses OS/2, but instead a modified version of Linux with both an OS/2-lookalike interface, to ease the transition, and a new interface. Unlike the previous HMC application on OS/2, the new HMC is web-based, which means that even local access is done via a web browser. Remote HMC access is available, although only over an SSL-encrypted HTTP connection. The web-based nature means that there is no longer a
https://en.wikipedia.org/wiki/Modular%20Approach%20to%20Software%20Construction%20Operation%20and%20Test
The Modular Approach to Software Construction Operation and Test (MASCOT) is a software engineering methodology developed under the auspices of the United Kingdom Ministry of Defence, starting in the early 1970s at the Royal Radar Establishment and continuing its evolution over the next twenty years. The co-originators of MASCOT were Hugo Simpson and Ken Jackson (currently with Telelogic).
Where most methodologies tend to concentrate on bringing rigour and structure to a software project's functional aspects, MASCOT's primary purpose is to emphasise the architectural aspects of a project. Its creators purposely avoided saying anything about the functionality of the software being developed, and concentrated on the real-time control and interface definitions between concurrently running processes. MASCOT was successfully used in a number of defence systems, most notably the Rapier ground-to-air missile system of the British Army. Although still in use on systems in the field, it never achieved critical mass and was subsequently overshadowed by object-oriented design methodologies based on UML. A British Standards Institution (BSI) standard was drafted for version 3 of the methodology, but was never ratified. Copies of the draft standard can still be obtained from the BSI.
MASCOT in the field
The UK Ministry of Defence has been the primary user of the MASCOT method through its application in significant military systems, and at one stage mandated its use for new operational systems. Examples include the Rapier missile system and various Royal Navy command and control systems.
The Future of the Method
MASCOT's principles continue to evolve in the academic community (principally at the DCSC) and in the aerospace industry (Matra BAe Dynamics), through research into temporal aspects of software design and the expression of system architectures, most notably in the DORIS (Data-Oriented Requirements Implementation Scheme) method and implementation protocols. Work has a
https://en.wikipedia.org/wiki/Data%20assimilation
Data assimilation is a mathematical discipline that seeks to optimally combine theory (usually in the form of a numerical model) with observations. There may be a number of different goals sought – for example, to determine the optimal state estimate of a system, to determine initial conditions for a numerical forecast model, to interpolate sparse observation data using (e.g. physical) knowledge of the system being observed, to set numerical parameters based on training a model from observed data. Depending on the goal, different solution methods may be used. Data assimilation is distinguished from other forms of machine learning, image analysis, and statistical methods in that it utilizes a dynamical model of the system being analyzed. Data assimilation initially developed in the field of numerical weather prediction. Numerical weather prediction models are equations describing the dynamical behavior of the atmosphere, typically coded into a computer program. In order to use these models to make forecasts, initial conditions are needed for the model that closely resemble the current state of the atmosphere. Simply inserting point-wise measurements into the numerical models did not provide a satisfactory solution. Real world measurements contain errors both due to the quality of the instrument and how accurately the position of the measurement is known. These errors can cause instabilities in the models that eliminate any level of skill in a forecast. Thus, more sophisticated methods were needed in order to initialize a model using all available data while making sure to maintain stability in the numerical model. Such data typically includes the measurements as well as a previous forecast valid at the same time the measurements are made. If applied iteratively, this process begins to accumulate information from past observations into all subsequent forecasts. Because data assimilation developed out of the field of numerical weather prediction, it initially gained
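The blending of a prior forecast with a new measurement can be illustrated with the simplest possible case. The following scalar sketch is an added illustration only (no operational scheme works on a single number): the analysis weights the background and the observation according to their error variances, so the result leans toward whichever is more trusted.

```python
# Scalar analysis update: blend a background forecast with an observation.
def analysis(background, obs, var_b, var_o):
    gain = var_b / (var_b + var_o)          # weight given to the observation
    return background + gain * (obs - background)

# Forecast says 12.0 degC with error variance 4.0; a thermometer reads
# 10.0 with variance 1.0. The analysis lands nearer the observation.
print(analysis(12.0, 10.0, 4.0, 1.0))       # -> 10.4
```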
https://en.wikipedia.org/wiki/Muon%20spin%20spectroscopy
Muon spin spectroscopy, also known as µSR, is an experimental technique based on the implantation of spin-polarized muons in matter and on the detection of the influence of the atomic, molecular or crystalline surroundings on their spin motion. The motion of the muon spin is due to the magnetic field experienced by the particle and may provide information on its local environment in a very similar way to other magnetic resonance techniques, such as electron spin resonance (ESR or EPR) and, more closely, nuclear magnetic resonance (NMR).
Introduction
Muon spin spectroscopy is an atomic, molecular and condensed matter experimental technique that exploits nuclear detection methods. In analogy with the acronyms for the previously established spectroscopies NMR and ESR, muon spin spectroscopy is also known as µSR. The acronym stands for muon spin rotation, relaxation, or resonance, depending respectively on whether the muon spin motion is predominantly a rotation (more precisely a precession around a still magnetic field), a relaxation towards an equilibrium direction, or a more complex dynamic dictated by the addition of short radio-frequency pulses. µSR does not require any radio-frequency technique to align the probing spin. More generally speaking, muon spin spectroscopy includes any study of the interactions of the muon's magnetic moment with its surroundings when implanted into any kind of matter. Its two most notable features are its ability to study local environments, due to the short effective range of muon interactions with matter, and the characteristic time window (10⁻¹³–10⁻⁵ s) of the dynamical processes in atomic, molecular and condensed media. The closest parallel to µSR is "pulsed NMR", in which one observes time-dependent transverse nuclear polarization or the so-called "free induction decay" of the nuclear polarization. However, a key difference is that in µSR one uses a specifically implanted spin (the muon's) and does not rely on internal nuclea
https://en.wikipedia.org/wiki/Spin%20polarization
In particle physics, spin polarization is the degree to which the spin, i.e., the intrinsic angular momentum of elementary particles, is aligned with a given direction. This property may pertain to the spin, hence to the magnetic moment, of conduction electrons in ferromagnetic metals, such as iron, giving rise to spin-polarized currents. It may refer to (static) spin waves, preferential correlation of spin orientation with ordered lattices (semiconductors or insulators). It may also pertain to beams of particles, produced for particular aims, such as polarized neutron scattering or muon spin spectroscopy. Spin polarization of electrons or of nuclei, often called simply magnetization, is also produced by the application of a magnetic field. The Curie law is used to produce an induction signal in electron spin resonance (ESR or EPR) and in nuclear magnetic resonance (NMR).
Spin polarization is also important for spintronics, a branch of electronics. Magnetic semiconductors are being researched as possible spintronic materials.
The spin of free electrons is measured either by a LEED image from a clean tungsten crystal (SPLEED) or by an electron microscope composed purely of electrostatic lenses with a gold foil as a sample. Backscattered electrons are decelerated by annular optics and focused onto a ring-shaped electron multiplier at about 15°. The position on the ring is recorded. This whole device is called a Mott detector. Depending on their spin, the electrons have different probabilities of hitting the ring at different positions. About 1% of the electrons are scattered in the foil; of these, about 1% are collected by the detector, and then about 30% of the electrons hit the detector at the wrong position. Both devices work due to spin-orbit coupling.
The circular polarization of electromagnetic fields is due to spin polarization of their constituent photons. In the most generic context, spin polarization is any alignment of the components of a non-scalar (vectorial, tensorial, spinor) field w
https://en.wikipedia.org/wiki/Chakalaka
Chakalaka is a South African vegetable relish, usually spicy, that is traditionally served with bread, pap, samp, stews, or curries. Chakalaka is said to have originated in the townships of Johannesburg or in the gold mines surrounding Johannesburg, when Mozambican mineworkers leaving their shift cooked tinned produce (tomatoes, beans) with chili to produce a spicy Portuguese-style relish to accompany pap. Many variations of Chakalaka exist, depending on region and family tradition. Some versions include beans, cabbage and butternut. For example, canned baked beans, canned tomatoes, onion, garlic, and curry paste can be used to make the dish. It is frequently served at a braai (barbecue) or with a Sunday lunch. It can be served cold or at room temperature. See also Indian pickle List of African dishes
https://en.wikipedia.org/wiki/Pseudomonas%20aeruginosa
Pseudomonas aeruginosa is a common encapsulated, Gram-negative, aerobic–facultatively anaerobic, rod-shaped bacterium that can cause disease in plants and animals, including humans. A species of considerable medical importance, P. aeruginosa is a multidrug-resistant pathogen recognized for its ubiquity, its intrinsically advanced antibiotic resistance mechanisms, and its association with serious illnesses – hospital-acquired infections such as ventilator-associated pneumonia and various sepsis syndromes. The organism is considered opportunistic insofar as serious infection often occurs during existing diseases or conditions – most notably cystic fibrosis and traumatic burns. It generally affects the immunocompromised but can also infect the immunocompetent, as in hot tub folliculitis. Treatment of P. aeruginosa infections can be difficult due to its natural resistance to antibiotics. When more advanced antibiotic drug regimens are needed, adverse effects may result.
It is citrate, catalase, and oxidase positive. It is found in soil, water, skin flora, and most human-made environments throughout the world. It thrives not only in normal atmospheres but also in low-oxygen atmospheres, and thus has colonized many natural and artificial environments. It uses a wide range of organic material for food; in animals, its versatility enables the organism to infect damaged tissues or those with reduced immunity. The symptoms of such infections are generalized inflammation and sepsis. If such colonizations occur in critical body organs, such as the lungs, the urinary tract, and kidneys, the results can be fatal. Because it thrives on moist surfaces, this bacterium is also found on and in medical equipment, including catheters, causing cross-infections in hospitals and clinics. It is also able to decompose hydrocarbons and has been used to break down tarballs and oil from oil spills. P. aeruginosa is not extremely virulent in comparison with other major pathogenic bacterial species
https://en.wikipedia.org/wiki/Common%20Access%20Card
The Common Access Card, also commonly referred to as the CAC, is the standard identification for Active Duty United States Defense personnel. The card itself is a smart card about the size of a credit card. Defense personnel that use the CAC include the Selected Reserve and National Guard, United States Department of Defense (DoD) civilian employees, United States Coast Guard (USCG) civilian employees and eligible DoD and USCG contractor personnel. It is also the principal card used to enable physical access to buildings and controlled spaces, and it provides access to defense computer networks and systems. It also serves as an identification card under the Geneva Conventions (especially the Third Geneva Convention). In combination with a personal identification number, a CAC satisfies the requirement for two-factor authentication: something the user knows combined with something the user has. The CAC also satisfies the requirements for digital signature and data encryption technologies: authentication, integrity and non-repudiation. The CAC is a controlled item. As of 2008, DoD has issued over 17 million smart cards. This number includes reissues to accommodate changes in name, rank, or status and to replace lost or stolen cards. As of the same date, approximately 3.5 million unterminated or active CACs are in circulation. DoD has deployed an issuance infrastructure at over 1,000 sites in more than 25 countries around the world and is rolling out more than one million card readers and associated middleware. Issuance The CAC is issued to Active United States Armed Forces (Regular, Reserves and National Guard) in the Department of Defense and the U.S. Coast Guard; DoD civilians; USCG civilians; non-DoD/other government employees and State Employees of the National Guard; and eligible DoD and USCG contractors who need access to DoD or USCG facilities and/or DoD computer network systems: Active Duty U.S. Armed Forces (to include Cadets and Midshipmen of the U.S. Se
https://en.wikipedia.org/wiki/Physical%20optics
In physics, physical optics, or wave optics, is the branch of optics that studies interference, diffraction, polarization, and other phenomena for which the ray approximation of geometric optics is not valid. This usage tends not to include effects such as quantum noise in optical communication, which is studied in the sub-branch of coherence theory.
Principle
Physical optics is also the name of an approximation commonly used in optics, electrical engineering and applied physics. In this context, it is an intermediate method between geometric optics, which ignores wave effects, and full wave electromagnetism, which is a precise theory. The word "physical" means that it is more physical than geometric or ray optics, not that it is an exact physical theory.
This approximation consists of using ray optics to estimate the field on a surface and then integrating that field over the surface to calculate the transmitted or scattered field. This resembles the Born approximation, in that the details of the problem are treated as a perturbation.
In optics, it is a standard way of estimating diffraction effects. In radio, this approximation is used to estimate some effects that resemble optical effects. It models several interference, diffraction and polarization effects, but not the dependence of diffraction on polarization. Since this is a high-frequency approximation, it is often more accurate in optics than for radio.
In optics, it typically consists of integrating the ray-estimated field over a lens, mirror or aperture to calculate the transmitted or scattered field. In radar scattering it usually means taking the current that would be found on a tangent plane of similar material as the current at each point on the front, i.e., the geometrically illuminated part, of a scatterer. Current on the shadowed parts is taken as zero. The approximate scattered field is then obtained by an integral over these approximate currents. This is useful for bodies with large smooth con
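For the perfectly conducting case, the prescription just described takes a standard closed form. The block below is an added textbook-style illustration, with n̂ the outward surface normal and H_inc the incident magnetic field (notation assumed here, not taken from the article):

```latex
% Physical-optics approximation for the induced surface current on a
% perfect electric conductor: twice the tangential incident magnetic
% field on the lit side, zero in the shadow.
\[
\mathbf{J}_s \approx
\begin{cases}
2\,\hat{\mathbf{n}} \times \mathbf{H}_{\mathrm{inc}} & \text{on the illuminated part,}\\
\mathbf{0} & \text{on the shadowed part.}
\end{cases}
\]
% The scattered field then follows as a radiation integral of this
% approximate current over the scatterer's surface.
```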
https://en.wikipedia.org/wiki/High-frequency%20approximation
A high-frequency approximation (or "high-energy approximation") for scattering or other wave propagation problems, in physics or engineering, is an approximation whose accuracy increases with the size of features on the scatterer or medium relative to the wavelength of the scattered particles.
Classical mechanics and geometric optics are the most common and extreme high-frequency approximations, where the wave or field properties of, respectively, quantum mechanics and electromagnetism are neglected entirely. Less extreme approximations include the WKB approximation, physical optics, the geometric theory of diffraction, the uniform theory of diffraction, and the physical theory of diffraction. When these are used to approximate quantum mechanics, they are called semiclassical approximations.
See also
Electromagnetic modeling
Scattering
Scattering, absorption and radiative transfer (optics)
https://en.wikipedia.org/wiki/Vancomycin-resistant%20Enterococcus
Vancomycin-resistant Enterococcus, or vancomycin-resistant enterococci (VRE), are bacterial strains of the genus Enterococcus that are resistant to the antibiotic vancomycin.
Mechanism of acquired resistance
Six different types of vancomycin resistance are shown by enterococci: Van-A, Van-B, Van-C, Van-D, Van-E and Van-G. The significance is that Van-A VRE is resistant to both vancomycin and teicoplanin, Van-B VRE is resistant to vancomycin but susceptible to teicoplanin, and Van-C is only partly resistant to vancomycin. The mechanism of resistance to vancomycin found in enterococci involves the alteration of the peptidoglycan synthesis pathway. The D-alanyl-D-lactate variation results in the loss of one hydrogen-bonding interaction (four, as opposed to five for D-alanyl-D-alanine) being possible between vancomycin and the peptide. The D-alanyl-D-serine variation causes a six-fold loss of affinity between vancomycin and the peptide, likely due to steric hindrance.
To become vancomycin-resistant, vancomycin-sensitive enterococci typically obtain new DNA in the form of plasmids or transposons which encode genes that confer vancomycin resistance. This acquired vancomycin resistance is distinguished from the natural vancomycin resistance of certain enterococcal species, including E. gallinarum and E. casseliflavus.
Diagnosis
Once the individual has VRE, it is important to ascertain which strain it is.
Screening
Screening for VRE can be accomplished in a number of ways. For inoculating peri-rectal/anal swabs or stool specimens directly, one method uses bile esculin azide agar plates containing 6 μg/ml of vancomycin. Black colonies should be identified as an enterococcus to species level and further confirmed as vancomycin-resistant by an MIC method before reporting as VRE. Vancomycin resistance can be determined for enterococcal colonies available in pure culture by inoculating a suspension of the organism onto a commercially available brain heart infusion agar (BHIA) plate conta
https://en.wikipedia.org/wiki/Balkan%20Mathematical%20Olympiad
The Balkan Mathematical Olympiad (BMO) is an international contest of winners of high-school national competitions from European countries. Participants (incomplete) Albania BMO 1991: 1.Julian Mulla 2.Erion Dasho 3.Elton Bojaxhi 4.Enkel Hysnelaj BMO 1993: 1.Gjergji Guri 2.Jonada Rrapo 3.Ermir Qeli 4.Mirela Ciperiani 5.Gjergji Sugari 6.Pirro Bracka BMO 1997 1.Alkid Ademi 2.Ermal Rexhepi 3.Aksel Bode 4.Gerard Gjonaj 5.Amarda Shehu BMO 2002: 1.Deni Raco 2.Evarist Byberi 3.Arlind Kopliku 4.Kreshnik Xhangolli 5.Dritan Tako 6.Erind Angjeliu BMO 2006: 1.Eni Duka 2.Erion Dula 3.Keler Marku 4.Klevis Ymeri 5.Anri Rembeci 6.Gjergji Zaimi BMO 2008: 1.Tedi Aliaj 2.Sindi Shkodrani 3.Disel Spahia 4.Arbeg Gani 5.Arnold Spahiu BMO 2009: 1.Andi Reci 2.Ridgers Mema 3.Arbana Grembi 4.Niko Kaso 5.Erixhen Sula 6.Ornela Xhelili BMO 2010: 1.Andi Nika 2.Olsi Leka 3.Florida Ahmetaj 4.Ledio Bidaj 5.Endrit Shehaj 6.Fatjon Gerra BMO 2011: 1.Florida Ahmetaj 2.Erjona Topalli 3.Keti Veliaj 4.Disel Spahija 5.Franc Hodo 6.Ridgers Mema BMO 2012: 1.Boriana Gjura 2.Fatjon Gera 3.Erjona Topalli 4.Gledis Kallço 5.Florida Ahmetaj 6.Genti Gjika BMO 2013: 1.Antonino Sota 2.Boriana Gjura 3.Ardis Cani 4.Gledis Kallço 5.Enis Barbullushi BMO 2014: 1.Gent Gjika 2.Boriana Gjura 3.Gledis Kallço 4.Antonino Sota 5.Enis Barbullushi 6.Geri Shehu BMO 2015: 1.Gledis Kallço 2.Ana Peçini 3.Alboreno Voci 4.Selion Haxhi 5.Naisila Puka 6.Enes Kristo BMO 2016: 1.Gledis Kallço 2.Ana Peçini 3.Fjona Parllaku 5.Stefan Haxhillazi 5.Kevin Isufa 6.Barjol Lami BMO 2016/Albania B: Laura Sheshi 2.Gledis Zeneli 3.Liana Shpani 4.Enea Prifti 5.Jovan Shandro 6.Aleksandros Ruci BMO 2017: 1.Enea Prifti 2.Barjol Lami 3.Stefan Haxhillazi 4.Rei Myderrizi 5.Safet Hoxha 6.Lorenc Bushi Republic of North Macedonia BMO 2001: 1.Ilija Jovceski 2.Todor Ristov 3.Kire Trivodaliev 4.Riste Gligorov 5.Zoran Dimov 6.Irina Panovska BMO 2002: 1.Ilija Jovceski: Silver Medal
https://en.wikipedia.org/wiki/John%20Armstrong%20%28model%20railroader%29
John H. Armstrong (November 18, 1920 – July 28, 2004) was a mechanical engineer, inventor, editor, prolific author, and model railroader best known for layout design and operations. He was married for 44 years to Ellen Palmer. They had four children.
Early life
He was born and raised in Canandaigua, New York, and began designing his Canandaigua Southern Railroad model layout when he was 14 years old. After earning a mechanical engineering degree from Purdue University, he settled in Silver Spring, Maryland, in the late 1940s. He was employed at the Naval Ordnance Laboratory of the United States Navy in White Oak, Maryland, and contributed to the design of weapons systems for nuclear submarines. Following his retirement from the Navy, he was a contributing editor for Railway Age magazine for ten years.
Model railroad construction
In evenings and on weekends he began building his Canandaigua Southern Railroad O scale layout in the basement of the modest Armstrong family home, carefully cutting the cross-ties from balsa wood, setting them on rail-beds made from scale-sized gravel, and then laying out each length of track and nailing it into place with tiny scale railroad spikes hammered into the cross-ties one at a time.
Armstrong published extensively in the U.S. railroading press, including 13 books, among them "Railroad: What It Is, What It Does" (1978), and articles for Model Railroader and the trade magazine Railway Age. He has over 300 references in the Trains Magazine Index.
Influence on the hobby
Armstrong wrote many books and articles on the subject of railroading and model railroading. His personal model railroad, the Canandaigua Southern, was the subject of many newspaper and magazine articles by other writers. In the late 1940s, Armstrong submitted a track plan to a contest sponsored by the magazine Model Railroader. His plan was so successful that it led to an invitation to contribute an article to the magazine on the Canandaig
https://en.wikipedia.org/wiki/Fundamental%20lemma%20of%20the%20calculus%20of%20variations
In mathematics, specifically in the calculus of variations, a variation of a function can be concentrated on an arbitrarily small interval, but not on a single point. Accordingly, the necessary condition of extremum (functional derivative equal to zero) appears in a weak formulation (variational form) integrated with an arbitrary function h. The fundamental lemma of the calculus of variations is typically used to transform this weak formulation into the strong formulation (differential equation), free of the integration with an arbitrary function. The proof usually exploits the possibility of choosing h concentrated on an interval on which f keeps its sign (positive or negative). Several versions of the lemma are in use. Basic versions are easy to formulate and prove. More powerful versions are used when needed.
Basic version
If a continuous function f on an open interval (a,b) satisfies the equality ∫ f(x)h(x) dx = 0 for all compactly supported smooth functions h on (a,b), then f is identically zero.
Here "smooth" may be interpreted as "infinitely differentiable", but often is interpreted as "twice continuously differentiable" or "continuously differentiable" or even just "continuous", since these weaker statements may be strong enough for a given task. "Compactly supported" means "vanishes outside [c,d] for some c, d such that a < c < d < b"; but often a weaker statement suffices, assuming only that h (or h and a number of its derivatives) vanishes at the endpoints a, b; in this case the closed interval [a,b] is used.
Version for two given functions
If a pair of continuous functions f, g on an interval (a,b) satisfies the equality ∫ (f(x)h(x) + g(x)h′(x)) dx = 0 for all compactly supported smooth functions h on (a,b), then g is differentiable, and g′ = f everywhere.
The special case for g = 0 is just the basic version. Here is the special case for f = 0 (often sufficient): if a continuous function g on an interval (a,b) satisfies the equality ∫ g(x)h′(x) dx = 0 for all smooth functions h on (a,b) such that h(a) = h(b) = 0, then g is constant. If, in addition, continuous differentiability
https://en.wikipedia.org/wiki/Transitive%20set
In set theory, a branch of mathematics, a set $A$ is called transitive if either of the following equivalent conditions holds: whenever $x \in A$ and $y \in x$, then $y \in A$; whenever $x \in A$ and $x$ is not an urelement, then $x$ is a subset of $A$. Similarly, a class $M$ is transitive if every element of $M$ is a subset of $M$. Examples Using the definition of ordinal numbers suggested by John von Neumann, ordinal numbers are defined as hereditarily transitive sets: an ordinal number is a transitive set whose members are also transitive (and thus ordinals). The class of all ordinals is a transitive class. Any of the stages $V_\alpha$ and $L_\alpha$ leading to the construction of the von Neumann universe $V$ and Gödel's constructible universe $L$ are transitive sets. The universes $V$ and $L$ themselves are transitive classes. All finite transitive sets with up to 20 brackets have been enumerated. Properties A set $A$ is transitive if and only if $\bigcup A \subseteq A$, where $\bigcup A$ is the union of all elements of $A$ that are sets, $\bigcup A = \{ y \mid \exists x \in A : y \in x \}$. If $A$ is transitive, then $\bigcup A$ is transitive. If $A$ and $B$ are transitive, then $A \cup B$ and $A \cup B \cup \{A, B\}$ are transitive. In general, if $Z$ is a class all of whose elements are transitive sets, then $\bigcup Z$ and $Z \cup \bigcup Z$ are transitive. (The first sentence in this paragraph is the case of $Z = \{A, B\}$.) A set that does not contain urelements is transitive if and only if it is a subset of its own power set, $A \subseteq \mathcal{P}(A)$. The power set of a transitive set without urelements is transitive. Transitive closure The transitive closure of a set $A$ is the smallest (with respect to inclusion) transitive set that includes $A$ (i.e. $A \subseteq \operatorname{TC}(A)$). Suppose one is given a set $A$, then the transitive closure of $A$ is $\operatorname{TC}(A) = \bigcup \{ A,\ \bigcup A,\ \bigcup \bigcup A,\ \bigcup \bigcup \bigcup A,\ \ldots \}$. Proof. Denote $A_0 = A$ and $A_{n+1} = \bigcup A_n$. Then we claim that the set $T = \bigcup_{n=0}^{\infty} A_n$ is transitive, and whenever $T_1$ is a transitive set including $A$ then $T \subseteq T_1$. Assume $y \in x \in T$. Then $x \in A_n$ for some $n$ and so $y \in \bigcup A_n = A_{n+1}$. Since $A_{n+1} \subseteq T$, $y \in T$. Thus $T$ is transitive. Now let $T_1$ be as above. We prove by induction that $A_n \subseteq T_1$ for all $n$, thus proving that $T \subseteq T_1$: The base case holds since $A_0 = A \subseteq T_1$. Now assume $A_n \subseteq T_1$. Then $A_{n+1} = \bigcup A_n \subseteq \bigcup T_1$. But $T_1$ is transitive so $\bigcup T_1 \subseteq T_1$, hence $A_{n+1} \subseteq T_1$. This completes the proof. Note that this is the set of all of the objects related
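The construction in the proof is directly executable for hereditarily finite sets. The following Python sketch (an illustration, not part of the article) models sets as frozensets and iterates the union operation until it stabilizes:

import functools  # not required; standard library only

def union(s):
    """Union of all elements of s (each element is itself a frozenset)."""
    return frozenset(y for x in s for y in x)

def transitive_closure(a):
    """Smallest transitive set including a: union of A, U A, U U A, ..."""
    tc, layer = frozenset(a), frozenset(a)
    while layer:
        layer = union(layer) - tc   # next stage A_{n+1} = U A_n, minus what we have
        tc |= layer
    return tc

# Example: the von Neumann ordinal 2 = {0, 1} = {{}, {{}}} is already transitive.
zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})
assert transitive_closure(two) == two                              # fixed point
assert transitive_closure(frozenset({two})) == frozenset({two, one, zero})

Transitive sets are exactly the fixed points of this procedure, matching the characterization that a set is transitive iff its union is contained in it.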
https://en.wikipedia.org/wiki/Isochromosome
An isochromosome is an unbalanced structural abnormality in which the arms of the chromosome are mirror images of each other. The chromosome consists of two copies of either the long (q) arm or the short (p) arm because isochromosome formation is equivalent to a simultaneous duplication and deletion of genetic material. Consequently, there is partial trisomy of the genes present in the isochromosome and partial monosomy of the genes in the lost arm. Nomenclature An isochromosome can be abbreviated as i(chromosome number)(centromeric breakpoint). For example, an isochromosome of chromosome 17 containing two q arms, as found in medulloblastoma, can be identified as i(17)(q10). Mechanism Isochromosomes can be created during mitosis and meiosis through a misdivision of the centromere or U-type strand exchange. Centromere misdivision Under normal separation of sister chromatids in anaphase, the centromere will divide longitudinally, or parallel to the long axis of the chromosome. An isochromosome is created when the centromere is divided transversely, or perpendicular to the long axis of the chromosome. The division usually does not occur in the centromere itself, but in an area surrounding the centromere, also known as the pericentric region. It is proposed that these sites of exchange contain homologous sequences between sister chromatids. Although the resulting chromosome may appear monocentric with only one centromere, it is isodicentric with two centromeres very close to each other, resulting in a potential loss of genetic material found on the other arms. Misdivision of the centromere can also produce monocentric isochromosomes, but they are not as common as dicentric isochromosomes. U-type strand exchange A more common mechanism in the formation of isochromosomes is through the breakage and fusion of sister chromatids, most likely occurring in early anaphase of mitosis or meiosis. A double-stranded break in the pericentric region of the chromosome is repaired when the sist
https://en.wikipedia.org/wiki/Generalized%20Appell%20polynomials
In mathematics, a polynomial sequence $\{p_n(z)\}$ has a generalized Appell representation if the generating function for the polynomials takes on a certain form: $K(z,w) = A(w)\,\Psi(z g(w)) = \sum_{n=0}^{\infty} p_n(z) w^n$, where the generating function or kernel $K(z,w)$ is composed of the series $A(w) = \sum_{n=0}^{\infty} a_n w^n$ with $a_0 \neq 0$ and $\Psi(t) = \sum_{n=0}^{\infty} \psi_n t^n$ with all $\psi_n \neq 0$ and $g(w) = \sum_{n=1}^{\infty} g_n w^n$ with $g_1 \neq 0$. Given the above, it is not hard to show that $p_n(z)$ is a polynomial of degree $n$. Boas–Buck polynomials are a slightly more general class of polynomials. Special cases The choice of $g(w) = w$ gives the class of Brenke polynomials. The choice of $\Psi(t) = e^t$ results in the Sheffer sequence of polynomials, which include the general difference polynomials, such as the Newton polynomials. The combined choice of $g(w) = w$ and $\Psi(t) = e^t$ gives the Appell sequence of polynomials. Explicit representation The generalized Appell polynomials have the explicit representation $p_n(z) = \sum_{k=0}^{n} z^k \psi_k h_k$. The constant is $h_k = \sum a_{j_0} g_{j_1} g_{j_2} \cdots g_{j_k}$, where this sum extends over all compositions of $n$ into $k+1$ parts; that is, the sum extends over all $\{j\}$ such that $j_0 + j_1 + \cdots + j_k = n$. For the Appell polynomials, this becomes the formula $p_n(z) = \sum_{k=0}^{n} \frac{a_{n-k} z^k}{k!}$. Recursion relation Equivalently, a necessary and sufficient condition that the kernel $K(z,w)$ can be written as $A(w)\,\Psi(z g(w))$ with $g_1 = 1$ is that $\frac{\partial K(z,w)}{\partial w} = c(w)\,K(z,w) + \frac{z\,b(w)}{w}\,\frac{\partial K(z,w)}{\partial z}$, where $b(w)$ and $c(w)$ have the power series $b(w) = \frac{w}{g(w)}\frac{dg(w)}{dw}$ and $c(w) = \frac{1}{A(w)}\frac{dA(w)}{dw}$. Substituting $K(z,w) = \sum_{n=0}^{\infty} p_n(z) w^n$ immediately gives the recursion relation $z^{n+1} \frac{d}{dz}\left[\frac{p_n(z)}{z^n}\right] = -\sum_{k=0}^{n-1} c_{n-k-1}\,p_k(z) - z \sum_{k=1}^{n-1} b_{n-k}\,\frac{d}{dz} p_k(z)$. For the special case of the Brenke polynomials, one has $g(w) = w$ and thus $b(w) = 1$, so all of the $b_n$ with $n \ge 1$ vanish, simplifying the recursion relation significantly. See also q-difference polynomials
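A quick way to see the machinery in action (an illustration, not from the article) is to expand an Appell kernel symbolically with sympy and compare against the closed form $p_n(z) = \sum_k a_{n-k} z^k / k!$; the choice $A(w) = 1/(1-w)$, i.e. $a_n = 1$ for all $n$, is an assumed example:

import sympy as sp

z, w = sp.symbols('z w')

# Appell case: g(w) = w, Psi(t) = exp(t); example kernel with A(w) = 1/(1 - w).
K = sp.exp(z * w) / (1 - w)

N = 5
expansion = sp.expand(sp.series(K, w, 0, N + 1).removeO())
for n in range(N + 1):
    p_n = expansion.coeff(w, n)                       # coefficient of w**n
    closed_form = sum(z**k / sp.factorial(k) for k in range(n + 1))
    assert sp.simplify(p_n - closed_form) == 0
    print(f"p_{n}(z) =", sp.expand(p_n))

Each extracted coefficient is a polynomial of degree n, as the text asserts, and matches the Appell formula term for term.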
https://en.wikipedia.org/wiki/Ristra
A ristra, also known as a sarta, is an arrangement of drying chile pepper pods, garlic bulbs, or other vegetables for later consumption. In addition to its practical use, the ristra has come to be a trademark of decorative design in the state of New Mexico as well as southern Arizona. Typically, large chiles such as New Mexico chiles and Anaheim peppers are used, although any kind of chile may be used. Garlic can also be arranged into a ristra for drying and curing after the bulbs have matured and the leaves have died away. Ristras are commonly used for decoration and "are said to bring health and good luck." See also List of dried foods
https://en.wikipedia.org/wiki/Herbicidal%20warfare
Herbicidal warfare is the use of substances primarily designed to destroy the plant-based ecosystem of an area. Although herbicidal warfare uses chemical substances, its main purpose is to disrupt agricultural food production and/or to destroy plants which provide cover or concealment to the enemy, not to asphyxiate or poison humans and/or destroy human-made structures. Herbicidal warfare has been forbidden by the Environmental Modification Convention since 1978, which bans "any technique for changing the composition or structure of the Earth's biota". History Modern day herbicidal warfare resulted from military research discoveries of plant growth regulators in the Second World War, and is therefore a technological advance on the scorched earth practices used by armies throughout history to deprive the enemy of food and cover. Work on military herbicides began in England in 1940, and by 1944, the United States joined in the effort. Even though herbicides are chemicals, due to their mechanism of action (growth regulators), they are often considered a means of biological warfare. Over 1,000 substances were investigated by the war's end for phytotoxic properties, and the Allies envisioned using herbicides to destroy Axis crops. British planners did not believe herbicides were logistically feasible against Germany. In May 1945, USAAF General Victor E. Betrandias advanced a proposal to his superior, General Arnold, to use ammonium thiocyanate to reduce rice crops in Japan as part of the bombing raids on their country. This was part of a larger set of proposed measures to starve the Japanese. The plan calculated that ammonium thiocyanate would not be seen as "gas warfare" because the substance was not particularly dangerous to humans. On the other hand, the same plan envisaged that if the U.S. were to engage in "gas warfare" against Japan, then mustard gas would be an even more effective rice crop killer. The Joint Target Group rejected the plan as tactically unsound, but ex
https://en.wikipedia.org/wiki/Glossary%20of%20beekeeping
This page is a glossary of beekeeping. A Africanized bee – a hybrid bee with characteristics unsuitable for beekeeping Apiary – a yard where beehives are kept Apicology – ecology of bees Apiology – scientific study of bees Apitherapy – a branch of alternative medicine that uses honey bee products including honey, pollen, propolis, royal jelly and bee venom. B Bee – a member of the order that includes ants and wasps Bee anatomy (mouth) Bee bread – the main source of food for most honey bees and their larvae Beekeeper – also called apiarist or apiculturist, a person who cares for bees Bee learning and communication Bee museums Bee sting Bee venom therapy – also called apitherapy Beehive – a housing for cavity-dwelling bees that allows inspection and honey removal Beekeeping – bees are kept for their products (principally honey), and their utility in pollinating crops Bees and toxic chemicals Brood (honey bee) – the egg, larval, and pupal form of the bee and the comb in which they develop Buckfast bee – a productive breed of bee suitable for damp and cloudy climes C Carniolan honey bee – a gentle bee good for variable nectar flow Characteristics of common wasps and bees Colony Collapse Disorder – malady of unknown cause characterized by disappearance of bees from hive D Deseret – the beehive and its symbolism to the Church of Jesus Christ of Latter-day Saints (Mormons) Diseases of the honey bee Drone bee – the male bee Drone laying queen H Honey bee – all the species in the genus Apis Honey bee life cycle – the physical stages in the development of a mature bee starting from the egg I Italian bee – the most well known honey bee subspecies L Laying worker bee – this worker will produce only drone bees Langstroth hive – commonly seen in developed countries as stacks of white or muted colored boxes at the edges of fields and orchards N Northern Nectar Sources for Honey Bees – common names and descriptions of northern latitude necta
https://en.wikipedia.org/wiki/Emily%20Blackwell
Emily Blackwell (October 8, 1826 – September 7, 1910) was the second woman to earn a medical degree at what is now Case Western Reserve University, after Nancy Talbot Clark. In 1993, she was inducted into the National Women's Hall of Fame. Early life and education Blackwell was born on October 8, 1826, in Bristol, England. She was the daughter of Samuel and Hannah Lane Blackwell. In 1832, Blackwell and her family emigrated to the United States, and in 1837, they settled near Cincinnati, Ohio. Inspired by the example of her older sister, Elizabeth, Blackwell applied to study medicine at Geneva Medical College in Geneva, New York, from which her sister had graduated in 1849, but was rejected. After being rejected by several other schools, she was finally accepted in 1852 by Rush Medical College in Chicago, where she studied for a year. However, in 1853, when male students complained about having to study with a woman, the Illinois Medical Society vetoed her continued enrollment. Eventually, she was accepted to the Medical College of Cleveland, the medical branch of Western Reserve University in Cleveland, Ohio, earning her Doctor of Medicine in 1854. At Western Reserve University, the medical education of women began at the urging of Dean John Delamater, who was backed by the Ohio Female Medical Education Society, formed in 1852 to provide moral and financial support for the women medical students. Despite their efforts, the Western Reserve faculty voted to put an end to Delamater's policies in 1856, finding it "inexpedient" to continue admitting women. (The American Medical Association also adopted a report in 1856 advising against coeducation in medicine.) Western Reserve resumed admitting women in 1879, but did so only sporadically for five years. Admission of women at Western Reserve recommenced on a continuous basis in 1918. After earning her medical degree, Blackwell pursued further studies in Edinburgh under Sir James Young Simpson, in London under Dr. William Jenner, and in Paris, Be
https://en.wikipedia.org/wiki/Seroprevalence
Seroprevalence is the number of persons in a population who test positive for a specific disease based on serology (blood serum) specimens, often presented as a percentage of the total specimens tested or as a proportion per 100,000 persons tested. As positive identification of the occurrence of disease is usually based upon the presence of antibodies for that disease (especially with viral infections such as herpes simplex, HIV, and SARS-CoV-2), this number is not significant if the specificity of the antibody test is low.
https://en.wikipedia.org/wiki/Thermal%20energy%20storage
Thermal energy storage (TES) is achieved with widely different technologies. Depending on the specific technology, it allows excess thermal energy to be stored and used hours, days, or months later, at scales ranging from the individual process or building to multiuser buildings, districts, towns, or regions. Usage examples are the balancing of energy demand between daytime and nighttime, storing summer heat for winter heating, or winter cold for summer air conditioning (seasonal thermal energy storage). Storage media include water or ice-slush tanks, masses of native earth or bedrock accessed with heat exchangers by means of boreholes, deep aquifers contained between impermeable strata, shallow lined pits filled with gravel and water and insulated at the top, as well as eutectic solutions and phase-change materials. Other sources of thermal energy for storage include heat or cold produced with heat pumps from off-peak, lower-cost electric power, a practice called peak shaving; heat from combined heat and power (CHP) power plants; heat produced by renewable electrical energy that exceeds grid demand; and waste heat from industrial processes. Heat storage, both seasonal and short term, is considered an important means for cheaply balancing high shares of variable renewable electricity production and integration of electricity and heating sectors in energy systems almost or completely fed by renewable energy. Categories The different kinds of thermal energy storage can be divided into three separate categories: sensible heat, latent heat, and thermo-chemical heat storage. Each of these has different advantages and disadvantages that determine their applications. Sensible heat storage Sensible heat storage (SHS) is the most straightforward method. It simply means the temperature of some medium is either increased or decreased. This type of storage is the most commercially available out of the three; other techniques are less developed. The materials are generally inexpens
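As a back-of-the-envelope illustration of sensible heat storage (an example, not from the article), the stored energy is Q = m·c·ΔT while the medium does not change phase; for a domestic hot-water tank:

# Sensible heat stored: Q = m * c * dT (no phase change)
def sensible_heat_joules(mass_kg, specific_heat_j_per_kg_k, delta_t_k):
    return mass_kg * specific_heat_j_per_kg_k * delta_t_k

# Assumed example: 500 L water tank (~500 kg) heated from 40 C to 90 C;
# c_water ~ 4186 J/(kg*K).
q = sensible_heat_joules(500, 4186, 90 - 40)
print(f"{q / 3.6e6:.1f} kWh")  # ~29.1 kWh of heat

The same arithmetic explains why water tanks dominate short-term storage: water's high specific heat makes the medium cheap per kWh stored.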
https://en.wikipedia.org/wiki/Sulfur%20cycle
The sulfur cycle is a biogeochemical cycle in which the sulfur moves between rocks, waterways and living systems. It is important in geology as it affects many minerals and in life because sulfur is an essential element (CHNOPS), being a constituent of many proteins and cofactors, and sulfur compounds can be used as oxidants or reductants in microbial respiration. The global sulfur cycle involves the transformations of sulfur species through different oxidation states, which play an important role in both geological and biological processes. Steps of the sulfur cycle are: Mineralization of organic sulfur into inorganic forms, such as hydrogen sulfide (H2S), elemental sulfur, as well as sulfide minerals. Oxidation of hydrogen sulfide, sulfide, and elemental sulfur (S) to sulfate (SO42−). Reduction of sulfate to sulfide. Incorporation of sulfide into organic compounds (including metal-containing derivatives). Disproportionation of sulfur compounds (elemental sulfur, sulfite, thiosulfate) into sulfate and hydrogen sulfide. These are often termed as follows: Assimilative sulfate reduction (see also sulfur assimilation) in which sulfate (SO42−) is reduced by plants, fungi and various prokaryotes. The oxidation states of sulfur are +6 in sulfate and –2 in R–SH. Desulfurization in which organic molecules containing sulfur can be desulfurized, producing hydrogen sulfide gas (H2S, oxidation state = –2). An analogous process for organic nitrogen compounds is deamination. Oxidation of hydrogen sulfide produces elemental sulfur (S8), oxidation state = 0. This reaction occurs in the photosynthetic green and purple sulfur bacteria and some chemolithotrophs. Often the elemental sulfur is stored as polysulfides. Oxidation of elemental sulfur by sulfur oxidizers produces sulfate. Dissimilative sulfur reduction in which elemental sulfur can be reduced to hydrogen sulfide. Dissimilative sulfate reduction in which sulfate reducers generate hydrogen sulfide from sulfate. Sulfur oxidation
https://en.wikipedia.org/wiki/Hydrogen%20cycle
The hydrogen cycle consists of hydrogen exchanges between biotic (living) and abiotic (non-living) sources and sinks of hydrogen-containing compounds. Hydrogen (H) is the most abundant element in the universe. On Earth, common H-containing inorganic molecules include water (H2O), hydrogen gas (H2), hydrogen sulfide (H2S), and ammonia (NH3). Many organic compounds also contain H atoms, such as hydrocarbons and organic matter. Given the ubiquity of hydrogen atoms in inorganic and organic chemical compounds, the hydrogen cycle is focused on molecular hydrogen, H2. As a consequence of microbial metabolisms or naturally occurring rock-water interactions, hydrogen gas can be created. Other bacteria may then consume free H2, which may also be oxidised photochemically in the atmosphere or lost to space. Hydrogen is also thought to be an important reactant in pre-biotic chemistry and the early evolution of life on Earth, and potentially elsewhere in the Solar System. Abiotic cycles Sources Abiotic sources of hydrogen gas include water-rock and photochemical reactions. Exothermic serpentinization reactions between water and olivine minerals produce H2 in the marine or terrestrial subsurface. In the ocean, hydrothermal vents erupt magma and altered seawater fluids including abundant H2, depending on the temperature regime and host rock composition. Molecular hydrogen can also be produced through photooxidation (via solar UV radiation) of some mineral species such as siderite in anoxic aqueous environments. This may have been an important process in the upper regions of early Earth's Archaean oceans. Sinks Because H2 is the lightest element, atmospheric H2 can readily be lost to space via Jeans escape, an irreversible process that drives Earth's net mass loss. Photolysis of heavier compounds not prone to escape, such as CH4 or H2O, can also liberate H2 from the upper atmosphere and contribute to this process. Another major sink of free atmospheric H2 is photochemical oxi
https://en.wikipedia.org/wiki/Hamartoma
A hamartoma is a mostly benign, local malformation of cells that resembles a neoplasm of local tissue but is usually due to an overgrowth of multiple aberrant cells, with a basis in a systemic genetic condition, rather than a growth descended from a single mutated cell (monoclonality), as would typically define a benign neoplasm/tumor. Despite this, many hamartomas are found to have clonal chromosomal aberrations that are acquired through somatic mutations, and on this basis the term hamartoma is sometimes considered synonymous with neoplasm. Hamartomas are by definition benign, slow-growing or self-limiting, though the underlying condition may still predispose the individual towards malignancies. Hamartomas are usually caused by a genetic syndrome that affects the development cycle of all or at least multiple cells. Many of these conditions are classified as overgrowth syndromes or cancer syndromes. Hamartomas occur in many different parts of the body and are most often asymptomatic incidentalomas (undetected until they are found incidentally on an imaging study obtained for another reason). Additionally, the definition of hamartoma versus benign neoplasm is often unclear, since both lesions can be clonal. Lesions such as adenomas, developmental cysts, hemangiomas, lymphangiomas and rhabdomyomas within the kidneys, lungs or pancreas are interpreted by some experts as hamartomas while others consider them true neoplasms. Moreover, even though hamartomas show a benign histology, there is a risk of some rare but life-threatening complications such as those found in neurofibromatosis type I and tuberous sclerosis. It is different from choristoma, a closely related form of heterotopia. The two can be differentiated as follows: a hamartoma is an excess of normal tissue in a normal situation (e.g., a birthmark on the skin), while a choristoma is an excess of tissue in an abnormal situation (e.g., pancreatic tissue in the duodenum). The term hamartoma is from the Greek ἁ
https://en.wikipedia.org/wiki/Quantum%20yield
In particle physics, the quantum yield (denoted Φ) of a radiation-induced process is the number of times a specific event occurs per photon absorbed by the system. Applications Fluorescence spectroscopy The fluorescence quantum yield is defined as the ratio of the number of photons emitted to the number of photons absorbed. Fluorescence quantum yield is measured on a scale from 0 to 1.0, but is often represented as a percentage. A quantum yield of 1.0 (100%) describes a process where each photon absorbed results in a photon emitted. Substances with the largest quantum yields, such as rhodamines, display the brightest emissions; however, compounds with quantum yields of 0.10 are still considered quite fluorescent. Quantum yield is defined by the fraction of excited state fluorophores that decay through fluorescence: $\Phi_f = \frac{k_f}{k_f + k_{nr}}$, where $\Phi_f$ is the fluorescence quantum yield, $k_f$ is the rate constant for radiative relaxation (fluorescence), and $k_{nr}$ is the rate constant for all non-radiative relaxation processes. Non-radiative processes are excited state decay mechanisms other than photon emission, which include: Förster resonance energy transfer, internal conversion, external conversion, and intersystem crossing. Thus, the fluorescence quantum yield is affected if the rate of any non-radiative pathway changes. The quantum yield can be close to unity if the non-radiative decay rate is much smaller than the rate of radiative decay, that is $k_{nr} \ll k_f$. Fluorescence quantum yields are measured by comparison to a standard of known quantum yield. The quinine salt quinine sulfate in a sulfuric acid solution was regarded as the most common fluorescence standard; however, a recent study revealed that the fluorescence quantum yield of this solution is strongly affected by the temperature, and should no longer be used as the standard solution. The quinine in 0.1 M perchloric acid (Φ = 0.60) shows no temperature dependence up to 45 °C, therefore it can be considered as a reliable standard solution. Experimenta
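The comparison to a standard can be made concrete with the common single-point textbook formula (an illustration under assumed, hypothetical measurement values; the formula and symbols are not taken from the excerpt above):

# Comparative fluorescence quantum yield:
#   phi_x = phi_ref * (F_x / F_ref) * (A_ref / A_x) * (n_x / n_ref)**2
# F: integrated emission intensity, A: absorbance at the excitation wavelength,
# n: refractive index of the solvent.
def quantum_yield(phi_ref, F_x, F_ref, A_x, A_ref, n_x, n_ref):
    return phi_ref * (F_x / F_ref) * (A_ref / A_x) * (n_x / n_ref) ** 2

# Hypothetical sample measured against quinine in 0.1 M perchloric acid
# (phi_ref ~ 0.60, as in the text):
print(quantum_yield(0.60, F_x=8.2e5, F_ref=1.1e6, A_x=0.05, A_ref=0.05,
                    n_x=1.36, n_ref=1.33))  # ~0.47

Matching the absorbances of sample and reference (as done here) removes one source of systematic error, which is why practical protocols recommend it.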
https://en.wikipedia.org/wiki/Cold%20chain
Cold chain is defined as the series of actions and equipment applied to maintain a product within a specified low-temperature range from harvest/production to consumption. An unbroken cold chain is an uninterrupted sequence of refrigerated production, storage and distribution activities, along with associated equipment and logistics, which maintain a desired low-temperature interval to preserve the safety and quality of perishable or sensitive products, such as foods and medicines. In other words, the term denotes a low temperature-controlled supply chain network used to ensure and extend the shelf life of products, e.g. fresh agricultural produce, seafood, frozen food, photographic film, chemicals, and pharmaceutical products. Such products, during transport and end-use when in transient storage, are sometimes called cool cargo. Unlike other goods or merchandise, cold chain goods are perishable and always en route towards end use or destination, even when held temporarily in cold stores, and hence are commonly referred to as "cargo" during their entire logistics cycle. Adequate cold storage, in particular, can be crucial to prevent quantitative and qualitative food losses. History Mobile refrigeration with ice from the ice trade began with reefer ships and refrigerator cars (iceboxes on wheels) in the mid-19th century. The term cold chain was first used in 1908. The first effective cold store in the UK opened in 1882 at St Katharine Docks. It could hold 59,000 carcasses, and by 1911 cold storage capacity in London had reached 2.84 million carcasses. By 1930 about a thousand refrigerated meat containers were in use which could be switched from road to railway. Mobile mechanical refrigeration was invented by Frederick McKinley Jones, who co-founded Thermo King with entrepreneur Joseph A. "Joe" Numero. In 1938 Numero sold his Cinema Supplies Inc. movie sound equipment business to RCA to form the new entity, U.S. Thermo Control Company (later the Thermo King Corporation), in p
https://en.wikipedia.org/wiki/Relational%20aggression
Relational aggression, alternative aggression, or relational bullying is a type of aggression in which harm is caused by damaging someone's relationships or social status. Although it can be used in many contexts and among different age groups, relational aggression among adolescents in particular has received a lot of attention. Popular media have helped draw attention to relational aggression, through movies like Mean Girls and books like Odd Girl Out by Rachel Simmons (2002), Nesthäkchen and the World War by Else Ury (1916), and Queen Bees and Wannabes by R. Wiseman (2003). Relational aggression can have various lifelong consequences. Relational aggression has been primarily observed and studied among girls, following pioneering research by psychologist Nicki R. Crick. Overview A person's peers become increasingly significant in adolescence and are especially important for adolescents' healthy psychological development. Peers provide many new behavioral models and feedback that are essential for successful identity formation and for the development of one's sense of self. Interactions with peers encourage positive practice of autonomy and independent decision-making skills. They are also essential for healthy sexual development including the development of the capacity for intimate friendships and learning appropriate sexual behavior. Peer relationships are also very important for determining how much adolescents value school, how much effort they put into it, and how well they perform in class. However, quite frequently adolescents take part in peer relationships that are harmful for their psychological development. Adolescents tend to form various cliques and belong to different crowds based on their activity interests, music and clothing preferences, as well as their cultural or ethnic background. Such groups differ in their sociometric or popularity status, which often creates unhealthy, aggression-victimization based dy
https://en.wikipedia.org/wiki/Metasystem%20transition
A metasystem transition is the emergence, through evolution, of a higher level of organization or control. A metasystem is formed by the integration of a number of initially independent components, such as molecules (as theorized for instance by hypercycles), cells, or individuals, and the emergence of a system steering or controlling their interactions. As such, the collective of components becomes a new, goal-directed individual, capable of acting in a coordinated way. This metasystem is more complex, more intelligent, and more flexible in its actions than the initial component systems. Prime examples are the origin of life, the transition from unicellular to multicellular organisms, the emergence of eusociality or symbolic thought. The concept of metasystem transition was introduced by the cybernetician Valentin Turchin in his 1970 book The Phenomenon of Science, and developed among others by Francis Heylighen in the Principia Cybernetica Project. Another related idea, that systems ("operators") evolve to become more complex by successive closures encapsulating components in a larger whole, is proposed in "the operator theory", developed by Gerard Jagers op Akkerhuis. Turchin has applied the concept of metasystem transition in the domain of computing, via the notion of metacompilation or supercompilation. A supercompiler is a compiler program that compiles its own code, thus increasing its own efficiency, producing a remarkable speedup in its execution. Evolutionary quanta The following is the classical sequence of metasystem transitions in the history of animal evolution according to Turchin, from the origin of animate life to sapient culture: Control of Position = Motion: the animal or agent develops the ability to control its position in space Control of Motion = Irritability: the movement of the agent is no longer given, but a reaction to elementary sensations or stimuli Control of Irritability = Reflex: different elementary sensations and their re
https://en.wikipedia.org/wiki/Mittag-Leffler%27s%20theorem
In complex analysis, Mittag-Leffler's theorem concerns the existence of meromorphic functions with prescribed poles. Conversely, it can be used to express any meromorphic function as a sum of partial fractions. It is sister to the Weierstrass factorization theorem, which asserts existence of holomorphic functions with prescribed zeros. The theorem is named after the Swedish mathematician Gösta Mittag-Leffler who published versions of the theorem in 1876 and 1884. Theorem Let $U$ be an open set in $\mathbb{C}$ and $E \subset U$ be a subset whose limit points, if any, occur on the boundary of $U$. For each $a$ in $E$, let $p_a(z)$ be a polynomial in $1/(z-a)$ without constant coefficient, i.e. of the form $p_a(z) = \sum_{n=1}^{N_a} \frac{c_{a,n}}{(z-a)^n}$. Then there exists a meromorphic function $f$ on $U$ whose poles are precisely the elements of $E$ and such that for each such pole $a \in E$, the function $f(z) - p_a(z)$ has only a removable singularity at $a$; in particular, the principal part of $f$ at $a$ is $p_a(z)$. Furthermore, any other meromorphic function $g$ on $U$ with these properties can be obtained as $g = f + h$, where $h$ is an arbitrary holomorphic function on $U$. Proof sketch One possible proof outline is as follows. If $E$ is finite, it suffices to take $f(z) = \sum_{a \in E} p_a(z)$. If $E$ is not finite, consider the finite sum $S_F(z) = \sum_{a \in F} p_a(z)$ where $F$ is a finite subset of $E$. While the $S_F(z)$ may not converge as $F$ approaches $E$, one may subtract well-chosen rational functions with poles outside of $U$ (provided by Runge's theorem) without changing the principal parts of the $S_F(z)$ and in such a way that convergence is guaranteed. Example Suppose that we desire a meromorphic function with simple poles of residue 1 at all positive integers. With notation as above, letting $p_k(z) = \frac{1}{z-k}$ and $E = \mathbb{Z}^+$, Mittag-Leffler's theorem asserts the existence of a meromorphic function $f$ with principal part $\frac{1}{z-k}$ at $z = k$ for each positive integer $k$. More constructively we can let $f(z) = z \sum_{k=1}^{\infty} \frac{1}{k(z-k)}$. This series converges normally on any compact subset of $\mathbb{C} \setminus \mathbb{Z}^+$ (as can be shown using the M-test) to a meromorphic function with the desired properties. Pole expansions of meromorphic functions Here are some examples of pole expansions of meromorphic
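To see numerically that the constructive series above really has residue 1 at each positive integer (a quick sanity check, not part of the article), truncate the sum and evaluate (z − 5)·f(z) near the pole z = 5:

# f(z) = z * sum_{k>=1} 1/(k*(z - k)) should have a simple pole of residue 1
# at every positive integer; check the pole at z = 5 with a truncated sum.
def f(z, terms=200_000):
    return z * sum(1.0 / (k * (z - k)) for k in range(1, terms))

for eps in (1e-2, 1e-4, 1e-6):
    z = 5.0 + eps
    print(eps, (z - 5.0) * f(z))  # -> 1.0 as eps -> 0

The k = 5 term dominates near the pole and contributes z/5 ≈ 1, while the regular part is suppressed by the factor (z − 5), exactly as the principal-part prescription demands.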
https://en.wikipedia.org/wiki/Plug%20compatible
Plug compatible refers to "hardware that is designed to perform exactly like another vendor's product." The term PCM was originally applied to manufacturers who made replacements for IBM peripherals. Later this term was used to refer to IBM-compatible computers. PCM and peripherals Before the rise of the PCM peripheral industry, computing systems were either configured with peripherals designed and built by the CPU vendor, or designed to use vendor-selected rebadged devices. The first example of plug-compatible IBM subsystems were tape drives and controls offered by Telex beginning 1965. Memorex in 1968 was first to enter the IBM plug-compatible disk followed shortly thereafter by a number of suppliers such as CDC, Itel, and Storage Technology Corporation. This was boosted by the world's largest user of computing equipment in both directions. Ultimately plug-compatible products were offered for most peripherals and system main memory. PCM and computer systems A plug-compatible machine is one that has been designed to be backward compatible with a prior machine. In particular, a new computer system that is plug-compatible has not only the same connectors and protocol interfaces to peripherals, but also binary-code compatibility—it runs the same software as the old system. A plug compatible manufacturer or PCM is a company that makes such products. One recurring theme in plug-compatible systems is the ability to be bug compatible as well. That is, if the forerunner system had software or interface problems, then the successor must have (or simulate) the same problems. Otherwise, the new system may generate unpredictable results, defeating the full compatibility objective. Thus, it is important for customers to understand the difference between a "bug" and a "feature", where the latter is defined as an intentional modification to the previous system (e.g. higher speed, lighter weight, smaller package, better operator controls, etc.). PCM and IBM mainframes The or
https://en.wikipedia.org/wiki/Defense%20Information%20System%20Network
The Defense Information System Network (DISN) has been the United States Department of Defense's enterprise telecommunications network for providing data, video, and voice services for 40 years. The DISN end-to-end infrastructure is composed of three major segments: The sustaining base (i.e., base, post, camp, or station, and Service enterprise networks). The Command, Control, Communications, Computers and Intelligence (C4I) infrastructure interfaces with the long-haul network to support the deployed warfighter. The sustaining base segment is primarily the responsibility of the individual Services. The long-haul transport infrastructure, which includes the communication systems and services between the fixed environments and the deployed Joint Task Force (JTF) and/or Coalition Task Force (CTF) warfighter. The long-haul telecommunications infrastructure segment is primarily the responsibility of DISA. The deployed warfighter, mobile users, and associated Combatant Commander telecommunications infrastructures supporting the Joint Task Force (JTF) and/or Coalition Task Force (CTF). The deployed warfighter and associated Combatant Commander telecommunications infrastructure is primarily the responsibility of the individual Services. The DISN provides the following multiple networking services: Global Content Delivery System (GCDS) Data Services Sensitive but Unclassified (NIPRNet) Secret Data Services (SIPRNet) Multicast Organizational Messaging The Organizational Messaging Service provides a range of assured services to the customer community that includes the military services, DoD agencies, combatant commands (CCMDs), non-DoD U.S. government activities, and the Intelligence Community (IC). These services include the ability to exchange official information between military organizations and to support interoperability with allied nations, non-DoD activities, and the IC operating in both the strategic/fixed-base and the tactical/deployed enviro
https://en.wikipedia.org/wiki/P6%20%28microarchitecture%29
The P6 microarchitecture is the sixth-generation Intel x86 microarchitecture, implemented by the Pentium Pro microprocessor that was introduced in November 1995. It is frequently referred to as i686. It was planned to be succeeded by the NetBurst microarchitecture used by the Pentium 4 in 2000, but was revived for the Pentium M line of microprocessors. The successor to the Pentium M variant of the P6 microarchitecture is the Core microarchitecture, which in turn is also derived from P6. P6 was used within Intel's mainstream offerings from the Pentium Pro to the Pentium III, and was widely known for low power consumption, excellent integer performance, and relatively high instructions per cycle (IPC). Features The P6 core was the sixth generation Intel microprocessor in the x86 line. The first implementation of the P6 core was the Pentium Pro CPU in 1995, the immediate successor to the original Pentium design (P5). P6 processors dynamically translate IA-32 instructions into sequences of buffered RISC-like micro-operations, then analyze and reorder the micro-operations to detect parallelizable operations that may be issued to more than one execution unit at once. The Pentium Pro was the first x86 microprocessor designed by Intel to use this technique, though the NexGen Nx586, introduced in 1994, did so earlier. Other features first implemented in the x86 space in the P6 core include: Speculative execution and out-of-order completion (called "dynamic execution" by Intel), which required new retire units in the execution core. This lessened pipeline stalls, and in part enabled greater speed-scaling of the Pentium Pro and successive generations of CPUs. Superpipelining, which deepened the pipeline from the Pentium's 5 stages to 14 in the Pentium Pro and early models of the Pentium III (Coppermine), and which was later shortened to fewer than 10 stages in the Pentium M for the embedded and mobile market, due to the energy inefficiency and higher voltage issues encountered in the process.
https://en.wikipedia.org/wiki/Helmholtz%20resonance
Helmholtz resonance or wind throb is the phenomenon of air resonance in a cavity, such as when one blows across the top of an empty bottle. The name comes from a device created in the 1850s by Hermann von Helmholtz, the Helmholtz resonator, which he used to identify the various frequencies or musical pitches present in music and other complex sounds. History Helmholtz described in his 1862 book On the Sensations of Tone an apparatus able to pick out specific frequencies from a complex sound. The Helmholtz resonator, as it is now called, consists of a rigid container of a known volume, nearly spherical in shape, with a small neck and hole in one end and a larger hole in the other end to emit the sound. When the resonator's 'nipple' is placed inside one's ear, a specific frequency of the complex sound can be picked out and heard clearly. In his book Helmholtz explains: When we "apply a resonator to the ear, most of the tones produced in the surrounding air will be considerably damped; but if the proper tone of the resonator is sounded, it brays into the ear most powerfully…. The proper tone of the resonator may even be sometimes heard cropping up in the whistling of the wind, the rattling of carriage wheels, the splashing of water." A set of varied size resonators was sold to be used as discrete acoustic filters for the spectral analysis of complex sounds. There is also an adjustable type, called a universal resonator, which consists of two cylinders, one inside the other, which can slide in or out to change the volume of the cavity over a continuous range. An array of 14 of this type of resonator has been employed in a mechanical Fourier sound analyzer. This resonator can also emit a variable-frequency tone when driven by a stream of air in the "tone variator" invented by William Stern, 1897. When air is forced into a cavity, the pressure inside increases. When the external force pushing the air into the cavity is removed, the higher-pressure air inside will f
https://en.wikipedia.org/wiki/Intel%20Core%20%28microarchitecture%29
The Intel Core microarchitecture (provisionally referred to as Next Generation Micro-architecture, and developed as Merom) is a multi-core processor microarchitecture launched by Intel in mid-2006. It is a major evolution over Yonah, the previous iteration of the P6 microarchitecture series, which started in 1995 with the Pentium Pro. It also replaced the NetBurst microarchitecture, which suffered from high power consumption and heat intensity due to an inefficient pipeline designed for high clock rates. In early 2004 the new version of NetBurst (Prescott) required very high power to reach the clock speeds it needed for competitive performance, making it unsuitable for the shift to dual/multi-core CPUs. On May 7, 2004 Intel confirmed the cancellation of the next NetBurst designs, Tejas and Jayhawk. Intel had been developing Merom, the 64-bit evolution of the Pentium M, since 2001, and decided to expand it to all market segments, replacing NetBurst in desktop computers and servers. It inherited from the Pentium M the choice of a short and efficient pipeline, delivering superior performance despite not reaching the high clocks of NetBurst. The first processors that used this architecture were code-named 'Merom', 'Conroe', and 'Woodcrest'; Merom is for mobile computing, Conroe is for desktop systems, and Woodcrest is for servers and workstations. While architecturally identical, the three processor lines differ in the socket used, bus speed, and power consumption. The first Core-based desktop and mobile processors were branded Core 2, later expanding to the lower-end Pentium Dual-Core, Pentium and Celeron brands; while server and workstation Core-based processors were branded Xeon. Features The Core microarchitecture returned to lower clock rates and improved the use of both available clock cycles and power when compared with the preceding NetBurst microarchitecture of the Pentium 4 and D-branded CPUs. The Core microarchitecture provides more efficient decoding stages, execution units,
https://en.wikipedia.org/wiki/Conference%20on%20Automated%20Deduction
The Conference on Automated Deduction (CADE) is the premier academic conference on automated deduction and related fields. The first CADE was organized in 1974 at the Argonne National Laboratory near Chicago. Most CADE meetings have been held in Europe and the United States; however, conferences have been held all over the world. Since 1996, CADE has been held yearly. In 2001, CADE was, for the first time, merged into the International Joint Conference on Automated Reasoning (IJCAR); this merger has been repeated every two years since 2004. In 1996, CADE Inc. was formed as a non-profit sub-corporation of the Association for Automated Reasoning to organize the formerly individually organized conferences. External links CADE, AAR
https://en.wikipedia.org/wiki/Trichosanthes%20cucumerina
Trichosanthes cucumerina is a tropical or subtropical vine. Its variety T. cucumerina var. anguina is raised for its strikingly long fruit. In Asia, it is eaten immature as a vegetable, much like the summer squash, and in Africa, the reddish pulp of the mature snake gourd is used as an economical substitute for tomato. Common names for the cultivated variety include snake gourd, serpent gourd, chichinda, padwal, and snake tomato. Trichosanthes cucumerina is found in the wild across much of South and Southeast Asia, including India, Bangladesh, Nepal, Pakistan, Sri Lanka, Indonesia, Malaysia, Myanmar (Burma) and southern China (Guangxi and Yunnan). It is also regarded as native in northern Australia, and naturalized in Florida, parts of Africa and on various islands in the Indian and Pacific Oceans. Formerly, the cultivated form was considered a distinct species, T. anguina, but it is now generally regarded as conspecific with the wild populations, as they freely interbreed: Trichosanthes cucumerina var. anguina (L.) Haines – cultivated variant Trichosanthes cucumerina var. cucumerina – wild variant Description Trichosanthes cucumerina is a monoecious annual vine climbing by means of tendrils. Leaves are palmately lobed, up to 25 cm long. Flowers are unisexual, white, opening at night, with long branching hairs on the margins of the petals. These hairs are curled up in the daytime when the flower is closed, but unfurl at night to form a delicate lacy display. Fruits can be up to 200 cm long, deep red at maturity, hanging below the vine. The related Japanese snake gourd (Trichosanthes pilosa, sometimes called T. ovigera or T. cucumeroides) is very similar in vegetative morphology, but its fruit is round to egg-shaped, only about 7 cm long. Uses Culinary The common name "snake gourd" refers to the narrow, twisted, elongated fruit. The soft-skinned immature fruit can reach up to in length. It is soft, bland, somewhat mucilaginous
https://en.wikipedia.org/wiki/Molecular%20autoionization
In chemistry, molecular autoionization (or self-ionization) is a chemical reaction between molecules of the same substance to produce ions. If a pure liquid partially dissociates into ions, it is said to be self-ionizing. In most cases the oxidation number on all atoms in such a reaction remains unchanged. Such autoionization can be protic (H+ transfer), or non-protic. Examples Protic solvents Protic solvents often undergo some autoionization (in this case autoprotolysis): 2 H2O <=> H3O+ + OH- The self-ionization of water is particularly well studied, due to its implications for acid-base chemistry of aqueous solutions. 2 NH3 <=> NH4+ + NH2- 2 H2SO4 <=> H3SO4+ + HSO4- 3 HF <=> H2F+ + HF2- Here proton transfer between two HF molecules combines with homoassociation of F- and a third HF to form HF2- Non-protic solvents 2 PF5 <=> PF6- + PF4+ N2O4 <=> NO+ + NO3- Here the nitrogen oxidation numbers change from (+4 and +4) to (+3 and +5). 2 BrF3 <=> BrF2+ + BrF4- These solvents all possess atoms with odd atomic numbers, either nitrogen or a halogen. Such atoms enable the formation of singly charged, nonradical ions (which must have at least one odd atomic number atom), which are the most favorable autoionization products. Protic solvents, mentioned previously, use hydrogen for this role. Autoionization would be much less favorable in solvents such as sulfur dioxide or carbon dioxide, which have only even atomic number atoms. Coordination chemistry Autoionization is not restricted to neat liquids or solids. Solutions of metal complexes exhibit this property. For example, compounds of the type (where X = Cl or Br) are unstable with respect to autoionization forming . See also Ionization Ion association
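For the best-studied case, the self-ionization of water, the equilibrium constant Kw = [H3O+][OH-] ≈ 1.0 × 10^-14 at 25 °C fixes the pH of the pure liquid; a two-line check (an illustration, not from the article):

import math

# Self-ionization of water: Kw = [H3O+][OH-] ~ 1.0e-14 at 25 C.
# In pure water the two ion concentrations are equal, so [H3O+] = sqrt(Kw).
Kw = 1.0e-14
h3o = math.sqrt(Kw)
print(f"[H3O+] = {h3o:.1e} M, pH = {-math.log10(h3o):.1f}")  # 1.0e-07 M, pH 7.0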
https://en.wikipedia.org/wiki/Ciphertext%20expansion
In cryptography, the term ciphertext expansion refers to the length increase of a message when it is encrypted. Many modern cryptosystems cause some degree of expansion during the encryption process, for instance when the resulting ciphertext must include a message-unique Initialization Vector (IV). Probabilistic encryption schemes cause ciphertext expansion, as the set of possible ciphertexts is necessarily greater than the set of input plaintexts. Certain schemes, such as Cocks Identity Based Encryption, or the Goldwasser-Micali cryptosystem result in ciphertexts hundreds or thousands of times longer than the plaintext. Ciphertext expansion may be offset or increased by other processes which compress or expand the message, e.g., data compression or error correction coding.
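As a concrete illustration of IV-driven expansion (an example, not taken from the article): for AES in CBC mode with PKCS#7 padding, the ciphertext carries a 16-byte IV plus the plaintext rounded up to the next full block, so the overhead is easy to tabulate:

BLOCK = 16  # AES block size in bytes

def cbc_ciphertext_len(plaintext_len, iv_len=BLOCK):
    # PKCS#7 always adds 1..BLOCK bytes of padding, then a full IV is prepended.
    padded = (plaintext_len // BLOCK + 1) * BLOCK
    return iv_len + padded

for n in (0, 15, 16, 100):
    print(n, "->", cbc_ciphertext_len(n), "bytes")
# 0 -> 32, 15 -> 32, 16 -> 48, 100 -> 128: an expansion of 17 to 32 bytes per message

This fixed per-message overhead is modest; the schemes named in the text (Cocks, Goldwasser-Micali) instead expand every plaintext bit into a large group element, which is why their ciphertexts are hundreds of times longer.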
https://en.wikipedia.org/wiki/Double%20hyphen
In Latin script, the double hyphen is a punctuation mark that consists of two parallel hyphens. It was a development of an earlier mark, which developed from a Central European variant of the virgule slash, originally a form of scratch comma. Similar marks (see below) are used in other scripts. In order to avoid it being confused with the equals sign =, the double hyphen is often shown as a double oblique hyphen in modern typography. The double hyphen is also not to be confused with two consecutive hyphens (--), which are often used to represent an em dash or en dash due to the limitations of typewriters and keyboards that do not have distinct hyphen and dash keys. Usage The double hyphen is used for several different purposes throughout the world: Some typefaces, such as Fraktur faces, use the double hyphen as a glyphic variant of the single hyphen. (With Fraktur faces, such a double hyphen is usually oblique.) It may be also used for artistic or commercial purposes to achieve a distinctive visual effect. For example, the name of The Waldorf⹀Astoria hotel was officially written with a double hyphen from 1949 to 2009. In Merriam-Webster dictionaries if a word is divided at the end of the line, and the division point happens to be a hyphen, it is replaced with a double hyphen to graphically indicate that the divided word is normally hyphenated, for example cross⸗country. In several dictionaries published in the late nineteenth and early twentieth centuries, all such compound words are linked with double hyphens, whether at the end of the line or not, and the normal use of the single hyphen for non-compound words is retained. An example from the first or second page of such dictionaries is Aaron's⸗rod. Examples include the Century Dictionary and Funk & Wagnalls New Standard Dictionary of the English Language. It is used by Coptic language scholars to denote the form of the verb used before pronominal suffixes, e.g. ⲕⲟⲧ⸗ kot⹀ 'to build'. It is used by scholar
https://en.wikipedia.org/wiki/Pseudorapidity
In experimental particle physics, pseudorapidity, $\eta$, is a commonly used spatial coordinate describing the angle of a particle relative to the beam axis. It is defined as $\eta = -\ln\left[\tan\left(\frac{\theta}{2}\right)\right]$, where $\theta$ is the angle between the particle three-momentum $\mathbf{p}$ and the positive direction of the beam axis. Inversely, $\theta = 2\arctan\left(e^{-\eta}\right)$. As a function of three-momentum $\mathbf{p}$, pseudorapidity can be written as $\eta = \frac{1}{2}\ln\left(\frac{|\mathbf{p}| + p_L}{|\mathbf{p}| - p_L}\right)$, where $p_L$ is the component of the momentum along the beam axis (i.e. the longitudinal momentum – using the conventional system of coordinates for hadron collider physics, this is also commonly denoted $p_z$). In the limit where the particle is travelling close to the speed of light, or equivalently in the approximation that the mass of the particle is negligible, one can make the substitution $E \approx |\mathbf{p}|$ (i.e. in this limit, the particle's only energy is its momentum-energy, similar to the case of the photon), and hence the pseudorapidity converges to the definition of rapidity used in experimental particle physics: $y = \frac{1}{2}\ln\left(\frac{E + p_z}{E - p_z}\right)$. This differs slightly from the definition of rapidity in special relativity, which uses $|\mathbf{p}|$ instead of $p_z$. However, pseudorapidity depends only on the polar angle of the particle's trajectory, and not on the energy of the particle. One speaks of the "forward" direction in a hadron collider experiment, which refers to regions of the detector that are close to the beam axis, at high $|\eta|$; in contexts where the distinction between "forward" and "backward" is relevant, the former refers to the positive z-direction and the latter to the negative z-direction. In hadron collider physics, the rapidity (or pseudorapidity) is preferred over the polar angle $\theta$ because, loosely speaking, particle production is constant as a function of rapidity, and because differences in rapidity are Lorentz invariant under boosts along the longitudinal axis: they transform additively, similar to velocities in Galilean relativity. A measurement of a rapidity difference $\Delta y$ between particles (or $\Delta\eta$ if the particles involved are massless) is hence not dependent on the
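A minimal numeric sketch of the definition (an illustration, not from the article): converting between the polar angle and pseudorapidity and checking that the two maps are inverses:

import math

# Pseudorapidity from polar angle theta (radians), and the inverse map.
def eta_from_theta(theta):
    return -math.log(math.tan(theta / 2.0))

def theta_from_eta(eta):
    return 2.0 * math.atan(math.exp(-eta))

# theta = 90 deg (perpendicular to the beam) gives eta = 0;
# small angles map to large |eta|, i.e. the "forward" region.
for deg in (90, 45, 10, 1):
    eta = eta_from_theta(math.radians(deg))
    print(f"theta = {deg:3d} deg -> eta = {eta:6.3f}")
    assert abs(math.degrees(theta_from_eta(eta)) - deg) < 1e-9

The rapid growth of eta at small angles is why detector coverage is usually quoted as an |eta| range rather than an angular one.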
https://en.wikipedia.org/wiki/Max%20Q%20%28astronaut%20band%29
Max Q is a Houston-based rock band whose members are all astronauts. It was formed in early 1987 by Brewster Shaw and Robert L. Gibson, who recruited George Nelson. Gibson has stated that he came up with the name "Max Q" (though he acknowledges that Shaw has also claimed to have coined the name), the engineering term for the maximum dynamic pressure from the atmosphere experienced by an ascending spacecraft. He joked that like the Space Shuttle, the band "makes lots of noise but no music." The band's line-up rotates frequently due to flight crew assignments, training, and the occasional retirement. In 2009, members included: Ricky Arnold – rhythm guitar Dan Burbank – lead vocals and guitar Tracy "TC" Caldwell Dyson – lead vocals Ken "Taco" Cockrell – keyboards and background vocals Chris Ferguson – drums Drew Feustel – lead vocals and lead guitar Kevin A. Ford – drums Chris A. Hadfield – lead vocals and bass guitar Greg "Box" Johnson – keyboards and background vocals Dottie Metcalf-Lindenburger – lead vocals Steve "Stevie Ray" Robinson – lead guitar Former members include: Carl Walz – lead vocals Susan Helms – lead vocals and keyboards (after Hawley left) Kevin "Chili" Chilton – lead vocals and guitar (after Shaw left) Pierre Thuot – bass guitar (after Nelson left) Steven A. Hawley – keyboards (the original keyboard player) ...with the original members having been: Jim Wetherbee – drums George "Pinky" Nelson – bass guitar Robert "Hoot" Gibson – lead vocals and lead guitar Brewster Shaw – rhythm guitar The genesis of this band is connected to the Challenger disaster, as it was in the wake of that severely somber event that Dan Brandenstein, as Chief of the Astronaut Office, suggested that they hold a party to lighten things up. Brewster Shaw then approached Hoot Gibson, with whom he had performed a few songs at astronaut parties, with the idea of forming a four-piece group to play at this party. The band was well-liked. They decided to add a keyboar
https://en.wikipedia.org/wiki/Urushiol-induced%20contact%20dermatitis
Urushiol-induced contact dermatitis (also called Toxicodendron dermatitis or Rhus dermatitis) is a type of allergic contact dermatitis caused by the oil urushiol found in various plants, most notably sumac family species of the genus Toxicodendron: poison ivy, poison oak, poison sumac, and the Chinese lacquer tree. The name is derived from the Japanese word for the sap of the Chinese lacquer tree, urushi. Other plants in the sumac family (including mango, pistachio, the Burmese lacquer tree, the India marking nut tree, and the shell of the cashew) also contain urushiol, as do unrelated plants such as Ginkgo biloba. As is the case with all contact dermatitis, urushiol-induced allergic rashes are a Type IV hypersensitivity reaction, also known as delayed-type hypersensitivity. Symptoms include itching, inflammation, oozing, and, in severe cases, a burning sensation. The American Academy of Dermatology estimates that there are up to 50 million cases of urushiol-induced dermatitis annually in the United States alone, accounting for 10% of all lost-time injuries in the United States Forest Service. Poison oak is a significant problem in the rural Western and Southern United States, while poison ivy is most rampant in the Eastern United States. Dermatitis from poison sumac is less common. Signs and symptoms Urushiol causes an eczematous contact dermatitis characterized by redness, swelling, papules, vesicles, blisters, and streaking. People vary greatly in their sensitivity to urushiol. In approximately 15% to 30% of people, urushiol does not trigger an immune system response, while at least 25% of people have a very strong immune response resulting in severe symptoms. The rash takes one to two weeks to run its course and may cause scars, depending on the severity of the exposure. Severe cases involve small (1–2 mm), clear, fluid-filled blisters on the skin. Pus-filled vesicles containing a whitish fluid may indicate an infection. Most poison ivy rashes, without infec
https://en.wikipedia.org/wiki/Hybrid%20drive
In computing, a hybrid drive (solid state hybrid drive – SSHD) is a logical or physical storage device that combines a faster storage medium such as a solid-state drive (SSD) with a higher-capacity hard disk drive (HDD). The intent is to add some of the speed of SSDs to the cost-effective storage capacity of traditional HDDs. The purpose of the SSD in a hybrid drive is to act as a cache for the data stored on the HDD, improving overall performance by keeping copies of the most frequently used data on the faster SSD. There are two main configurations for implementing hybrid drives: dual-drive hybrid systems and solid-state hybrid drives. In dual-drive hybrid systems, physically separate SSD and HDD devices are installed in the same computer, with the data placement optimization performed either manually by the end user, or automatically by the operating system through the creation of a "hybrid" logical device. In solid-state hybrid drives, SSD and HDD functionalities are built into a single piece of hardware, where data placement optimization is performed either entirely by the device (self-optimized mode), or through placement "hints" supplied by the operating system (host-hinted mode). Types There are two main "hybrid" storage technologies that combine NAND flash memory or SSDs with the HDD technology: dual-drive hybrid systems and solid-state hybrid drives. Dual-drive hybrid systems Dual-drive hybrid systems combine the usage of separate SSD and HDD devices installed in the same computer. Performance optimizations are managed in one of three ways: By the computer user, who manually places more frequently accessed data onto the faster drive. By the computer's operating system software, which combines SSD and HDD into a single hybrid volume, providing an easier experience to the end-user. Examples of hybrid volume implementations in operating systems are ZFS' "hybrid storage pools", bcache and dm-cache on Linux, Intel's Hystor and Apple's Fusion D
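The caching idea can be sketched in a few lines (an illustration of the general principle only, not any vendor's actual algorithm; the class name is hypothetical): keep recently used blocks on the fast tier and evict the least recently used one when it fills:

from collections import OrderedDict

class FastTierCache:
    """Toy LRU cache standing in for the SSD tier of a hybrid drive."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block_id -> data, oldest first

    def read(self, block_id, hdd):
        if block_id in self.blocks:            # SSD hit: fast path
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = hdd[block_id]                   # SSD miss: slow HDD read
        self.blocks[block_id] = data           # promote to the fast tier
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data

hdd = {i: f"block-{i}" for i in range(1000)}
cache = FastTierCache(capacity_blocks=2)
for b in (7, 7, 42, 7, 99, 42):
    cache.read(b, hdd)
print(list(cache.blocks))  # [99, 42]: the hot blocks end up on the fast tier

Real SSHD firmware tracks access frequency as well as recency, but the recency-based sketch captures why repeated reads of the same data get SSD-class latency.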
https://en.wikipedia.org/wiki/PEDOT%3APSS
Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a polymer mixture of two ionomers. One component of this mixture is polystyrene sulfonate, a sulfonated polystyrene; part of its sulfonyl groups are deprotonated and carry a negative charge. The other component, poly(3,4-ethylenedioxythiophene) (PEDOT), is a conjugated polymer based on polythiophene that carries positive charges. Together the charged macromolecules form a macromolecular salt. Synthesis PEDOT:PSS can be prepared by mixing an aqueous solution of PSS with EDOT monomer and then adding a solution of sodium persulfate and ferric sulfate to the resulting mixture. Applications PEDOT:PSS has the highest efficiency among conductive organic thermoelectric materials (ZT ~ 0.42) and thus can be used in flexible and biodegradable thermoelectric generators. Yet its largest application is as a transparent, conductive polymer with high ductility. For example, AGFA coats 200 million photographic films per year with a thin, extensively stretched layer of virtually transparent and colorless PEDOT:PSS as an antistatic agent, preventing electrostatic discharges during production and normal film use independent of humidity conditions; it is also used as the electrolyte in polymer electrolytic capacitors. If organic compounds are added, including high-boiling solvents like methylpyrrolidone and dimethyl sulfoxide, as well as sorbitol, ionic liquids and surfactants, conductivity increases by many orders of magnitude. This also makes it suitable as a transparent electrode, for example in touchscreens, organic light-emitting diodes, flexible organic solar cells and electronic paper, replacing the traditionally used indium tin oxide (ITO). Owing to its high conductivity (up to 4600 S/cm), it can be used as a cathode material in capacitors, replacing manganese dioxide or liquid electrolytes. It is also used in organic electrochemical transistors. The conductivity of PEDOT:PSS can also be significantly improved by a post-trea
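As a sanity check on the quoted figure of merit, the dimensionless thermoelectric figure of merit is ZT = S²σT/κ. The short sketch below evaluates it with Seebeck coefficient, electrical conductivity, and thermal conductivity values in the range reported for optimized PEDOT:PSS films; the specific numbers are assumptions chosen for illustration, not values taken from this article.

```python
# Thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa
# All numeric values below are assumed, in the range reported for
# optimized PEDOT:PSS films.
S     = 72e-6    # Seebeck coefficient, V/K
sigma = 9.0e4    # electrical conductivity, S/m (i.e. 900 S/cm)
kappa = 0.33     # thermal conductivity, W/(m*K)
T     = 300.0    # absolute temperature, K

ZT = S**2 * sigma * T / kappa
print(f"ZT ~ {ZT:.2f}")   # ~0.42, consistent with the figure quoted above
```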
https://en.wikipedia.org/wiki/Punctuality
Punctuality is the characteristic of completing a required task or fulfilling an obligation before or at a previously designated time. "Punctual" is often used synonymously with "on time". An opposite characteristic is tardiness. Each culture tends to have its own understanding about what is considered an acceptable degree of punctuality. Typically, a small amount of lateness is acceptable—this is commonly about five to ten minutes in most Western cultures—but this is context-dependent; for example, it might not be the case for doctor's appointments. Some cultures have an unspoken understanding that actual deadlines are different from stated deadlines, for example with African time. For example, it may be understood in a particular culture that people will turn up an hour later than advertised. In this case, since everyone understands that a 9 p.m. party will actually start at around 10 p.m., no-one is inconvenienced when everyone arrives at 10 p.m. In cultures that value punctuality, being late is seen as disrespectful of others' time and may be considered insulting. In such cases, punctuality may be enforced by social penalties, for example by excluding low-status latecomers from meetings entirely. Attention has also been given to the value of punctuality in econometrics and to the effects of non-punctuality on others in queueing theory.
https://en.wikipedia.org/wiki/Float-zone%20silicon
Float-zone silicon is very pure silicon obtained by vertical zone melting. The process was developed at Bell Labs by Henry Theuerer in 1955 as a modification of a method developed by William Gardner Pfann for germanium. In the vertical configuration, molten silicon has sufficient surface tension to keep the charge from separating. The major advantage is crucible-free growth, which prevents contamination of the silicon by the vessel itself and therefore provides an inherently high-purity alternative to boule crystals grown by the Czochralski method. The concentrations of light impurities, such as carbon (C) and oxygen (O), are extremely low. Another light impurity, nitrogen (N), helps to control microdefects and improves the mechanical strength of the wafers, and is now intentionally added during the growth stages. The diameters of float-zone wafers are generally not greater than 200 mm because of surface tension limitations during growth. A polycrystalline rod of ultrapure electronic-grade silicon is passed through an RF heating coil, which creates a localized molten zone from which the crystal ingot grows. A seed crystal is used at one end to start the growth. The whole process is carried out in an evacuated chamber or in an inert gas purge. The molten zone carries the impurities away with it and hence reduces the impurity concentration (most impurities are more soluble in the melt than in the crystal). Specialized doping techniques like core doping, pill doping, gas doping and neutron transmutation doping are used to incorporate a uniform concentration of the desired impurity. Float-zone silicon wafers may be irradiated with neutrons to turn them into n-doped semiconductors. Application Float-zone silicon is typically used for power devices and detector applications, where high resistivity is required. It is highly transparent to terahertz radiation, and is usually used to fabricate optical components, such as lenses and windows, for teraher
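The impurity sweep can be quantified with Pfann's classic single-pass zone-melting relation, C(x) = C0·[1 − (1 − k)·e^(−kx/l)], where k is the impurity's segregation coefficient and l the zone length. The sketch below tabulates a few points of this profile; the k value is an assumed, phosphorus-like number chosen for illustration.

```python
import math

def single_pass_profile(x, c0=1.0, k=0.35, zone_len=1.0):
    """Impurity concentration in the refrozen solid after one zone pass
    (Pfann's single-pass zone-melting relation; x is measured from the
    seed end, and the formula holds before the final zone of the rod).
    k < 1 means the impurity prefers the melt and is swept along."""
    return c0 * (1.0 - (1.0 - k) * math.exp(-k * x / zone_len))

# Phosphorus-like impurity (k ~ 0.35, an assumed value), distances in zone lengths:
for x in [0, 1, 2, 5, 10]:
    print(f"x = {x:2d}: C/C0 = {single_pass_profile(x):.3f}")
```

The first solid to freeze carries only k·C0 of the impurity (0.350 here), and the concentration climbs back toward C0 far from the seed end, which is why repeated passes, each sweeping the zone the same way, progressively purify the front of the rod.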
https://en.wikipedia.org/wiki/IBM%20High%20Availability%20Cluster%20Multiprocessing
IBM PowerHA SystemMirror (formerly IBM PowerHA and HACMP) is IBM's solution for high-availability clusters on the AIX Unix and Linux for IBM System p platforms, and stands for High Availability Cluster Multiprocessing. IBM's HACMP product was first shipped in 1991 and is now in its 20th release, PowerHA SystemMirror for AIX 7.1. PowerHA can run on up to 32 computers or nodes, each of which is either actively running an application (active) or waiting to take over when another node fails (passive). Data on file systems can be shared between systems in the cluster. PowerHA relies heavily on IBM's Reliable Scalable Cluster Technology (RSCT); PowerHA is an RSCT-aware client. RSCT is distributed with AIX and includes a daemon called group services that coordinates the response to events of interest to the cluster (for example, an interface or a node fails, or an administrator makes a change to the cluster configuration). Up until PowerHA V6.1, RSCT also monitored cluster nodes, networks and network adapters for failures using the topology services daemon (topsvcs). In the current release (V7.1), RSCT provides coordinated responses between nodes, but monitoring and communication are provided by the Cluster Aware AIX (CAA) infrastructure. The 7.1 release of PowerHA relies heavily on CAA, a clustering infrastructure built into the operating system and exploited by RSCT and PowerHA. CAA provides the monitoring and communication infrastructure for PowerHA and other clustering solutions on AIX, as well as cluster-wide event notification using the Autonomic Health Advisor File System (AHAFS) and cluster-aware AIX commands with clcmd. CAA replaces the function provided by Topology Services (topsvcs) in RSCT in previous releases of PowerHA/HACMP. IBM PowerHA SystemMirror Timeline IBM PowerHA SystemMirror Releases PowerHA SystemMirror 7 PowerHA SystemMirror 7.2, released in . PowerHA SystemMirror 7.2.1, released in . New User Interface. PowerHA SystemMirror 7.1 was
https://en.wikipedia.org/wiki/High-availability%20cluster
High-availability clusters (also known as HA clusters or fail-over clusters) are groups of computers that support server applications which can be reliably utilized with a minimum of downtime. They operate by using high-availability software to harness redundant computers in groups or clusters that provide continued service when system components fail. Without clustering, if a server running a particular application crashes, the application will be unavailable until the crashed server is fixed. HA clustering remedies this situation by detecting hardware/software faults and immediately restarting the application on another system without requiring administrative intervention, a process known as failover. As part of this process, clustering software may configure the node before starting the application on it. For example, appropriate file systems may need to be imported and mounted, network hardware may have to be configured, and some supporting applications may need to be running as well. HA clusters are often used for critical databases, file sharing on a network, business applications, and customer services such as electronic commerce websites. HA cluster implementations attempt to build redundancy into a cluster to eliminate single points of failure, including multiple network connections and data storage which is redundantly connected via storage area networks. HA clusters usually use a heartbeat private network connection which is used to monitor the health and status of each node in the cluster. One subtle but serious condition all clustering software must be able to handle is split-brain, which occurs when all of the private links go down simultaneously, but the cluster nodes are still running. If that happens, each node in the cluster may mistakenly decide that every other node has gone down and attempt to start services that other nodes are still running. Having duplicate instances of services may cause data corruption on the shared storage. HA
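To make the failover and split-brain discussion concrete, here is a deliberately simplified sketch of the core decision: a node only takes over a service while it can still see a strict majority (quorum) of the cluster, so an isolated minority partition will not start a duplicate instance. The class names and timeout are hypothetical; production cluster managers layer fencing, leases, and persistent state on top of this idea.

```python
import time

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()  # updated whenever a beat arrives

def alive(node, timeout=3.0):
    # A node is presumed up while its heartbeats keep arriving on time.
    return time.monotonic() - node.last_heartbeat < timeout

def may_run_service(me, peers):
    """Quorum-guarded failover decision (simplified sketch).
    Start or keep the service only if this node can still see a strict
    majority of the cluster; a partition holding only a minority refuses
    to run the service, which avoids split-brain duplicate instances."""
    visible = 1 + sum(alive(p) for p in peers)   # this node plus reachable peers
    cluster_size = 1 + len(peers)
    return visible > cluster_size // 2

peers = [Node("b"), Node("c")]
print(may_run_service(Node("a"), peers))  # True while both peers heartbeat
```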
https://en.wikipedia.org/wiki/Newton%27s%20identities
In mathematics, Newton's identities, also known as the Girard–Newton formulae, give relations between two types of symmetric polynomials, namely between power sums and elementary symmetric polynomials. Evaluated at the roots of a monic polynomial P in one variable, they allow expressing the sums of the k-th powers of all roots of P (counted with their multiplicity) in terms of the coefficients of P, without actually finding those roots. These identities were found by Isaac Newton around 1666, apparently in ignorance of earlier work (1629) by Albert Girard. They have applications in many areas of mathematics, including Galois theory, invariant theory, group theory and combinatorics, as well as further applications outside mathematics, including general relativity.

Mathematical statement

Formulation in terms of symmetric polynomials

Let x1, ..., xn be variables; denote for k ≥ 1 by pk(x1, ..., xn) the k-th power sum:

  $p_k(x_1,\ldots,x_n) = \sum_{i=1}^n x_i^k = x_1^k + x_2^k + \cdots + x_n^k,$

and for k ≥ 0 denote by ek(x1, ..., xn) the elementary symmetric polynomial (that is, the sum of all distinct products of k distinct variables), so

  $e_0 = 1,\quad e_1 = \sum_i x_i,\quad e_2 = \sum_{i<j} x_i x_j,\quad \ldots,\quad e_n = x_1 x_2 \cdots x_n,\quad e_k = 0 \text{ for } k > n.$

Then Newton's identities can be stated as

  $k\,e_k = \sum_{i=1}^{k} (-1)^{i-1}\, e_{k-i}\, p_i,$

valid for all n ≥ k ≥ 1. Also, one has

  $0 = \sum_{i=k-n}^{k} (-1)^{i-1}\, e_{k-i}\, p_i$

for all k > n ≥ 1. Concretely, one gets for the first few values of k:

  $e_1 = p_1,$
  $2e_2 = e_1 p_1 - p_2,$
  $3e_3 = e_2 p_1 - e_1 p_2 + p_3,$
  $4e_4 = e_3 p_1 - e_2 p_2 + e_1 p_3 - p_4.$

The form and validity of these equations do not depend on the number n of variables (although the point where the left-hand side becomes 0 does, namely after the n-th identity), which makes it possible to state them as identities in the ring of symmetric functions. In that ring one has

  $e_1 = p_1,\quad e_2 = \tfrac{1}{2}\left(p_1^2 - p_2\right),\quad e_3 = \tfrac{1}{6}\left(p_1^3 - 3p_1 p_2 + 2p_3\right),$

and so on; here the left-hand sides never become zero. These equations allow one to recursively express the ei in terms of the pk; to be able to do the inverse, one may rewrite them as

  $p_k = e_1 p_{k-1} - e_2 p_{k-2} + \cdots + (-1)^{k}\, e_{k-1}\, p_1 + (-1)^{k-1}\, k\, e_k.$

In general, we have

  $p_k = (-1)^{k-1}\, k\, e_k + \sum_{i=1}^{k-1} (-1)^{i-1}\, e_i\, p_{k-i},$

valid for all n ≥ k ≥ 1. Also, one has

  $p_k = \sum_{i=1}^{n} (-1)^{i-1}\, e_i\, p_{k-i}$

for all k > n ≥ 1.

Application to the roots of a polynomial

The polynomial with roots xi may be expanded as

  $\prod_{i=1}^n (x - x_i) = \sum_{k=0}^{n} (-1)^{k}\, e_k\, x^{n-k},$

where the coefficients $e_k$ are the symmetric polynomials defined above. Given the power sums of the roots the coefficien
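The forward recurrence is straightforward to implement. Below is a small, self-contained sketch (the function name and test values are mine, for illustration) that recovers the elementary symmetric polynomials from the power sums using exact rational arithmetic.

```python
from fractions import Fraction

def elementary_from_power_sums(p):
    """Given power sums p[1..n] (p[0] is unused), return e[0..n] via
    Newton's identities:  k*e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * p_i."""
    n = len(p) - 1
    e = [Fraction(1)] + [Fraction(0)] * n   # e_0 = 1
    for k in range(1, n + 1):
        acc = sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1))
        e[k] = acc / k                       # division is exact over Fractions
    return e

# Roots 1, 2, 3 give p1 = 6, p2 = 14, p3 = 36, and should recover
# e1 = 6, e2 = 11, e3 = 6 (coefficients of (x-1)(x-2)(x-3), up to sign).
p = [0, Fraction(6), Fraction(14), Fraction(36)]
print(elementary_from_power_sums(p))   # e = [1, 6, 11, 6], printed as Fractions
```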