https://en.wikipedia.org/wiki/Rip%20van%20Winkle%20cipher
In cryptography, the Rip van Winkle cipher is a provably secure cipher with a finite key, assuming the attacker has only finite storage. The cipher requires a broadcaster (perhaps a numbers station) publicly transmitting a series of random numbers. The sender encrypts a plaintext message by XORing it with the random numbers, then holds the result for some length of time T. At the end of that time, the sender finally transmits the encrypted message. The receiver holds the random numbers for the same length of time T. As soon as the receiver gets the encrypted message, he XORs it with the random numbers he remembers being transmitted time T ago, to recover the original plaintext message. The delay T represents the "key" and must be securely communicated only once. Ueli Maurer says the original Rip van Winkle cipher is completely impractical, but it motivated a new approach to provable security. Sources: J. L. Massey and I. Ingemarsson. The Rip van Winkle cipher - a simple and provably computationally secure cipher with a finite key. In Proc. IEEE Int. Symp. Information Theory (Abstracts), page 146, 1985.
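A minimal sketch of the scheme in Python (assumptions: a deterministic pseudo-random stand-in for the true random public broadcast, byte-valued numbers, and time modeled as an integer step counter; none of this detail is from the original paper):

```python
import random

def broadcast(t: int) -> int:
    """Toy stand-in for the public broadcast: the 'random' byte at time step t.
    A real broadcaster emits true random numbers; this deterministic version
    just lets sender and receiver observe the same stream."""
    return random.Random(1_000_003 * t + 7).randrange(256)

T = 1000                       # the secret delay, communicated securely once
msg = b"MEET AT DAWN"
t0 = 5                         # time step at which the sender starts padding

# Sender: XOR the plaintext with the numbers broadcast at t0, t0+1, ...,
# then hold the result and transmit it at time t0 + T.
ciphertext = bytes(b ^ broadcast(t0 + i) for i, b in enumerate(msg))

# Receiver: on receipt at time t0 + T, XOR with the numbers remembered
# from T steps earlier to recover the plaintext.
recovered = bytes(c ^ broadcast(t0 + i) for i, c in enumerate(ciphertext))
assert recovered == msg
```

The intuition behind the finite-storage assumption: an attacker who cannot store the whole broadcast, and who does not know T, cannot tell which stretch of the stream to keep.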
https://en.wikipedia.org/wiki/Castigliano%27s%20method
Castigliano's method, named after Carlo Alberto Castigliano, is a method for determining the displacements of a linear-elastic system based on the partial derivatives of the energy. He is known for his two theorems. The basic concept may be easy to understand by recalling that a change in energy is equal to the causing force times the resulting displacement. Therefore, the causing force is equal to the change in energy divided by the resulting displacement. Alternatively, the resulting displacement is equal to the change in energy divided by the causing force. Partial derivatives are needed to relate causing forces and resulting displacements to the change in energy. Castigliano's first theorem – for forces in an elastic structure Castigliano's method for calculating forces is an application of his first theorem, which states: If the strain energy of an elastic structure can be expressed as a function of generalised displacements $q_i$, then the partial derivative of the strain energy with respect to a generalised displacement gives the corresponding generalised force $Q_i$. In equation form, $Q_i = \frac{\partial U}{\partial q_i}$, where $U$ is the strain energy. If the force-displacement curve is nonlinear then the complementary strain energy needs to be used instead of strain energy. Castigliano's second theorem – for displacements in a linearly elastic structure Castigliano's method for calculating displacements is an application of his second theorem, which states: If the strain energy of a linearly elastic structure can be expressed as a function of generalised forces $Q_i$, then the partial derivative of the strain energy with respect to a generalised force gives the generalised displacement $q_i$ in the direction of $Q_i$. As above this can also be expressed as: $q_i = \frac{\partial U}{\partial Q_i}$. Examples For a thin, straight cantilever beam with a load $P$ at the end, the displacement $\delta$ at the end can be found by Castigliano's second theorem: $\delta = \frac{\partial U}{\partial P}$, where $E$ is Young's modulus, $I$ is the second moment of area of the cross-section, and $M(x) = Px$ is the expression for the internal moment at a distance $x$ from the free end.
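Carrying the cantilever example through (a standard textbook evaluation, reconstructed here under the usual assumptions of linear elasticity and constant EI):

```latex
% Strain energy of bending for the cantilever, with M(x) = Px:
U = \int_0^L \frac{M(x)^2}{2EI}\,dx
  = \int_0^L \frac{P^2 x^2}{2EI}\,dx
  = \frac{P^2 L^3}{6EI}
% Castigliano's second theorem then gives the tip deflection:
\delta = \frac{\partial U}{\partial P} = \frac{P L^3}{3EI}
```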
https://en.wikipedia.org/wiki/Paper%20composite%20panels
Paper composite panels are a phenolic resin/cellulose composite material made from partially recycled paper and phenolic resin. Multiple layers of paper are soaked in phenolic resin, then molded and baked into net shape in a heated form or press. Originally distributed as a commercial kitchen surface in the 1950s, it has recently been adapted for use in skateboard parks as well as various other applications, such as residential counters, cabinetry, fiberglass cores, guitar fingerboards, signage, exterior wall cladding, and a variety of architectural applications. Composition There are several manufacturers in North America who use a different composition of materials to form the final product. One composition is cellulose fiber and phenolic resin (a type of polymer) which is combined and baked for a smooth hard surface. The natural fibers are made from plant, animal and mineral sources. However, most natural fibers are predominantly cellulosic. Cellulose derived from tree pulp is turned into large rolls of paper. The paper is then soaked in phenolic resin and passes through a heating chamber to be dried out before being rolled back up. Then hundreds of these sheets are laid on top of each other and, with the use of compression molding, the stack is compacted. Because of the resin's thermoset properties the resulting cooled material is hard. Applications It was used on the Boeing 747 for air tables, hydroforming dies, vacuum chuck faces, work holders, and proofing materials. Architecturally, it is used for countertops. It has also been used for whaleboard in fiberglass boat building. Other commercial uses include cutting boards, prep tables, pizza peels, and the dashboard of a pickup truck prototype vehicle. Since the last quarter of the 20th century, phenolic resin and cellulose based compound materials have been used as an alternative to ebony and rosewood to make stringed instrument fingerboards. From 2012 to 2018 guitarmaker Gibson used Richlite,
https://en.wikipedia.org/wiki/Twelvefold%20way
In combinatorics, the twelvefold way is a systematic classification of 12 related enumerative problems concerning two finite sets, which include the classical problems of counting permutations, combinations, multisets, and partitions either of a set or of a number. The idea of the classification is credited to Gian-Carlo Rota, and the name was suggested by Joel Spencer. Overview Let $N$ and $X$ be finite sets. Let $n = |N|$ and $x = |X|$ be the cardinality of the sets. Thus $N$ is an $n$-set, and $X$ is an $x$-set. The general problem we consider is the enumeration of equivalence classes of functions $f : N \to X$. The functions are subject to one of the three following restrictions: No condition: each $a$ in $N$ may be sent by $f$ to any $b$ in $X$, and each $b$ may occur multiple times. $f$ is injective: each value $f(a)$ for $a$ in $N$ must be distinct from every other, and so each $b$ in $X$ may occur at most once in the image of $f$. $f$ is surjective: for each $b$ in $X$ there must be at least one $a$ in $N$ such that $f(a) = b$, thus each $b$ will occur at least once in the image of $f$. (The condition "$f$ is bijective" is only an option when $n = x$; but then it is equivalent to both "$f$ is injective" and "$f$ is surjective".) There are four different equivalence relations which may be defined on the set of functions from $N$ to $X$: equality; equality up to a permutation of $N$; equality up to a permutation of $X$; equality up to permutations of $N$ and $X$. The three conditions on the functions and the four equivalence relations can be paired in $3 \times 4 = 12$ ways. The twelve problems of counting equivalence classes of functions do not involve the same difficulties, and there is not one systematic method for solving them. Two of the problems are trivial (the number of equivalence classes is 0 or 1), five problems have an answer in terms of a multiplicative formula of $n$ and $x$, and the remaining five problems have an answer in terms of combinatorial functions (Stirling numbers and the partition function for a given number of parts). The incorporation of classical enumeration problems into this setting is a
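The twelve counts all have closed forms as usually tabulated; the sketch below computes them for small n and x (the dictionary labels are invented for the example, and n, x >= 1 is assumed):

```python
from math import comb, factorial, prod
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    """Stirling numbers of the second kind, S(n, k), by recurrence."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

@lru_cache(maxsize=None)
def partitions_atmost(n: int, k: int) -> int:
    """Number of partitions of n into at most k parts."""
    if n == 0:
        return 1
    if k == 0 or n < 0:
        return 0
    return partitions_atmost(n, k - 1) + partitions_atmost(n - k, k)

def twelvefold(n: int, x: int) -> dict:
    """All twelve counts for an n-set N and x-set X (assumes n, x >= 1)."""
    return {
        ("any",  "equality"):  x ** n,
        ("inj",  "equality"):  prod(range(x - n + 1, x + 1)) if n <= x else 0,
        ("surj", "equality"):  factorial(x) * stirling2(n, x),
        ("any",  "perm N"):    comb(x + n - 1, n),                   # multisets
        ("inj",  "perm N"):    comb(x, n),                           # subsets
        ("surj", "perm N"):    comb(n - 1, x - 1) if n >= x else 0,  # compositions
        ("any",  "perm X"):    sum(stirling2(n, k) for k in range(x + 1)),
        ("inj",  "perm X"):    1 if n <= x else 0,
        ("surj", "perm X"):    stirling2(n, x),
        ("any",  "perm both"): partitions_atmost(n, x),
        ("inj",  "perm both"): 1 if n <= x else 0,
        ("surj", "perm both"): partitions_atmost(n - x, x) if n >= x else 0,
    }

for key, value in twelvefold(3, 2).items():
    print(key, value)   # e.g. ("any", "equality") -> 8, ("surj", "equality") -> 6
```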
https://en.wikipedia.org/wiki/Robot%20learning
Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques allowing a robot to acquire novel skills or adapt to its environment through learning algorithms. The embodiment of the robot, situated in a physical embedding, provides at the same time specific difficulties (e.g. high-dimensionality, real-time constraints for collecting data and learning) and opportunities for guiding the learning process (e.g. sensorimotor synergies, motor primitives). Examples of skills targeted by learning algorithms include sensorimotor skills such as locomotion, grasping, and active object categorization, as well as interactive skills such as joint manipulation of an object with a human peer, and linguistic skills such as the grounded and situated meaning of human language. Learning can happen either through autonomous self-exploration or through guidance from a human teacher, as in robot learning by imitation, for example. Robot learning is closely related to adaptive control, reinforcement learning, and developmental robotics, which considers the problem of autonomous lifelong acquisition of repertoires of skills. While machine learning is frequently used by computer vision algorithms employed in the context of robotics, these applications are usually not referred to as "robot learning". Projects Maya Cakmak, assistant professor of computer science and engineering at the University of Washington, is trying to create a robot that learns by imitating, a technique called "programming by demonstration". A researcher shows it a cleaning technique for the robot's vision system and it generalizes the cleaning motion from the human demonstration, as well as identifying the "state of dirt" before and after cleaning. Similarly, the Baxter industrial robot can be taught how to do something by grabbing its arm and showing it the desired movements. It can also use deep learning to teach itself to grasp an unknown object. Sha
https://en.wikipedia.org/wiki/McCune%E2%80%93Albright%20syndrome
McCune–Albright syndrome is a complex genetic disorder affecting the bone, skin and endocrine systems. It is a mosaic disease arising from somatic activating mutations in GNAS, which encodes the alpha-subunit of the Gs heterotrimeric G protein. It was first described in 1937 by American pediatrician Donovan James McCune and American endocrinologist Fuller Albright. Signs and symptoms McCune–Albright syndrome is suspected when two or more of the following features are present: Fibrous dysplasia (specifically, polyostotic fibrous dysplasia) Hyperpigmented skin lesions with characteristic features, including jagged "coast of Maine" borders and a tendency to occur along the midline of the body. These lesions are historically termed café au lait macules; however, the term "cafe-au-lait" only describes their appearance on lighter-skinned individuals. Hyperfunctioning endocrine disease Patients may have one or many of these features, which may occur in any combination. As such, the clinical presentation of patients with McCune–Albright syndrome varies greatly depending on the disease features. Various endocrine diseases may present in McCune–Albright syndrome due to increased hormone production. Precocious puberty: The most common endocrinopathy is precocious puberty, which presents in girls (~85%) with recurrent estrogen-producing cysts leading to episodic breast development, growth acceleration, and vaginal bleeding. Precocious puberty may also occur in boys with McCune–Albright syndrome, but is much less common (~10–15%). In children of both sexes, growth acceleration may lead to tall stature in childhood; however, premature bone maturation may lead to early growth plate fusion and short stature in adulthood. Testicular abnormalities: Testicular abnormalities are seen in a majority (~85%) of boys with McCune–Albright syndrome. These typically present with macro-orchidism. On pathology, lesions show Leydig and Sertoli cell hyperplasia. Hyperthyroidism: Hyperthyroidism
https://en.wikipedia.org/wiki/Language%2C%20Truth%2C%20and%20Logic
Language, Truth and Logic is a 1936 book about meaning by the philosopher Alfred Jules Ayer, in which the author defines, explains, and argues for the verification principle of logical positivism, sometimes referred to as the criterion of significance or criterion of meaning. Ayer explains how the principle of verifiability may be applied to the problems of philosophy. Language, Truth and Logic brought some of the ideas of the Vienna Circle and the logical empiricists to the attention of the English-speaking world. Historical background According to Ayer's autobiographical book, Part of My Life, it was work he started in the summer and autumn of 1933 that eventually led to Language, Truth and Logic, specifically Demonstration of the Impossibility of Metaphysics—later published in Mind under the editorship of G.E. Moore. The title of the book was taken ("To some extent plagiarized" according to Ayer) from Friedrich Waismann's Logik, Sprache, Philosophie. Criterion of meaning According to Ayer, analytic statements are tautologies. A tautology is a statement that is necessarily true, true by definition, and true under any conditions. A tautology is a repetition of the meaning of a statement, using different words or symbols. According to Ayer, the statements of logic and mathematics are tautologies. Tautologies are true by definition, and thus their validity does not depend on empirical testing. Synthetic statements, or empirical propositions, assert or deny something about the real world. The validity of synthetic statements is not established merely by the definition of the words or symbols they contain. According to Ayer, if a statement expresses an empirical proposition, then the validity of the proposition is established by its empirical verifiability. Propositions are statements that have conditions under which they can be verified. By the verification principle, meaningful statements have conditions under which their validity can be affirmed or denied. Sta
https://en.wikipedia.org/wiki/Induced%20radioactivity
Induced radioactivity, also called artificial radioactivity or man-made radioactivity, is the process of using radiation to make a previously stable material radioactive. The husband-and-wife team of Irène Joliot-Curie and Frédéric Joliot-Curie discovered induced radioactivity in 1934, and they shared the 1935 Nobel Prize in Chemistry for this discovery. Irène Curie began her research with her parents, Marie Curie and Pierre Curie, studying the natural radioactivity found in radioactive isotopes. Irène branched off from the Curies to study turning stable isotopes into radioactive isotopes by bombarding the stable material with alpha particles (denoted α). The Joliot-Curies showed that when lighter elements, such as boron and aluminium, were bombarded with α-particles, the lighter elements continued to emit radiation even after the α-source was removed. They showed that this radiation consisted of particles carrying one unit positive charge with mass equal to that of an electron, now known as a positron. Neutron activation is the main form of induced radioactivity. It occurs when an atomic nucleus captures one or more free neutrons. This new, heavier isotope may be either stable or unstable (radioactive), depending on the chemical element involved. Because neutrons disintegrate within minutes outside of an atomic nucleus, free neutrons can be obtained only from nuclear decay, nuclear reactions, and high-energy interactions, such as cosmic radiation or particle accelerator emissions. Neutrons that have been slowed through a neutron moderator (thermal neutrons) are more likely to be captured by nuclei than fast neutrons. A less common form of induced radioactivity results from removing a neutron by photodisintegration. In this reaction, a high energy photon (a gamma ray) strikes a nucleus with an energy greater than the binding energy of the nucleus, which releases a neutron. This reaction has a minimum cutoff of 2 MeV (for deuterium) and around 10 MeV for most heav
https://en.wikipedia.org/wiki/Munchers
Munchers is a series of educational/edutainment computer games produced by the Minnesota Educational Computing Consortium (MECC) for several operating systems. The series was popular among American schoolchildren in the 1980s and 1990s and was the recipient of several awards. The two original games in the series were Number Munchers and Word Munchers. The brand name is currently owned by Houghton Mifflin Harcourt, but is defunct. Number Munchers is the first educational game in the Munchers series. Designed to teach basic math skills, it was popular among American school children in the 1980s and 1990s and was the recipient of several awards. An updated 3D version, Math Munchers Deluxe, was released in 1995. Word Munchers is a spin-off of Number Munchers designed to teach basic grammar skills. It was popular among American schoolchildren in the 1980s and 1990s and was widely used as a teaching aid in schools. Though the gameplay was the same as in Number Munchers, specific to Word Munchers were the modes of play, which included parts of speech such as verbs or adjectives. Teachers had the option to select the vowel sounds and how difficult the word sets would be, such as whether to include words that break pronunciation rules. These games were followed up by other titles that focused on areas like fractions (Fraction Munchers) and trivia (Knowledge Munchers Deluxe), and the original games also received deluxe versions. Entries The Munchers series included: Word Munchers Number Munchers Fraction Munchers Super Munchers Math Munchers Deluxe (a remake of Number Munchers) Word Munchers Deluxe (also a remake of Word Munchers) Math Munchers for the 21st Century Word Munchers for the 21st Century Knowledge Munchers Deluxe (originally released as "Trivia Munchers Deluxe") Troggle Trouble Math (a spin-off) The original version only allowed navigation through the keyboard arrow keys. Later versions featured better graphics and added mouse support. Gamep
https://en.wikipedia.org/wiki/Diesel%20generator
A diesel generator (DG) (also known as a diesel genset) is the combination of a diesel engine with an electric generator (often an alternator) to generate electrical energy. This is a specific case of engine generator. A diesel compression-ignition engine is usually designed to run on diesel fuel, but some types are adapted for other liquid fuels or natural gas (CNG). Diesel generating sets are used in places without connection to a power grid or as an emergency power supply if the grid fails, as well as for more complex applications such as peak-lopping, grid support, and export to the power grid. Proper diesel generator sizing is crucial to avoid low-load operation or power shortages. Sizing is complicated by the characteristics of modern electronics, specifically non-linear loads. In size ranges around 50 MW and above, an open cycle gas turbine is more efficient at full load than an array of diesel engines, and far more compact, with comparable capital costs; but for regular part-loading, even at these power levels, diesel arrays are sometimes preferred to open cycle gas turbines, due to their superior efficiencies. Diesel generator set The packaged combination of a diesel engine, a generator, and various ancillary devices (such as base, canopy, sound attenuation, control systems, circuit breakers, jacket water heaters, and starting system) is referred to as a "generating set" or a "genset" for short. Set sizes range from 8 to 30 kW (also 8 to 30 kVA single phase) for homes, small shops, and offices, with the larger industrial generators from 8 kW (11 kVA) up to 2,000 kW (2,500 kVA three phase) used for office complexes, factories, and other industrial facilities. A 2,000 kW set can be housed in an ISO container with a fuel tank, controls, power distribution equipment and all other equipment needed to operate as a standalone power station or as a standby backup to grid power. These units, referred to as power modules, are gensets on large triple axle trailers weighing or
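The kW and kVA ratings quoted above are related through the power factor; a small sketch of that arithmetic (the 0.8 power factor is the usual three-phase genset rating convention, assumed here, and matches the 2,000 kW / 2,500 kVA pairing in the text):

```python
def kva_rating(kw: float, power_factor: float = 0.8) -> float:
    """Apparent power (kVA) corresponding to a real-power rating (kW),
    assuming the stated power factor (0.8 is the common genset convention)."""
    return kw / power_factor

print(kva_rating(2000))  # 2500.0 kVA, matching the 2,000 kW / 2,500 kVA pairing
```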
https://en.wikipedia.org/wiki/Fr%C3%B6licher%E2%80%93Nijenhuis%20bracket
In mathematics, the Frölicher–Nijenhuis bracket is an extension of the Lie bracket of vector fields to vector-valued differential forms on a differentiable manifold. It is useful in the study of connections, notably the Ehresmann connection, as well as in the more general study of projections in the tangent bundle. It was introduced by Alfred Frölicher and Albert Nijenhuis (1956) and is related to the work of Schouten (1940). It is related to but not the same as the Nijenhuis–Richardson bracket and the Schouten–Nijenhuis bracket. Definition Let $\Omega^*(M) = \bigoplus_k \Omega^k(M)$ be the sheaf of exterior algebras of differential forms on a smooth manifold M. This is a graded algebra in which forms are graded by degree. A graded derivation of degree ℓ is a mapping $D : \Omega^*(M) \to \Omega^{*+\ell}(M)$ which is linear with respect to constants and satisfies $D(\alpha \wedge \beta) = D\alpha \wedge \beta + (-1)^{\ell |\alpha|} \alpha \wedge D\beta$. Thus, in particular, the interior product with a vector defines a graded derivation of degree ℓ = −1, whereas the exterior derivative is a graded derivation of degree ℓ = 1. The vector space of all derivations of degree ℓ is denoted by $\mathrm{Der}_\ell\, \Omega^*(M)$. The direct sum of these spaces is a graded vector space whose homogeneous components consist of all graded derivations of a given degree; it is denoted $\mathrm{Der}\, \Omega^*(M) = \bigoplus_\ell \mathrm{Der}_\ell\, \Omega^*(M)$. This forms a graded Lie superalgebra under the anticommutator of derivations, defined on homogeneous derivations D1 and D2 of degrees d1 and d2, respectively, by $[D_1, D_2] = D_1 \circ D_2 - (-1)^{d_1 d_2} D_2 \circ D_1$. Any vector-valued differential form K in Ωk(M, TM) with values in the tangent bundle of M defines a graded derivation of degree k − 1, denoted by iK, and called the insertion operator. For ω ∈ Ωℓ(M), $(i_K \omega)(X_1, \ldots, X_{k+\ell-1}) = \frac{1}{k!\,(\ell-1)!} \sum_{\sigma} \operatorname{sign}(\sigma)\, \omega\bigl(K(X_{\sigma(1)}, \ldots, X_{\sigma(k)}), X_{\sigma(k+1)}, \ldots, X_{\sigma(k+\ell-1)}\bigr)$. The Nijenhuis–Lie derivative along K ∈ Ωk(M, TM) is defined by $\mathcal{L}_K = [i_K, d] = i_K \circ d - (-1)^{k-1}\, d \circ i_K$, where d is the exterior derivative and iK is the insertion operator. The Frölicher–Nijenhuis bracket is defined to be the unique vector-valued differential form $[K, L] \in \Omega^{k+\ell}(M, TM)$ such that $\mathcal{L}_{[K,L]} = [\mathcal{L}_K, \mathcal{L}_L]$. Hence, $\mathcal{L}_{[K,L]} = \mathcal{L}_K \mathcal{L}_L - (-1)^{k\ell}\, \mathcal{L}_L \mathcal{L}_K$. If k = 0, so that K ∈ Ω0(M, TM) is a vector field, the usual homotopy formula for the Lie derivative is recovered: $\mathcal{L}_K = i_K\, d + d\, i_K$. If k = ℓ = 1, so that K, L ∈ Ω1(M, TM), one has for any vector fields the formula given below.
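The k = ℓ = 1 formula, reconstructed in its standard form from the literature on the Frölicher–Nijenhuis bracket (a sketch, not verbatim from this article):

```latex
% Frölicher–Nijenhuis bracket of K, L \in \Omega^1(M, TM),
% evaluated on vector fields X, Y:
[K, L](X, Y) = [KX, LY] - [KY, LX]
             - L([KX, Y] - [KY, X])
             - K([LX, Y] - [LY, X])
             + (LK + KL)[X, Y]
% For K = L this reduces to [K, K] = 2 N_K, twice the Nijenhuis tensor of K.
```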
https://en.wikipedia.org/wiki/Plasmoid
A plasmoid is a coherent structure of plasma and magnetic fields. Plasmoids have been proposed to explain natural phenomena such as ball lightning, magnetic bubbles in the magnetosphere, and objects in cometary tails, in the solar wind, in the solar atmosphere, and in the heliospheric current sheet. Plasmoids produced in the laboratory include field-reversed configurations, spheromaks, and the structures formed in dense plasma focuses. The word plasmoid was coined in 1956 by Winston H. Bostick (1916–1991) to mean a "plasma-magnetic entity": The plasma is emitted not as an amorphous blob, but in the form of a torus. We shall take the liberty of calling this toroidal structure a plasmoid, a word which means plasma-magnetic entity. The word plasmoid will be employed as a generic term for all plasma-magnetic entities. Plasmoid characteristics Bostick wrote: Plasmoids appear to be plasma cylinders elongated in the direction of the magnetic field. Plasmoids possess a measurable magnetic moment, a measurable translational speed, a transverse electric field, and a measurable size. Plasmoids can interact with each other, seemingly by reflecting off one another. Their orbits can also be made to curve toward one another. Plasmoids can be made to spiral to a stop if projected into a gas at about 10−3 mm Hg pressure. Plasmoids can also be made to smash each other into fragments. There is some scant evidence to support the hypothesis that they undergo fission and possess spin. A plasmoid has an internal pressure stemming from both the gas pressure of the plasma and the magnetic pressure of the field. To maintain an approximately static plasmoid radius, this pressure must be balanced by an external confining pressure. In a field-free vacuum, for example, a plasmoid will rapidly expand and dissipate. Plasmoids have been formed in discharges with local magnetic field strengths on the order of 16,000 tesla. Cosmic applications Bostick went on to apply his theory of plasmoids to astrophysics pheno
https://en.wikipedia.org/wiki/Thermal%20desorption
Thermal desorption is an environmental remediation technology that utilizes heat to increase the volatility of contaminants such that they can be removed (separated) from the solid matrix (typically soil, sludge or filter cake). The volatilized contaminants are then either collected or thermally destroyed. A thermal desorption system therefore has two major components: the desorber itself and the offgas treatment system. Thermal desorption is not incineration. History Thermal desorption first appeared as an environmental treatment technology in 1985, when it was specified in the Record of Decision for the McKin Company Superfund site within the Royal River watershed in Maine. It is frequently referred to as "low temp" thermal desorption to differentiate it from high temperature incineration. An early direct-fired thermal desorption project was the treatment of 8000 tons of toxaphene (a chlorinated pesticide) contaminated sandy soil at the S&S Flying Services site in Marianna, Florida, in 1990, with later projects exceeding 170,000 tons at the Cape Fear coal tar site in 1999. A status report from the United States Environmental Protection Agency shows that thermal desorption had been used at 69 Superfund sites through FY2000. In addition, hundreds of remediation projects have been completed using thermal desorption at non-Superfund sites. Among on-site treatment options, only incineration and stabilization have been used at more Superfund sites. Incineration suffers from poor public acceptance. Stabilization does not provide a permanent remedy, since the contaminants are still on site. Thermal desorption is a widely accepted technology that provides a permanent solution at an economically competitive cost. The world's first large-scale thermal desorption plant for the treatment of mercury-containing wastes was erected in Wölsau for the remediation of the Chemical Factory Marktredwitz (founded in 1788), which was considered to be the oldest in Germany. Operation commenced in
https://en.wikipedia.org/wiki/Dideoxynucleotide
Dideoxynucleotides are chain-elongating inhibitors of DNA polymerase, used in the Sanger method for DNA sequencing. They are also known as 2',3'-dideoxynucleotides because both the 2' and 3' positions on the ribose lack hydroxyl groups, and are abbreviated as ddNTPs (ddGTP, ddATP, ddTTP and ddCTP). Role in the Sanger method The Sanger method is used to amplify a target segment of DNA, so that the DNA sequence can be determined precisely. The ddNTPs incorporated in the reaction simply terminate the synthesis of a growing DNA strand, resulting in partially replicated DNA fragments. This is because DNA polymerase requires the 3' OH group of the growing chain and the 5' phosphate group of the incoming dNTP to create a phosphodiester bond. Sometimes the DNA polymerase will incorporate a ddNTP, and the absence of the 3' OH group will interrupt the condensation reaction between the 5' phosphate (following the cleavage of pyrophosphate) of the incoming nucleotide and the 3' hydroxyl group of the previous nucleotide on the growing strand. This condensation reaction would normally occur with the incorporation of a non-modified dNTP by DNA polymerase. In the simplest of terms, the nucleophilic attack of the 3' OH group leads to the addition of a nucleotide onto a growing chain. The absence of the 3' hydroxyl group prevents this nucleophilic attack from happening, disabling the DNA polymerase's ability to continue with its function. This discovery led to the appropriate name "chain-terminating nucleotides". The dideoxyribonucleotides do not have a 3' hydroxyl group, hence no further chain elongation can occur once this dideoxynucleotide is on the chain. This leads to the termination of the DNA sequence. Thus, these molecules form the basis of the dideoxy chain-termination method of DNA sequencing, which was reported by Frederick Sanger and his team in 1977 as an extension of earlier work. Sanger's approach was described in 2001 as one of the two fundamental method
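A toy illustration of the termination logic described above (the template, termination probability, and run count are invented; real sequencing reads the fragment lengths from a gel or capillary trace rather than simulating them):

```python
import random

def sanger_fragments(template: str, ddntp: str, p_term: float = 0.3, runs: int = 200):
    """Toy chain-termination run for one ddNTP channel: synthesis of the
    complementary strand stops whenever a dideoxy base is incorporated."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    lengths = set()
    for _ in range(runs):
        strand = []
        for base in template:
            strand.append(comp[base])
            # The ddNTP competes with the normal dNTP at this position;
            # if incorporated, there is no 3'-OH for the next bond.
            if comp[base] == ddntp and random.random() < p_term:
                break
        lengths.add(len(strand))
    return sorted(lengths)

# Fragment lengths ending in ddTTP (plus full-length products that escaped
# termination); each distinct length marks a T in the complementary strand.
print(sanger_fragments("ATGGCAT", "T"))
```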
https://en.wikipedia.org/wiki/Category%201%20cable
Category 1 cable, also known as Cat 1, Level 1, or voice-grade copper, is a grade of unshielded twisted pair cabling designed for telephone communications, and at one time was the most common on-premises wiring. The maximum frequency suitable for transmission over Cat 1 cable is 1 MHz, but Cat 1 is not currently considered adequate for data transmission (though it was at one time used for that purpose on the Apple Macintosh starting in the late 1980s, in the form of Farallon Computing's (later Netopia's) PhoneNet, an implementation of Apple's LocalTalk networking hardware standard). Although not an official category standard established by TIA/EIA, Category 1 has become the de facto name given to Level 1 cables originally defined by Anixter International, the distributor. Cat 1 cable was typically used for networks that carry only voice traffic, for example telephones. Official TIA/EIA-568 standards have only been established for cables of Category 3 ratings or above. See also Category 2 cable Category 3 cable Category 4 cable Category 5 cable
https://en.wikipedia.org/wiki/Don%27t%20repeat%20yourself
"Don't repeat yourself" (DRY) is a principle of software development aimed at reducing repetition of information which is likely to change, replacing it with abstractions that are less likely to change, or using data normalization which avoids redundancy in the first place. The DRY principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system". The principle has been formulated by Andy Hunt and Dave Thomas in their book The Pragmatic Programmer. They apply it quite broadly to include database schemas, test plans, the build system, even documentation. When the DRY principle is applied successfully, a modification of any single element of a system does not require a change in other logically unrelated elements. Additionally, elements that are logically related all change predictably and uniformly, and are thus kept in sync. Besides using methods and subroutines in their code, Thomas and Hunt rely on code generators, automatic build systems, and scripting languages to observe the DRY principle across layers. Single choice principle A particular case of DRY is the single choice principle. It was defined by Bertrand Meyer as: "Whenever a software system must support a set of alternatives, one and only one module in the system should know their exhaustive list." It was applied when designing Eiffel. Alternatives WET The opposing view to DRY is called WET, a backronym commonly taken to stand for write everything twice (alternatively write every time, we enjoy typing or waste everyone's time). WET solutions are common in multi-tiered architectures where a developer may be tasked with, for example, adding a comment field on a form in a web application. The text string "comment" might be repeated in the label, the HTML tag, in a read function name, a private variable, database DDL, queries, and so on. A DRY approach eliminates that redundancy by using frameworks that reduce or eliminate all those editing t
https://en.wikipedia.org/wiki/Certified%20software%20development%20professional
Certified Software Development Professional (CSDP) is a vendor-neutral professional certification in software engineering developed by the IEEE Computer Society for experienced software engineering professionals. The certification was offered globally from 2001 through December 2014. The certification program constituted an element of the Computer Society's major efforts in the area of software engineering professionalism, along with the IEEE-CS and ACM Software Engineering 2004 (SE2004) Undergraduate Curricula Recommendations and the Guide to the Software Engineering Body of Knowledge (SWEBOK Guide 2004), completed two years later. As a further development of these elements, and to facilitate the global portability of software engineering certification, the International Standard ISO/IEC 24773:2008 "Software engineering -- Certification of software engineering professionals -- Comparison framework" was developed between 2005 and 2008. (See an overview of this ISO/IEC JTC 1 and IEEE standardization effort in the article published by Stephen B. Seidman, CSDP.) The standard was formulated in such a way that the CSDP certification scheme could be recognized as basically aligned with it soon after the standard's release date, 2008-09-01. Several later revisions of the CSDP certification were undertaken with the aim of making the alignment more complete. In 2019, ISO/IEC 24773:2008 was withdrawn and revised (by ISO/IEC 24773-1:2019). The certification was initially offered by the IEEE Computer Society to experienced software engineering and software development practitioners globally in 2001 in the course of the certification examination beta-testing. The CSDP certification program was officially approved in 2002. After December 2014 the certification program was discontinued; all certificates already issued remain valid indefinitely. A number of new similar certifications were introduced by the IEEE Computer Society, includi
https://en.wikipedia.org/wiki/Verificationism
Verificationism, also known as the verification principle or the verifiability criterion of meaning, is the philosophical doctrine which asserts that a statement is meaningful only if it is either empirically verifiable (i.e. confirmed through the senses) or a truth of logic (e.g. tautologies). Verificationism rejects statements of metaphysics, theology, ethics, and aesthetics as cognitively meaningless. Such statements may be meaningful in influencing emotions or behavior, but not in terms of conveying truth value, information, or factual content. Verificationism was a central theme of logical positivism, a movement in analytic philosophy that emerged in the 1920s among philosophers who sought to unify philosophy and science under a common naturalistic theory of knowledge. Origins Although earlier philosophical principles which aim to ground scientific theory in some verifiable experience are found within the work of American pragmatist C.S. Peirce and that of French conventionalist Pierre Duhem, who fostered instrumentalism, the project of verificationism was launched by the logical positivists who, emerging from the Berlin Circle and the Vienna Circle in the 1920s, sought an epistemology whereby philosophical discourse would be, in their perception, as authoritative and meaningful as an empirical science. Logical positivists garnered the verifiability criterion of cognitive meaningfulness from Ludwig Wittgenstein's philosophy of language posed in his 1921 book Tractatus, and, led by Bertrand Russell, sought to reformulate the analytic–synthetic distinction in a way that would reduce mathematics and logic to semantical conventions. This would be pivotal to verificationism, in that logic and mathematics would otherwise be classified as synthetic a priori knowledge and defined as "meaningless" under verificationism. Seeking grounding in such empiricism as of David Hume, Auguste Comte, and Ernst Mach—along with the positivism of the latter two—they borrowed some p
https://en.wikipedia.org/wiki/Hazard%20analysis
A hazard analysis is used as the first step in a process used to assess risk. The result of a hazard analysis is the identification of different types of hazards. A hazard is a potential condition: it either exists or does not (its probability is 1 or 0). Alone or in combination with other hazards (sometimes called events) and conditions, it may become an actual functional failure or accident (mishap). The way this happens in one particular sequence is called a scenario. A scenario has a probability (between 1 and 0) of occurrence. Often a system has many potential failure scenarios. Each scenario is also assigned a classification, based on the worst-case severity of the end condition. Risk is the combination of probability and severity. Preliminary risk levels can be provided in the hazard analysis. The validation, more precise prediction (verification), and acceptance of risk are determined in the risk assessment (analysis). The main goal of both is to provide the best selection of means of controlling or eliminating the risk. The term is used in several engineering specialties, including avionics, food safety, occupational safety and health, process safety, and reliability engineering. Hazards and risk A hazard is defined as a "condition, event, or circumstance that could lead to or contribute to an unplanned or undesirable event." Seldom does a single hazard cause an accident or a functional failure. More often an accident or operational failure occurs as the result of a sequence of causes. A hazard analysis will consider system state, for example operating environment, as well as failures or malfunctions. While in some cases safety or reliability risk can be eliminated, in most cases a certain degree of risk must be accepted. In order to quantify expected costs before the fact, the potential consequences and the probability of occurrence must be considered. Assessment of risk is made by combining the severity of consequence with the likelihood of occurrence in a m
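A toy sketch of the severity/likelihood combination step (the category names, scores, and thresholds are invented for illustration; real programs take their matrix from a governing standard such as MIL-STD-882):

```python
# Hypothetical ordinal scales: higher numbers are worse / more likely.
SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
LIKELIHOOD = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4, "frequent": 5}

def risk_level(severity: str, likelihood: str) -> str:
    """Combine severity and likelihood into a coarse risk class."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 12:
        return "high"       # unacceptable without mitigation
    if score >= 6:
        return "serious"    # requires management acceptance
    return "medium/low"     # acceptable with routine review

print(risk_level("critical", "occasional"))  # -> serious (3 * 3 = 9)
```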
https://en.wikipedia.org/wiki/Adam%20Logan
Adam Logan (born 1975 in Kingston, Ontario) is a research mathematician and a top Canadian Scrabble player. He won the World Scrabble Championship in 2005, beating Pakorn Nemitrmansuk of Thailand 3–0 in the final. He is the only player to have won the Canadian Scrabble Championship five times (1996, 2005, 2008, 2013 and 2016). He was also the winner of the 1996 National Scrabble Championship, North America's top-rated player in 1997, and the winner of the Collins division of the 2014 North American Scrabble Championship. Since his competitive career began in 1985, Logan has played nearly 2200 tournament games, compiling a winning percentage of over 68% and earning just over $100,000 in prize money. He was a Putnam Fellow in 1992 and 1993. Logan completed his first degree, in mathematics, at Princeton University in 1995 and received a PhD from Harvard University in 1999. He did postdoctoral work at McGill University from 2002 to 2003. From 2008 to 2009 he was employed as a quantitative analyst at D. E. Shaw & Co. in New York City. He works for the Tutte Institute for Mathematics and Computing in Ottawa, Ontario, Canada.
https://en.wikipedia.org/wiki/Ethyl%20heptanoate
Ethyl heptanoate is the ester resulting from the condensation of heptanoic acid and ethanol. It is used in the flavor industry because of its odor that is similar to grape.
https://en.wikipedia.org/wiki/Lampricide
A lampricide is any chemical designed to target the larvae of lampreys in river systems before they develop into parasitic adults. One lampricide is used in the headwaters of Lake Champlain and the Great Lakes to control the sea lamprey (Petromyzon marinus), an invasive species in these lakes. TFM (3-trifluoromethyl-4-nitrophenol) is the main chemical used for this purpose. As it is hydrophobic, it passes through biological membranes. TFM is a metabolic uncoupler—that is, TFM separates the electron transport chain from ATP synthesis, resulting in the failure of aerobic respiration. It accomplishes this by disrupting the electrochemical gradient that powers ATP synthase: as an acid, it donates H+ ions to the mitochondrial matrix. The electron transport chain is not affected and continues using oxygen, but without producing ATP. While the general opinion is that TFM typically does not harm other fish (due to the distant relationship between true fish and lampreys), lampricide can be problematic for many amphibians, such as mudpuppies (genus Necturus), which often share the same habitats. Also, some more "primitive" species of fish, such as the sturgeons of the Great Lakes, are sensitive to chemicals such as TFM.
https://en.wikipedia.org/wiki/Blue%20Flag%20beach
The Blue Flag is a certification by the Foundation for Environmental Education (FEE) that a beach, marina, or sustainable boating tourism operator meets its standards. The Blue Flag is a trademark owned by FEE, which is a not-for-profit non-governmental organisation consisting of 65 organisations in 77 member countries. FEE's Blue Flag criteria include standards for quality, safety, environmental education and information, the provision of services and general environmental management criteria. The Blue Flag is sought for beaches, marinas, and sustainable boating tourism operators as an indication of their high environmental and quality standards. Certificates, which FEE refers to as awards, are issued on an annual basis to beaches and marinas of FEE member countries. The awards are announced yearly on 5 June for Europe, Canada, Morocco, Tunisia, and other countries in a similar geographic location, and on 1 November for the Caribbean, New Zealand, South Africa, and other countries in the southern hemisphere. In the European Union, the water quality standards are incorporated in the EC Water Framework Directive. As of 2016, Spain has had more Blue Flag beaches than any other country every year since the awards began in 1987. Blue Flags awarded 2015 Awards As a result of the 2015 awards, a total of 4,154 Blue Flags are waving around the world. Table of Blue Flags in force 2015 The table below lists the Blue Flags (both for beaches and marinas) awarded and in force in 2015. The table can be sorted to show the total number of Blue Flags per country and also the number of Blue Flags per population, per area or per the length of the coastline of each country. History The Blue Flag was created in France in 1985, as a pilot scheme from the Office of the Foundation for Environmental Education in Europe (Office français de la Fondation pour l'Education à l'Environnement en Europe) where French coastal municipalities were awarded the Blue Flag on the basis of criter
https://en.wikipedia.org/wiki/Betanin
Betanin, or Beetroot Red, is a red glycosidic food dye obtained from beets; its aglycone, obtained by hydrolyzing away the glucose molecule, is betanidin. As a food additive, its E number is E162. The color of betanin depends on pH; between four and five it is bright bluish-red, becoming blue-violet as the pH increases. Once the pH reaches alkaline levels, betanin degrades by hydrolysis, resulting in a yellow-brown color. Betanin is a betalain pigment, together with isobetanin, probetanin, and neobetanin. Other pigments contained in beet are indicaxanthin and vulgaxanthins. Sources Betanin is usually obtained from the extract of beet juice; the concentration of betanin in red beet can reach 300–600 mg/kg. Other dietary sources of betanin and other betalains include the Opuntia cactus, Swiss chard, and the leaves of some strains of amaranth. Uses The most common uses of betanins are in coloring ice cream and powdered soft drink beverages; other uses are in some sugar confectionery, e.g. fondants, sugar strands, sugar coatings, and fruit or cream fillings. In hot processed candies, it can be used if added at the final part of the processing. Betanin is also used in soups as well as tomato and bacon products. Betanin has "not been implicated as causing clinical food allergy when used as a coloring agent". Betanin can also be used for coloring meat and sausages. Betanin has also been shown to have antimicrobial activity and can be used as a natural antimicrobial agent in food preservation. Degradation and stability Betanin degrades when subjected to light, heat, and oxygen; therefore, it is used in frozen products, products with short shelf life, or products sold in dry state. Betanin can survive pasteurization when in products with high sugar content. Its sensitivity to oxygen is highest in products with a high water content and/or containing metal cations (e.g. iron and copper); antioxidants like ascorbic acid and sequestrants can slow this process down, together w
https://en.wikipedia.org/wiki/Manufacture%20of%20cheddar%20cheese
The manufacture of Cheddar cheese includes the process of cheddaring, which makes this cheese unique. Cheddar cheese is named for the village of Cheddar in Somerset in South West England where it was originally manufactured. The manufacturing of this cheese has since spread around the world and thus the name has become generically known. Food ingredients used during manufacture Milk In general, the milk is raw milk (whole or 3.3%). The milk must be "ripened" before adding in the rennet. The term ripening means allowing the lactic acid bacteria (LAB) to turn lactose into lactic acid, which lowers the pH of the solution, greatly aiding in the coagulation of the milk. This is vital for the production of cheese curds that are later formed into cheddar. Rennet/chymosin/rennin Rennet is an enzyme, originally collected from the stomach of a milk-fed calf (natural rennet). This enzyme is responsible for the coagulation of the milk proteins to produce curds. Cheese produced this way is neither vegetarian nor kosher. Coagulation can also be achieved using acids, but this method yields lower-quality cheddar. The two key components of natural rennet are chymosin and bovine pepsin. Extracts from plants such as nettles were found to produce similar effects and have been used in some types of cheese-making (vegetable rennet). When calf-rennet grew scarce in the 1960s, scientists developed a synthesized type of chymosin by fermenting certain bacteria or fungi (microbial rennet), but this was also not useful for all types of cheese-making. A solution using recombinant-gene (GMO microbial rennet) technology was developed and approved by the U.S. Food and Drug Administration in 1990. This splices the calf-gene for producing chymosin into the genes of certain bacteria, yeasts, or fungi, producing pure chymosin. Equipment Stainless steel knives are used to uniformly cut the curds at various points during the process. The device is a stainless steel frame with stainless st
https://en.wikipedia.org/wiki/Sudan%20stain
Sudan stains and Sudan dyes are synthetic organic compounds that are used as dyes for various plastics (plastic colorants) and are also used to stain sudanophilic biological samples, usually lipids. Sudan II, Sudan III, Sudan IV, Oil Red O, and Sudan Black B are important members of this class of compounds. Staining Sudan dyes have a high affinity for fats; therefore they are used to demonstrate triglycerides, lipids, and lipoproteins. Alcoholic solutions of Sudan dyes are usually used; however, pyridine solutions can be used in some situations as well. The Sudan stain test is often used to determine the level of fecal fat to diagnose steatorrhea. A small sample is dissolved in water or saline, glacial acetic acid is added to hydrolyze the insoluble salts of fatty acids, a few drops of an alcoholic solution of Sudan III are added, the sample is spread on a microscope slide, and heated to boiling twice. Normally a stool sample should show only a few drops of red-orange stained fat under the microscope. The method is only semiquantitative but, due to its simplicity, it is used for screening. Dyeing Since they are characteristically oil- and fat-soluble, Sudan dyes are also useful for dyeing plastics and fabrics. Sudan dyes I–IV and Sudan Red G consist of arylazo-substituted naphthols. Such compounds are known to exist as a pair of azo and hydrazone tautomers. Safety Some spices exported from Asia have been adulterated with Sudan dyes, especially Sudan I and Sudan III, to enhance their colors. This finding has led to controversy because some Sudan dyes are carcinogenic in rats.
https://en.wikipedia.org/wiki/Cant%20hook
A cant hook or pike or a hooked pike is a traditional logging tool consisting of a wooden lever handle with a movable metal hook called a dog at one end, used for handling and turning logs and cants, especially in sawmills. A cant dog has a blunt end, or possibly small teeth for friction. A peavey or peavey hook is similar but has a spike in the working end of the handle. Many lumberjacks use the terms interchangeably. A peavey is generally from long. The spike is rammed into a log, then a hook (at the end of an arm attached to a pivot a short distance up the handle) grabs the log at a second place. Once engaged, the handle gives the operator leverage to roll or slide or float the log to a new place. The peavey was named for blacksmith Joseph Peavey of Maine who invented the tool as a refinement to the cant hook in the late 1850s. The Peavey Manufacturing Co. is still located in Eddington, Maine, and manufactures several variations. From early times to about 1910, the peavey was written about with various spellings such as "pevy" and "pivie". Description A logging tool description from the Lumberman's Museum at Patten, Maine, reads in part: "A cant dog or cant hook was used for lifting, turning, and prying logs when loading sleds and on the drive. At first, a swivel hook on a pole with nothing to hold it in position was used. This was called a swing dingle." However, the term swing dingle is more often published as being a type of logging sled. These early types are also called a ring dog or ring dog cant hook. In 1858, Joseph Peavey, a blacksmith in Stillwater, Maine, made a rigid clasp to encircle the cant dog handle with the hook on one side. It moved up and down, but not sideways. Loggers have used it ever since. While this tool has its origins in the logging industry, many arborists, tree care professionals, landowners and portable sawmill operators now use cant hooks for moving logs and timber. Gallery See also Boat hook Drawknife Forestry hook Pic
https://en.wikipedia.org/wiki/DNA%20machine
A DNA machine is a molecular machine constructed from DNA. Research into DNA machines was pioneered in the late 1980s by Nadrian Seeman and co-workers from New York University. DNA is used because of the numerous biological tools already found in nature that can affect DNA, and the immense knowledge of how DNA works previously researched by biochemists. DNA machines can be logically designed since DNA assembly of the double helix is based on strict rules of base pairing that allow portions of the strand to be predictably connected based on their sequence. This "selective stickiness" is a key advantage in the construction of DNA machines. An example of a DNA machine was reported by Bernard Yurke and co-workers at Lucent Technologies in the year 2000, who constructed molecular tweezers out of DNA. The DNA tweezers contain three strands: A, B and C. Strand A latches onto half of strand B and half of strand C, and so it joins them all together. Strand A acts as a hinge so that the two "arms" — AB and AC — can move. The structure floats with its arms open wide. They can be pulled shut by adding a fourth strand of DNA (D) "programmed" to stick to both of the dangling, unpaired sections of strands B and C. The closing of the tweezers was proven by tagging strand A at either end with light-emitting molecules that do not emit light when they are close together. To re-open the tweezers, a further strand (E) is added, with the right sequence to pair up with strand D. Once paired, the D and E strands have no connection to the machine BAC, so they float away. The DNA machine can be opened and closed repeatedly by cycling between strands D and E. These tweezers can be used for removing drugs from inside fullerenes as well as from a self-assembled DNA tetrahedron. The state of the device can be determined by measuring the separation between donor and acceptor fluorophores using FRET. DNA walkers are another type of DNA machine. See also DNA nanotechnology
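A toy model of the open/close cycle just described (purely illustrative; the strand names follow the text, while the "waste duplex" terminology is an assumption borrowed from the strand-displacement literature):

```python
class DNATweezers:
    """Minimal state machine for the Yurke tweezers: strand D closes the
    arms, strand E strips D off again, releasing a D-E waste duplex."""
    def __init__(self):
        self.state = "open"
        self.waste = []          # D-E duplexes released, one per cycle

    def add_strand(self, strand: str):
        if strand == "D" and self.state == "open":
            self.state = "closed"    # D hybridizes to the free ends of B and C
        elif strand == "E" and self.state == "closed":
            self.state = "open"      # E pairs with D and pulls it off the machine
            self.waste.append("D-E")
        else:
            raise ValueError(f"strand {strand} has no effect in state {self.state}")

tw = DNATweezers()
for fuel in ["D", "E", "D", "E"]:    # repeated cycling, as in the experiment
    tw.add_strand(fuel)
print(tw.state, tw.waste)            # open ['D-E', 'D-E']
```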
https://en.wikipedia.org/wiki/Decimal%20representation
A decimal representation of a non-negative real number r is its expression as a sequence of symbols consisting of decimal digits traditionally written with a single separator: $r = b_k b_{k-1} \cdots b_0 . a_1 a_2 \cdots$ Here . is the decimal separator, k is a nonnegative integer, and $b_0, \ldots, b_k, a_1, a_2, \ldots$ are digits, which are symbols representing integers in the range 0, ..., 9. Commonly, $b_k \neq 0$ if $k \geq 1$. The sequence of the $a_i$—the digits after the dot—is generally infinite. If it is finite, the lacking digits are assumed to be 0. If all $a_i$ are 0, the separator is also omitted, resulting in a finite sequence of digits, which represents a natural number. The decimal representation represents the infinite sum: $r = \sum_{i=0}^{k} b_i 10^i + \sum_{i=1}^{\infty} \frac{a_i}{10^i}$. Every nonnegative real number has at least one such representation; it has two such representations (with $b_k \neq 0$ if $k \geq 1$) if and only if one has a trailing infinite sequence of 0s, and the other has a trailing infinite sequence of 9s. For having a one-to-one correspondence between nonnegative real numbers and decimal representations, decimal representations with a trailing infinite sequence of 9s are sometimes excluded. Integer and fractional parts The natural number $n_0 = \sum_{i=0}^{k} b_i 10^i$ is called the integer part of r, and is denoted by $n_0$ in the remainder of this article. The sequence of the $a_i$ represents the number $\sum_{i=1}^{\infty} a_i/10^i$, which belongs to the interval $[0, 1)$ and is called the fractional part of r (except when all $a_i$ are equal to 9). Finite decimal approximations Any real number can be approximated to any desired degree of accuracy by rational numbers with finite decimal representations. Assume $x \geq 0$. Then for every integer $n \geq 1$ there is a finite decimal $r_n = a_0 . a_1 a_2 \cdots a_n$ such that $r_n \leq x < r_n + \frac{1}{10^n}$. Proof: Let $r_n = p/10^n$, where $p = \lfloor 10^n x \rfloor$. Then $p \leq 10^n x < p + 1$, and the result follows from dividing all sides by $10^n$. (The fact that $r_n$ has a finite decimal representation is easily established.) Non-uniqueness of decimal representation and notational conventions Some real numbers have two infinite decimal representations. For example, the number 1 may be equally represented by 1.000... as by 0.999... (where the infinite sequences of trailing 0's or 9's, respectively, are repre
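A direct sketch of the finite-approximation construction from the proof (using Python's Fraction so the floor is exact rather than subject to floating-point error):

```python
from fractions import Fraction
from math import floor

def finite_decimal(x: Fraction, n: int) -> Fraction:
    """r_n = floor(10^n * x) / 10^n, the n-digit truncation of x,
    satisfying r_n <= x < r_n + 10**-n as in the proof above."""
    p = floor(x * 10**n)
    return Fraction(p, 10**n)

x = Fraction(1, 3)
for n in range(1, 5):
    rn = finite_decimal(x, n)
    assert rn <= x < rn + Fraction(1, 10**n)
    print(n, rn)   # 3/10, 33/100, 333/1000, 3333/10000
```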
https://en.wikipedia.org/wiki/Ethyl%20salicylate
Ethyl salicylate is the ester formed by the condensation of salicylic acid and ethanol. It is a clear liquid that is sparingly soluble in water, but soluble in alcohol and ether. It has a pleasant odor resembling wintergreen and is used in perfumery and artificial flavors. See also Methyl salicylate Isopropyl salicylate
https://en.wikipedia.org/wiki/Phantasie%20III
Phantasie III: The Wrath of Nikademus is the third video game in the Phantasie series. Gameplay The "final" installment of the Phantasie trilogy was based around fighting the evil Nikademus and finishing him for good. Released in 1987, the game had Nikademus attempting to take over the entire world, and it was up to the party to stop him. Phantasie III maintained the style of the original two and improved upon the graphics on all platforms except the DOS version. The combat engine also saw a few upgrades, adding specific wound locations, with characters now able to have their head, torso, or a limb specifically injured, broken, or removed. It was also now possible to have a more tactical battle line-up, with the ability to move characters to the front, middle, or rear of the party. The game also improved upon the spell list and added a larger variety of weapons and equipment. The game also had two possible endings depending on whether the characters chose to fight Nikademus or join him. Reception Phantasie III sold 46,113 copies. Computer Gaming World stated that "there are a few new wrinkles" in the game. The magazine's Scorpia was pleased by Phantasie III improving the trading interface and combat, and by the "grand ending" to the game and the trilogy, but called the game "by far the weakest in the series" and criticized its short length. Phantasie III was reviewed in 1988 in Dragon #130 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. The reviewers gave the game 4 out of 5 stars. Phantasie I, Phantasie III, and Questron II were later re-released together, and reviewed in 1994 in Dragon #203 by Sandy Petersen in the "Eye of the Monitor" column. Petersen gave the compilation 2 out of 5 stars.
https://en.wikipedia.org/wiki/Walras%27s%20law
Walras's law is a principle in general equilibrium theory asserting that budget constraints imply that the values of excess demand (or, conversely, excess market supplies) must sum to zero regardless of whether the prices are general equilibrium prices. That is: $\sum_{j} p_j \cdot (D_j - S_j) = 0$, where $p_j$ is the price of good j and $D_j$ and $S_j$ are the demand and supply respectively of good j. Walras's law is named after the economist Léon Walras of the University of Lausanne, who formulated the concept in his Elements of Pure Economics of 1874. Although the concept was expressed earlier, in a less mathematically rigorous fashion, by John Stuart Mill in his Essays on Some Unsettled Questions of Political Economy (1844), Walras noted the mathematically equivalent proposition that when considering any particular market, if all other markets in an economy are in equilibrium, then that specific market must also be in equilibrium. The term "Walras's law" was coined by Oskar Lange to distinguish it from Say's law. Some economic theorists also use the term to refer to the weaker proposition that the total value of excess demands cannot exceed the total value of excess supplies. Definitions A market for a particular commodity is in equilibrium if, at the current prices of all commodities, the quantity of the commodity demanded by potential buyers equals the quantity supplied by potential sellers. For example, suppose the current market price of cherries is $1 per pound. If all cherry farmers summed together are willing to sell a total of 500 pounds of cherries per week at $1 per pound, and if all potential customers summed together are willing to buy 500 pounds of cherries in total per week when faced with a price of $1 per pound, then the market for cherries is in equilibrium because neither shortages nor surpluses of cherries exist. An economy is in general equilibrium if every market in the economy is in partial equilibrium. Not only must the market for cherries clear, but so too must all markets for all c
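A small numerical illustration (a toy two-good exchange economy with one Cobb-Douglas consumer; the endowment and spending share are invented): because demands come from the budget constraint, the value of excess demand in one market exactly offsets the other at any prices, equilibrium or not.

```python
def excess_demands(p1: float, p2: float, w=(4.0, 6.0), a=0.3):
    """Cobb-Douglas consumer spending share a on good 1; demands exhaust
    the budget p1*d1 + p2*d2 = wealth, so Walras's law must hold."""
    m = p1 * w[0] + p2 * w[1]          # value of the endowment (wealth)
    d1, d2 = a * m / p1, (1 - a) * m / p2
    return d1 - w[0], d2 - w[1]        # excess demand in each market

for p1, p2 in [(1.0, 1.0), (2.0, 3.0), (0.5, 4.0)]:   # arbitrary prices
    z1, z2 = excess_demands(p1, p2)
    print(round(p1 * z1 + p2 * z2, 12))   # 0.0 every time
```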
https://en.wikipedia.org/wiki/Somatic%20effort
Somatic effort refers to the total investment of an organism in its own development, differentiation, and maintenance, which consequently increases its reproductive potential.
https://en.wikipedia.org/wiki/Platinum%20silicide
Platinum silicide, also known as platinum monosilicide, is the inorganic compound with the formula PtSi. It is a semiconductor that turns into a superconductor when cooled to 0.8 K. Structure and bonding The crystal structure of PtSi is orthorhombic, with each silicon atom having six neighboring platinum atoms. The distances between the silicon and the platinum neighbors are as follows: one at a distance of 2.41 angstroms, two at a distance of 2.43 angstroms, one at a distance of 2.52 angstroms, and the final two at a distance of 2.64 angstroms. Each platinum atom has six silicon neighbors at the same distances, as well as two platinum neighbors, at distances of 2.87 and 2.90 angstroms. All of the distances over 2.50 angstroms are considered too long to be meaningfully involved in the bonding interactions of the compound. As a result, the bonding in the compound has been shown to consist of two sets of covalent bonds. One set is the three-center Pt–Si–Pt bonds, and the other set the two-center Pt–Si bonds. Each silicon atom in the compound has one three-center bond and two two-center bonds. The thinnest film of PtSi would consist of two alternating planes of atoms, a single sheet of orthorhombic structures. Thicker layers are formed by stacking pairs of the alternating sheets. The mechanism of bonding in PtSi is more similar to that of pure silicon than to pure platinum, though experimentation has revealed metallic bonding character in PtSi that pure silicon lacks. Synthesis Methods PtSi can be synthesized in several ways. The standard method involves depositing a thin film of pure platinum onto silicon wafers and heating in a conventional furnace at 450–600 °C for half an hour in an inert ambient. The process cannot be carried out in an oxygenated environment, as this results in the formation of an oxide layer on the silicon, preventing PtSi from forming. A secondary technique for synthesis requires a sputtered platinum film deposited on a silicon substrate. Due to
https://en.wikipedia.org/wiki/Molecular%20anthropology
Molecular anthropology, also known as genetic anthropology, is the study of how molecular biology has contributed to the understanding of human evolution. This field of anthropology examines evolutionary links between ancient and modern human populations, as well as between contemporary species. Generally, comparisons are made between sequences, either DNA or protein sequences; however, early studies used comparative serology. By examining DNA sequences in different populations, scientists can determine the closeness of relationships between populations (or within populations). Certain similarities in genetic makeup let molecular anthropologists determine whether or not different groups of people belong to the same haplogroup, and thus if they share a common geographical origin. This is significant because it allows anthropologists to trace patterns of migration and settlement, which gives helpful insight into how contemporary populations have formed and progressed over time. Molecular anthropology has been extremely useful in establishing the evolutionary tree of humans and other primates, including closely related species like chimpanzees and gorillas. While there are clearly many morphological similarities between humans and chimpanzees, for example, certain studies also have concluded that there is roughly a 98 percent commonality between the DNA of both species. However, more recent studies have modified the commonality of 98 percent to a commonality of 94 percent, showing that the genetic gap between humans and chimpanzees is larger than originally thought. Such information is useful in searching for common ancestors and coming to a better understanding of how humans evolved. Haploid loci in molecular anthropology There are two continuous linkage groups in humans that are carried by a single sex. The first is the Y chromosome, which is passed from father to son. Anatomical females carry a Y chromosome only rarely, as the result of a genetic defect. The other linkage
https://en.wikipedia.org/wiki/Electronic%20Frontier%20Finland
Electronic Frontier Finland – Effi ry (Effi) is a Finnish on-line civil rights organization founded in 2001 by Herkko Hietanen, Ville Oksanen and Mikko Välimäki. It had about 1,600 members at the end of 2014. While not formally affiliated with the U.S.-based Electronic Frontier Foundation, the two organizations share many of their goals. Effi is a member of the Global Internet Liberty Campaign and a founding member of European Digital Rights (EDRi). Effi's stated aim is to protect and promote freedom of speech and privacy on the Internet as well as in Finnish society in general. Among other things, Effi has lobbied for effective anti-spam legislation and against software patents. Effi has also assumed a leading role on certain consumer rights issues such as CD copy protection, in part due to the reluctance of traditional Finnish consumer protection agencies to address them. Effi presents the annual Finnish Big Brother Awards in cooperation with Privacy International. Board members include Tapani Tarvainen, Timo Karjalainen and Leena Romppainen as chairperson.
https://en.wikipedia.org/wiki/Enzyme%20replacement%20therapy
Enzyme replacement therapy (ERT) is a medical treatment which replaces an enzyme that is deficient or absent in the body. Usually, this is done by giving the patient an intravenous (IV) infusion of a solution containing the enzyme. ERT is available for some lysosomal storage diseases: Gaucher disease, Fabry disease, MPS I, MPS II (Hunter syndrome), MPS VI and Pompe disease. ERT does not correct the underlying genetic defect, but it increases the concentration of the enzyme that the patient is lacking. ERT has also been used to treat patients with severe combined immunodeficiency (SCID) resulting from an adenosine deaminase deficiency (ADA-SCID). Other treatment options for patients with enzyme or protein deficiencies include substrate reduction therapy, gene therapy, and bone-marrow derived stem cell transplantation. History ERT was developed in 1964 by Christian de Duve and Roscoe Brady. Leading work on the subject was done at the Department of Physiology at the University of Alberta by Mark J. Poznansky and Damyanti Bhardwaj, who developed a model for enzyme therapy using rats. ERT was not used in clinical practice until 1991, after the FDA gave orphan drug approval for the treatment of Gaucher disease with Alglucerase. ERTs were initially manufactured by isolating the therapeutic enzyme from human placenta. The FDA has approved ERTs that are derived from other human cells, animal cells (i.e. Chinese hamster ovary cells, or CHO cells), and plant cells. Medical uses Lysosomal storage diseases are a group of diseases and a main application of ERT. Lysosomes are cellular organelles that are responsible for the metabolism of many different macromolecules and proteins. They use enzymes to break down macromolecules, which are then recycled or disposed of. As of 2012, there are 50 lysosomal storage diseases, and more are still being discovered. These disorders arise because of genetic mutations that prevent the production of certain enzymes used in the lysoso
https://en.wikipedia.org/wiki/Manufacturing%20bill%20of%20materials
A manufacturing bill of materials (MBOM), also referred to as the manufacturing BOM, contains all the parts and assemblies required to build a complete and shippable product. The MBOM is a type of bill of materials (BOM). Unlike an engineering bill of materials (EBOM), which is organized with regard to how the product is designed, the MBOM is focused on the parts that are needed to manufacture a product. In addition to the parts list in an EBOM, the MBOM also includes information about how the parts relate to each other. In a batch execution system such as ISA-88, the MBOM will refer to the formula part of the recipe. A recipe will include a "recipe procedure" and "equipment requirements" in addition to the formula. The "recipe procedure" explains the steps to make the end product. The "equipment requirements" describe the machines and tools that are necessary to make the product. In ISA-95 terms, the MBOM will refer to the "material specification" in the "product definition model". An MBOM is not the same as "as manufactured" or "as built". The MBOM can be viewed as the ingredients in a recipe to make a cake, whereas "as built" refers to the actual materials that were consumed to make the cake. In ISA-88 terms, "as built" is the same as the batch record; in ISA-95 terms, "as built" is the same as a "segment response" in "production performance". The details in an MBOM are sufficient to allow it to be used in a manufacturing operations management (MOM) system or manufacturing execution system (MES). The MBOM typically contains more information than what is needed to do the MRP (material requirements planning) part of an MPS (master production schedule) in an ERP (enterprise resource planning) system.
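To make the structure concrete, here is a minimal sketch of an MBOM as a tree with quantities, together with an MRP-style rollup that explodes assemblies into total leaf-part counts. All part names and numbers are hypothetical illustrations, not taken from any standard:

```python
# Hypothetical MBOM: each assembly maps to (child item, quantity) pairs.
# Manufacturing-only items (e.g. packaging) appear here but not in an EBOM.
mbom = {
    "bicycle":   [("frame_asm", 1), ("wheel_asm", 2), ("packaging", 1)],
    "frame_asm": [("frame", 1), ("decal_kit", 1)],
    "wheel_asm": [("rim", 1), ("spoke", 36), ("tire", 1)],
}

def rollup(item, qty=1, totals=None):
    """Explode the BOM into total leaf-part quantities (an MRP-style rollup)."""
    totals = {} if totals is None else totals
    for child, n in mbom.get(item, []):
        if child in mbom:                      # sub-assembly: recurse
            rollup(child, qty * n, totals)
        else:                                  # leaf part: accumulate
            totals[child] = totals.get(child, 0) + qty * n
    return totals

print(rollup("bicycle"))   # e.g. 72 spokes across the two wheel assemblies
```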
https://en.wikipedia.org/wiki/Background%20noise
Background noise or ambient noise is any sound other than the sound being monitored (primary sound). Background noise is a form of noise pollution or interference. Background noise is an important concept in setting noise levels. Background noises include environmental noises such as water waves, traffic noise, alarms, extraneous speech, bioacoustic noise from animals, and electrical noise from devices such as refrigerators, air conditioning, power supplies, and motors. The prevention or reduction of background noise is important in the field of active noise control. It is an important consideration with the use of ultrasound (e.g. for medical diagnosis or imaging), sonar, and sound reproduction. Other uses In astronomy, background noise or cosmic background radiation is electromagnetic radiation from the sky with no discernible source. In information architecture, irrelevant, duplicate or incorrect information may be called background noise. In physics and telecommunication, background signal noise can be detrimental or in some cases beneficial. The study of avoiding, reducing or using signal noise is information theory. In telephony, artificial comfort noise is used as a substitute for natural background noise, to fill in artificial silence created by discontinuous transmission systems using voice activity detection. Background noise can also affect concentration. See also 4'33" Ambient noise level Electronic noise The Hum Colors of noise Sound masking External links How low noise levels are achieved in concert halls Background noise in acoustics (demo) Acoustics Noise pollution Sound
https://en.wikipedia.org/wiki/Jablonski%20diagram
In molecular spectroscopy, a Jablonski diagram is a diagram that illustrates the electronic states and often the vibrational levels of a molecule, and also the transitions between them. The states are arranged vertically by energy and grouped horizontally by spin multiplicity. Nonradiative transitions are indicated by squiggly arrows and radiative transitions by straight arrows. The vibrational ground states of each electronic state are indicated with thick lines, the higher vibrational states with thinner lines. The diagram is named after the Polish physicist Aleksander Jabłoński who first proposed it in 1933. Transitions Radiative transitions involve the absorption of a photon, if the transition occurs to a higher energy level, or the emission of a photon, for a transition to a lower level. Nonradiative transitions arise through several different mechanisms, all differently labeled in the diagram. Relaxation of the excited state to its lowest vibrational level is called vibrational relaxation. This process involves the dissipation of energy from the molecule to its surroundings, and thus it cannot occur for isolated molecules. A second type of nonradiative transition is internal conversion (IC), which occurs when a vibrational state of an electronically excited state can couple to a vibrational state of a lower electronic state. A third type is intersystem crossing (ISC); this is a transition to a state with a different spin multiplicity. In molecules with large spin-orbit coupling, intersystem crossing is much more important than in molecules that exhibit only small spin-orbit coupling. ISC can be followed by phosphorescence. See also Franck–Condon principle Grotrian diagram (for atoms)
https://en.wikipedia.org/wiki/Morita%20equivalence
In abstract algebra, Morita equivalence is a relationship defined between rings that preserves many ring-theoretic properties. More precisely, two rings R and S are Morita equivalent (denoted by $R \approx S$) if their categories of modules are additively equivalent (denoted by $R\text{-Mod} \simeq S\text{-Mod}$). It is named after Japanese mathematician Kiiti Morita, who defined equivalence and a similar notion of duality in 1958. Motivation Rings are commonly studied in terms of their modules, as modules can be viewed as representations of rings. Every ring R has a natural R-module structure on itself where the module action is defined as the multiplication in the ring, so the approach via modules is more general and gives useful information. Because of this, one often studies a ring by studying the category of modules over that ring. Morita equivalence takes this viewpoint to a natural conclusion by defining rings to be Morita equivalent if their module categories are equivalent. This notion is of interest only when dealing with noncommutative rings, since it can be shown that two commutative rings are Morita equivalent if and only if they are isomorphic. Definition Two rings R and S (associative, with 1) are said to be (Morita) equivalent if there is an equivalence of the category of (left) modules over R, R-Mod, and the category of (left) modules over S, S-Mod. It can be shown that the left module categories R-Mod and S-Mod are equivalent if and only if the right module categories Mod-R and Mod-S are equivalent. Further it can be shown that any functor from R-Mod to S-Mod that yields an equivalence is automatically additive. Examples Any two isomorphic rings are Morita equivalent. The ring of n-by-n matrices with elements in R, denoted $M_n(R)$, is Morita-equivalent to R for any n > 0. Notice that this generalizes the classification of simple artinian rings given by Artin–Wedderburn theory. To see the equivalence, notice that if X is a left R-module then $X^n$ is an $M_n(R)$-module where the module structure
https://en.wikipedia.org/wiki/Vaccination%20schedule
A vaccination schedule is a series of vaccinations, including the timing of all doses, which may be either recommended or compulsory, depending on the country of residence. A vaccine is an antigenic preparation used to produce active immunity to a disease, in order to prevent or reduce the effects of infection by any natural or "wild" pathogen. Vaccines go through multiple phases of trials to ensure safety and effectiveness. Many vaccines require multiple doses for maximum effectiveness, either to produce sufficient initial immune response or to boost a response that fades over time. For example, tetanus vaccine boosters are often recommended every 10 years. Vaccine schedules are developed by governmental agencies or physicians groups to achieve maximum effectiveness using required and recommended vaccines for a locality while minimizing the number of health care system interactions. Over the past two decades, the recommended vaccination schedule has grown rapidly and become more complicated as many new vaccines have been developed. Some vaccines are recommended only in certain areas (countries, subnational areas, or at-risk populations) where a disease is common. For instance, yellow fever vaccination is on the routine vaccine schedule of French Guiana and is recommended in certain regions of Brazil, but in the United States it is given only to travelers heading to countries with a history of the disease. In developing countries, vaccine recommendations also take into account the level of health care access, the cost of vaccines and issues with vaccine availability and storage. Sample vaccination schedules discussed by the World Health Organization show a developed country using a schedule which extends over the first five years of a child's life and uses vaccines which cost over $700 including administration costs, while a developing country uses a schedule providing vaccines in the first 9 months of life and costing only $25. This difference is due to the lower cost o
https://en.wikipedia.org/wiki/DO-254
RTCA DO-254 / EUROCAE ED-80, Design Assurance Guidance for Airborne Electronic Hardware is a document providing guidance for the development of airborne electronic hardware, published by RTCA, Incorporated and EUROCAE. The DO-254/ED-80 standard was formally recognized by the FAA in 2005 via AC 20-152 as a means of compliance for the design assurance of electronic hardware in airborne systems. The guidance in this document is applicable, but not limited, to such electronic hardware items as Line Replaceable Units (quickly replaceable components) Circuit board assemblies (CBA) Custom micro-coded components such as field programmable gate arrays (FPGA), programmable logic devices (PLD), and application-specific integrated circuits (ASIC), including any associated macro functions Integrated technology components such as hybrid integrated circuits and multi-chip modules Commercial off-the-shelf (COTS) components The document classifies electronic hardware items into simple or complex categories. An item is simple "if a comprehensive combination of deterministic tests and analyses appropriate to the design assurance level can ensure correct functional performance under all foreseeable operating conditions with no anomalous behavior." Conversely, a complex item is one that cannot have correct functional performance ensured by tests and analyses alone; so, assurance must be accomplished by additional means. The body of DO-254/ED-80 establishes objectives and activities for the systematic design assurance of complex electronic hardware, generally presumed to be complex custom micro-coded components, as listed above. However, simple electronic hardware is within the scope of DO-254/ED-80 and applicants propose and use the guidance in this standard to obtain certification approval of simple custom micro-coded components, especially devices that support higher level (A/B) aircraft functions. The DO-254/ED-80 standard is the counterpart to the well-established software s
https://en.wikipedia.org/wiki/Functional%20fixedness
Functional fixedness is a cognitive bias that limits a person to using an object only in the way it is traditionally used. The concept of functional fixedness originated in Gestalt psychology, a movement in psychology that emphasizes holistic processing. Karl Duncker defined functional fixedness as a mental block against using an object in a new way that is required to solve a problem. This "block" limits the ability of an individual to use components given to them to complete a task, as they cannot move past the original purpose of those components. For example, if someone needs a paperweight, but they only have a hammer, they may not see how the hammer can be used as a paperweight. Functional fixedness is this inability to see a hammer's use as anything other than for pounding nails; the person cannot think to use the hammer in a way other than its conventional function. When tested, 5-year-old children show no signs of functional fixedness. It has been argued that this is because at age 5, any goal to be achieved with an object is equivalent to any other goal. However, by age 7, children have acquired the tendency to treat the originally intended purpose of an object as special. Examples in research Experimental paradigms typically involve solving problems in novel situations in which the subject has the use of a familiar object in an unfamiliar context. The object may be familiar from the subject's past experience or from previous tasks within an experiment. Candle box In a classic experiment demonstrating functional fixedness, Duncker (1945) gave participants a candle, a box of thumbtacks, and a book of matches, and asked them to attach the candle to the wall so that it did not drip onto the table below. Duncker found that participants tried to attach the candle directly to the wall with the tacks, or to glue it to the wall by melting it. Very few of them thought of using the inside of the box as a candle-holder and tacking this to the wall. In D
https://en.wikipedia.org/wiki/Arboreal%20theory
The arboreal theory claims that primates evolved from their ancestors by adapting to arboreal life. It was proposed by Grafton Elliot Smith (1912), a neuroanatomist who was chiefly concerned with the emergence of the primate brain. Summary Primates are thought to have developed several of their traits and habits initially while living in trees. One key component of this argument is that primates relied on sight over smell. They were able to develop a keen sense of depth perception, perhaps because of the constant leaping that was necessary to move about the trees. Primates also developed hands and feet that were capable of grasping. This was also a result of arboreal life, which required a great deal of crawling along branches, and reaching out for fruit and other food. These early primates were likely to have eaten foods found in trees, such as flowers, fruits, berries, gums, leaves, and insects. They are thought to have shifted their diets towards insects in the early Cenozoic era, when insects became more numerous.
https://en.wikipedia.org/wiki/Human%20behavioral%20ecology
Human behavioral ecology (HBE) or human evolutionary ecology applies the principles of evolutionary theory and optimization to the study of human behavioral and cultural diversity. HBE examines the adaptive design of traits, behaviors, and life histories of humans in an ecological context. One aim of modern human behavioral ecology is to determine how ecological and social factors influence and shape behavioral flexibility within and between human populations. Among other things, HBE attempts to explain variation in human behavior as adaptive solutions to the competing life-history demands of growth, development, reproduction, parental care, and mate acquisition. HBE overlaps with evolutionary psychology, human or cultural ecology, and decision theory. It is most prominent in disciplines such as anthropology and psychology where human evolution is considered relevant for a holistic understanding of human behavior. Evolutionary theory Human behavioral ecology rests upon a foundation of evolutionary theory. This includes aspects of both general evolutionary theory and established middle-level evolutionary theories. Aspects of general evolutionary theory include: Natural selection, the process by which individual organisms with favorable traits are more likely to survive and reproduce. Sexual selection, the theory that competition for mates between individuals of the same sex results in differential mating and reproduction. Kin selection, the changes in gene frequency across generations that are driven at least in part by interactions between related individuals. Inclusive fitness, the sum of an individual's own reproductive success (natural and sexual selection), plus the effects the individual's actions have on the reproductive success of that individual's kin (kin selection). Middle-level evolutionary theories used in HBE include: The theory of parental investment, which predicts that the sex making the largest investment in lactation, nurturing and pr
https://en.wikipedia.org/wiki/Pyomyositis
Pyomyositis is a bacterial infection of the skeletal muscles which results in an abscess. Pyomyositis is most common in tropical areas but can also occur in temperate zones. Pyomyositis can be classified as primary or secondary. Primary pyomyositis is a skeletal muscle infection arising from hematogenous infection, whereas secondary pyomyositis arises from localized penetrating trauma or contiguous spread to the muscle. Diagnosis Diagnosis is made in the following manner: Pus discharge culture and sensitivity X-ray of the affected part to rule out osteomyelitis Creatine phosphokinase (more than 50,000 units) MRI is useful Ultrasound-guided aspiration Treatment The abscesses within the muscle must be drained surgically (not all patients require surgery if there is no abscess). Antibiotics are given for a minimum of three weeks to clear the infection. Epidemiology Pyomyositis is most often caused by the bacterium Staphylococcus aureus. The infection can affect any skeletal muscle, but most often infects the large muscle groups such as the quadriceps or gluteal muscles. Pyomyositis is mainly a disease of children and was first described by Scriba in 1885. Most patients are aged 2 to 5 years, but infection may occur in any age group. Infection often follows minor trauma and is more common in the tropics, where it accounts for 4% of all hospital admissions. In temperate countries such as the US, pyomyositis was a rare condition (accounting for 1 in 3000 pediatric admissions), but has become more common since the appearance of the USA300 strain of MRSA. Gonococcal pyomyositis is a rare infection caused by Neisseria gonorrhoeae. Additional images
https://en.wikipedia.org/wiki/Pelvic%20thrust
The pelvic thrust is the thrusting motion of the pelvic region, which is used for a variety of activities, such as dance, exercise, or sexual activity. Sexual activity The pelvic thrust is used during copulation by many species of mammals, including humans, or for other sexual activities (such as non-penetrative sex). In 2007, German scientists noted that female monkeys could increase the vigor and amount of pelvic thrusts made by the male by shouting during intercourse. In whitetail deer, copulation consists of a single pelvic thrust. Dance One of the first to perform this move on stage was Elvis Presley. It was quite controversial due to its obvious sexual connotations. Due to this controversy, he was sometimes shown (as seen on his third appearance on The Ed Sullivan Show) from the waist up on TV. Later, the pelvic thrust also became one of the signature moves of Michael Jackson. It is also mentioned in "Time Warp", a song from The Rocky Horror Show, as a part of the choreography associated with the warp itself. Twerking, a reversed and sometimes passive form of the pelvic thrust, is also a very popular hip-hop dance move. The sideways pelvic thrust is a famous female dance move in India and Bangladesh, known as thumka. It appears in the lyrics of various Bollywood songs. Exercise Hip thrusts can be used as an exercise to train the gluteus maximus muscle. The athlete gets into a reclined position and thrusts their hips upward to lift weights balanced on their lap. Infants Pelvic thrusting is observed in infant monkeys, apes, and humans. These observations led ethologist John Bowlby (1969) to suggest that infantile sexual behavior may be the rule in mammals, not the exception. Thrusting has been observed in humans at eight to ten months of age and may be an expression of affection. Typically, the infant clings to the parent, then nuzzles, thrusts, and rotates the pelvis for several seconds. See also Lordosis behavior Twerking
https://en.wikipedia.org/wiki/P-compact%20group
In mathematics, in particular algebraic topology, a p-compact group is a homotopical version of a compact Lie group, but with all the local structure concentrated at a single prime p. The concept was introduced to make precise earlier notions of a mod p finite loop space. A p-compact group has many Lie-like properties like maximal tori and Weyl groups, which are defined purely homotopically in terms of the classifying space, but with the important difference that the Weyl group, rather than being a finite reflection group over the integers, is now a finite p-adic reflection group. They admit a classification in terms of root data, which mirrors the classification of compact Lie groups, but with the integers replaced by the p-adic integers. Definition A p-compact group is a pointed space BG, which is local with respect to mod p homology, and such that the pointed loop space G = ΩBG has finite mod p homology. One sometimes also refers to the p-compact group by G, but then one needs to keep in mind that the loop space structure is part of the data (which then allows one to recover BG). A p-compact group is said to be connected if G is a connected space (in general the group of components of G will be a finite p-group). The rank of a p-compact group is the rank of its maximal torus. Examples The p-completion, in the sense of homotopy theory, of (the classifying space of) a compact connected Lie group defines a connected p-compact group. (The Weyl group is just its ordinary Weyl group, now viewed as a p-adic reflection group by tensoring the coweight lattice with $\mathbb{Z}_p$.) More generally the p-completion of a connected finite loop space defines a p-compact group. (Here the Weyl group will be a $\mathbb{Z}_p$-reflection group that may not come from a $\mathbb{Z}$-reflection group.) A rank 1 connected 2-compact group is either the 2-completion of SU(2) or of SO(3). A rank 1 connected p-compact group, for p odd, is a "Sullivan sphere", i.e., the p-completion of a (2n−1)-sphere $S^{2n-1}$, where n divides p − 1.
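In symbols, the definition above can be summarized as follows (a paraphrase in standard notation, where $L_{H\mathbb{F}_p}$ denotes Bousfield localization with respect to mod p homology; the notation is an assumption, not a quotation from the article):

```latex
% BG is pointed and F_p-local, and its loop space has finite mod-p homology:
BG \simeq L_{H\mathbb{F}_p} BG,
\qquad
\sum_{i \ge 0} \dim_{\mathbb{F}_p} H_i\!\left(\Omega BG;\, \mathbb{F}_p\right) < \infty .
```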
https://en.wikipedia.org/wiki/Placard
A placard is a notice installed in a public place, like a small card, sign, or plaque. It can be attached to or hung from a vehicle or building to indicate information about the vehicle operator or contents of a vehicle or building. It can also refer to paperboard signs or notices carried by picketers or demonstrators. Buildings A placard is posted on buildings to communicate a wide variety of information, such as fire safety policies or the location of emergency shelters. The International Building Code requires that doors in some public and commercial structures fitted with an internal key lock have a notice reading "This door to remain unlocked when this space is occupied", in text of a minimum size, posted beside or above the door. Some state and local building codes modify this text, such as the California fire code, which specifies "This door to remain unlocked during business hours". Temporary placards may be placed on buildings, such as warning signs when a structure is being fumigated, or when it has been condemned by building inspectors or the fire department and is unsafe to enter. Fallout shelters As part of the civil defense preparations in the event of a nuclear attack, in 1961 the United States began establishing fallout shelters in communities across the country. The shelters were symbolized by an orange-yellow and black trefoil symbol, designed by Robert W. Blakeley. In 1962, 1.4 million metal signs and 1 million adhesive stickers were manufactured and distributed across the country at a total cost of $700,500. Two standard signs were used widely: an aluminum sign for posting on the exterior of buildings identifying the building as having a fallout shelter, and a steel sign, intended for interior use, to direct occupants to the shelter's location and mark the actual location of the shelter within the building. The sign system included 'overlays' that were designed to be added to signs for conveying additional information about the specific shelter and its location. Exterior sign overlays: Numbers - for Capacity
https://en.wikipedia.org/wiki/Graph%20factorization
In graph theory, a factor of a graph G is a spanning subgraph, i.e., a subgraph that has the same vertex set as G. A k-factor of a graph is a spanning k-regular subgraph, and a k-factorization partitions the edges of the graph into disjoint k-factors. A graph G is said to be k-factorable if it admits a k-factorization. In particular, a 1-factor is a perfect matching, and a 1-factorization of a k-regular graph is a proper edge coloring with k colors. A 2-factor is a collection of cycles that spans all vertices of the graph. 1-factorization If a graph is 1-factorable (i.e., has a 1-factorization), then it has to be a regular graph. However, not all regular graphs are 1-factorable. A k-regular graph is 1-factorable if it has chromatic index k; examples of such graphs include: Any regular bipartite graph. Hall's marriage theorem can be used to show that a k-regular bipartite graph contains a perfect matching. One can then remove the perfect matching to obtain a (k − 1)-regular bipartite graph, and apply the same reasoning repeatedly. Any complete graph with an even number of nodes (see below). However, there are also k-regular graphs that have chromatic index k + 1, and these graphs are not 1-factorable; examples of such graphs include: Any regular graph with an odd number of nodes. The Petersen graph. Complete graphs A 1-factorization of a complete graph corresponds to pairings in a round-robin tournament. The 1-factorization of complete graphs is a special case of Baranyai's theorem concerning the 1-factorization of complete hypergraphs. One method for constructing a 1-factorization of a complete graph on an even number of vertices involves placing all but one of the vertices on a circle, forming a regular polygon, with the remaining vertex at the center of the circle. With this arrangement of vertices, one way of constructing a 1-factor of the graph is to choose an edge e from the center to a single polygon vertex together with all possible edges that lie on
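The circle construction just described is straightforward to implement. The sketch below (an illustration of the standard method, with invented function names) builds such a 1-factorization of the complete graph on an even number of vertices, pairing the central vertex with a rotating polygon vertex and adding the chords determined by that rotation, then checks that the factors partition the edge set:

```python
def circle_one_factorization(n):
    """1-factorization of K_n (n even): vertex n-1 sits at the center,
    vertices 0..n-2 on a circle; round r pairs r with the center plus
    the chords (r+k, r-k) mod (n-1) for k = 1..n/2-1."""
    assert n % 2 == 0
    factors = []
    for r in range(n - 1):
        pairs = [(r, n - 1)]
        for k in range(1, n // 2):
            pairs.append(((r + k) % (n - 1), (r - k) % (n - 1)))
        factors.append(pairs)
    return factors

# For K_8: 7 factors of 4 edges each, covering all 28 edges exactly once.
f = circle_one_factorization(8)
edges = {frozenset(p) for fac in f for p in fac}
assert len(f) == 7 and len(edges) == 8 * 7 // 2
```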
https://en.wikipedia.org/wiki/Program%20status%20word
The program status word (PSW) is a register that performs the function of a status register and program counter, and sometimes more. The term is also applied to a copy of the PSW in storage. This article only discusses the PSW in the IBM System/360 and its successors, and follows the IBM convention of numbering bits starting with 0 as the leftmost (most significant) bit. Although certain fields within the PSW may be tested or set by using non-privileged instructions, testing or setting the remaining fields may only be accomplished by using privileged instructions. Contained within the PSW is the two-bit condition code, representing states such as zero, positive, negative, and overflow, the counterpart of the flags in other architectures' status registers. Conditional branch instructions test this via a four-bit mask value, with each bit representing a test of one of the four condition code values, $2^3 + 2^2 + 2^1 + 2^0$. (Since IBM uses big-endian bit numbering, mask value 8 selects code 0, mask value 4 selects code 1, mask value 2 selects code 2, and mask value 1 selects code 3.) The 64-bit PSW describes (among other things) Interrupt masks Privilege states Condition code Instruction address In the early instances of the architecture (System/360 and early System/370), the instruction address was 24 bits; in later instances (XA/370), the instruction address was 31 bits plus a mode bit (24-bit addressing mode if zero; 31-bit addressing mode if one) for a total of 32 bits. In the present instances of the architecture (z/Architecture), the instruction address is 64 bits and the PSW itself is 128 bits. The PSW may be loaded by the LOAD PSW instruction (LPSW or LPSWE). Its contents may be examined with the Extract PSW instruction (EPSW). Format S/360 On all but the 360/20, the PSW has the following formats. S/360 Extended PSW format only applies to the 360/67 with bit 8 of control register 6 set. S/370 S/370 Extended Architecture (S/370-XA) Enterprise Systems Architecture (ESA) z/Archi
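The branch-mask convention described above can be illustrated with a small sketch (Python used purely for illustration; the function and variable names are invented, but the mask-to-code mapping is as given in the text):

```python
def branch_taken(mask: int, cc: int) -> bool:
    """Branch-on-condition test: 4-bit mask against the 2-bit condition code.
    With big-endian bit numbering, mask 8 selects CC 0, mask 4 selects CC 1,
    mask 2 selects CC 2, and mask 1 selects CC 3."""
    return bool(mask & (8 >> cc))

assert branch_taken(8, 0) and not branch_taken(8, 1)    # take only on CC 0
assert all(branch_taken(15, cc) for cc in range(4))     # mask 15: always taken
assert not any(branch_taken(0, cc) for cc in range(4))  # mask 0: never taken
```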
https://en.wikipedia.org/wiki/Transversality%20%28mathematics%29
In mathematics, transversality is a notion that describes how spaces can intersect; transversality can be seen as the "opposite" of tangency, and plays a role in general position. It formalizes the idea of a generic intersection in differential topology. It is defined by considering the linearizations of the intersecting spaces at the points of intersection. Definition Two submanifolds of a given finite-dimensional smooth manifold are said to intersect transversally if at every point of intersection, their separate tangent spaces at that point together generate the tangent space of the ambient manifold at that point. Manifolds that do not intersect are vacuously transverse. If the manifolds are of complementary dimension (i.e., their dimensions add up to the dimension of the ambient space), the condition means that the tangent space to the ambient manifold is the direct sum of the two smaller tangent spaces. If an intersection is transverse, then the intersection will be a submanifold whose codimension is equal to the sum of the codimensions of the two manifolds. In the absence of the transversality condition the intersection may fail to be a submanifold, having some sort of singular point. In particular, this means that transverse submanifolds of complementary dimension intersect in isolated points (i.e., a 0-manifold). If both submanifolds and the ambient manifold are oriented, their intersection is oriented. When the intersection is zero-dimensional, the orientation is simply a plus or minus for each point. One notation for the transverse intersection of two submanifolds $L_1$ and $L_2$ of a given manifold $M$ is $L_1 \pitchfork L_2$. This notation can be read in two ways: either as "$L_1$ and $L_2$ intersect transversally" or as an alternative notation for the set-theoretic intersection $L_1 \cap L_2$ of $L_1$ and $L_2$ when that intersection is transverse. In this notation, the definition of transversality reads $L_1 \pitchfork L_2 \iff \forall p \in L_1 \cap L_2,\ T_p L_1 + T_p L_2 = T_p M$. Transversality of maps The notion of transversality of a pair of submanifolds is easily extended to tran
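A standard two-curve check in the plane (a textbook illustration, not drawn from the article) makes the tangent-space condition concrete:

```latex
% Take M = R^2, L_1 = {y = 0} (the x-axis) and L_2 = {y = x^2} (a parabola).
% At the intersection point p = (0,0):
T_pL_1 = \operatorname{span}(\partial_x), \qquad
T_pL_2 = \operatorname{span}(\partial_x)
\;\Longrightarrow\;
T_pL_1 + T_pL_2 = \operatorname{span}(\partial_x) \neq T_pM ,
% so the parabola is tangent, not transverse, to the x-axis.  Replacing
% L_2 by the line {y = x} instead gives
T_pL_1 + T_pL_2 = \operatorname{span}(\partial_x,\, \partial_x + \partial_y)
                = \mathbb{R}^2 = T_pM ,
% a transverse intersection.
```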
https://en.wikipedia.org/wiki/WinWAP
WinWAP was a web browser for Windows CE mobile devices. It was first released in 1999. See also Article about Winwap in Finnish IT Magazine Tietoviikko Windows web browsers Mobile web browsers 1999 software Discontinued web browsers
https://en.wikipedia.org/wiki/Herpetic%20gingivostomatitis
Gingivostomatitis is a combination of gingivitis and stomatitis, or an inflammation of the oral mucosa and gingiva. Herpetic gingivostomatitis is often the initial presentation during the first ("primary") herpes simplex infection. It is of greater severity than herpes labialis (cold sores), which is often the subsequent presentation. Primary herpetic gingivostomatitis is the most common viral infection of the mouth. Primary herpetic gingivostomatitis (PHGS) represents the clinically apparent pattern of primary herpes simplex virus (HSV) infection, since the vast majority of other primary infections are symptomless. PHGS is caused predominantly by HSV-1 and affects mainly children. Prodromal symptoms, such as fever, anorexia, irritability, malaise and headache, may occur in advance of disease. The disease presents as numerous pin-head vesicles, which rupture rapidly to form painful irregular ulcerations covered by yellow–grey membranes. Submandibular lymphadenitis, halitosis and refusal to drink are usual concomitant findings. Signs and symptoms The symptoms can be mild or severe and may include: Inability to chew or swallow Sores on the inside of the cheeks or gums Fever General discomfort, uneasiness, or ill feeling Very sore mouth with no desire to eat Halitosis (bad breath) Causes Herpetic gingivostomatitis is an infection caused by the herpes simplex virus (HSV). The HSV is a double-stranded DNA virus categorised into two types: HSV-1 and HSV-2. HSV-1 is predominantly responsible for oral, facial and ocular infections, whereas HSV-2 is responsible for most genital and lower-body cutaneous herpetic lesions. Both HSV-1 and HSV-2 can be the cause of herpetic gingivostomatitis, although HSV-1 is the source of infection in around 90% of cases. Herpetic gingivostomatitis infections can present as acute or recurrent. Acute infection refers to the first invasion of the virus, and recurrent is when reactivation of the latent virus occurs. Acute herpetic ging
https://en.wikipedia.org/wiki/Nyquist%20ISI%20criterion
In communications, the Nyquist ISI criterion describes the conditions which, when satisfied by a communication channel (including responses of transmit and receive filters), result in no intersymbol interference or ISI. It provides a method for constructing band-limited functions to overcome the effects of intersymbol interference. When consecutive symbols are transmitted over a channel by a linear modulation (such as ASK, QAM, etc.), the impulse response (or equivalently the frequency response) of the channel causes a transmitted symbol to be spread in the time domain. This causes intersymbol interference because the previously transmitted symbols affect the currently received symbol, thus reducing tolerance for noise. The Nyquist theorem relates this time-domain condition to an equivalent frequency-domain condition. The Nyquist criterion is closely related to the Nyquist–Shannon sampling theorem, with only a differing point of view. Nyquist criterion If we denote the channel impulse response as $h(t)$, then the condition for an ISI-free response can be expressed as: $h(nT_s) = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases}$ for all integers $n$, where $T_s$ is the symbol period. The Nyquist theorem says that this is equivalent to: $\frac{1}{T_s} \sum_{k=-\infty}^{+\infty} H\!\left(f - \frac{k}{T_s}\right) = 1 \quad \text{for all } f$, where $H(f)$ is the Fourier transform of $h(t)$. This is the Nyquist ISI criterion. This criterion can be intuitively understood in the following way: frequency-shifted replicas of $H(f)$ must add up to a constant value. This condition is satisfied when the spectrum $H(f)$ has even symmetry, has bandwidth less than or equal to $1/T_s$, and its single-sideband has odd symmetry at the cutoff frequency $1/(2T_s)$. In practice this criterion is applied to baseband filtering by regarding the symbol sequence as weighted impulses (Dirac delta function). When the baseband filters in the communication system satisfy the Nyquist criterion, symbols can be transmitted over a channel with flat response within a limited frequency band, without ISI. Examples of such baseband filters are the raised-cosine filter, or the sinc filter as the ideal case. Der
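The time-domain condition is easy to verify numerically for the ideal sinc pulse mentioned above (a minimal NumPy sketch, assuming a symbol period of 1):

```python
import numpy as np

Ts = 1.0                    # symbol period
t = np.arange(-8, 9) * Ts   # sample at the symbol instants t = n * Ts
h = np.sinc(t / Ts)         # ideal pulse h(t) = sinc(t / Ts)

# Zero ISI: h(0) = 1 and h(n * Ts) = 0 for every other integer n.
assert np.allclose(h, np.where(t == 0, 1.0, 0.0), atol=1e-12)
```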
https://en.wikipedia.org/wiki/Gun.Smoke
Gun.Smoke is a vertically scrolling run and gun video game designed by Yoshiki Okamoto and released in arcades in 1985. Gun.Smoke centers on a character named Billy Bob, a bounty hunter going after the criminals of the Wild West. Gameplay Gun.Smoke is a run and gun video game in which the screen automatically scrolls upward. Players use three buttons to shoot left, right, and center. The player can also change the way Billy shoots through button combinations. The player dies by getting shot, struck by enemies, or caught between an obstacle and the bottom of the screen. The player can collect various items, including a horse for extra protection, boots for increased movement speed, bullets for faster shots, a yashichi for an extra life, and a rifle for longer shot range. Other items, such as stars, bottles, bags, and dragonflies, add points to the player's score. Two versions of Gun.Smoke were released in North America by Romstar. Ports Gun.Smoke was ported to these systems: The MSX The PlayStation and Sega Saturn as a part of Capcom Generation 4 The PlayStation 2, PlayStation Portable and Xbox as a part of Capcom Classics Collection The PlayStation 3 and Xbox 360 as a part of Capcom Arcade Cabinet The Nintendo Switch, PlayStation 4, Xbox One, and Microsoft Windows as part of Capcom Arcade 2nd Stadium, referred to as Gan Sumoku. Windows 98 and Windows XP as a part of Capcom Arcade Hits Volume 3 The Amstrad CPC as Desperado – Gun.Smoke; this platform received a sequel called Desperado 2 The ZX Spectrum NES version The game was later ported to the Nintendo Entertainment System (NES) and Family Computer Disk System (FDS) in 1988. The game has a new storyline: In 1849, a gang known as the Wingates attacks the town of Hicksville, kills the sheriff, and causes trouble every day until Billy, the main character, comes to town to free it from the gang. The NES version also has different music. Soundtrack The soundtrack for the arcade version was composed by Ayako Mori. On August 25, 19
https://en.wikipedia.org/wiki/Asbestiform
Asbestiform is a crystal habit. It describes a mineral that grows in a fibrous aggregate of high-tensile-strength, flexible, long, and thin crystals that readily separate. The most common asbestiform mineral is chrysotile, commonly called "white asbestos", a magnesium phyllosilicate belonging to the serpentine group. Other asbestiform minerals include riebeckite, an amphibole whose fibrous form is known as crocidolite or "blue asbestos", and brown asbestos, a member of the cummingtonite–grunerite solid solution series. The United States Environmental Protection Agency explains that, "In general, exposure may occur only when the asbestos-containing material is disturbed or damaged in some way to release particles and fibers into the air." "Mountain leather" is an old-fashioned term for flexible, sheet-like natural formations of asbestiform minerals which resemble leather. Asbestos-containing minerals known to form mountain leather include: actinolite, palygorskite, saponite, sepiolite, tremolite, and zeolite. See also Chrysotile
https://en.wikipedia.org/wiki/Weighing%20matrix
In mathematics, a weighing matrix of order $n$ and weight $w$ is an $n \times n$ matrix $W$ with entries from the set $\{0, 1, -1\}$ such that: $W W^{\mathsf{T}} = w I_n$, where $W^{\mathsf{T}}$ is the transpose of $W$ and $I_n$ is the identity matrix of order $n$. The weight $w$ is also called the degree of the matrix. For convenience, a weighing matrix of order $n$ and weight $w$ is often denoted by $W(n,w)$. Weighing matrices are so called because of their use in optimally measuring the individual weights of multiple objects. When the weighing device is a balance scale, the statistical variance of the measurement can be minimized by weighing multiple objects at once, including some objects in the opposite pan of the scale where they subtract from the measurement. Properties Some properties are immediate from the definition. If $W$ is a $W(n,w)$, then: The rows of $W$ are pairwise orthogonal (that is, every pair of rows you pick from $W$ will be orthogonal). Similarly, the columns are pairwise orthogonal. Each row and each column of $W$ has exactly $w$ non-zero elements. $W^{\mathsf{T}} W = w I_n$, since the definition means that $W^{-1} = w^{-1} W^{\mathsf{T}}$, where $W^{-1}$ is the inverse of $W$. $|\det(W)| = w^{n/2}$, where $\det(W)$ is the determinant of $W$. A weighing matrix is a generalization of the Hadamard matrix, which does not allow zero entries. As two special cases, a $W(n,n)$ is a Hadamard matrix and a $W(n,n-1)$ is equivalent to a conference matrix. Applications Experiment design Weighing matrices take their name from the problem of measuring the weight of multiple objects. If a measuring device has a statistical variance of $\sigma^2$, then measuring the weights of $n$ objects and subtracting the (equally imprecise) tare weight will result in final measurements with a variance of $2\sigma^2$. It is possible to increase the accuracy of the estimated weights by measuring different subsets of the objects, especially when using a balance scale where objects can be put on the opposite measuring pan where they subtract their weight from the measurement. An order $n$ matrix can be used to represent the placement of $n$ objects—including the tare weight—in $n$ trials. Suppose the left pan of the balance scale adds to the meas
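As a quick numerical check of the definition (the particular matrix below is one illustrative $W(4,3)$, not taken from the article):

```python
import numpy as np

# A W(4, 3): order 4, weight 3, i.e. one zero in each row and column.
W = np.array([[ 0,  1,  1,  1],
              [-1,  0,  1, -1],
              [-1, -1,  0,  1],
              [-1,  1, -1,  0]])

assert np.array_equal(W @ W.T, 3 * np.eye(4, dtype=int))   # W W^T = w I_n
assert all((row != 0).sum() == 3 for row in W)             # w non-zeros per row
assert all((col != 0).sum() == 3 for col in W.T)           # ... and per column
```

Since this example has weight n - 1, it is of the conference-matrix type mentioned above.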
https://en.wikipedia.org/wiki/Charactron
Charactron was a U.S. registered trademark (number 0585950, 23 February 1954) of Consolidated Vultee Aircraft Corporation (Convair) for its shaped electron beam cathode ray tube. Charactron CRTs performed the functions of both a display device and a read-only memory storing multiple characters and fonts. The similar Typotron was a U.S. registered trademark (23 November 1953) of Hughes Aircraft Corporation for its type of shaped electron beam storage tube with a direct-view bistable storage screen. The Charactron CRT used an electron beam to flood a specially patterned perforated anode that contained the stencil patterns for each of the characters that it could form. The first deflection positioning of the electron beam steered the beam to pass through one of the (typically 64 or 116) characters and symbols that could be formed. The beam, which then had the cross-section of the desired character, was re-centered along the axis of the tube and deflected to the desired position of the screen for display. Alternately, as in the accompanying image, the entire matrix was filled with the electron beam then deflected through a selection aperture to isolate one character. The term Charactron is sometimes mistakenly applied to another type of CRT properly called a monoscope, which generates an electrical signal by scanning an electron beam of uniform cross section across a printed pattern on an internal target electrode. Applications There were two basic types/uses of Charactrons: Direct view — where the intended user watched the face of the tube. An example was the tube of the AN/FSQ-7 SAGE Semi Automatic Ground Environment computer console. Photographic output — where the display screen was photographed by a microfilm camera for recording of computer generated data. The Stromberg-Carlson SC-4000 series system was a typical use of the tube. The technical expertise, and trademarks, for the Charactron ultimately passed to Stromberg-Carlson, General Dynamics, Stromberg Da
https://en.wikipedia.org/wiki/Enterotoxigenic%20Escherichia%20coli
Enterotoxigenic Escherichia coli (ETEC) is a type of Escherichia coli and one of the leading bacterial causes of diarrhea in the developing world, as well as the most common cause of travelers' diarrhea. Insufficient data exists, but conservative estimates suggest that each year, about 157,000 deaths occur, mostly in children, from ETEC. A number of pathogenic isolates are termed ETEC, but the main hallmarks of this type of bacterium are expression of one or more enterotoxins and presence of fimbriae used for attachment to host intestinal cells. The bacterium was identified by the Bradley Sack lab in Kolkata in 1968. Signs and symptoms Infection with ETEC can cause profuse, watery diarrhea with no blood or leukocytes and abdominal cramping. Fever, nausea with or without vomiting, chills, loss of appetite, headache, muscle aches and bloating can also occur, but are less common. Enterotoxins Enterotoxins produced by ETEC include heat-labile enterotoxin (LT) and heat-stable enterotoxin (ST). Prevention To date, no licensed vaccines specifically target ETEC, though several are in various stages of development. Studies indicate that protective immunity to ETEC develops after natural or experimental infection, suggesting that vaccine-induced ETEC immunity should be feasible and could be an effective preventive strategy. Prevention through vaccination is a critical part of the strategy to reduce the incidence and severity of diarrheal disease due to ETEC, particularly among children in low-resource settings. The development of a vaccine against this infection has been hampered by technical constraints, insufficient support for coordination, and a lack of market forces for research and development. Most vaccine development efforts are taking place in the public sector or as research programs within biotechnology companies. ETEC is a longstanding priority and target for vaccine development for the World Health Organization. Management Treatment for ETEC infection inclu
https://en.wikipedia.org/wiki/Graphics%20hardware
Graphics hardware is computer hardware that generates computer graphics and allows them to be shown on a display, usually using a graphics card (video card) in combination with a device driver to create the images on the screen. Types Graphics cards The most important piece of graphics hardware is the graphics card, which is the piece of equipment that renders out all images and sends them to a display. There are two types of graphics cards: integrated and dedicated. An integrated graphics card, such as those Intel builds for use in its computers, is bound to the motherboard and shares RAM (random-access memory) with the CPU, reducing the total amount of RAM available. This is undesirable for running programs and applications that use a large amount of video memory. A dedicated graphics card has its own RAM and processor for generating its images, and does not slow down the computer. Dedicated graphics cards also have higher performance than integrated graphics cards. It is possible to have both dedicated and integrated graphics; however, once a dedicated graphics card is installed, the integrated card will no longer function until the dedicated card is removed. Parts of a graphics card The GPU, or graphics processing unit, is the unit that allows the graphics card to function. It performs a large amount of the work given to the card. The majority of video playback on a computer is controlled by the GPU. Once again, a GPU can be either integrated or dedicated. Video memory is built-in RAM on the graphics card, which provides it with its own memory, allowing it to run smoothly without taking resources intended for general use by the rest of the computer. The term "video" here is an informal designation and is not intended in a narrow sense. In particular, it does not imply exclusively video data. The data in this form of memory comprises all manner of graphical data including those for still images, icons, fonts, and generally anything that is displayed on the screen.
https://en.wikipedia.org/wiki/Domains%20by%20Proxy
Domains by Proxy (DBP) is an Internet company started by the founder of GoDaddy, Bob Parsons. Domains by Proxy offers domain privacy services through partner domain registrars such as GoDaddy and Wild West Domains. Subscribers list Domains by Proxy as their administrative and technical contacts in the Internet's WHOIS database, thereby delegating responsibility for managing unsolicited contacts from third parties and keeping the domain owners' personal information secret. However, the company will release a registrant's personal information in some cases, such as by court order or for other reasons as deemed appropriate by the company per its Domain Name Proxy Agreement. As of 2014, over 9,850,000 domain names use the Domains by Proxy service. Political usage In the run-up to the 2012 United States presidential primaries, numerous domain names with derogatory expressions were registered through Domains by Proxy by both Republicans and Democrats. Domains by Proxy has allegedly been a target of the Internet organization Anonymous due to perceived malicious business activities, including inducements to join its service, claims of privacy that are not fulfilled, and the lowering of the Google PageRank of the sites it links to. Controversy Fraudsters Controversially, Domains By Proxy is also used by a number of organizations that target vulnerable individuals by sending threatening psychic letters, and by fake drug companies. It is also used by fake anti-spyware and anti-malware sites to hide their real ownership of the software that they promote. Advance-fee fraudsters also use Domains By Proxy. On 5 February 2016, the Artists Against 419 database reflected that 1,124 out of 108,684 entries abused the services of Domains By Proxy. This represents a figure of slightly over one percent of the entries. Privacy In 2014, Domains by Proxy handed over the personal details of a site owner to the Motion Picture Association due to potential copyright infringement, despite the website
https://en.wikipedia.org/wiki/Sintran%20III
Sintran III is a real-time, multitasking, multi-user operating system used with Norsk Data minicomputers from 1974. Unlike its predecessors Sintran I and II, it was written entirely by Norsk Data, in Nord Programming Language (Nord PL, NPL), an intermediate language for Norsk Data computers. Overview Sintran was mainly a command-line interface based operating system, though there were several shells which could be installed to control the user environment more strictly, by far the most popular of which was USER-ENVIRONMENT. One of its clever features was the ability to abbreviate commands and file names between hyphens. For example, typing LIST-FILES would give users several prompts, including for printing, paging, etc. Users could override this by typing LI-FI ,,n, which would abbreviate the LIST-FILES command and bypass any of the prompts. One could also refer to files in this way, for example with PED H-W:, which would refer to HELLO-WORLD:SYMB if this was the only file having an H, any number of characters, a hyphen, a W, any number of characters, and any file ending. This saved many keystrokes and gave users a gentle learning curve, from complete and self-explanatory commands like LIST-ALL-FILES down to L-A-F for an advanced user. (The hyphen key on Norwegian keyboards resides where the slash key does on U.S. ones.) Now that Sintran has mostly disappeared as an operating system, there are few references to it. However, a job control or batch processing language named JEC (believed to stand for Job Execution Controller) was available; it could be used to set up batch jobs to compile COBOL programs, etc.
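The hyphen-abbreviation rule can be modeled in a few lines. The sketch below is an illustrative reconstruction of the matching behavior, not actual Sintran code: each hyphen-separated component of the abbreviation is treated as a prefix of the corresponding component of the full name, and the abbreviation resolves only when exactly one candidate matches.

```python
def matches(abbrev: str, full: str) -> bool:
    """Sintran-style abbreviation: component-wise prefix match on hyphens."""
    a, f = abbrev.upper().split("-"), full.upper().split("-")
    return len(a) == len(f) and all(x.startswith(y) for x, y in zip(f, a))

commands = ["LIST-FILES", "LIST-ALL-FILES", "LOGOUT"]
assert [c for c in commands if matches("L-A-F", c)] == ["LIST-ALL-FILES"]
assert [c for c in commands if matches("LI-FI", c)] == ["LIST-FILES"]
```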
https://en.wikipedia.org/wiki/Microsoft%20CryptoAPI
The Microsoft Windows platform-specific Cryptographic Application Programming Interface (also known variously as CryptoAPI, Microsoft Cryptography API, MS-CAPI or simply CAPI) is an application programming interface included with Microsoft Windows operating systems that provides services to enable developers to secure Windows-based applications using cryptography. It is a set of dynamically linked libraries that provides an abstraction layer isolating programmers from the code used to encrypt the data. The Crypto API was first introduced in Windows NT 4.0 and enhanced in subsequent versions. CryptoAPI supports both public-key and symmetric-key cryptography, though persistent symmetric keys are not supported. It includes functionality for encrypting and decrypting data and for authentication using digital certificates. It also includes a cryptographically secure pseudorandom number generator function, CryptGenRandom. CryptoAPI works with a number of CSPs (Cryptographic Service Providers) installed on the machine. CSPs are the modules that do the actual work of encoding and decoding data by performing the cryptographic functions. Vendors of HSMs may supply a CSP which works with their hardware. Cryptography API: Next Generation Windows Vista features an update to the Crypto API known as Cryptography API: Next Generation (CNG). It has better API factoring to allow the same functions to work using a wide range of cryptographic algorithms, and includes a number of newer algorithms that are part of the National Security Agency (NSA) Suite B. It is also flexible, featuring support for plugging custom cryptographic APIs into the CNG runtime. However, CNG Key Storage Providers still do not support symmetric keys. CNG works in both user and kernel mode, and also supports all of the algorithms from the CryptoAPI. The Microsoft provider that implements CNG is housed in Bcrypt.dll. CNG also supports elliptic curve cryptography which, because it uses shorter keys for the
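For illustration only, a rough Python ctypes sketch of calling the legacy CryptoAPI random-number generator mentioned above. The function names and constants are the documented Win32 values, but the error handling is simplified, and new code should prefer CNG (BCryptGenRandom) or a higher-level library.

import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32")
PROV_RSA_FULL = 1
CRYPT_VERIFYCONTEXT = 0xF0000000  # ephemeral context, no key container

def crypt_gen_random(n: int) -> bytes:
    hprov = wintypes.HANDLE()
    if not advapi32.CryptAcquireContextW(ctypes.byref(hprov), None, None,
                                         PROV_RSA_FULL,
                                         wintypes.DWORD(CRYPT_VERIFYCONTEXT)):
        raise ctypes.WinError()
    try:
        buf = (ctypes.c_ubyte * n)()
        if not advapi32.CryptGenRandom(hprov, n, buf):
            raise ctypes.WinError()
        return bytes(buf)
    finally:
        advapi32.CryptReleaseContext(hprov, 0)

print(crypt_gen_random(16).hex())  # 16 random bytes from the default CSP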
https://en.wikipedia.org/wiki/Hardbass
Hardbass or hard bass is a subgenre of pumping house that originated in Saint Petersburg, Russia during the late 1990s, drawing inspiration from bouncy techno, hardstyle, as well as local Russian influences. Hardbass is characterized by its fast tempo (usually 150–175 BPM), donks, distinctive basslines (commonly known as "hard bounce"), distorted sounds, heavy kicks and occasional chants or rapping. In several European countries, so-called "hardbass scenes" have sprung up: events related to the genre at which groups of people dance in public while masked, sometimes with moshing involved. History Late 1990s–mid 2000s: Saint Petersburg, metal shade, drug raves Hardbass first began to emerge in the late 1990s, mainly in the Saint Petersburg electronic dance music underground, when the pumping house genre, built around the bamboo bass, or donk bass (a type of metallic bass synthesizer sound first invented by Klubbheads in 1997), became a staple in local raves. Eventually, party nights dedicated solely to pumping house were held in Saint Petersburg and, to a lesser extent, in Moscow. The most famous venues for pumping raves in Saint Petersburg included those held in the "Rassvet" (Dawn) club and forest raves in a quarry near , an artificial lake not far from Saint Petersburg. Among the DJs kickstarting domestic pumping house production in Russia were DJ Tolstyak, DJ 8088, DJ Yurbanoid, DJ Solovey, DJ Glyuk, and many others. This raving scene was markedly different from its later offshoots. It formed a distinct subculture, mostly catering to the lower- and middle-class youth of Saint Petersburg. Drug use (especially of barbiturates, Xyrem and amphetamines) became prevalent in the scene. To increase the energy of the parties, Saint Petersburg producers and DJs started to increase the BPM of the pumping house they played and produced, eventually reaching 150 BPM and beyond. Saint Petersburg producers would include distinct whistles and other samples
https://en.wikipedia.org/wiki/HubMed
HubMed is an alternative, third-party interface to PubMed, the database of biomedical literature produced by the National Library of Medicine. It transforms data from PubMed and integrates it with data from other sources. Features include relevance-ranked search results, direct citation export, tagging and graphical display of related articles. See also List of academic databases and search engines
https://en.wikipedia.org/wiki/State%20of%20Denial%20%28film%29
State of Denial is a 2003 documentary film about AIDS in Africa, produced and directed by Elaine Epstein. The film highlights the errors of President Mbeki's government, which insisted that there was not enough evidence to show that HIV causes AIDS and refused vital life-saving drugs to its people because of unknown long-term risks. The film follows the stories of HIV-positive Africans and activists as well as their careers, interspersed with the harrowing statistics of the AIDS epidemic in Africa. It features various HIV-positive patients coping with the disease at a time when the use of ARV medicine was strongly discouraged by the South African government. The film captures the desperation and growing discontent of average South Africans infected and affected by the disease. Some of the subjects interviewed make heartbreaking but inspirational statements about AIDS and what living with it is like. After the death of his brother, who succumbed to the disease, a young man is filmed saying the following: For me, it was the most traumatic time in my life because I could see myself in him. You know, he didn't really have to die as helplessly as he did. And not only him, but thousands and thousands of people are dying unnecessarily. It makes me sick. The film also features Zackie Achmat, an HIV-positive AIDS activist and co-founder of the Treatment Action Campaign (TAC), who refused to take ARVs until they were made available to the general public. State of Denial was first shown at the 2003 Sundance Film Festival. It later aired on TV as part of the acclaimed Point of View documentary film series. Four of the subjects interviewed died before the film was released. See also AIDS denialism, a movement challenging the scientific consensus that HIV causes AIDS.
https://en.wikipedia.org/wiki/Uniform-machines%20scheduling
Uniform machine scheduling (also called uniformly-related machine scheduling or related machine scheduling) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. We are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m different machines. The goal is to minimize the makespan, that is, the total time required to execute the schedule. The time that machine i needs in order to process job j is denoted by pi,j. In the general case, the times pi,j are unrelated, and any matrix of positive processing times is possible. In the specific variant called uniform machine scheduling, some machines are uniformly faster than others. This means that, for each machine i, there is a speed factor si, and the run-time of job j on machine i is pi,j = pj / si. In the standard three-field notation for optimal job scheduling problems, the uniform-machine variant is denoted by Q in the first field. For example, the problem denoted by "Q||Cmax" is a uniform machine scheduling problem with no constraints, where the goal is to minimize the maximum completion time. A special case of uniform machine scheduling is identical machine scheduling, in which all machines have the same speed. This variant is denoted by P in the first field. In some variants of the problem, instead of minimizing the maximum completion time, it is desired to minimize the average completion time (averaged over all n jobs); this is denoted by Q||ΣCj. More generally, when some jobs are more important than others, it may be desired to minimize a weighted average of the completion time, where each job has a different weight. This is denoted by Q||ΣwjCj. Algorithms Minimizing the average completion time Minimizing the average completion time can be done in polynomial time: The SPT algorithm (Shortest Processing Time First) sorts the jobs by their length, shortest first, and then assigns them to the processor with the earliest end time so far
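A short Python sketch of the SPT rule just described (illustrative, not library code; shown for the identical-machine case P, with each job placed on the machine that becomes free earliest):

import heapq

def spt_average_completion(jobs, m):
    """SPT rule: process jobs shortest-first, each on the machine
    that becomes free earliest; returns the sum of completion times."""
    machine_free = [(0.0, i) for i in range(m)]  # (time machine frees up, id)
    heapq.heapify(machine_free)
    total_completion = 0.0
    for p in sorted(jobs):                       # shortest processing time first
        free_at, i = heapq.heappop(machine_free)
        finish = free_at + p
        total_completion += finish
        heapq.heappush(machine_free, (finish, i))
    return total_completion

print(spt_average_completion([3, 1, 4, 1, 5], m=2) / 5)  # average completion time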
https://en.wikipedia.org/wiki/Robinson%E2%80%93Dadson%20curves
The Robinson–Dadson curves are one of many sets of equal-loudness contours for the human ear, determined experimentally by D. W. Robinson and R. S. Dadson. Until recently, it was common to see the term 'Fletcher–Munson' used to refer to equal-loudness contours generally, even though the re-determination carried out by Robinson and Dadson in 1956 became the basis for the ISO 226 standard, which was only revised recently. It is now better to use the term 'equal-loudness contours' as the generic term, especially as a recent survey by ISO redefined the curves in a new standard, ISO 226:2003. According to the ISO report, the Robinson–Dadson results were the odd one out, differing more from the current standard than did the Fletcher–Munson curves. It comments that it is fortunate that the 40-phon Fletcher–Munson curve, on which the A-weighting standard was based, turns out to have been in good agreement with modern determinations. The report also comments on the large differences apparent in the low-frequency region, which remain unexplained. Possible explanations are: The equipment used was not properly calibrated. The criteria used for judging equal loudness (which is tricky) differed. Different races actually vary greatly in this respect (possible, and most recent determinations were by the Japanese). Subjects were not properly rested for days in advance, or were exposed to loud noise in travelling to the tests, which tensed the tensor tympani and stapedius muscles controlling low-frequency mechanical coupling. See also A-weighting ITU-R 468 noise weighting
https://en.wikipedia.org/wiki/Fokker%20periodicity%20block
Fokker periodicity blocks are a concept in tuning theory used to mathematically relate musical intervals in just intonation to those in equal tuning. They are named after Adriaan Daniël Fokker. These are included as the primary subset of what Erv Wilson refers to as constant structures, where "each interval occurs always subtended by the same number of steps". The basic idea of Fokker's periodicity blocks is to represent just ratios as points on a lattice, and to find vectors in the lattice which represent very small intervals, known as commas. Treating pitches separated by a comma as equivalent "folds" the lattice, effectively reducing its dimension by one; mathematically, this corresponds to finding the quotient group of the original lattice by the sublattice generated by the commas. For an n-dimensional lattice, identifying n linearly independent commas reduces the dimension of the lattice to zero, meaning that the number of pitches in the lattice is finite; mathematically, its quotient is a finite abelian group. This zero-dimensional set of pitches is a periodicity block. Frequently, it forms a cyclic group, in which case identifying the m pitches of the periodicity block with m-equal tuning gives equal tuning approximations of the just ratios that defined the original lattice. Note that octaves are usually ignored in constructing periodicity blocks (as they are in scale theory generally) because it is assumed that for any pitch in the tuning system, all pitches differing from it by some number of octaves are also available in principle. In other words, all pitches and intervals can be considered as residues modulo octave. This simplification is commonly known as octave equivalence. Definition of periodicity blocks Let an n-dimensional lattice (i.e. integer grid) embedded in n-dimensional space have a numerical value assigned to each of its nodes, such that moving within the lattice in one of the cardinal directions corresponds to a shift in pitch by a
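As a small worked illustration (hypothetical code; the numbers are the standard textbook example): in the two-dimensional lattice of exponents of 3 and 5 (octaves ignored), the syntonic comma 81/80 maps to the vector (4, -1) and the diesis 128/125 to (0, -3); the absolute determinant of the matrix formed by the two commas gives the number of pitches in the periodicity block, here 12, matching 12-tone equal temperament.

# Commas as (exponent of 3, exponent of 5); powers of 2 (octaves) are ignored.
syntonic_comma = (4, -1)   # 81/80   = 2^-4 * 3^4 * 5^-1
diesis         = (0, -3)   # 128/125 = 2^7  * 5^-3

def block_size(c1, c2):
    # |det| of the 2x2 matrix whose rows are the comma vectors:
    # the number of pitches in the resulting periodicity block.
    return abs(c1[0] * c2[1] - c1[1] * c2[0])

print(block_size(syntonic_comma, diesis))  # -> 12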
https://en.wikipedia.org/wiki/Isoscape
An isoscape is a geologic map of isotope distribution. It is a spatially explicit prediction of elemental isotope ratios (δ) that is produced by executing process-level models of elemental isotope fractionation or distribution in a geographic information system (GIS). The word isoscape is derived from isotope landscape and was first coined by Jason B. West. Isoscapes of hydrogen, carbon, oxygen, nitrogen, strontium and sulfur have been used to answer scientific or forensic questions regarding the sources, partitioning, or provenance of natural and synthetic materials or organisms via their isotopic signatures. These include questions about migration, Earth's element cycles, human water use, climate, archaeological reconstructions, forensic science, and pollution. Isoscapes of hydrogen and oxygen isotopes of precipitation, surface water, groundwater, and tap water have been developed to better understand the water cycle at regional to global scales. See also Isotope geochemistry Notes Isotopes Geographic information systems Geologic maps
https://en.wikipedia.org/wiki/Key%20Code%20Qualifier
Key Code Qualifier is an error code returned by a SCSI device. When a SCSI target device returns a check condition in response to a command, the initiator usually then issues a SCSI Request Sense command. This process is part of a SCSI protocol called Contingent Allegiance Condition. The target will respond to the Request Sense command with a set of SCSI sense data which includes three fields giving increasing levels of detail about the error: K - sense key - 4 bits (byte 2 of the Fixed sense data format) C - additional sense code (ASC) - 8 bits (byte 12 of the Fixed sense data format) Q - additional sense code qualifier (ASCQ) - 8 bits (byte 13 of the Fixed sense data format) The initiator can take action based on just the K field, which indicates whether the error is minor or major. However, all three fields are usually logically combined into a 20-bit field called Key Code Qualifier or KCQ. The specification for the target device will define the list of possible KCQ values. In practice there are many KCQ values which are common between different SCSI device types and different SCSI device vendors. Common values are listed below; consult your hardware-specific documentation as well. List of common SCSI KCQs
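A minimal sketch of the bit packing implied by the field widths above (the function name is illustrative; the example encodes the common MEDIUM ERROR / unrecovered read error combination):

def kcq(sense_key: int, asc: int, ascq: int) -> int:
    """Pack the 4-bit sense key, 8-bit ASC and 8-bit ASCQ
    into the combined 20-bit Key Code Qualifier."""
    assert 0 <= sense_key < 0x10 and 0 <= asc < 0x100 and 0 <= ascq < 0x100
    return (sense_key << 16) | (asc << 8) | ascq

# MEDIUM ERROR (key 3) / unrecovered read error (ASC 0x11, ASCQ 0x00):
print(hex(kcq(0x3, 0x11, 0x00)))  # -> 0x31100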
https://en.wikipedia.org/wiki/Quantum%20gyroscope
A quantum gyroscope is a very sensitive device to measure angular rotation based on quantum mechanical principles. The first of these was built by Richard Packard and his colleagues at the University of California, Berkeley. The extreme sensitivity means that theoretically, a larger version could detect effects like minute changes in the rotational rate of the Earth. Principle In 1962, Cambridge University PhD student Brian Josephson hypothesized that an electric current could travel between two superconducting materials even when they were separated by a thin insulating layer. The term Josephson effect has come to refer generically to the different behaviors that occur in any two weakly connected macroscopic quantum systems—systems composed of molecules that all possess identical wavelike properties. Among other things, the Josephson effect means that when two superfluids (zero friction fluids) are connected using a weak link and pressure is applied to the superfluid on one side of a weak link, the fluid will oscillate from one side of the weak link to the other. This phenomenon, known as quantum whistling, occurs when pressure is applied to push a superfluid through a very small hole, somewhat as sound is produced by blowing air through an ordinary whistle. A ring-shaped tube full of superfluid, blocked by a barrier containing a tiny hole, could in principle be used to detect pressure differences caused by changes in rotational motion of the ring, in effect functioning as a sensitive gyroscope. Superfluid whistling was first demonstrated using helium-3, which has the disadvantage of being scarce and expensive, and requiring extremely low temperature (a few thousandths of a Kelvin). Common helium-4, which remains superfluid at 2 Kelvin, is much more practical, but its quantum whistling is too weak to be heard with a single practical-sized hole. This problem was overcome by using barriers with thousands of holes, in effect a chorus of quantum whistles p
https://en.wikipedia.org/wiki/Nettle%20agent
Nettle agents (named after stinging nettles) or urticants are a variety of chemical warfare agents that produce corrosive skin and tissue injury upon contact, resulting in erythema, urticaria, intense itching, and a hive-like rash. Most nettle agents, such as the best known and studied nettle agent, phosgene oxime, are often grouped with the vesicant (blister agent) chemical agents. However, because nettle agents do not cause blisters, they are not true vesicants.
https://en.wikipedia.org/wiki/Mineral%20ascorbates
Mineral ascorbates are a group of salts of ascorbic acid (vitamin C). They are composed of a mineral cation bonded to ascorbate (the anion of ascorbic acid). Production Mineral ascorbates are powders manufactured by reacting ascorbic acid with mineral carbonates in aqueous solutions, venting the carbon dioxide, drying the reaction product, and then milling the dried product to the desired particle size. The choice of the mineral carbonates can be calcium carbonate, potassium carbonate, sodium bicarbonate, magnesium carbonate, or many other mineral forms. Uses Mineral ascorbates are used as dietary supplements and food additives. Ascorbate salts may be better tolerated by the human body than the corresponding weakly acidic ascorbic acid. Ascorbates are highly reactive antioxidants used as food preservatives. Examples of mineral ascorbates are: Sodium ascorbate (E301) Calcium ascorbate (E302) Potassium ascorbate (E303) Magnesium ascorbate See also Ascorbyl palmitate Ascorbyl stearate
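As an illustration of the neutralization described above (standard acid-carbonate stoichiometry, not taken from the source), the reactions for sodium bicarbonate and calcium carbonate can be written as:

\mathrm{C_6H_8O_6 + NaHCO_3 \longrightarrow NaC_6H_7O_6 + H_2O + CO_2}

\mathrm{2\,C_6H_8O_6 + CaCO_3 \longrightarrow Ca(C_6H_7O_6)_2 + H_2O + CO_2}

The carbon dioxide vented off during manufacturing comes directly from the carbonate, as the equations show.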
https://en.wikipedia.org/wiki/HPV%20vaccine
Human papillomavirus (HPV) vaccines are vaccines that prevent infection by certain types of human papillomavirus (HPV). Available HPV vaccines protect against either two, four, or nine types of HPV. All HPV vaccines protect against at least HPV types 16 and 18, which cause the greatest risk of cervical cancer. It is estimated that HPV vaccines may prevent 70% of cervical cancer, 80% of anal cancer, 60% of vaginal cancer, 40% of vulvar cancer, and show more than 90% efficacy in preventing HPV-positive oropharyngeal cancers. They additionally prevent some genital warts, with the quadrivalent and nonavalent vaccines that protect against HPV types HPV-6 and HPV-11 providing greater protection. The World Health Organization (WHO) recommends HPV vaccines as part of routine vaccinations in all countries, along with other prevention measures. The vaccines require two or three doses depending on a person's age and immune status. Vaccinating girls around the ages of nine to thirteen is typically recommended. The vaccines provide protection for at least 5 to 10 years. Cervical cancer screening is still required following vaccination. Vaccinating a large portion of the population may also benefit the unvaccinated. HPV vaccines are very safe. Pain at the site of injection occurs in about 80% of people. Redness and swelling at the site and fever may also occur. No link to Guillain–Barré syndrome has been found. The first HPV vaccine became available in 2006. As of 2022, 125 countries include HPV vaccine in their routine vaccinations for girls, and 47 countries also for boys. It is on the World Health Organization's List of Essential Medicines and prequalified vaccines. Vaccination may be cost effective in the low and middle-income countries. As of 2017, Gardasil 9 is the only HPV vaccine available in the United States, as it provides protection against more HPV types than the earlier approved vaccines (the original Gardasil and Cervarix). Medical uses HPV vaccines are used t
https://en.wikipedia.org/wiki/Causal%20filter
In signal processing, a causal filter is a linear and time-invariant causal system. The word causal indicates that the filter output depends only on past and present inputs. A filter whose output also depends on future inputs is non-causal, whereas a filter whose output depends only on future inputs is anti-causal. Systems (including filters) that are realizable (i.e. that operate in real time) must be causal because such systems cannot act on a future input. In effect that means the output sample that best represents the input at time $t$ comes out slightly later. A common design practice for digital filters is to create a realizable filter by shortening and/or time-shifting a non-causal impulse response. If shortening is necessary, it is often accomplished as the product of the impulse-response with a window function. An example of an anti-causal filter is a maximum phase filter, which can be defined as a stable, anti-causal filter whose inverse is also stable and anti-causal. Example The following definition is a sliding or moving average of input data $s(x)$. A constant factor of $\tfrac{1}{2}$ is omitted for simplicity: $f(x) = \int_{x-1}^{x+1} s(\tau)\,d\tau$, where $x$ could represent a spatial coordinate, as in image processing. But if $x$ represents time $t$, then a moving average defined that way is non-causal (also called non-realizable), because $f(t)$ depends on future inputs, such as $s(t+1)$. A realizable output is $h(t) = f(t-1) = \int_{t-2}^{t} s(\tau)\,d\tau$, which is a delayed version of the non-realizable output. Any linear filter (such as a moving average) can be characterized by a function h(t) called its impulse response. Its output is the convolution $y(t) = (h * s)(t) = \int_{-\infty}^{\infty} h(t-\tau)\,s(\tau)\,d\tau$. In those terms, causality requires $y(t) = \int_{-\infty}^{t} h(t-\tau)\,s(\tau)\,d\tau$, and general equality of these two expressions requires $h(t) = 0$ for all $t < 0$. Characterization of causal filters in the frequency domain Let $h(t)$ be a causal filter with corresponding Fourier transform $H(\omega)$. Define the function $g(t) = \frac{h(t) + h^{*}(-t)}{2}$, which is non-causal. On the other hand, $g(t)$ is Hermitian and, consequently, its Fourier transform $G(\omega)$ is real-valued. We now have the following relat
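A short numerical sketch (illustrative, using NumPy) contrasting the centered, non-causal moving average above with its delayed, realizable counterpart:

import numpy as np

s = np.random.default_rng(0).standard_normal(100)  # input samples s[n]
kernel = np.ones(3) / 3                            # 3-tap averaging kernel

# Centered moving average: y[n] uses s[n+1], so it is non-causal.
centered = np.convolve(s, kernel, mode="same")

# Causal (realizable) version: the same kernel applied to past samples only,
# i.e. the centered output delayed by one sample:
# y[n] = (s[n] + s[n-1] + s[n-2]) / 3
causal = np.convolve(s, kernel)[:len(s)]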
https://en.wikipedia.org/wiki/Quadrature%20filter
In signal processing, a quadrature filter $q(t)$ is the analytic representation of the impulse response $f(t)$ of a real-valued filter: $q(t) = f_{a}(t) = \left(\delta(t) + \frac{i}{\pi t}\right) * f(t)$. If the quadrature filter is applied to a signal $s(t)$, the result is $h(t) = (q * s)(t)$, which implies that $h(t)$ is the analytic representation of $(f * s)(t)$. Since $q$ is an analytic signal, it is either zero or complex-valued. In practice, therefore, $q$ is often implemented as two real-valued filters, which correspond to the real and imaginary parts of the filter, respectively. An ideal quadrature filter cannot have finite support. It has single-sided support, but by choosing the (analog) function carefully, it is possible to design quadrature filters which are localized such that they can be approximated by means of functions of finite support. A digital realization without feedback (FIR) has finite support. Applications This construction simply assembles an analytic signal with a starting point, finally creating a causal signal with finite energy; the two delta distributions perform this operation. This imposes an additional constraint on the filter. Single frequency signals For single-frequency signals (in practice, narrow-bandwidth signals) with frequency $\omega$, the magnitude of the response of a quadrature filter equals the signal's amplitude $A$ times the frequency function of the filter at frequency $\omega$. This property can be useful when the signal $s$ is a narrow-bandwidth signal of unknown frequency. By choosing a suitable frequency function $Q$ of the filter, we may generate known functions of the unknown frequency $\omega$, which can then be estimated. See also Analytic signal Hilbert transform Signal processing
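A brief sketch (illustrative, not from the source) of the two-real-filter realization mentioned above, using SciPy's hilbert function to form the analytic output of an arbitrarily chosen real-valued FIR filter:

import numpy as np
from scipy.signal import hilbert, lfilter

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
s = np.cos(2 * np.pi * 50 * t)      # narrow-band test signal

f = np.ones(11) / 11                # real prototype filter (illustrative FIR)
real_part = lfilter(f, 1.0, s)      # output of the real filter, f * s
analytic = hilbert(real_part)       # analytic representation of f * s
imag_part = analytic.imag           # what the second real filter would produce

envelope = np.abs(analytic)         # amplitude estimate for narrow-band input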
https://en.wikipedia.org/wiki/CrypTool
CrypTool is an open-source project that provides free e-learning software for illustrating cryptographic and cryptanalytic concepts. According to "Hakin9", CrypTool is the most widespread e-learning software in the field of cryptology worldwide. CrypTool implements more than 400 algorithms. Users can adjust these with their own parameters. To introduce users to the field of cryptography, the organization created multiple graphical-interface programs containing online documentation, analytic tools and algorithms. They contain most classical ciphers, as well as modern symmetric and asymmetric cryptography including RSA, ECC, digital signatures, hybrid encryption, homomorphic encryption, and Diffie–Hellman key exchange. Methods from the area of quantum cryptography (like the BB84 key exchange protocol) and the area of post-quantum cryptography (like McEliece, WOTS, the Merkle signature scheme, XMSS, XMSS_MT, and SPHINCS) are implemented. In addition to the algorithms, solvers (analyzers) are included, especially for classical ciphers. Other methods (for instance Huffman code, AES, Keccak, MSS) are visualized. In addition it contains didactical games (like Number Shark, Divider Game, or Zudo-Ku) and interactive tutorials about primes, elementary number theory, and lattice-based cryptography. Development, history and roadmap The development of CrypTool started in 1998. Originally developed by German companies and universities, it has been an open-source project since 2001. More than sixty people worldwide contribute regularly to the project. Contributions in the form of software plugins came from universities or schools in the following towns: Belgrade, Berlin, Bochum, Brisbane, Darmstadt, Dubai, Duisburg-Essen, Eindhoven, Hagenberg, Jena, Kassel, Klagenfurt, Koblenz, London, Madrid, Mannheim, San Jose, Siegen, Utrecht, Warsaw. Currently four versions of CrypTool are maintained and developed: The CrypTool 1 (CT1) software is available in 6 languages (English, German, Polish, Spanish, Serbian, and French)
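This is not CrypTool code, merely a toy Python illustration of the kind of classical cipher plus exhaustive "analyzer" the software demonstrates, using the Caesar cipher:

def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text.upper():
        if "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            out.append(ch)  # leave spaces and punctuation unchanged
    return "".join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)            # 'DWWDFN DW GDZQ'
# Exhaustive "analysis": only 26 keys, so simply try them all.
candidates = [(k, caesar(ciphertext, -k)) for k in range(26)]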
https://en.wikipedia.org/wiki/Arcus%20II%3A%20Silent%20Symphony
Arcus II: Silent Symphony is a computer game developed and released in Japan by Wolf Team. Narumi Kakinouchi, co-creator of Vampire Princess Miyu, was the art director for this game. The music for the game was composed by Masaaki Uno, Motoi Sakuraba, and Yasunori Shiono. See also Arcus Odyssey
https://en.wikipedia.org/wiki/Reciprocal%20determinism
Reciprocal determinism is the theory set forth by psychologist Albert Bandura which states that a person's behavior both influences and is influenced by personal factors and the social environment. Bandura accepts the possibility that an individual's behavior may be conditioned through the use of consequences. At the same time he asserts that a person's behavior (and personal factors, such as cognitive skills or attitudes) can impact the environment. Bandura was able to show this with his Bobo doll experiment. As an example, Bandura's reciprocal determinism could occur when a child is acting out in school. The child doesn't like going to school; therefore, they act out in class. This results in the school's teachers and administrators disliking having the child around. When confronted with the situation, the child admits they hate school and that their peers don't like them. This results in the child acting inappropriately, prompting the administrators, who dislike having the child around, to create a more restrictive environment for such children. Each behavioral and environmental factor feeds back on the child, and so forth, resulting in a continuous interplay at all three levels. Reciprocal determinism is the idea that behavior is controlled or determined by the individual, through cognitive processes, and by the environment, through external social stimulus events. On this basis, reciprocal determinism seeks to explain individual behavior by making subjective thought processes explicit alongside cognitive, environmental, and external social stimulus events. Behavior does not run in one direction only: it is affected by its repercussions, meaning one's behavior is complex and cannot be reduced to individual or environmental factors alone. Behavior consists of environmental and individual parts that interlink and function together. Many studies have shown reciprocal associations between people and their environments over time. Research Research cond
https://en.wikipedia.org/wiki/Pyramid%20%28geometry%29
In geometry, a pyramid is a polyhedron formed by connecting a polygonal base and a point, called the apex. Each base edge and the apex form a triangle, called a lateral face. It is a conic solid with polygonal base. A pyramid with an n-sided base has n + 1 vertices, n + 1 faces, and 2n edges. All pyramids are self-dual. Terminology A right pyramid has its apex directly above the centroid of its base. Nonright pyramids are called oblique pyramids. A regular pyramid has a regular polygon base and is usually implied to be a right pyramid. When unspecified, a pyramid is usually assumed to be a regular square pyramid, like the physical pyramid structures. A triangle-based pyramid is more often called a tetrahedron. Among oblique pyramids, like acute and obtuse triangles, a pyramid can be called acute if its apex is above the interior of the base and obtuse if its apex is above the exterior of the base. A right-angled pyramid has its apex above an edge or vertex of the base. In a tetrahedron these qualifiers change based on which face is considered the base. Pyramids are a class of the prismatoids. Pyramids can be doubled into bipyramids by adding a second offset point on the other side of the base plane. A pyramid cut off by a plane is called a truncated pyramid; if the truncation plane is parallel to the pyramid's base, it is called a frustum. Right pyramids with a regular base A right pyramid with a regular base has isosceles triangle sides, with symmetry Cnv or [1,n], of order 2n. It can be given an extended Schläfli symbol ( ) ∨ {n}, representing a point, ( ), joined (orthogonally offset) to a regular polygon, {n}. A join operation creates a new edge between all pairs of vertices of the two joined figures. The trigonal or triangular pyramid with all equilateral triangle faces becomes the regular tetrahedron, one of the Platonic solids. A lower symmetry case of the triangular pyramid is C3v, which has an equilateral triangle base, and 3 identical isosceles triangle sides. Th
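A tiny sketch (illustrative) of the counts just given, with a check of Euler's formula V - E + F = 2:

def pyramid_counts(n: int):
    """Vertex, edge and face counts for a pyramid over an n-gonal base."""
    v = n + 1      # n base vertices plus the apex
    e = 2 * n      # n base edges plus n lateral edges
    f = n + 1      # n triangular lateral faces plus the base
    assert v - e + f == 2  # Euler's formula for convex polyhedra
    return v, e, f

print(pyramid_counts(4))  # square pyramid -> (5, 8, 5)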
https://en.wikipedia.org/wiki/Molybdenum%20disilicide
Molybdenum disilicide (MoSi2, or molybdenum silicide), an intermetallic compound and a silicide of molybdenum, is a refractory ceramic with primary use in heating elements. It has moderate density, a melting point of 2030 °C, and is electrically conductive. At high temperatures it forms a passivation layer of silicon dioxide, protecting it from further oxidation. The thermal stability of MoSi2, together with its high emissivity, makes this material, alongside WSi2, attractive for applications as a high-emissivity coating in heat shields for atmospheric entry. MoSi2 is a gray metallic-looking material with a tetragonal crystal structure (alpha-modification); its beta-modification is hexagonal and unstable. It is insoluble in most acids but soluble in nitric acid and hydrofluoric acid. While MoSi2 has excellent resistance to oxidation and a high Young's modulus at temperatures above 1000 °C, it is brittle at lower temperatures. Also, above 1200 °C it loses creep resistance. These properties limit its use as a structural material, but this may be offset by using it together with another material as a composite. Molybdenum disilicide and MoSi2-based materials are usually made by sintering. Plasma spraying can be used for producing dense monolithic and composite forms; material produced this way may contain a proportion of β-MoSi2 due to its rapid cooling. Molybdenum disilicide heating elements can be used for temperatures up to 1800 °C, in electric furnaces used in laboratory and production environments for the production of glass, steel, electronics and ceramics, and in the heat treatment of materials. While the elements are brittle, they can operate at high power without aging, and their electrical resistivity does not increase with operation time. Their maximum operating temperature has to be lowered in atmospheres with low oxygen content due to breakdown of the passivation layer. Other ceramic materials used for heating elements include silicon carbide, barium titanate, and lead titanate
https://en.wikipedia.org/wiki/Plugboard
A plugboard or control panel (the term used depends on the application area) is an array of jacks or sockets (often called hubs) into which patch cords can be inserted to complete an electrical circuit. Control panels are sometimes used to direct the operation of unit record equipment, cipher machines, and early computers. Unit record equipment Main article: Unit record equipment The earliest machines were hardwired for specific applications. Control panels were introduced in 1906 for the Hollerith Type 1 Tabulator. Removable control panels were introduced with the Hollerith (IBM) type 3-S tabulator in the 1920s. Applications could then be wired on separate control panels and inserted into tabulators as needed. Removable control panels came to be used in all unit record machines where the machine's use for different applications required rewiring. IBM removable control panels ranged in size from 6 1/4" by 10 3/4" (for machines such as the IBM 077, IBM 550, IBM 514) to roughly one to two feet (300 to 600 mm) on a side, and had a rectangular array of hubs. Plugs at each end of a single-conductor patch cord were inserted into hubs, making a connection between two contacts on the machine when the control panel was placed in the machine, thereby connecting an emitting hub to an accepting or entry hub. For example, in a card duplicator application a card column reading (emitting) hub might be connected to a punch magnet entry hub. It was a relatively simple matter to copy some fields, perhaps to different columns, and ignore other columns by suitable wiring. Tabulator control panels could require dozens of patch cords for some applications. Tabulator functions were implemented with both mechanical and electrical components. Control panels simplified the changing of electrical connections for different applications, but changing most tabulators' use still required mechanical changes. The IBM 407 was the first IBM t
https://en.wikipedia.org/wiki/Reprogramming
In biology, reprogramming refers to erasure and remodeling of epigenetic marks, such as DNA methylation, during mammalian development or in cell culture. Such control is also often associated with alternative covalent modifications of histones. Reprogrammings that are both large scale (10% to 100% of epigenetic marks) and rapid (hours to a few days) occur at three life stages of mammals. Almost 100% of epigenetic marks are reprogrammed in two short periods early in development after fertilization of an ovum by a sperm. In addition, almost 10% of DNA methylations in neurons of the hippocampus can be rapidly altered during formation of a strong fear memory. After fertilization in mammals, DNA methylation patterns are largely erased and then re-established during early embryonic development. Almost all of the methylations from the parents are erased, first during early embryogenesis, and again in gametogenesis, with demethylation and remethylation occurring each time. Demethylation during early embryogenesis occurs in the preimplantation period. After a sperm fertilizes an ovum to form a zygote, rapid DNA demethylation of the paternal DNA and slower demethylation of the maternal DNA occurs until formation of a morula, which has almost no methylation. After the blastocyst is formed, methylation can begin, and with formation of the epiblast a wave of methylation then takes place until the implantation stage of the embryo. Another period of rapid and almost complete demethylation occurs during gametogenesis within the primordial germ cells (PGCs). Other than the PGCs, in the post-implantation stage, methylation patterns in somatic cells are stage- and tissue-specific with changes that presumably define each individual cell type and last stably over a long time. Embryonic development The mouse sperm genome is 80–90% methylated at its CpG sites in DNA, amounting to about 20 million methylated sites. After fertilization, the paternal chromosome is almost compl
https://en.wikipedia.org/wiki/International%20Technology%20Roadmap%20for%20Semiconductors
The International Technology Roadmap for Semiconductors (ITRS) is a set of documents produced by a group of semiconductor industry experts. These experts are representative of the sponsoring organisations, which include the Semiconductor Industry Associations of Taiwan, South Korea, the United States, Europe, Japan, and China. As of 2017, ITRS is no longer being updated. Its successor is the International Roadmap for Devices and Systems. The documents carried the disclaimer: "The ITRS is devised and intended for technology assessment only and is without regard to any commercial considerations pertaining to individual products or equipment". The documents represent the best opinion on the directions of research, with timelines up to about 15 years into the future, for several areas of semiconductor technology. History Constructing an integrated circuit, or any semiconductor device, requires a series of operations—photolithography, etching, metal deposition, and so on. As the industry evolved, each of these operations was typically performed by specialized machines built by a variety of commercial companies. This specialization may potentially make it difficult for the industry to advance, since in many cases it does no good for one company to introduce a new product if the other needed steps are not available around the same time. A technology roadmap can help with this by giving an idea of when a certain capability will be needed. Each supplier can then target this date for their piece of the puzzle. With the progressive externalization of production tools to the suppliers of specialized equipment, participants identified a need for a clear roadmap to anticipate the evolution of the market and to plan and control the technological needs of IC production. For several years, the Semiconductor Industry Association (SIA) gave this responsibility of coordination to the United States, which led to the creation of an American-style roadmap, the National Technology Roadmap for Semiconductors (
https://en.wikipedia.org/wiki/Fallacies%20of%20distributed%20computing
The fallacies of distributed computing are a set of assertions made by L Peter Deutsch and others at Sun Microsystems describing false assumptions that programmers new to distributed applications invariably make. The fallacies The fallacies are The network is reliable; Latency is zero; Bandwidth is infinite; The network is secure; Topology doesn't change; There is one administrator; Transport cost is zero; The network is homogeneous. The effects of the fallacies Software applications are written with little error-handling on networking errors. During a network outage, such applications may stall or infinitely wait for an answer packet, permanently consuming memory or other resources. When the failed network becomes available, those applications may also fail to retry any stalled operations or require a (manual) restart. Ignorance of network latency, and of the packet loss it can cause, induces application- and transport-layer developers to allow unbounded traffic, greatly increasing dropped packets and wasting bandwidth. Ignorance of bandwidth limits on the part of traffic senders can result in bottlenecks. Complacency regarding network security results in being blindsided by malicious users and programs that continually adapt to security measures. Changes in network topology can have effects on both bandwidth and latency issues, and therefore can have similar problems. Multiple administrators, as with subnets for rival companies, may institute conflicting policies of which senders of network traffic must be aware in order to complete their desired paths. The "hidden" costs of building and maintaining a network or subnet are non-negligible and must consequently be noted in budgets to avoid vast shortfalls. If a system assumes a homogeneous network, then it can lead to the same problems that result from the first three fallacies. History The list of fallacies generally came about at Sun Microsystems. L. Peter Deutsch, one of the original Sun "Fel
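A small defensive-coding sketch (illustrative names, Python standard library only) that takes the first two fallacies seriously: bounded timeouts instead of indefinite waits, and retries with exponential backoff instead of assuming a reliable network:

import socket
import time

def fetch_with_retry(host: str, port: int, payload: bytes,
                     attempts: int = 3, timeout: float = 2.0) -> bytes:
    """Never wait forever; retry transient failures with backoff."""
    for attempt in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.settimeout(timeout)   # bound reads too, not just connect
                sock.sendall(payload)
                return sock.recv(4096)
        except OSError:
            if attempt == attempts - 1:
                raise                      # surface the error; don't stall forever
            time.sleep(2 ** attempt)       # exponential backoff: 1 s, 2 s, ...
    raise RuntimeError("unreachable")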
https://en.wikipedia.org/wiki/Architecture%20of%20macOS
The architecture of macOS describes the layers of the operating system that is the culmination of Apple Inc.'s decade-long research and development process to replace the classic Mac OS. After the failures of their previous attempts—Pink, which started as an Apple project but evolved into a joint venture with IBM called Taligent, and Copland, which started in 1994 and was cancelled two years later—Apple began development of Mac OS X, later renamed OS X and then macOS, with the acquisition of NeXT's NeXTSTEP in 1997. Development NeXTSTEP NeXTSTEP used a hybrid kernel that combined the Mach 2.5 kernel developed at Carnegie Mellon University with subsystems from 4.3BSD. NeXTSTEP also introduced a new windowing system based on Display PostScript that was intended to achieve better WYSIWYG systems by using the same language to draw content on monitors as was used to draw content on printers. NeXT also included object-oriented programming tools based on the Objective-C language that they had acquired from Stepstone and a collection of Frameworks (or Kits) that were intended to speed software development. NeXTSTEP originally ran on Motorola's 68k processors, but was later ported to Intel's x86, Hewlett-Packard's PA-RISC and Sun Microsystems' SPARC processors. Later, the developer tools and frameworks were released as OpenStep, a development platform that would run on other operating systems. Rhapsody On February 4, 1997, Apple acquired NeXT and began development of the Rhapsody operating system. Rhapsody built on NeXTSTEP, porting the core system to the PowerPC architecture and adding a redesigned user interface based on the Platinum user interface from Mac OS 8. An emulation layer called Blue Box allowed Mac OS applications to run within an actual instance of the Mac OS, alongside an integrated Java platform. The Objective-C developer tools and Frameworks were referred to as the Yellow Box and also made available separately for Microsoft Windows. The Rhapsody project eventually b
https://en.wikipedia.org/wiki/Sverdrup%20balance
The Sverdrup balance, or Sverdrup relation, is a theoretical relationship between the wind stress exerted on the surface of the open ocean and the vertically integrated meridional (north-south) transport of ocean water. History Aside from the oscillatory motions associated with tidal flow, there are two primary causes of large scale flow in the ocean: (1) thermohaline processes, which induce motion by introducing changes at the surface in temperature and salinity, and therefore in seawater density, and (2) wind forcing. In the 1940s, when Harald Sverdrup was thinking about calculating the gross features of ocean circulation, he chose to consider exclusively the wind stress component of the forcing. As he says in his 1947 paper, in which he presented the Sverdrup relation, this is probably the more important of the two. After making the assumption that frictional dissipation is negligible, Sverdrup obtained the simple result that the meridional mass transport (the Sverdrup transport) is proportional to the curl of the wind stress. This is known as the Sverdrup relation: $\beta V = \hat{z} \cdot \left( \nabla \times \vec{\tau} \right)$. Here, $\beta$ is the rate of change of the Coriolis parameter $f$ with meridional distance; $V$ is the vertically integrated meridional mass transport, including the geostrophic interior mass transport and the Ekman mass transport; $\hat{z}$ is the unit vector in the vertical direction; and $\vec{\tau}$ is the wind stress vector. Physical interpretation Sverdrup balance may be thought of as a consistency relationship for flow which is dominated by the Earth's rotation. Such flow will be characterized by weak rates of spin compared to that of the earth. Any parcel at rest with respect to the surface of the earth must match the spin of the earth underneath it. Looking down on the earth at the north pole, this spin is in a counterclockwise direction, which is defined as positive rotation or vorticity. At the south pole it is in a clockwise direction, corresponding to negative rotation. Thus to move a parcel of fluid from
https://en.wikipedia.org/wiki/Price-to-cash%20flow%20ratio
The price/cash flow ratio (also called price-to-cash flow ratio or P/CF) is a ratio used to compare a company's market value to its cash flow. It is calculated by dividing the company's market capitalization by the company's operating cash flow in the most recent fiscal year (or the most recent four fiscal quarters); or, equivalently, by dividing the per-share stock price by the per-share operating cash flow. In theory, the lower a stock's price/cash flow ratio, the better value that stock is. For example, if the stock price for two companies is the same, say $25/share, and one company has a cash flow of $5/share (25/5 = 5) while the other has a cash flow of $10/share (25/10 = 2.5), then, all else being equal, the company with the higher cash flow (lower ratio, P/CF = 2.5) offers the better value. Cash flow per share is computed as: CFPS = (net income + depreciation + amortization) / common shares outstanding
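A trivial sketch (illustrative values) of the two equivalent computations:

def p_cf_from_totals(market_cap: float, operating_cash_flow: float) -> float:
    # Market value divided by operating cash flow for the period.
    return market_cap / operating_cash_flow

def p_cf_per_share(price: float, net_income: float, depreciation: float,
                   amortization: float, shares_outstanding: float) -> float:
    # Equivalent per-share form, using CFPS as defined above.
    cfps = (net_income + depreciation + amortization) / shares_outstanding
    return price / cfps

# Hypothetical figures: CFPS = (6M + 3M + 1M) / 2M shares = $5/share.
print(p_cf_per_share(25.0, 6e6, 3e6, 1e6, 2e6))  # -> 5.0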
https://en.wikipedia.org/wiki/Ferranti-Packard%206000
The FP-6000 was a second-generation mainframe computer developed and built by Ferranti-Packard, the Canadian division of Ferranti, in the early 1960s. It is particularly notable for supporting multitasking, being one of the first commercial machines to do so. Only six FP-6000s were sold before the computer division of Ferranti-Packard was sold off by Ferranti's UK headquarters in 1963, the FP-6000 becoming the basis for the mid-range machines of the ICT 1900, which sold into the thousands in Europe. Background What was to become the FP-6000 had its genesis in a Royal Canadian Navy project starting in 1949 called DATAR. For DATAR, Ferranti-Packard (then still known as Ferranti Canada) built an experimental computer to share information among ships in a convoy. Although the prototype was a success, the failure rate of the vacuum tubes was a concern to everyone and Ferranti suggested they re-build the machine using transistors instead. DATAR ran out of funds before this conversion could take place, but Ferranti put the experience to good use in a series of one-off transistorized machines. One such example was a cheque sorting system built for the Federal Reserve Bank, itself a modification of a system developed to sort mail for the Canadian Post Office. The developmental series eventually culminated in ReserVec. ReserVec was the first computerized reservation system to enter service when it took over all bookings for Air Canada in 1961. Ferranti initially had high hopes for the machine, thinking that it would be successful in Europe if sold by the UK headquarters' sales staff. As had happened many times in the past, however, the UK computer team suffered from a terminal case of not invented here, and decided it was better if they designed their own instead. Their project was never delivered, and ReserVec withered. Ferranti-Packard was unwilling to simply let the development effort go to waste, and started looking for ways to commercialize the ReserVec hardware into
https://en.wikipedia.org/wiki/Impact%20ionization
Impact ionization is the process in a material by which one energetic charge carrier can lose energy by the creation of other charge carriers. For example, in semiconductors, an electron (or hole) with enough kinetic energy can knock a bound electron out of its bound state (in the valence band) and promote it to a state in the conduction band, creating an electron-hole pair. For carriers to have sufficient kinetic energy, a sufficiently large electric field must be applied, in essence requiring a sufficiently large voltage but not necessarily a large current. If this occurs in a region of high electric field, it can result in avalanche breakdown. This process is exploited in avalanche diodes, by which a small optical signal is amplified before entering an external electronic circuit. In an avalanche photodiode the original charge carrier is created by the absorption of a photon. The impact ionization process is used in modern cosmic dust detectors like the Galileo Dust Detector and the dust analyzers Cassini CDA, Stardust CIDA and the Surface Dust Analyser for the identification of dust impacts and the compositional analysis of cosmic dust particles. In some sense, impact ionization is the reverse process to Auger recombination. Avalanche photodiodes (APDs) are used in optical receivers: the photocurrent is multiplied before the signal reaches the receiver circuitry, which increases the sensitivity of the receiver, since the photocurrent is amplified before it encounters the thermal noise associated with the receiver circuit. See also Multiphoton ionization Avalanche breakdown Avalanche diode Avalanche photodiode
https://en.wikipedia.org/wiki/Central%20venous%20pressure
Central venous pressure (CVP) is the blood pressure in the venae cavae, near the right atrium of the heart. CVP reflects the amount of blood returning to the heart and the ability of the heart to pump the blood back into the arterial system. CVP is often a good approximation of right atrial pressure (RAP), although the two terms are not identical, as a pressure differential can sometimes exist between the venae cavae and the right atrium. CVP and RAP can differ when arterial tone is altered. This can be graphically depicted as changes in the slope of the venous return plotted against right atrial pressure (where central venous pressure increases, but right atrial pressure stays the same; VR = CVP − RAP). CVP has been, and often still is, used as a surrogate for preload, and changes in CVP in response to infusions of intravenous fluid have been used to predict volume-responsiveness (i.e. whether more fluid will improve cardiac output). However, there is increasing evidence that CVP, whether as an absolute value or in terms of changes in response to fluid, does not correlate with ventricular volume (i.e. preload) or volume-responsiveness, and so should not be used to guide intravenous fluid therapy. Nevertheless, CVP monitoring is a useful tool to guide hemodynamic therapy. The cardiopulmonary baroreflex responds to an increase in CVP by decreasing systemic vascular resistance while increasing heart rate and ventricular contractility in dogs. Measurement Normal CVP in patients can be measured from two points of reference: Sternum: 0–14 cm H2O Midaxillary line: 8–15 cm H2O CVP can be measured by connecting the patient's central venous catheter to a special infusion set which is connected to a small diameter water column. If the water column is calibrated properly the height of the column indicates the CVP. In most intensive care units, facilities are available to measure CVP continuously. Normal values vary between 4 and 12 cm H2O. Factors affecting CVP Fac
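A one-line conversion sketch (illustrative; using the standard approximation 1 mmHg ≈ 1.36 cm H2O) for relating the water-column readings above to the mmHg units used elsewhere in hemodynamics:

CM_H2O_PER_MMHG = 1.36  # standard approximate conversion factor

def cm_h2o_to_mmhg(cm_h2o: float) -> float:
    return cm_h2o / CM_H2O_PER_MMHG

# Upper end of the normal midaxillary range quoted above:
print(round(cm_h2o_to_mmhg(15), 1))  # 15 cm H2O -> about 11.0 mmHg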
https://en.wikipedia.org/wiki/C.%20Thomas%20Elliott
Charles Thomas Elliott (known as Tom Elliott), (born 16 January 1939), is a scientist in the fields of narrow gap semiconductor and infrared detector research. Early life Hailing from County Durham, he attended Washington Grammar Technical School. After gaining his Ph.D. he worked at the University of Manchester Career He joined RRE in Malvern, Worcestershire in the late 1960s. In the 1970s he invented the SPRITE detector (Signal PRocessing In The Element) which was also known as the TED (Tom Elliott's Detector). This was a photoconductor device in which the infrared scene was scanned across the detector (made from HgCdTe) at the same rate as the carriers drifted under an applied controlled constant bias current. This device became part of TICM - the standard UK thermal imaging common module used since the 1980s by UK armed forces. Tom Elliott received a Rank Prize in 1982 for this work and was elected a Fellow of the Royal Society in 1988. He was appointed CBE in the 1994 Birthday Honours. He won the Clifford Paterson Medal and Prize in 1997. Tom Elliott also contributed to the development of the semiconductor indium antimonide (InSb) as an infrared detector, magnetic sensor and fast, low voltage transistor material. He was involved in the exploitation of negative luminescence in diode structures. He retired from the successor to RRE, DERA in 1999 and is an honorary professor at Heriot-Watt University. Personal life A conference centre at DERA Malvern (by 2007 QinetiQ) was named 'The Tom Elliott Centre' in his honour when opened by the Princess Royal in 2007. He lives in Malvern. Bibliography Infrared Detectors and Emitters: Materials and Devices, edited by Peter Capper and C T Elliott, Springer (2000) An infrared detector with integrated signal processing, C. T. Elliott, Electron Devices Meeting, 1982 International, Vol. 28 Page(s): 132 - 135 (1982) Uncooled InSb/In1–xAlxSb mid-infrared emitter, T. Ashley, C. T. Elliott, N. T. Gordon, R. S. Hall, A
https://en.wikipedia.org/wiki/Ridable%20miniature%20railway
A ridable miniature railway (US: riding railroad or grand scale railroad) is a large scale, usually ground-level railway that hauls passengers using locomotives that are often models of full-sized railway locomotives (powered by diesel or petrol engines, live steam or electric motors). Overview Typically miniature railways have a rail track gauge between and under , though both larger and smaller gauges are used. At gauges of and less, the track is commonly raised above ground level. Flat cars are arranged with foot boards so that driver and passengers sit astride the track. The track is often multi-gauged, to accommodate , , and sometimes gauge locomotives. The smaller gauges of miniature railway track can also be portable and is generally / gauge on raised track or as / on ground level. Typically portable track is used to carry passengers at temporary events such as fêtes and summer fairs. Typically miniature lines are operated by not for profit organisations - often model engineering societies - though some are entirely in private grounds and others operate commercially. There are many national organisations representing and providing guidance on miniature railway operations including the Australian Association of Live Steamers and Southern Federation of Model Engineering Societies. Distinctions between model, miniature, and minimum-gauge railway A 'model railway' is one where the gauge is too small for people to ride on the trains. Due to the use of mixed gauge tracks, passengers may ride on a miniature railway which shares the same gauge as, and is pulled by, a large model locomotive on a smaller model gauge, although this is rare. 'Miniature railways' are railways that can be ridden by people and are used for pleasure/as a pastime for their constructors and passengers. In the USA, miniature railways are also known as 'riding railroads' or 'grand scale railroads'. The track gauges recognised as being miniature railways vary by country, but in the UK
https://en.wikipedia.org/wiki/Single-ended%20triode
A single-ended triode (SET) is a vacuum tube electronic amplifier that uses a single triode to produce an output, in contrast to a push-pull amplifier which uses a pair of devices with antiphase inputs to generate an output with the wanted signals added and the distortion components subtracted. Single-ended amplifiers normally operate in Class A; push-pull amplifiers can also operate in Classes AB or B without excessive net distortion, due to cancellation. The term single-ended triode amplifier is mainly used for output stages of audio power amplifiers. The phrase directly heated triode single-ended triode amplifier (abbreviated to DHT SET) is used when directly heated triodes are used. There are also single-ended tetrode, beam tetrode/beam power tube/kinkless tetrode, and pentode amplifiers with the same functionality and similar circuitry; e.g. this Mullard design. Audio power amplifiers A typical triode audio power amplifier will have a driver that provides voltage gain, coupled to a triode (like the 2A3 and 300B) or a pentode or kinkless tetrode such as the EL34 or KT88 connected as a triode, connected to the loudspeaker through an audio transformer in a common cathode arrangement. The triode is biased to Class A operation by applying a suitable negative bias voltage to its input control grid (see diagram), or by raising the cathode potential with biasing components. In a traditional SET amp, the direct current of the output triode (from 30 mA for a triode-strapped 6V6 to 250 mA for a 6C33C) flows continuously through the primary winding of the output transformer. This requires inserting a gap in the transformer core to prevent core saturation by the DC current; adding a gap decreases primary inductance and limits bass response; the inductance and bass response can be restored by using a larger transformer than would be needed if the DC were not present. An alternative schematic, the parafeed amplifier, solves the bandwidth problem by blocking direct current from the output transformer (which does not need to be
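A back-of-envelope sketch of cathode-bias component selection for the "raising the cathode potential" option mentioned above; the 2A3 operating point used here is a datasheet-typical assumption, not taken from the source:

# Assumed, datasheet-typical 2A3 single-ended operating point:
plate_voltage = 250.0        # V (anode to cathode)
grid_bias = 45.0             # V of negative grid bias required
quiescent_current = 0.060    # A of continuous plate current

# Cathode bias: raise the cathode instead of applying a negative grid supply.
r_k = grid_bias / quiescent_current   # Ohm's law -> 750 ohms
p_k = grid_bias * quiescent_current   # power dissipated -> 2.7 W

print(f"R_k = {r_k:.0f} ohm, use a resistor rated well above {p_k:.1f} W")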
https://en.wikipedia.org/wiki/European%20Food%20Safety%20Authority
The European Food Safety Authority (EFSA) is the agency of the European Union (EU) that provides independent scientific advice and communicates on existing and emerging risks associated with the food chain. EFSA was established in February 2002 and is based in Parma, Italy; for 2021 it had a budget of €118.6 million and a total staff of 542. The work of EFSA covers all matters with a direct or indirect impact on food and feed safety, including animal health and welfare, plant protection and plant health, and nutrition. EFSA supports the European Commission, the European Parliament and EU member states in taking effective and timely risk management decisions that ensure the protection of the health of European consumers and the safety of the food and feed chain. EFSA also communicates to the public in an open and transparent way on all matters within its remit. Structure Based on a regulation of 2002, the EFSA is composed of four bodies: Management Board Executive Director Advisory Forum Scientific Committee and Scientific Panels The Management Board sets the budget, approves work programmes, and is responsible for ensuring that EFSA co-operates successfully with partner organisations across the EU and beyond. It is composed of fourteen members appointed by the Council of the European Union in consultation with the European Parliament from a list drawn up by the European Commission, plus one representative of the European Commission. The Executive Director is EFSA's legal representative and is responsible for day-to-day administration, drafting and implementing work programmes, and implementing other decisions adopted by the Management Board. They are appointed by the Management Board. The Advisory Forum advises the Executive Director, in particular in drafting a proposal for EFSA's work programmes. It is composed of representatives of national bodies responsible for risk assessment in the Member States, with observers from Norway, Iceland, Switzerland
https://en.wikipedia.org/wiki/Nightmares%20in%20the%20Sky
Nightmares in the Sky: Gargoyles and Grotesques is a coffee table book about architectural gargoyles and grotesques, photographed by f-stop Fitzgerald with accompanying text by Stephen King, and published in 1988. An excerpt was published in the September 1988 issue of Penthouse. Some of the images in the book were used as textures in the video games Doom and Doom II. Reception Kirkus Reviews found that some of King's text occasionally took a "teen stance", but that it "evokes the weight and brooding presence" of gargoyles, arriving at a possible explanation of their purpose by quoting King: "venting the waste material of our own hidden fears". However, it was the stark photographs by f-stop Fitzgerald that truly stood out to the reviewer.
https://en.wikipedia.org/wiki/Mint%20sauce
Mint sauce is a green sauce originating in the United Kingdom, made from finely chopped spearmint (Mentha spicata) leaves soaked in vinegar, with a small amount of sugar. Lime juice is sometimes added. The sauce, based on mint and vinegar, has a rather thin consistency and is flecked with chopped leaves of the herb. In British and Irish cuisine it is often served as a condiment for roast lamb or, in some areas, mushy peas. It is often purchased ready-made, being easy to find in British food shops. A popular alternative is mint jelly, which is thicker in consistency and sweeter than mint sauce. Similar herb-based green sauces were common throughout Medieval Europe, with the use of mint being more common in French and Italian cuisine of the period than in that of the English; however, they became less common and mostly died out as Europe entered the Modern Era. Variations Mint chutney is a mint-based sauce served with Indian snacks and breakfast items such as idli and dhokla. It is made from ground fresh mint leaves with a variety of ingredients such as cilantro, green chili, lemon juice (in the northern parts of India) or tamarind (in southern India), salt, fried Bengal gram and, optionally, curd. In Tunisia, a similar sauce is made from dried mint and can be served with a méchoui or a mulukhiyah, or used as a base for a vinaigrette. Dried and fresh mint are also part of several dishes of Tunisian cuisine. Mint sauces may include fruits in their preparation, such as raspberries. See also List of sauces Chutney in South Asian cuisine may be made with mint