https://en.wikipedia.org/wiki/Glory%20hole
A glory hole (also spelled gloryhole and glory-hole) is a hole in a wall or partition, often between public lavatory cubicles or sex video arcade booths and lounges, for people to engage in sexual activity or observe the person on the opposite side. Glory holes are especially associated with gay male culture, and anal or oral sex, and come from a history of persecution. The partition maintains anonymity and a sense of reassurance that the people involved would not be identified and possibly arrested. However, they are not exclusively favoured by gay people, and have become more commonly acknowledged as a fetish for heterosexual and bisexual individuals. In more recent years, public glory holes have faded in popularity in many countries, though some gay websites offer directories of the remaining glory holes. Glory holes are sometimes the topic of erotic literature, and pornographic films have been devoted to the uses of glory holes. Motivations Numerous motivations can be ascribed to the use and eroticism of glory holes. As a wall separates the two participants, they have no contact except for a penis and a mouth, hand, anus, or vagina. Almost total anonymity is maintained, as no other attributes are taken into consideration. The glory hole is seen as an erotic oasis in gay subcultures around the world; people's motives, experiences and attributions of value in its use are varied. In light of the ongoing HIV pandemic, many gay men re-evaluated their sexual and erotic desires and practices. It is suggested by queer theorist Tim Dean that glory holes allow for a physical barrier, which may be an extension of psychological barriers, in which there is internalized homophobia (a result of many societies' reluctance towards discussing LGBT practices and people). For some gay men, a glory hole serves to depersonalize their partner altogether as a disembodied object of sexual desire. History The first documented instance of a glory hole was in a 1707 court case known
https://en.wikipedia.org/wiki/Open%20Mobile%20Terminal%20Platform
The Open Mobile Terminal Platform (OMTP) was a forum created by mobile network operators to discuss standards with manufacturers of mobile phones and other mobile devices. During its lifetime, the OMTP included manufacturers such as Huawei, LG Electronics, Motorola, Nokia, Samsung and Sony Ericsson. Membership OMTP was originally set up by leading mobile operators. At the time it transitioned into the Wholesale Applications Community at the end of June 2010, there were nine full members: AT&T, Deutsche Telekom AG, KT, Orange, Smart Communications, Telecom Italia, Telefónica, Telenor and Vodafone. OMTP also had the support of two sponsors, Ericsson and Nokia. Activities OMTP's recommendations did much to standardise mobile operators' terminal requirements, helping to defragment them and reduce the number of optional features. OMTP's focus was on gathering and driving mobile terminal requirements, and publishing its findings in its Recommendations. OMTP was technology neutral, with its recommendations intended for deployment across the range of technology platforms, operating systems (OS) and middleware layers. OMTP is perhaps best known for its work in the field of mobile security, but its work encompassed the full range of mobile device capabilities. OMTP published recommendations in 2007 and early 2008 on areas such as Positioning Enablers, Advanced Device Management, IMS and Mobile VoIP. Later, the Advanced Trusted Environment: OMTP TR1 and its supporting document, 'Security Threats on Embedded Consumer Devices', were released, with the endorsement of the UK Home Secretary, Jacqui Smith. OMTP also published a requirements document addressing support for advanced SIM cards. This document also defines advanced profiles for Smart Card Web Server, High Speed Protocol, Mobile TV and Contactless. OMTP also made significant progress in promoting the use of micro-USB as a standard connector for data and power. A full
https://en.wikipedia.org/wiki/Spatial%E2%80%93temporal%20reasoning
Spatial–temporal reasoning is an area of artificial intelligence that draws from the fields of computer science, cognitive science, and cognitive psychology. The theoretical goal—on the cognitive side—involves representing and reasoning about spatial–temporal knowledge in the mind. The applied goal—on the computing side—involves developing high-level control systems of automata for navigating and understanding time and space. Influence from cognitive psychology A convergent result in cognitive psychology is that the connection relation is the first spatial relation that human babies acquire, followed by understanding orientation relations and distance relations. Internal relations among the three kinds of spatial relations can be computationally and systematically explained within the theory of cognitive prism as follows: (1) the connection relation is primitive; (2) an orientation relation is a distance comparison relation: you being in front of me can be interpreted as you being nearer to my front side than to my other sides; (3) a distance relation is a connection relation using a third object: you being one meter away from me can be interpreted as a one-meter-long object connected with you and me simultaneously. Fragmentary representations of temporal calculi Without addressing internal relations among spatial relations, AI researchers contributed many fragmentary representations. Examples of temporal calculi include Allen's interval algebra and Vilain and Kautz's point algebra. The most prominent spatial calculi are mereotopological calculi, Frank's cardinal direction calculus, Freksa's double cross calculus, Egenhofer and Franzosa's 4- and 9-intersection calculi, Ligozat's flip-flop calculus, various region connection calculi (RCC), and the Oriented Point Relation Algebra. Recently, spatio-temporal calculi have been designed that combine spatial and temporal information. For example, the spatiotemporal constraint calculus (STCC) by Gerevini and Nebel combines Allen's inte
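The temporal calculi named above can be made concrete with a small example. The following sketch (Python, not from the article) computes which of the 13 basic relations of Allen's interval algebra holds between two intervals given as (start, end) pairs; the relation names follow common usage, though notation varies across the literature.

```python
def allen_relation(a, b):
    """Return the basic Allen relation holding between intervals a and b.

    Intervals are (start, end) pairs with start < end; exactly one of the
    13 basic relations holds for any such pair.
    """
    a0, a1 = a
    b0, b1 = b
    if a1 < b0: return "before"
    if b1 < a0: return "after"
    if a1 == b0: return "meets"
    if b1 == a0: return "met-by"
    if a0 == b0 and a1 == b1: return "equals"
    if a0 == b0: return "starts" if a1 < b1 else "started-by"
    if a1 == b1: return "finishes" if a0 > b0 else "finished-by"
    if b0 < a0 and a1 < b1: return "during"
    if a0 < b0 and b1 < a1: return "contains"
    return "overlaps" if a0 < b0 else "overlapped-by"

print(allen_relation((1, 3), (3, 6)))  # meets
print(allen_relation((1, 5), (3, 6)))  # overlaps
```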
https://en.wikipedia.org/wiki/Blue%20moon%20%28ice%20cream%29
Blue moon is an ice cream flavor with bright blue coloring, available in the Upper Midwest of the United States. Multiple cities in the region claim to be the originator, with the popular theories including Milwaukee, Wisconsin, and Ludington, Michigan. The Chicago Tribune has described the ice cream as "Smurf-blue, marshmallow-sweet". Blue moon ice cream is one of the flavors that make up Superman ice cream in certain states. Blue moon is found mainly in the Midwest—Wisconsin and Michigan in particular. It is found less frequently in other U.S. states, though it has been sold as far east as Altoona, Pennsylvania. Kilwins also provides this flavor in various states. Characteristics The varieties of blue moon vary in both color and flavor. Many aficionados of each variety of blue moon claim that their variety is the "real one", the "original", etc. Some dairies that make blue moon keep their ingredients a secret, adding to the mystique. Varieties that have distinct berry or vanilla flavor notes are sometimes theorized to have been originally flavored with castoreum. Similar international flavors A similar flavor has been sold in both Italy and Malta under the name Puffo, which is Italian for 'Smurf', as well as in Germany under names that translate to 'Smurf' and 'angel blue'. In France it is called Schtroumpf and in Spain Pitufo (both meaning 'Smurf'). In Slovenia it is sold under a name meaning 'blue sky', and in Argentina under one meaning 'sky cream'. In Poland, this variety of ice cream is sold under a name meaning 'Smurf-like' and is usually bubble-gum flavored.
https://en.wikipedia.org/wiki/Perpetual%20beta
Perpetual beta is the keeping of software or a system at the beta development stage for an extended or indefinite period of time. It is often used by developers when they continue to release new features that might not be fully tested. Perpetual beta software is not recommended for mission critical machines. However, many operational systems find this to be a much more rapid and agile approach to development, staging, and deployment. Definition Perpetual beta has come to be associated with the development and release of a service in which constant updates are the foundation for the habitability or usability of a service. According to publisher and open source advocate Tim O'Reilly: Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, "release early and release often", in fact has morphed into an even more radical position, "the perpetual beta", in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It's no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a "Beta" logo for years at a time. Used in the larger conversation of what defines Web 2.0, O'Reilly described the concept of perpetual beta as part of a customized Internet environment with these applications as distinguishing characteristics:
- Services, not packaged software, with cost-effective scalability
- Control over unique, hard-to-recreate data sources that get richer as more people use them
- Trusting users as co-developers
- Harnessing collective intelligence
- Leveraging the long tail through customer self-service
- Software above the level of a single device
- Lightweight user interfaces, development models, and business models.
However, the Internet and the development of open source programs have changed the role of the (e
https://en.wikipedia.org/wiki/Pwpaw
PWPAW is a projector augmented wave (PAW) code for electronic structure calculations. It is a free software package, distributed under the copyleft GNU General Public License. It is a plane wave implementation of the projector augmented wave (PAW) method developed by Peter E. Blöchl for electronic structure calculations within the framework of density functional theory. In addition to the self-consistent calculation of the electronic structure of a periodic solid, the program has a number of other capabilities, including structural geometry optimization and molecular dynamics simulations within the Born–Oppenheimer approximation. See also
- Atompaw, a software package for electron configuration calculations
- EXCITING
- Bloch's theorem
https://en.wikipedia.org/wiki/FD%20Trinitron/WEGA
FD Trinitron/WEGA is Sony's flat version of the Trinitron picture tube. This technology was also used in computer monitors bearing the Trinitron mark. The FD Trinitron used computer-controlled feedback systems to ensure sharp focus across a flat screen. The FD Trinitron reduces the amount of glare on the screen by reflecting much less ambient light than spherical or vertically flat CRTs. Flat screens also increase the total image viewing angle and have less geometric distortion in comparison to curved screens. The FD Trinitron line featured key standard improvements over prior Trinitron designs, including a finer pitch aperture grille, an electron gun with a greater focal length for corner focus, and an improved deflection yoke for color convergence. Sony would go on to receive an Emmy Award from the National Academy of Television Arts and Sciences for its development of flat screen CRT technology. Initially introduced on their 32 and 36 inch models in 1998, the new tubes were offered in a variety of resolutions for different uses. The basic WEGA models supported normal 480i signals, but a larger version offered 16:9 aspect ratios. The technology was quickly applied to the entire Trinitron range, from 13 to 40 inches, along with high resolution versions: Hi-Scan and Super Fine Pitch. With the introduction of the FD Trinitron, Sony also introduced a new industrial style, leaving the charcoal-colored sets introduced in the 1980s for a new silver styling. By 2001, the FD Trinitron WEGA series had become the top-selling television model in the United States. By 2003, over 40 million sets had been sold worldwide. As the television market shifted towards LCD technology, Sony eventually ended production of the Trinitron in Japan in 2004, and in the US in 2006. Sony would continue to sell the Trinitron in China, India, and regions of South America using tubes delivered from their Singapore plant. Worldwide production ended when the Singapore and Malaysia plants ceased production at the end of
https://en.wikipedia.org/wiki/IBM%20SAN%20Volume%20Controller
The IBM SAN Volume Controller (SVC) is a block storage virtualization appliance that belongs to the IBM System Storage product family. SVC implements an indirection, or "virtualization", layer in a Fibre Channel storage area network (SAN). Architecture The IBM 2145 SAN Volume Controller (SVC) is an inline virtualization or "gateway" device. It logically sits between hosts and storage arrays, presenting itself to hosts as the storage provider (target) and presenting itself to storage arrays as one big host. SVC is physically attached to one or several SAN fabrics. The virtualization approach allows for non-disruptive replacement of any part of the storage infrastructure, including the SVC devices themselves. It also aims at simplifying compatibility requirements in strongly heterogeneous server and storage landscapes. All advanced functions are therefore implemented in the virtualization layer, which allows switching storage array vendors without impact. Finally, spreading an SVC installation across two or more sites (stretched clustering) enables basic disaster protection paired with continuous availability. SVC nodes are always clustered, with a minimum of 2 and a maximum of 8 nodes, and linear scalability. Nodes are rack-mounted appliances derived from IBM System x servers, protected by redundant power supplies and integrated batteries. Earlier models featured external battery-backed power supplies. Each node has Fibre Channel ports simultaneously used for incoming, outgoing, and intracluster data traffic. Hosts may also be attached via FCoE and iSCSI Gbit Ethernet ports. Intracluster communication includes maintaining read/write cache integrity, sharing status information, and forwarding reads and writes to any port. These ports must be zoned together. Write cache is protected by mirroring within a pair of SVC nodes, called an I/O group. Virtualized resources (storage volumes presented to hosts) are distributed across I/O groups to improve performance. Volum
https://en.wikipedia.org/wiki/List%20of%20accelerators%20in%20particle%20physics
A list of particle accelerators used for particle physics experiments. Some early particle accelerators that more properly did nuclear physics, but existed prior to the separation of particle physics from that field, are also included. Although a modern accelerator complex usually has several stages of accelerators, only accelerators whose output has been used directly for experiments are listed. Early accelerators These all used single beams with fixed targets. Their experiments tended to be very brief, inexpensive, and unnamed. Cyclotrons The magnetic pole pieces and return yoke from the 60-inch cyclotron were later moved to UC Davis and incorporated into a 76-inch isochronous cyclotron, which is still in use today. Other early accelerator types Synchrotrons Fixed-target accelerators More modern accelerators that were also run in fixed target mode; often, they will also have been run as colliders, or accelerated particles for use in subsequently built colliders. High intensity hadron accelerators (Meson and neutron sources) Electron and low intensity hadron accelerators Colliders Electron–positron colliders Hadron colliders Electron–proton colliders Light sources Hypothetical accelerators Besides the real accelerators listed above, there are hypothetical accelerators often used as illustrative examples or optimistic projects by particle physicists. Eloisatron (Eurasiatic Long Intersecting Storage Accelerator) was a project of INFN headed by Antonio Zichichi at the Ettore Majorana Foundation and Centre for Scientific Culture in Erice, Sicily. The center-of-mass energy was planned to be 200 TeV, and the size was planned to span parts of Europe and Asia. Fermitron was an accelerator sketched by Enrico Fermi on a notepad in the 1940s proposing an accelerator in stable orbit around the Earth. The undulator radiation collider is a design for an accelerator with a center-of-mass energy around the GUT scale. It would be light-weeks across a
https://en.wikipedia.org/wiki/Nuclear%20Instrumentation%20Module
The Nuclear Instrumentation Module (NIM) standard defines mechanical and electrical specifications for electronics modules used in experimental particle and nuclear physics. The concept of modules in electronic systems offers enormous advantages in flexibility, interchange of instruments, reduced design effort, and ease in updating and maintaining the instruments. The NIM standard is one of the first (and perhaps the simplest) such standards. First defined by the U.S. Atomic Energy Commission's report TID-20893 in 1968–1969, NIM was most recently revised in 1990 (DOE/ER-0457T). It provides a common footprint for electronic modules (amplifiers, ADCs, DACs, CFDs, etc.), which plug into a larger chassis (NIM crate, or NIM bin). The crate must supply ±12 and ±24 volts DC power to the modules via a backplane; the standard also specifies ±6 V DC and 220 V or 110 V AC pins, but not all NIM bins provide them. Mechanically, NIM modules must have a minimum standard width of 1.35 in (34 mm), a maximum faceplate height of 8.7 in (221 mm) and depth of 9.7 in (246 mm). They can, however, also be built in multiples of this standard width, that is, double-width, triple-width etc. The NIM standard also specifies cabling, connectors, impedances and levels for logic signals. The fast logic standard (commonly known as NIM logic) is a current-based logic, with −16 mA into 50 ohms (−0.8 volts) for "true" and 0 mA for "false". Apart from the above-mentioned mechanical and electrical specifications, designers are free to lay out their modules in any way desired, thus allowing for new developments and improvements in efficiency or aesthetics. NIM modules cannot communicate with each other through the crate backplane; this is a feature of later standards such as CAMAC and VMEbus. As a consequence, NIM-based ADC modules are nowadays uncommon in nuclear and particle physics. NIM is still widely used for amplifiers, discriminators, nuclear
https://en.wikipedia.org/wiki/Mating%20of%20yeast
The yeast Saccharomyces cerevisiae is a simple single-celled eukaryote with both a diploid and haploid mode of existence. The mating of yeast only occurs between haploids, which can be either the a or α (alpha) mating type and thus display simple sexual differentiation. Mating type is determined by a single locus, MAT, which in turn governs the sexual behaviour of both haploid and diploid cells. Through a form of genetic recombination, haploid yeast can switch mating type as often as every cell cycle. Mating type and the life cycle of Saccharomyces cerevisiae S. cerevisiae (yeast) can stably exist as either a diploid or a haploid. Both haploid and diploid yeast cells reproduce by mitosis, with daughter cells budding off of mother cells. Haploid cells are capable of mating with other haploid cells of the opposite mating type (an a cell can only mate with an α cell, and vice versa) to produce a stable diploid cell. Diploid cells, usually upon facing stressful conditions such as nutrient depletion, can undergo meiosis to produce four haploid spores: two a spores and two α spores. Differences between a and α cells a cells produce 'a-factor', a mating pheromone which signals the presence of an a cell to neighbouring α cells. a cells respond to α-factor, the α cell mating pheromone, by growing a projection (known as a shmoo, due to its distinctive shape resembling the Al Capp cartoon character Shmoo) towards the source of α-factor. Similarly, α cells produce α-factor, and respond to a-factor by growing a projection towards the source of the pheromone. The response of haploid cells only to the mating pheromones of the opposite mating type allows mating between a and α cells, but not between cells of the same mating type. These phenotypic differences between a and α cells are due to a different set of genes being actively transcribed and repressed in cells of the two mating types. a cells activate genes which produce a-factor and produce a cell surface receptor (Ste2) w
https://en.wikipedia.org/wiki/IBM%207070
IBM 7070 is a decimal-architecture intermediate data-processing system that was introduced by IBM in 1958. It was part of the IBM 700/7000 series, and was based on discrete transistors rather than the vacuum tubes of the 1950s. It was the company's first transistorized stored-program computer. The 7070 was expected to be a "common successor to at least the 650 and the 705". The 7070 was not designed to be instruction set compatible with the 650, as the latter had a second jump address in every instruction to allow optimal use of the drum, something unnecessary and wasteful in a computer with random-access core memory. As a result, a simulator was needed to run old programs. The 7070 was also marketed as an IBM 705 upgrade, but failed miserably due to its incompatibilities, including an inability to fully represent the 705 character set, forcing IBM to quickly introduce the IBM 7080 as a "transistorized IBM 705" that was fully compatible. The 7070 series stored data in words containing 10 decimal digits plus a sign. Digits were encoded using a two-out-of-five code. Characters were represented by a two-digit code. The machine shipped with 5,000 or 9,990 words of core memory, and the CPU speed was about 27 KIPS. A typical system was leased for $17,400 per month or could be purchased for $813,000. The 7070 weighed . Later systems in this series were the faster IBM 7074, introduced in July 1960, and the IBM 7072 (1961), a less expensive system using the slower 7330 instead of 729 tape drives. The 7074 could be expanded to 30K words. They were eventually replaced by the System/360, announced in 1964. Hardware implementation The 7070 was implemented using both CTDL (in the logic and control sections) and current-mode logic (in the timing storage and core storage sections) on Standard Modular System (SMS) cards. A total of about 30,000 alloy-junction germanium transistors and 22,000 germanium diodes are used, on approximately 14,000 SMS cards. Input/Output in original an
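The two-out-of-five encoding mentioned above is easy to illustrate: there are exactly C(5,2) = 10 ways to set two bits out of five, one per decimal digit, and any single-bit error breaks the two-set-bits invariant. The digit-to-pattern assignment below is hypothetical (the 7070's actual bit weighting may differ); only the error-detecting property is the point.

```python
from itertools import combinations

# Exactly C(5,2) = 10 five-bit patterns have two bits set: one per digit.
# This particular digit assignment is made up for illustration.
CODES = {digit: pair for digit, pair in enumerate(combinations(range(5), 2))}

def is_valid(word: int) -> bool:
    """A 5-bit word is a valid code word iff exactly two bits are set,
    so any single-bit error is detected."""
    return bin(word & 0b11111).count("1") == 2

encoded = sum(1 << bit for bit in CODES[7])
assert is_valid(encoded)
assert not is_valid(encoded ^ 0b00001)  # one flipped bit is caught
```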
https://en.wikipedia.org/wiki/Corpora%20amylacea
Corpora amylacea (CA) (from the Latin meaning "starch-like bodies"; also known as wasteosomes) is a general term for small hyaline masses found in the prostate gland, nervous system, lung, and sometimes in other organs of the body. Corpora amylacea increase in number and size with advancing age, although this increase varies from person to person. In the nervous system, they are particularly abundant in certain neurodegenerative diseases. While their significance is largely unknown, some researchers have suggested that corpora amylacea play a role in the clearance of debris. The composition and appearance of corpora amylacea can differ in different organs. In the prostate gland, where they are also known as prostatic concretions, corpora amylacea are rich in aggregated protein that has many of the features of amyloid, whereas those in the central nervous system are generally smaller and do not contain amyloid. Corpora amylacea in the central nervous system occur in the foot processes of astrocytes, and they are usually present beneath the pia mater, in the tissues surrounding the ventricles, and around blood vessels. They have been proposed to be part of a family of polyglucosan diseases, in which polymers of glucose collect to form abnormal structures known as polyglucosan bodies. Polyglucosan bodies bearing at least partial resemblance to human corpora amylacea have been observed in various nonhuman species.
https://en.wikipedia.org/wiki/DREAM%20%28software%29
The Distributed Real-time Embedded Analysis Method (DREAM) is a platform-independent open-source tool for the verification and analysis of distributed real-time and embedded (DRE) systems which focuses on the practical application of formal verification and timing analysis to real-time middleware. DREAM supports formal verification of scheduling based on task timed automata using the Uppaal model checker and the Verimag IF toolset, as well as the random testing of real-time components using a discrete event simulator. DREAM is developed at the Center for Embedded Computer Systems at the University of California, Irvine, in cooperation with researchers from Vanderbilt University. External links
- DREAM website
- Center for Embedded Computer Systems
- Uppaal website
- IF toolset website
https://en.wikipedia.org/wiki/Extramedullary%20hematopoiesis
Extramedullary hematopoiesis (EMH or sometimes EH) refers to hematopoiesis occurring outside of the medulla of the bone (bone marrow). It can be physiologic or pathologic. Physiologic EMH occurs during embryonic and fetal development; during this time the main sites of fetal hematopoiesis are the liver and the spleen. Pathologic EMH can occur during adulthood when physiologic hematopoiesis cannot proceed properly in the bone marrow and the hematopoietic stem cells (HSC) have to migrate to other tissues in order to continue with the formation of blood cellular components. Pathologic EMH can be caused by myelofibrosis, thalassemias, or other disorders of the hematopoietic system. Physiologic EMH During fetal development, hematopoiesis occurs mainly in the fetal liver and in the spleen, followed by localization to the bone marrow. Hematopoiesis also takes place in many other tissues or organs such as the yolk sac, the aorta-gonad mesonephros (AGM) region, and lymph nodes. During development, vertebrates go through a primitive and a definitive phase of hematopoiesis. The lungs also play a role in platelet production in adults. Primitive hematopoiesis Primitive hematopoiesis occurs in the yolk sac during early embryonic development. It is characterized by the production of primitive nucleated erythroid cells, which are thought to originate from endothelial cells or hemangioblasts, which are capable of forming both endothelium and primitive blood cells. The main purpose of these cells is to facilitate tissue oxygenation to support rapid embryonic growth. This primitive phase is transitory, and the cells that are produced express embryonic hemoglobins (the HBZ and HBE1 genes produce the alpha and beta chains, respectively), aren't pluripotent, and aren't capable of self-renewal. Definitive hematopoiesis Definitive hematopoiesis differs from the primitive phase through the production of hematopoietic stem cells. The formation of these cells occurs in t
https://en.wikipedia.org/wiki/Positive%20and%20negative%20parts
In mathematics, the positive part of a real or extended real-valued function f is defined by the formula $f^+(x) = \max(f(x), 0)$. Intuitively, the graph of $f^+$ is obtained by taking the graph of $f$, chopping off the part under the x-axis, and letting $f^+$ take the value zero there. Similarly, the negative part of f is defined as $f^-(x) = \max(-f(x), 0) = -\min(f(x), 0)$. Note that both f+ and f− are non-negative functions. A peculiarity of terminology is that the 'negative part' is neither negative nor a part (like the imaginary part of a complex number is neither imaginary nor a part). The function f can be expressed in terms of f+ and f− as $f = f^+ - f^-$. Also note that $|f| = f^+ + f^-$. Using these two equations one may express the positive and negative parts as $f^+ = \frac{|f| + f}{2}$ and $f^- = \frac{|f| - f}{2}$. Another representation, using the Iverson bracket, is $f^+ = f\,[f > 0]$ and $f^- = -f\,[f < 0]$. One may define the positive and negative part of any function with values in a linearly ordered group. The unit ramp function is the positive part of the identity function. Measure-theoretic properties Given a measurable space (X,Σ), an extended real-valued function f is measurable if and only if its positive and negative parts are. Therefore, if such a function f is measurable, so is its absolute value |f|, being the sum of two measurable functions. The converse, though, does not necessarily hold: for example, taking f as $f = 1_V - \tfrac{1}{2}$, where V is a Vitali set, it is clear that f is not measurable, but its absolute value is, being a constant function. The positive part and negative part of a function are used to define the Lebesgue integral for a real-valued function. Analogously to this decomposition of a function, one may decompose a signed measure into positive and negative parts — see the Hahn decomposition theorem. See also Rectifier (neural networks) Even and odd functions Real and imaginary parts
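A quick numeric check of the identities above, as a minimal sketch using NumPy applied pointwise to sampled values of f:

```python
import numpy as np

f = np.array([3.0, -1.5, 0.0, 2.0, -4.0])        # sampled values of f
f_plus = np.maximum(f, 0)                         # positive part
f_minus = np.maximum(-f, 0)                       # negative part

assert np.allclose(f, f_plus - f_minus)           # f = f+ - f-
assert np.allclose(np.abs(f), f_plus + f_minus)   # |f| = f+ + f-
assert np.allclose(f_plus, (np.abs(f) + f) / 2)   # f+ = (|f| + f)/2
```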
https://en.wikipedia.org/wiki/Ballistic%20conduction
In mesoscopic physics, ballistic conduction (ballistic transport) is the unimpeded flow (or transport) of charge carriers (usually electrons), or energy-carrying particles, over relatively long distances in a material. In general, the resistivity of a material exists because an electron, while moving inside a medium, is scattered by impurities, defects, thermal fluctuations of ions in a crystalline solid, or, generally, by any freely-moving atom/molecule composing a gas or liquid. Without scattering, electrons simply obey Newton's second law of motion at non-relativistic speeds. The mean free path of a particle can be described as the average length that the particle can travel freely, i.e., before a collision, which could change its momentum. The mean free path can be increased by reducing the number of impurities in a crystal or by lowering its temperature. Ballistic transport is observed when the mean free path of the particle is (much) longer than the dimension of the medium through which the particle travels. The particle alters its motion only upon collision with the walls. In the case of a wire suspended in air/vacuum, the surface of the wire plays the role of the box, reflecting the electrons and preventing them from exiting toward the empty space/open air. This is because an energy cost (the work function) must be paid to extract an electron from the medium. Ballistic conduction is typically observed in quasi-1D structures, such as carbon nanotubes or silicon nanowires, because of extreme size quantization effects in these materials. Ballistic conduction is not limited to electrons (or holes) but can also apply to phonons. It is theoretically possible for ballistic conduction to be extended to other quasi-particles, but this has not been experimentally verified. For a specific example, ballistic transport can be observed in a metal nanowire: due to the small size of the wire (nanometer scale, 10⁻⁹ m) and the mean free path which can be longer th
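The verbal definition of the mean free path above corresponds to the standard kinetic relation, not written out in the excerpt, so stated here under the usual conventions:

```latex
% Mean free path: carrier velocity times mean time between scattering
% events (for electrons in a metal, the Fermi velocity v_F is relevant).
\ell = v_F \, \tau
% Ballistic regime: mean free path much longer than the conductor length L.
\ell \gg L
```

Here $\tau$ is the mean time between momentum-changing collisions and $L$ is the length of the conductor; both symbols are conventions assumed here rather than notation from the article.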
https://en.wikipedia.org/wiki/Levenshtein%20automaton
In computer science, a Levenshtein automaton for a string w and a number n is a finite-state automaton that can recognize the set of all strings whose Levenshtein distance from w is at most n. That is, a string x is in the formal language recognized by the Levenshtein automaton if and only if x can be transformed into w by at most n single-character insertions, deletions, and substitutions. Applications Levenshtein automata may be used for spelling correction, by finding words in a given dictionary that are close to a misspelled word. In this application, once a word is identified as being misspelled, its Levenshtein automaton may be constructed, and then applied to all of the words in the dictionary to determine which ones are close to the misspelled word. If the dictionary is stored in compressed form as a trie, the time for this algorithm (after the automaton has been constructed) is proportional to the number of nodes in the trie, significantly faster than using dynamic programming to compute the Levenshtein distance separately for each dictionary word. It is also possible to find words in a regular language, rather than a finite dictionary, that are close to a given target word, by computing the Levenshtein automaton for the word, and then using a Cartesian product construction to combine it with an automaton for the regular language, giving an automaton for the intersection language. Alternatively, rather than using the product construction, both the Levenshtein automaton and the automaton for the given regular language may be traversed simultaneously using a backtracking algorithm. Levenshtein automata are used in Lucene for full-text searches that can return relevant documents even if the query is misspelled. Construction For any fixed constant n, the Levenshtein automaton for w and n may be constructed in time O(|w|). Mitankin studies a variant of this construction called the universal Levenshtein automaton, determined only by a numeric parameter n, th
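As a concrete illustration of the automaton's behaviour, the sketch below (Python; a direct NFA simulation written for this note, not the construction from the literature) tracks states (i, e) meaning "i characters of w consumed, e edits spent" while reading an input string, instead of determinizing first:

```python
def levenshtein_automaton_accepts(w: str, n: int, x: str) -> bool:
    """True iff the Levenshtein distance between x and w is at most n.

    Simulates the NFA whose states (i, e) mean: i characters of w
    consumed, e edits used so far.
    """
    def close(states):
        # Deleting a character of w consumes no input but costs one edit.
        stack = list(states)
        while stack:
            i, e = stack.pop()
            if i < len(w) and e < n and (i + 1, e + 1) not in states:
                states.add((i + 1, e + 1))
                stack.append((i + 1, e + 1))
        return states

    states = close({(0, 0)})
    for c in x:
        nxt = set()
        for i, e in states:
            if i < len(w) and w[i] == c:
                nxt.add((i + 1, e))          # exact match
            if e < n:
                nxt.add((i, e + 1))          # insertion into w
                if i < len(w):
                    nxt.add((i + 1, e + 1))  # substitution
        states = close(nxt)
        if not states:
            return False
    return any(i == len(w) for i, _ in states)

print(levenshtein_automaton_accepts("food", 1, "fod"))   # True
print(levenshtein_automaton_accepts("food", 1, "fxxd"))  # False
```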
https://en.wikipedia.org/wiki/Configuration%20state%20function
In quantum chemistry, a configuration state function (CSF) is a symmetry-adapted linear combination of Slater determinants. A CSF must not be confused with a configuration. In general, one configuration gives rise to several CSFs; all have the same total quantum numbers for spin and spatial parts but differ in their intermediate couplings. Definition A configuration state function (CSF) is a symmetry-adapted linear combination of Slater determinants. It is constructed to have the same quantum numbers as the wavefunction, $\Psi$, of the system being studied. In the method of configuration interaction, the wavefunction can be expressed as a linear combination of CSFs, that is, in the form $\Psi = \sum_k c_k \psi_k$, where $\{\psi_k\}$ denotes the set of CSFs. The coefficients, $c_k$, are found by using the expansion of $\Psi$ to compute a Hamiltonian matrix. When this is diagonalized, the eigenvectors are chosen as the expansion coefficients. CSFs rather than just Slater determinants can also be used as a basis in multi-configurational self-consistent field computations. In atomic structure, a CSF is an eigenstate of the square of the angular momentum operator, $\hat{L}^2$, the z-projection of angular momentum, $\hat{L}_z$, the square of the spin operator, $\hat{S}^2$, and the z-projection of the spin operator, $\hat{S}_z$. In linear molecules, $\hat{L}^2$ does not commute with the Hamiltonian for the system and therefore CSFs are not eigenstates of $\hat{L}^2$. However, the z-projection of angular momentum is still a good quantum number and CSFs are constructed to be eigenstates of $\hat{L}_z$, $\hat{S}^2$, and $\hat{S}_z$. In non-linear (which implies polyatomic) molecules, neither $\hat{L}^2$ nor $\hat{L}_z$ commutes with the Hamiltonian. The CSFs are constructed to have the spatial transformation properties of one of the irreducible representations of the point group to which the nuclear framework belongs. This is because the Hamiltonian operator transforms in the same way. $\hat{S}^2$ and $\hat{S}_z$ are still valid quantum numbers and CSFs are built to be eigenfunctions of these operators. From configurations to configuration state functions CSFs are however d
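To make the diagonalization step concrete, a toy CI calculation in a basis of three CSFs might look like the following sketch; the Hamiltonian matrix entries are invented, whereas in a real calculation they come from integrals over the CSFs.

```python
import numpy as np

# Hypothetical 3x3 Hamiltonian matrix in a CSF basis (values invented).
H = np.array([[-1.50,  0.10,  0.02],
              [ 0.10, -0.90,  0.05],
              [ 0.02,  0.05, -0.40]])

energies, coeffs = np.linalg.eigh(H)  # diagonalize the symmetric matrix
# Column k of `coeffs` holds the CI expansion coefficients c_k of state k.
print("ground-state energy:", energies[0])
print("ground-state CSF coefficients:", coeffs[:, 0])
```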
https://en.wikipedia.org/wiki/Iodine-125
Iodine-125 (125I) is a radioisotope of iodine which has uses in biological assays, nuclear medicine imaging, and in radiation therapy as brachytherapy to treat a number of conditions, including prostate cancer, uveal melanomas, and brain tumors. It is the second longest-lived radioisotope of iodine, after iodine-129. Its half-life is 59.49 days and it decays by electron capture to an excited state of tellurium-125. This state is not the metastable 125mTe, but rather a lower energy state that decays immediately by gamma decay with a maximum energy of 35 keV. Some of the excess energy of the excited 125Te may be carried off by internal-conversion electrons (also at about 35 keV), by x-rays (from electron bremsstrahlung), and also by a total of 21 Auger electrons, which are produced at the low energies of 50 to 500 electron volts. Eventually, stable ground state 125Te is produced as the final decay product. In medical applications, the internal conversion and Auger electrons cause little damage outside the cell which contains the isotope atom. The x-rays and gamma rays are of low enough energy to deliver a higher radiation dose selectively to nearby tissues, in "permanent" brachytherapy where the isotope capsules are left in place (125I competes with palladium-103 in such uses). Because of its relatively long half-life and emission of low-energy photons which can be detected by gamma-counter crystal detectors, 125I is a preferred isotope for tagging antibodies in radioimmunoassay and other gamma-counting procedures involving proteins outside the body. The same properties of the isotope make it useful for brachytherapy, and for certain nuclear medicine scanning procedures, in which it is attached to proteins (albumin or fibrinogen), and where a half-life longer than that provided by 123I is required for diagnostic or lab tests lasting several days. Iodine-125 can be used in scanning/imaging the thyroid, but iodine-123 is preferred for this purpose, due to better radiation penetr
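The practical point about the 59.49-day half-life can be checked with the usual exponential-decay relation; a back-of-the-envelope sketch, with numbers only as exact as the half-life quoted above:

```python
from math import exp, log

T_HALF_DAYS = 59.49  # half-life of iodine-125, from the article

def remaining_fraction(days: float) -> float:
    """Fraction of the original 125I still undecayed after `days` days."""
    return exp(-log(2) * days / T_HALF_DAYS)

# A lab test lasting a week still has ~92% of the radiolabel left,
# which is why 125I suits multi-day procedures better than 123I
# (half-life about 13 hours).
print(round(remaining_fraction(7), 3))  # ~0.922
```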
https://en.wikipedia.org/wiki/Riparian%20zone
A riparian zone or riparian area is the interface between land and a river or stream. In some regions, the terms riparian woodland, riparian forest, riparian buffer zone, riparian corridor, and riparian strip are used to characterize a riparian zone. The word riparian is derived from Latin ripa, meaning "river bank". Riparian is also the proper nomenclature for one of the terrestrial biomes of the Earth. Plant habitats and communities along the river margins and banks are called riparian vegetation, characterized by hydrophilic plants. Riparian zones are important in ecology, environmental resource management, and civil engineering because of their role in soil conservation, their habitat biodiversity, and the influence they have on terrestrial and semiaquatic fauna as well as aquatic ecosystems, including grasslands, woodlands, wetlands, and even non-vegetative areas. Riparian zones may be natural or engineered for soil stabilization or restoration. These zones are important natural biofilters, protecting aquatic environments from excessive sedimentation, polluted surface runoff, and erosion. They supply shelter and food for many aquatic animals and shade that limits stream temperature change. When riparian zones are damaged by construction, agriculture or silviculture, biological restoration can take place, usually by human intervention in erosion control and revegetation. If the area adjacent to a watercourse has standing water or saturated soil for as long as a season, it is normally termed a wetland because of its hydric soil characteristics. Because of their prominent role in supporting a diversity of species, riparian zones are often the subject of national protection in a biodiversity action plan. These are also known as a "plant or vegetation waste buffer". Research shows that riparian zones are instrumental in water quality improvement for both surface runoff and water flowing into streams through subsurface or groundwater flow. Riparian zones can play
https://en.wikipedia.org/wiki/Network%20calculus
Network calculus is "a set of mathematical results which give insights into man-made systems such as concurrent programs, digital circuits and communication networks." Network calculus gives a theoretical framework for analysing performance guarantees in computer networks. As traffic flows through a network it is subject to constraints imposed by the system components, for example: link capacity, traffic shapers (leaky buckets), congestion control, and background traffic. These constraints can be expressed and analysed with network calculus methods. Constraint curves can be combined using convolution under min-plus algebra. Network calculus can also be used to express traffic arrival and departure functions as well as service curves. The calculus uses "alternate algebras ... to transform complex non-linear network systems into analytically tractable linear systems." There are currently two branches of network calculus: one handling deterministic bounds, and one handling stochastic bounds. System modelling Modelling flow and server In network calculus, a flow is modelled as a cumulative function $A$, where $A(t)$ represents the amount of data (number of bits, for example) sent by the flow in the interval $[0, t)$. Such functions are non-negative and non-decreasing. The time domain is often the set of non-negative reals. A server can be a link, a scheduler, a traffic shaper, or a whole network. It is simply modelled as a relation between some arrival cumulative curve $A$ and some departure cumulative curve $D$. It is required that $D \le A$, to model the fact that the departure of some data can not occur before its arrival. Modelling backlog and delay Given some arrival and departure curves $A$ and $D$, the backlog at any instant $t$, denoted $b(t)$, can be defined as the difference between $A(t)$ and $D(t)$: $b(t) = A(t) - D(t)$. The delay at $t$, denoted $d(t)$, is defined as the minimal amount of time $\tau$ such that the departure function has reached the arrival function: $d(t) = \inf\{\tau \geq 0 : D(t + \tau) \geq A(t)\}$. When considering the whole flow, the supremum of these values is used. In general, the fl
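A minimal discrete-time sketch of these definitions (Python, with cumulative curves sampled at integer instants; the curves and the rate-latency service curve below are invented for illustration):

```python
def min_plus_conv(f, g):
    """Min-plus convolution (f ⊗ g)(t) = min over 0<=s<=t of f(s) + g(t-s)."""
    T = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(T)]

def backlog(A, D):
    """Backlog b(t) = A(t) - D(t) at each instant."""
    return [a - d for a, d in zip(A, D)]

def worst_case_delay(A, D):
    """sup over t of the minimal d with D(t + d) >= A(t).
    Assumes D catches up with A within the sampled horizon."""
    return max(
        next(i - t for i in range(t, len(D)) if D[i] >= a)
        for t, a in enumerate(A)
    )

A = [0, 3, 6, 7, 8, 8, 8]  # arrival cumulative curve (bits)
D = [0, 1, 3, 5, 7, 8, 8]  # departure cumulative curve (bits)
print(backlog(A, D))           # [0, 2, 3, 2, 1, 0, 0]
print(worst_case_delay(A, D))  # 2

# For a server offering a rate-latency service curve beta (rate 2, latency 1),
# the departure curve is guaranteed to stay above the convolution A ⊗ beta.
beta = [max(0, 2 * (t - 1)) for t in range(7)]
print(min_plus_conv(A, beta))  # [0, 0, 2, 4, 6, 8, 8] -- D is above this
```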
https://en.wikipedia.org/wiki/Simple%20Grid%20Protocol
Simple Grid Protocol is a free open source grid computing package. Developed and maintained by Brendan Kosowski, the package includes the protocol and software tools needed to get a computational grid up and running on Linux and BSD. Coded in SBCL (Steel Bank Common Lisp), Simple Grid Protocol allows computer programs to utilize the unused CPU resources of other computers on a network or the Internet. As of version 1.2, Simple Grid Protocol can execute multiple programming threads on multiple computers concurrently. Custom multi-threading functions (utilizing operating system threads) for Linux and BSD allow multi-threading on single-thread SBCL implementations. Originally coded in CLISP, version 1.2 included the change to SBCL coding. BSD operating systems supported include FreeBSD, NetBSD, OpenBSD and DragonFly BSD. An optional XML interface allows any XML-capable programming language to send Lisp programs to the grid for execution. External links
- Simple Grid Protocol home page
https://en.wikipedia.org/wiki/Group%20signature
A group signature scheme is a method for allowing a member of a group to anonymously sign a message on behalf of the group. The concept was first introduced by David Chaum and Eugene van Heyst in 1991. For example, a group signature scheme could be used by an employee of a large company where it is sufficient for a verifier to know a message was signed by an employee, but not which particular employee signed it. Another application is for keycard access to restricted areas where it is inappropriate to track individual employees' movements, but necessary to secure areas to only employees in the group. Essential to a group signature scheme is a group manager, who is in charge of adding group members and has the ability to reveal the original signer in the event of disputes. In some systems the responsibilities of adding members and revoking signature anonymity are separated and given to a membership manager and revocation manager respectively. Many schemes have been proposed; however, all should follow these basic requirements:
- Soundness and completeness: Valid signatures by group members always verify correctly, and invalid signatures always fail verification.
- Unforgeable: Only members of the group can create valid group signatures.
- Anonymity: Given a message and its signature, the identity of the individual signer cannot be determined without the group manager's secret key.
- Traceability: Given any valid signature, the group manager should be able to trace which user issued the signature. (This and the previous requirement imply that only the group manager can break users' anonymity.)
- Unlinkability: Given two messages and their signatures, we cannot tell if the signatures were from the same signer or not.
- No framing: Even if all other group members (and the managers) collude, they cannot forge a signature for a non-participating group member.
- Unforgeable tracing verification: The revocation manager cannot falsely accuse a signer of creating a signature he did not create. Co
https://en.wikipedia.org/wiki/Cleanroom%20software%20engineering
The cleanroom software engineering process is a software development process intended to produce software with a certifiable level of reliability. The central principles are software development based on formal methods, incremental implementation under statistical quality control, and statistically sound testing. History The cleanroom process was originally developed by Harlan Mills and several of his colleagues including Alan Hevner at IBM. The cleanroom process first saw use in the mid to late 1980s. Demonstration projects within the military began in the early 1990s. Recent work on the cleanroom process has examined fusing cleanroom with the automated verification capabilities provided by specifications expressed in CSP. Philosophy The focus of the cleanroom process is on defect prevention, rather than defect removal. The name "cleanroom" was chosen to evoke the cleanrooms used in the electronics industry to prevent the introduction of defects during the fabrication of semiconductors. Central principles The basic principles of the cleanroom process are:
- Software development based on formal methods: Software tool support based on some mathematical formalism includes model checking, process algebras, and Petri nets. The Box Structure Method might be one such means of specifying and designing a software product. Verification that the design correctly implements the specification is performed through team review, often with software tool support.
- Incremental implementation under statistical quality control: Cleanroom development uses an iterative approach, in which the product is developed in increments that gradually increase the implemented functionality. The quality of each increment is measured against pre-established standards to verify that the development process is proceeding acceptably. A failure to meet quality standards results in the cessation of testing for the current increment, and a return to the design phase.
- Statistically sound testing: Softw
https://en.wikipedia.org/wiki/Agent%20Communications%20Language
Agent Communication Language (ACL), proposed by the Foundation for Intelligent Physical Agents (FIPA), is a standard language for agent communications. Knowledge Query and Manipulation Language (KQML) is another proposed standard. The most popular ACLs are: FIPA-ACL (by the Foundation for Intelligent Physical Agents, a standardization consortium) and KQML (Knowledge Query and Manipulation Language). Both rely on speech act theory developed by Searle in the 1960s and enhanced by Winograd and Flores in the 1970s. They define a set of performatives, also called Communicative Acts, and their meaning (e.g. ask-one). The content of the performative is not standardized, but varies from system to system. To make agents understand each other they have to not only speak the same language, but also share a common ontology. An ontology is a part of the agent's knowledge base that describes what kind of things an agent can deal with and how they are related to each other. Examples of frameworks that implement a standard agent communication language (FIPA-ACL) include FIPA-OS and Jade.
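For flavour, here is roughly what a FIPA-ACL performative looks like in FIPA's s-expression transport syntax; the agent names, content expression, and ontology below are invented for illustration, and the exact field layout should be checked against the FIPA specifications.

```python
# A hypothetical FIPA-ACL "inform" message; the field names (performative,
# :sender, :receiver, :content, :language, :ontology) follow the FIPA ACL
# message structure, but all values here are made up.
acl_message = """
(inform
  :sender   (agent-identifier :name weather-agent@platform1)
  :receiver (set (agent-identifier :name user-agent@platform1))
  :content  "(weather today raining)"
  :language fipa-sl
  :ontology weather-ontology)
"""
print(acl_message)
```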
https://en.wikipedia.org/wiki/Secondary%20succession
Secondary succession is one of the two main forms of ecological succession of plant life. As opposed to primary succession, secondary succession is a process started by an event (e.g. forest fire, harvesting, hurricane, etc.) that reduces an already established ecosystem (e.g. a forest or a wheat field) to a smaller population of species; as such, secondary succession occurs on preexisting soil, whereas primary succession usually occurs in a place lacking soil. Many factors can affect secondary succession, such as trophic interaction, initial composition, and competition-colonization trade-offs. The factors that control the increase in abundance of a species during succession may be determined mainly by seed production and dispersal; microclimate; landscape structure (habitat patch size and distance to outside seed sources); and bulk density, pH, and soil texture (sand and clay). Secondary succession is the ecological succession that occurs after the initial succession has been disrupted and some plants and animals still exist. It is usually faster than primary succession, as soil is already present, and seeds, roots, and the underground vegetative organs of plants may still survive in the soil. Examples Imperata Imperata grasslands are caused by human activities such as logging, forest clearing for shifting cultivation, agriculture and grazing, and also by frequent fires. The latter is a frequent result of human interference. However, when not maintained by frequent fires and human disturbances, they regenerate naturally and speedily to secondary young forest. In the succession of Imperata grassland (for example in the Samboja Lestari area), Imperata cylindrica has the highest coverage at first, but it becomes less dominant from the fourth year onwards. While Imperata decreases, the percentage of shrubs and young trees clearly increases with time. In the burned plots, Melastoma malabathricum, Eupatorium inulaefolium, Ficus sp., and Vitex pinnata strongly increase with
https://en.wikipedia.org/wiki/Alexander%20Aitken
Alexander Craig "Alec" Aitken (1 April 1895 – 3 November 1967) was one of New Zealand's most eminent mathematicians. In a 1935 paper he introduced the concept of generalized least squares, along with now standard vector/matrix notation for the linear regression model. Another influential paper co-authored with his student Harold Silverstone established the lower bound on the variance of an estimator, now known as Cramér–Rao bound. He was elected to the Royal Society of Literature for his World War I memoir, Gallipoli to the Somme. Life and work Aitken was born on 1 April 1895 in Dunedin, the eldest of the seven children of Elizabeth Towers and William Aitken. He was of Scottish descent, his grandfather having emigrated from Lanarkshire in 1868. His mother was from Wolverhampton. He was educated at Otago Boys' High School in Dunedin (1908–13) where he was school dux and won the Thomas Baker Calculus Scholarship in his last year at school. He saw active service during World War I enlisting in April 1915 with the New Zealand Expeditionary Force, and serving in Gallipoli from November 1915, in Egypt, and at the Western Front. He was seriously wounded at the Somme. He spent several months in hospital in Chelsea before being invalided out of the army and shipped home to New Zealand in March 1917. Resuming his studies Aitken graduated with an MA degree from the University of Otago in 1920, then worked as a schoolmaster at Otago Boys' High School from 1920 to 1923. Aitken studied for a doctorate (PhD) at the University of Edinburgh, in Scotland, under Edmund Taylor Whittaker where his dissertation, "Smoothing of Data", was considered so impressive that he was awarded a DSc degree in 1925. Aitken's impact at the university had been so great that he had been elected a Fellow of the Royal Society of Edinburgh (FRSE) the year before the award of his degree, upon the proposal of Sir Edmund Whittaker, Sir Charles Galton Darwin, Edward Copson and David Gibb. Aitken was awar
https://en.wikipedia.org/wiki/Polignac%27s%20conjecture
In number theory, Polignac's conjecture was made by Alphonse de Polignac in 1849 and states: For any positive even number n, there are infinitely many prime gaps of size n. In other words: There are infinitely many cases of two consecutive prime numbers with difference n. Although the conjecture has not yet been proven or disproven for any given value of n, in 2013 an important breakthrough was made by Zhang Yitang, who proved that there are infinitely many prime gaps of size n for some value of n < 70,000,000. Later that year, James Maynard announced a related breakthrough which proved that there are infinitely many prime gaps of some size less than or equal to 600. As of April 14, 2014, one year after Zhang's announcement, according to the Polymath project wiki, n has been reduced to 246. Further, assuming the Elliott–Halberstam conjecture and its generalized form, the Polymath project wiki states that n has been reduced to 12 and 6, respectively. For n = 2, it is the twin prime conjecture. For n = 4, it says there are infinitely many cousin primes (p, p + 4). For n = 6, it says there are infinitely many sexy primes (p, p + 6) with no prime between p and p + 6. Dickson's conjecture generalizes Polignac's conjecture to cover all prime constellations. Conjectured density Let $\pi_n(x)$ for even n be the number of prime gaps of size n below x. The first Hardy–Littlewood conjecture says the asymptotic density is of form $\pi_n(x) \sim 2 C_n \frac{x}{(\ln x)^2}$, where $C_n$ is a function of n, and $\sim$ means that the quotient of the two expressions tends to 1 as x approaches infinity. $C_2$ is the twin prime constant $C_2 = \prod_{p \ge 3} \frac{p(p-2)}{(p-1)^2} \approx 0.66016$, where the product extends over all prime numbers p ≥ 3. $C_n$ is $C_2$ multiplied by a number which depends on the odd prime factors q of n: $C_n = C_2 \prod_{q \mid n,\ q \ge 3} \frac{q-1}{q-2}$. For example, $C_4 = C_2$ and $C_6 = 2 C_2$. Twin primes have the same conjectured density as cousin primes, and half that of sexy primes. Note that each odd prime factor q of n increases the conjectured density compared to twin primes by a factor of $\frac{q-1}{q-2}$. A heuristic argument follows. It r
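The conjectured densities can be compared against actual counts with a few lines of code. The sketch below tallies π_n(x) for consecutive-prime gaps below x; per the Hardy–Littlewood form above, one expects π₂ ≈ π₄ and π₆ ≈ 2π₂, roughly, for large x.

```python
from collections import Counter

def primes_below(x: int) -> list[int]:
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * x
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def gap_counts(x: int) -> Counter:
    """pi_n(x): number of gaps of size n between consecutive primes below x."""
    ps = primes_below(x)
    return Counter(q - p for p, q in zip(ps, ps[1:]))

c = gap_counts(1_000_000)
print(c[2], c[4], c[6])  # twin-style, cousin-style, and sexy-style gaps
```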
https://en.wikipedia.org/wiki/Alpha%20roll
The alpha roll is a dog training technique that is considered outdated by many modern-day dog trainers. The theory behind the training method is that dogs are hierarchical animals. The technique is used to teach the dog that the trainer or owner of the dog is the pack leader (alpha animal): when a dog misbehaves, it is pinned on its back and held in that position, sometimes by the throat. History The alpha roll was first popularized by the Monks of New Skete, in the 1978 book How To Be Your Dog's Best Friend. However, in the 2002 second edition of the book, the monks recanted and strongly discouraged the technique, describing it as "too risky and demanding for the average dog owner." Although the 1978 book is widely regarded as a classic in dog training literature and highly recommended for people trying to better understand their dog, the alpha roll is now highly controversial among animal behaviorists because the theory of canine dominance has since been disproved. In the original context, the alpha roll was meant to be used only in the most serious cases. The theory behind the alpha roll is based on a research study of captive wolves kept in an area too small for their numbers and composed of members that would not be found together in a wild pack. These conditions resulted in increased numbers of conflicts that scientists today know are not typical of wolves living in the wild. Behaviors seen in wolves (specifically the alpha roll) living in atypical social groups and crowded conditions do not translate to domestic dog training, especially because using the technique can be harmful to both the handler and the dog. Effects It has been argued by some that a dog will only forcibly flip another onto its back during a serious fight where the intent may be to kill the opponent. The name "alpha roll" is considered to be a misnomer by some wolf researchers because the practice when used as a behavioral correction bears little relation to the natur
https://en.wikipedia.org/wiki/Charles%20Darwin%20Research%20Station
Charles Darwin Research Station (CDRS) (Estación Científica Charles Darwin, ECCD) is a biological research station in Puerto Ayora, Santa Cruz Island, Galápagos, Ecuador. The station is operated by the Charles Darwin Foundation, which was founded in 1959 under the auspices of UNESCO and the World Conservation Union. The research station serves as the headquarters for the Foundation, and is used to conduct scientific research and promote environmental education. It is located on the shore of Academy Bay in the village of Puerto Ayora on Santa Cruz Island in the Galapagos Islands, with satellite offices on Isabela and San Cristóbal islands. Field station In Puerto Ayora, Ecuadorian and foreign scientists work on research and projects for conservation of the Galápagos terrestrial and marine ecosystems. The Research Station, established in 1959 and dedicated in 1964, has a natural history interpretation center and also carries out educational projects in support of conservation of the Galápagos Islands, and in support of external researchers visiting the islands to conduct field work. Objectives and work The objective of the CDRS is to conduct scientific research and environmental education for conservation. The Station has a team of over a hundred scientists, educators, volunteers, research students, and support staff from all over the world. Scientific research and monitoring projects are conducted at the CDRS in conjunction and cooperation with its chief partner, the Galápagos National Park Directorate (GNPD), which functions as the principal government authority in charge of conservation and natural resource issues in the Galapagos. The work of the CDRS has as its main
https://en.wikipedia.org/wiki/J%C3%B3zsef%20Beck
József Beck (born February 14, 1952, in Budapest, Hungary) is the Harold H. Martin Professor of Mathematics at Rutgers University. His contributions to combinatorics include the partial colouring lemma and the Beck–Fiala theorem in discrepancy theory, the algorithmic version of the Lovász local lemma, the two extremes theorem in combinatorial geometry and the second moment method in the theory of positional games, among others. Beck was awarded the Fulkerson Prize in 1985 for a paper titled "Roth's estimate of the discrepancy of integer sequences is nearly sharp", which introduced the notion of discrepancy on hypergraphs and established an upper bound on the discrepancy of the family of arithmetic progressions contained in {1,2,...,n}, matching the classical lower bound up to a polylogarithmic factor. Jiří Matoušek and Joel Spencer later succeeded in removing this factor, showing that the bound was indeed sharp. Beck gave an invited talk at the 1986 International Congress of Mathematicians. He is an external member of the Hungarian Academy of Sciences (2004). Books Irregularities of Distribution (with William W. L. Chen, Cambridge Tracts in Mathematics 89, Cambridge University Press, 1987) Combinatorial Games: Tic-Tac-Toe Theory (Encyclopedia of Mathematics and its Applications 114, Cambridge University Press, 2008) Inevitable Randomness in Discrete Mathematics (University Lecture Series 49, American Mathematical Society, 2009) Probabilistic Diophantine Approximation: Randomness in Lattice Point Counting (Springer Monographs in Mathematics. Springer-Verlag, 2014) Strong Uniformity and Large Dynamical Systems (World Scientific Publishing, 2018)
https://en.wikipedia.org/wiki/Sign%20extension
Sign extension (sometimes abbreviated as sext, particularly in mnemonics) is the operation, in computer arithmetic, of increasing the number of bits of a binary number while preserving the number's sign (positive/negative) and value. This is done by appending digits to the most significant side of the number, following a procedure dependent on the particular signed number representation used. For example, if six bits are used to represent the number "00 1010" (decimal positive 10) and the sign extend operation increases the word length to 16 bits, then the new representation is simply "0000 0000 0000 1010". Thus, both the value and the fact that the value was positive are maintained. If ten bits are used to represent the value "11 1111 0001" (decimal negative 15) using two's complement, and this is sign extended to 16 bits, the new representation is "1111 1111 1111 0001". Thus, by padding the left side with ones, the negative sign and the value of the original number are maintained. In the Intel x86 instruction set, for example, there are two ways of doing sign extension: using the instructions cbw, cwd, cwde, and cdq: convert byte to word, word to doubleword, word to extended doubleword, and doubleword to quadword, respectively (in the x86 context a byte has 8 bits, a word 16 bits, a doubleword and extended doubleword 32 bits, and a quadword 64 bits); using one of the sign extended moves, accomplished by the movsx ("move with sign extension") family of instructions. Zero extension A similar concept is zero extension (sometimes abbreviated as zext). In a move or convert operation, zero extension refers to setting the high bits of the destination to zero, rather than setting them to a copy of the most significant bit of the source. If the source of the operation is an unsigned number, then zero extension is usually the correct way to move it to a larger field while preserving its numeric value, while sign extension is correct for signed numbers. In the x86
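The padding rule is mechanical enough to capture in a few lines of C. Below is a minimal sketch (the helper name sext10 and the choice of a 10-bit field are illustrative, not from the article) that widens the article's 10-bit examples to 16 bits by copying the sign bit into the new high-order bits:

    #include <stdint.h>
    #include <stdio.h>

    /* Sign-extend a 10-bit two's-complement value to 16 bits: if bit 9
       (the sign bit of the 10-bit field) is set, fill bits 15..10 with
       ones; otherwise leave them zero. */
    static int16_t sext10(uint16_t v)
    {
        if (v & 0x0200)                    /* sign bit of the 10-bit field */
            return (int16_t)(v | 0xFC00);  /* pad the high bits with ones */
        return (int16_t)v;                 /* high bits are already zero */
    }

    int main(void)
    {
        printf("%d\n", sext10(0x3F1));  /* "11 1111 0001" -> prints -15 */
        printf("%d\n", sext10(0x00A));  /* "00 0000 1010" -> prints 10  */
        return 0;
    }

On x86, the same widening of a register-sized value is performed in a single instruction by movsx.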
https://en.wikipedia.org/wiki/Steel%20Bank%20Common%20Lisp
Steel Bank Common Lisp (SBCL) is a free Common Lisp implementation that features a high-performance native compiler, Unicode support and threading. The name "Steel Bank Common Lisp" is a reference to Carnegie Mellon University Common Lisp from which SBCL forked: Andrew Carnegie made his fortune in the steel industry and Andrew Mellon was a successful banker. History SBCL descends from CMUCL (created at Carnegie Mellon University), which is itself descended from Spice Lisp, including early implementations for the Mach operating system on the IBM RT PC, and the Three Rivers Computing Corporation PERQ computer, in the 1980s. William Newman originally announced SBCL as a variant of CMUCL in December 1999. The main point of divergence at the time was a clean bootstrapping procedure: CMUCL requires an already compiled executable binary of itself to compile the CMUCL source code, whereas SBCL supported bootstrapping from theoretically any ANSI-compliant Common Lisp implementation. SBCL became a SourceForge project in September 2000. The original rationale for the fork was to continue the initial work done by Newman without destabilizing CMUCL which was at the time already a mature and much-used implementation. The forking was amicable, and there have since then been significant flows of code and other cross-pollination between the two projects. Since then SBCL has attracted several developers, been ported to multiple hardware architectures and operating systems, and undergone many changes and enhancements: while it has dropped support for several CMUCL extensions that it considers beyond the scope of the project (such as the Motif interface) it has also developed many new ones, including native threading and Unicode support. Version 1.0 was released in November 2006, and active development continues. William Newman stepped down as project administrator for SBCL in April 2008. Several other developers have taken over interim management of releases for the time being
https://en.wikipedia.org/wiki/Racket%20%28programming%20language%29
Racket is a general-purpose, multi-paradigm programming language and a multi-platform distribution that includes the Racket language, compiler, large standard library, IDE, development tools, and a set of additional languages including Typed Racket (a sister language of Racket with a static type-checker), Swindle, FrTime, Lazy Racket, R5RS & R6RS Scheme, Scribble, Datalog, Racklog, Algol 60 and several teaching languages. The Racket language is a modern dialect of Lisp and a descendant of Scheme. It is designed as a platform for programming language design and implementation. In addition to the core Racket language, Racket is also used to refer to the family of programming languages and set of tools supporting development on and with Racket. Racket is also used for scripting, computer science education, and research. The Racket platform provides an implementation of the Racket language (including a runtime system, libraries, and compiler supporting several compilation modes: machine code, machine-independent, interpreted, and JIT) along with the DrRacket integrated development environment (IDE) written in Racket. Racket is used by the ProgramByDesign outreach program, which aims to turn computer science into "an indispensable part of the liberal arts curriculum". The core Racket language is known for its extensive macro system which enables creating embedded and domain-specific languages, language constructs such as classes or modules, and separate dialects of Racket with different semantics. The platform distribution is free and open-source software distributed under the Apache 2.0 and MIT licenses. Extensions and packages written by the community may be uploaded to Racket's package catalog. History Development Matthias Felleisen founded PLT Inc. in the mid 1990s, first as a research group, soon after as a project dedicated to producing pedagogic materials for novice programmers (lectures, exercises/projects, software). In January 1995, the group decided to
https://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin%20theorem
In applied mathematics, the Wiener–Khinchin theorem or Wiener–Khintchine theorem, also known as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectral density of that process. History Norbert Wiener proved this theorem for the case of a deterministic function in 1930; Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934. Albert Einstein explained, without proofs, the idea in a brief two-page memo in 1914. The case of a continuous-time process For continuous time, the Wiener–Khinchin theorem says that if $x(t)$ is a wide-sense-stationary random process whose autocorrelation function (sometimes called autocovariance) $r_{xx}(\tau) = \operatorname{E}[x(t)^{*}\,x(t+\tau)]$, defined in terms of statistical expected value $\operatorname{E}$, exists and is finite at every lag $\tau$, then there exists a monotone function $F(f)$ in the frequency domain $-\infty < f < \infty$, or equivalently a non-negative Radon measure $\mu$ on the frequency domain, such that $r_{xx}(\tau) = \int_{-\infty}^{\infty} e^{2\pi i \tau f}\, dF(f)$, where the integral is a Riemann–Stieltjes integral. The asterisk denotes complex conjugate, and can be omitted if the random process is real-valued. This is a kind of spectral decomposition of the auto-correlation function. $F$ is called the power spectral distribution function and is a statistical distribution function. It is sometimes called the integrated spectrum. The Fourier transform of $x(t)$ does not exist in general, because stochastic random functions are not generally either square-integrable or absolutely integrable. Nor is $r_{xx}(\tau)$ assumed to be absolutely integrable, so it need not have a Fourier transform either. However, if the measure $\mu$ is absolutely continuous, for example, if the process is purely indeterministic, then $F$ is differentiable almost everywhere and we can write $F(f) = \int_{-\infty}^{f} S(\nu)\, d\nu$. In this case, one can determine $S(f)$, the power spectral density of $x(t)$, by taking the averaged derivative of $F$.
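For orientation: in the absolutely continuous case just described, the theorem reduces to the familiar statement that the power spectral density and the autocorrelation function are a Fourier-transform pair. The following is the standard textbook form of that special case, not a quotation from this article:

    % Special case: dF(f) = S(f) df, with r_{xx} absolutely integrable
    S(f) = \int_{-\infty}^{\infty} r_{xx}(\tau)\, e^{-2\pi i f \tau}\, d\tau ,
    \qquad
    r_{xx}(\tau) = \int_{-\infty}^{\infty} S(f)\, e^{2\pi i f \tau}\, df .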
https://en.wikipedia.org/wiki/Gompertz%20function
The Gompertz curve or Gompertz function is a type of mathematical model for a time series, named after Benjamin Gompertz (1779–1865). It is a sigmoid function which describes growth as being slowest at the start and end of a given time period. The right-side or future value asymptote of the function is approached much more gradually by the curve than the left-side or lower valued asymptote. This is in contrast to the simple logistic function in which both asymptotes are approached by the curve symmetrically. It is a special case of the generalised logistic function. The function was originally designed to describe human mortality, but since has been modified to be applied in biology, with regard to detailing populations. History Benjamin Gompertz (1779–1865) was an actuary in London who was privately educated. He was elected a fellow of the Royal Society in 1819. The function was first presented in his June 16, 1825 paper at the bottom of page 518. The Gompertz function reduced a significant collection of data in life tables into a single function. It is based on the assumption that the mortality rate increases exponentially as a person ages. The resulting Gompertz function is for the number of individuals living at a given age as a function of age. Earlier work on the construction of functional models of mortality was done by the French mathematician Abraham de Moivre (1667–1754) in the 1750s. However, de Moivre assumed that the mortality rate was constant. An extension to Gompertz's work was proposed by the English actuary and mathematician William Matthew Makeham (1826–1891) in 1860, who added a constant background mortality rate to Gompertz's exponentially increasing one. Formula $f(t) = a e^{-b e^{-ct}}$, where a is an asymptote, since $\lim_{t\to+\infty} a e^{-b e^{-ct}} = a e^{0} = a$; b sets the displacement along the x-axis (translates the graph to the left or right); c sets the growth rate (y scaling); e is Euler's number (e = 2.71828...). Properties The curve has the same shape as after an affine transform. The ha
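A few lines of code make the asymmetric approach to the asymptote visible. The sketch below (parameter values are arbitrary illustrations, not fitted to any data) evaluates f(t) = a·e^(−b·e^(−ct)) over a range of t:

    #include <math.h>
    #include <stdio.h>

    /* Gompertz curve f(t) = a * exp(-b * exp(-c*t)). */
    static double gompertz(double t, double a, double b, double c)
    {
        return a * exp(-b * exp(-c * t));
    }

    int main(void)
    {
        /* illustrative parameters: asymptote a, x-displacement b, growth rate c */
        double a = 100.0, b = 5.0, c = 0.5;
        for (int t = 0; t <= 20; t += 4)
            printf("f(%2d) = %8.4f\n", t, gompertz(t, a, b, c));
        /* output climbs slowly at first (f(0) ~ 0.67), then levels off
           toward the asymptote a = 100 far more gradually than it left 0 */
        return 0;
    }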
https://en.wikipedia.org/wiki/Supersingular%20variety
In mathematics, a supersingular variety is (usually) a smooth projective variety in nonzero characteristic such that for all n the slopes of the Newton polygon of the nth crystalline cohomology are all n/2. For special classes of varieties such as elliptic curves it is common to use various ad hoc definitions of "supersingular", which are (usually) equivalent to the one given above. The term "singular elliptic curve" (or "singular j-invariant") was at one time used to refer to complex elliptic curves whose ring of endomorphisms has rank 2, the maximum possible. Helmut Hasse discovered that, in finite characteristic, elliptic curves can have larger rings of endomorphisms of rank 4, and these were called "supersingular elliptic curves". Supersingular elliptic curves can also be characterized by the slopes of their crystalline cohomology, and the term "supersingular" was later extended to other varieties whose cohomology has similar properties. The terms "supersingular" or "singular" do not mean that the variety has singularities. Examples include: Supersingular elliptic curve. Elliptic curves in non-zero characteristic with an unusually large ring of endomorphisms of rank 4. Supersingular Abelian variety Sometimes defined to be an abelian variety isogenous to a product of supersingular elliptic curves, and sometimes defined to be an abelian variety of dimension g whose endomorphism ring has rank (2g)². Supersingular K3 surface. Certain K3 surfaces in non-zero characteristic. Supersingular Enriques surface. Certain Enriques surfaces in characteristic 2. A surface is called Shioda supersingular if the rank of its Néron–Severi group is equal to its second Betti number. A surface is called Artin supersingular if its formal Brauer group has infinite height.
https://en.wikipedia.org/wiki/Walker%E2%80%93Warburg%20syndrome
Walker–Warburg syndrome (WWS), also called Warburg syndrome, Chemke syndrome, HARD syndrome (Hydrocephalus, Agyria and Retinal Dysplasia), Pagon syndrome, cerebroocular dysgenesis (COD) or cerebroocular dysplasia-muscular dystrophy syndrome (COD-MD), is a rare form of autosomal recessive congenital muscular dystrophy. It is associated with brain (lissencephaly, hydrocephalus, cerebellar malformations) and eye abnormalities. This condition has a worldwide distribution. Walker-Warburg syndrome is estimated to affect 1 in 60,500 newborns worldwide. Presentation The clinical manifestations present at birth are generalized hypotonia, muscle weakness, developmental delay with intellectual disability and occasional seizures. The congenital muscular dystrophy is characterized by hypoglycosylation of α-dystroglycan. Those born with the disease also experience severe ocular and brain defects. Half of all children with WWS are born with encephalocele, which is a gap in the skull that will not seal. The meninges of the brain protrude through this gap due to the neural tube failing to close during development. A malformation of a baby's cerebellum is often a sign of this disease. Common ocular issues associated with WWS are abnormally small eyes and retinal abnormalities caused by an underdeveloped light-sensitive area in the back of the eye. Genetics Several genes have been implicated in the etiology of Walker–Warburg syndrome, and others are as yet unknown. Several mutations were found in the protein O-mannosyltransferase genes POMT1 and POMT2, and one mutation was found in each of the fukutin and fukutin-related protein genes. Another gene that has been linked to this condition is Beta-1,3-N-acetylgalactosaminyltransferase 2 (B3GALNT2). Diagnosis Laboratory investigations usually show elevated creatine kinase, myopathic/dystrophic muscle pathology and altered α-dystroglycan. Antenatal diagnosis is possible in families with known mutations. Prenatal ultrasound may be he
https://en.wikipedia.org/wiki/Maximum%20satisfiability%20problem
In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true. Example The conjunctive normal form formula $(x_1 \lor x_2) \land (x_1 \lor \lnot x_2) \land (\lnot x_1 \lor x_2) \land (\lnot x_1 \lor \lnot x_2)$ is not satisfiable: no matter which truth values are assigned to its two variables, at least one of its four clauses will be false. However, it is possible to assign truth values in such a way as to make three out of four clauses true; indeed, every truth assignment will do this. Therefore, if this formula is given as an instance of the MAX-SAT problem, the solution to the problem is the number three. Hardness The MAX-SAT problem is OptP-complete, and thus NP-hard, since its solution easily leads to the solution of the boolean satisfiability problem, which is NP-complete. It is also difficult to find an approximate solution of the problem that satisfies a number of clauses within a guaranteed approximation ratio of the optimal solution. More precisely, the problem is APX-complete, and thus does not admit a polynomial-time approximation scheme unless P = NP. Weighted MAX-SAT More generally, one can define a weighted version of MAX-SAT as follows: given a conjunctive normal form formula with non-negative weights assigned to each clause, find truth values for its variables that maximize the combined weight of the satisfied clauses. The MAX-SAT problem is an instance of weighted MAX-SAT where all weights are 1. Approximation algorithms 1/2-approximation Randomly assigning each variable to be true with probability 1/2 gives an expected 2-approximation. More precisely, if each clause has at least $\ell$ variables, then this yields a $(1 - 2^{-\ell})$-approximation. This algorithm can be derandomized using the meth
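The randomized assignment is simple enough to sketch in full. The program below runs the coin-flip assignment on the four-clause example above; the clause encoding and function names are illustrative assumptions (literals in the style of DIMACS: +v for a variable, -v for its negation). Since every clause here has two literals, the expected fraction satisfied is 1 − 2^−2 = 3/4, and in fact every assignment satisfies exactly three clauses:

    #include <stdio.h>
    #include <stdlib.h>

    /* Return 1 if the clause (an array of nonzero literals) is satisfied
       by the assignment, where assign[v] == 1 means variable v is true. */
    static int clause_satisfied(const int *clause, int len, const int *assign)
    {
        for (int i = 0; i < len; i++) {
            int v = abs(clause[i]);
            int want_true = clause[i] > 0;
            if (assign[v] == want_true)
                return 1;   /* one true literal satisfies the clause */
        }
        return 0;
    }

    int main(void)
    {
        /* (x1 v x2) ^ (x1 v ~x2) ^ (~x1 v x2) ^ (~x1 v ~x2) */
        int c1[] = {1, 2}, c2[] = {1, -2}, c3[] = {-1, 2}, c4[] = {-1, -2};
        const int *clauses[] = {c1, c2, c3, c4};
        int lens[] = {2, 2, 2, 2};
        int n_vars = 2, n_clauses = 4;

        int assign[3];                  /* indices 1..n_vars are used */
        srand(42);
        for (int v = 1; v <= n_vars; v++)
            assign[v] = rand() & 1;     /* true with probability 1/2 */

        int satisfied = 0;
        for (int c = 0; c < n_clauses; c++)
            satisfied += clause_satisfied(clauses[c], lens[c], assign);

        printf("satisfied %d of %d clauses\n", satisfied, n_clauses);
        return 0;
    }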
https://en.wikipedia.org/wiki/Map%20communication%20model
The Map Communication Model is a theory in cartography that characterizes mapping as a process of transmitting geographic information via the map from the cartographer to the end-user. It was perhaps the first paradigm to gain widespread acceptance in the international cartographic community, among both academic and practising cartographers. Overview By the mid-20th century, according to Crampton (2001), "cartographers such as Arthur H. Robinson and others had begun to see the map as primarily a communication tool, and so developed a specific model for map communication, the map communication model (MCM)". This model, according to Andrews (1988), "can be grouped with the other major communication models of the time, such as the Shannon-Weaver and Lasswell models of communication. The map communication model led to a whole new body of research, methodologies and map design paradigms". One of the implications of this communication model, according to Crampton (2001), is that it "endorsed an “epistemic break” that shifted our understandings of maps as communication systems to investigating them in terms of fields of power relations and exploring the “mapping environments in which knowledge is constructed”... This involved examining the social contexts in which maps were both produced and used, a departure from simply seeing maps as artifacts to be understood apart from this context". A second implication of this model is the presumption inherited from positivism that it is possible to separate facts from values. As Harley stated: Maps are never value-free images; except in the narrowest Euclidean sense they are not in themselves either true or false. Both in the selectivity of their content and in their signs and styles of representation maps are a way of conceiving, articulating, and structuring the human world which is biased towards, promoted by, and exerts influence upon particular sets of social relations. By accepting such premises it becomes easier to see how app
https://en.wikipedia.org/wiki/Lombard%20band
A Lombard band is a decorative blind arcade, usually located on the exterior of a building. It was frequently used during the Romanesque and Gothic periods of Western architecture. It resembles a frieze of arches. Lombard bands are believed to have been first used during the First Romanesque period, in the early 11th century. At that time, they were the most common architectural decorative motif for facades in regions such as Lombardy, Aragon and Catalonia. Arches of early Christian buildings of Ravenna, such as the Mausoleum of Galla Placidia, have been suggested as the origin of Lombard bands. See also Lombard architecture Lesene (low-relief pillars), another Lombardic element Similar-looking structures: Corbels Jettying External links An illustrated article by Peter Hubert on the developments of Lombard bands
https://en.wikipedia.org/wiki/Wavelet%20transform
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform. Definition A function $\psi \in L^2(\mathbb{R})$ is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is a complete orthonormal system, for the Hilbert space $L^2(\mathbb{R})$ of square integrable functions. The Hilbert basis is constructed as the family of functions $\psi_{jk}(x) = 2^{j/2}\,\psi(2^{j}x - k)$ by means of dyadic translations and dilations of $\psi$, for integers $j, k \in \mathbb{Z}$. If, under the standard inner product on $L^2(\mathbb{R})$, this family is orthonormal, it is an orthonormal system: $\langle \psi_{jk}, \psi_{lm} \rangle = \delta_{jl}\,\delta_{km}$, where $\delta_{jl}$ is the Kronecker delta. Completeness is satisfied if every function $f \in L^2(\mathbb{R})$ may be expanded in the basis as $f(x) = \sum_{j,k=-\infty}^{\infty} c_{jk}\,\psi_{jk}(x)$, with convergence of the series understood to be convergence in norm. Such a representation of f is known as a wavelet series. This implies that an orthonormal wavelet is self-dual. The integral wavelet transform is the integral transform defined as $[W_{\psi}f](a, b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} \overline{\psi\!\left(\frac{x-b}{a}\right)}\, f(x)\, dx$. The wavelet coefficients $c_{jk}$ are then given by $c_{jk} = [W_{\psi}f](2^{-j}, k\,2^{-j})$. Here, $a = 2^{-j}$ is called the binary dilation or dyadic dilation, and $b = k\,2^{-j}$ is the binary or dyadic position. Principle The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape. This is achieved by choosing suitable basis functions that allow for this. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. Based on the uncertainty principle of signal processing, $\Delta t\,\Delta\omega \geq \frac{1}{2}$, where $t$ represents time and $\omega$ angular frequency ($\omega = 2\pi f$, where $f$ is ordinary frequency). The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis windows is chosen, the larger is the value of $\Delta t$. When $\Delta t$ is large: bad time resolution, good frequency resolution, low frequency, large scaling factor. When $\Delta t$ is small: good time
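A standard concrete example (not drawn from this article's text) is the Haar wavelet, the simplest orthonormal wavelet; its dyadic translations and dilations form a complete orthonormal system for $L^2(\mathbb{R})$:

    % Haar wavelet: the oldest and simplest orthonormal wavelet
    \psi(t) =
    \begin{cases}
       1 & 0 \le t < \tfrac{1}{2}, \\
      -1 & \tfrac{1}{2} \le t < 1, \\
       0 & \text{otherwise},
    \end{cases}
    \qquad
    \psi_{jk}(t) = 2^{j/2}\,\psi(2^{j} t - k).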
https://en.wikipedia.org/wiki/Ghost-canceling%20reference
Ghost-canceling reference (GCR) is a special sub-signal on a television channel that receivers can use to compensate for the ghosting effect of a television signal distorted by multipath propagation between transmitter and receiver. In the United States, the GCR signal is a chirp in frequency of the modulating signal from 0 Hz to 4.2 MHz, transmitted during the vertical blanking interval over one video line (line 19 in the U.S.), shifted in phase by 180° once per frame, with this pattern inverted every four lines. Television receivers generate their own local versions of this signal and use the comparison between the local and remote signals to tune an adaptive equalizer that removes ghost images on the screen. GCR was introduced after its recommendation in 1993 by the Advanced Television Systems Committee.
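To make the shape of the reference concrete, here is a hedged sketch that generates samples of a linear chirp sweeping 0 Hz to 4.2 MHz over one active video line; the sample rate and line duration are illustrative assumptions, not values mandated by the broadcast standard:

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define FS      14318181.0  /* assumed sample rate, Hz */
    #define F_END    4200000.0  /* end of the frequency sweep, Hz */
    #define T_LINE   52.6e-6    /* assumed active line duration, s */

    int main(void)
    {
        int n_samples = (int)(T_LINE * FS);
        for (int n = 0; n < n_samples; n++) {
            double t = n / FS;
            /* linear chirp: instantaneous frequency f(t) = (F_END/T_LINE)*t,
               so the accumulated phase is 2*pi*(F_END/(2*T_LINE))*t^2 */
            double phase = 2.0 * M_PI * (F_END / (2.0 * T_LINE)) * t * t;
            printf("%f\n", cos(phase));
        }
        return 0;
    }

A receiver correlating its locally generated copy of this sweep against the received line can estimate the multipath channel and tune its adaptive equalizer accordingly.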
https://en.wikipedia.org/wiki/Fossa%20for%20lacrimal%20gland
The lacrimal fossa (or fossa for lacrimal gland) is located on the inferior surface of each orbital plate of the frontal bone. It is smooth and concave, and presents, laterally, underneath the zygomatic process, a shallow depression for the lacrimal gland. See also Fossa for lacrimal sac
https://en.wikipedia.org/wiki/Glossary%20of%20invasion%20biology%20terms
The need for a clearly defined and consistent invasion biology terminology has been acknowledged by many sources. Invasive species, or invasive exotics, is a nomenclature term and categorization phrase used for flora and fauna, and for specific restoration-preservation processes in native habitats. Invasion biology is the study of these organisms and the processes of species invasion. The terminology in this article contains definitions for invasion biology terms in common usage today, taken from accessible publications. References for each definition are included. Terminology relates primarily to invasion biology terms with some ecology terms included to clarify language and phrases on linked articles. Introduction Definitions of "invasive non-indigenous species have been inconsistent", which has led to confusion both in literature and in popular publications (Williams and Meffe 2005). Also, many scientists and managers feel that there is no firm definition of non-indigenous species, native species, exotic species, "and so on, and ecologists do not use the terms consistently." (Shrader-Frechette 2001) Another question asked is whether current language is likely to promote "effective and appropriate action" towards invasive species through cohesive language (Larson 2005). Biologists today spend more time and effort on invasive species work because of the rapid spread, economic cost, and effects on ecological systems, so the importance of effective communication about invasive species is clear. (Larson 2005) Controversy in invasion biology terms exists because of past usage and because of preferences for certain terms. Even for biologists, defining a species as native may be far from being a straightforward matter of biological classification based on the location or the discipline a biologist is working in (Helmreich 2005). Questions often arise as to what exactly makes a species native as opposed to non-native, because some non-native species have no kno
https://en.wikipedia.org/wiki/Dual%20wavelet
In mathematics, a dual wavelet is the dual to a wavelet. In general, the wavelet series generated by a square-integrable function will have a dual series, in the sense of the Riesz representation theorem. However, the dual series is not itself in general representable by a square-integrable function. Definition Given a square-integrable function $\psi \in L^2(\mathbb{R})$, define the series $\{\psi_{jk}\}$ by $\psi_{jk}(x) = 2^{j/2}\,\psi(2^{j}x - k)$ for integers $j, k \in \mathbb{Z}$. Such a function is called an R-function if the linear span of $\{\psi_{jk}\}$ is dense in $L^2(\mathbb{R})$, and if there exist positive constants A, B with $0 < A \le B < \infty$ such that $A\,\|c_{jk}\|^2_{l^2} \le \bigl\|\sum_{j,k} c_{jk}\,\psi_{jk}\bigr\|^2_{L^2} \le B\,\|c_{jk}\|^2_{l^2}$ for all bi-infinite square summable series $\{c_{jk}\}$. Here, $\|\cdot\|_{l^2}$ denotes the square-sum norm: $\|c_{jk}\|^2_{l^2} = \sum_{j,k=-\infty}^{\infty} |c_{jk}|^2$, and $\|\cdot\|_{L^2}$ denotes the usual norm on $L^2(\mathbb{R})$: $\|f\|^2_{L^2} = \int_{-\infty}^{\infty} |f(x)|^2\, dx$. By the Riesz representation theorem, there exists a unique dual basis $\{\psi^{jk}\}$ such that $\langle \psi^{jk} \mid \psi_{lm} \rangle = \delta_{jl}\,\delta_{km}$, where $\delta_{jl}$ is the Kronecker delta and $\langle \cdot \mid \cdot \rangle$ is the usual inner product on $L^2(\mathbb{R})$. Indeed, there exists a unique series representation for a square-integrable function f expressed in this basis: $f(x) = \sum_{j,k} \langle \psi^{jk} \mid f \rangle\, \psi_{jk}(x)$. If there exists a function $\tilde{\psi}$ such that $\tilde{\psi}_{jk} = \psi^{jk}$, then $\tilde{\psi}$ is called the dual wavelet or the wavelet dual to ψ. In general, for some given R-function ψ, the dual will not exist. In the special case of $\tilde{\psi} = \psi$, the wavelet is said to be an orthogonal wavelet. An example of an R-function without a dual is easy to construct. Let $\phi$ be an orthogonal wavelet. Then define $\psi(x) = \phi(x) + z\,\phi(2x)$ for some complex number z. It is straightforward to show that this ψ does not have a wavelet dual. See also Multiresolution analysis
https://en.wikipedia.org/wiki/Beer%20Barrel%20Man
The Barrelman is a mascot logo used by two baseball teams in Milwaukee nicknamed "Brewers". Introduction The character was first used in the 1940s by the Milwaukee Brewers, a Minor League Baseball team based in Milwaukee, Wisconsin. At the time, he was known as "Owgust". With a beer barrel for a torso and a tap for his nose, the Beer Barrel Man embodied the whimsical spirit of the minor leagues in the early to mid-twentieth century. In the 1940s and 1950s, a whole series of Beer Barrel Men were used as logos by the club – pitching, batting, fielding balls and running the bases. The December 1944 issue of Brewer News, the club's newsletter, depicted Owgust in a Santa Claus suit and long white beard. The Beer Barrel Man was used until spring training of 1953, when the Boston Braves displaced the Brewers in Milwaukee. Major Leagues When the Braves moved to Atlanta after the 1965 season, former Braves minority owner Bud Selig announced the formation of a group to bring a major league baseball club back to Milwaukee, adopting the batting Beer Barrel Man as his organization's logo. When Selig's group was awarded the bankrupt American League Seattle Pilots franchise, he moved them to Milwaukee and the Beer Barrel Man made a comeback as the first logo of the new Milwaukee Brewers. The Beer Barrel Man was used by the American League club through the 1977 season. Legacy Since then, he has made appearances on stadium giveaways, such as the 1999 Turn Ahead the Clock promotion, and has found new life on Cooperstown Collection merchandise. The Beer Barrel Man was also featured in the winning design for the Brewers' "Design A Youniform" contest in 2013. The contest received nearly 700 entries and the winning design, created by Ben Peters of Richfield, Minnesota, used the Beer Barrel Man as the cap logo and sleeve patch. This design was used in exhibition games on March 22 in Arizona against the Chicago Cubs and once again on March 30 in a game at Miller Park in Milwaukee ag
https://en.wikipedia.org/wiki/George%20Andrews%20%28mathematician%29
George Eyre Andrews (born December 4, 1938) is an American mathematician working in special functions, number theory, analysis and combinatorics. Education and career He is currently an Evan Pugh Professor of Mathematics at Pennsylvania State University. He did his undergraduate studies at Oregon State University and received his PhD in 1964 at the University of Pennsylvania where his advisor was Hans Rademacher. During 2008–2009 he was president of the American Mathematical Society. Contributions Andrews's contributions include several monographs and over 250 research and popular articles on q-series, special functions, combinatorics and applications. He is considered to be the world's leading expert in the theory of integer partitions. In 1976 he discovered Ramanujan's Lost Notebook. He is interested in mathematical pedagogy. His book The Theory of Partitions is the standard reference on the subject of integer partitions. He has advanced mathematics in the theories of partitions and q-series. His work at the interface of number theory and combinatorics has also led to many important applications in physics. Awards and honors In 2003 Andrews was elected a member of the National Academy of Sciences. He was elected a Fellow of the American Academy of Arts and Sciences in 1997. In 1998 he was an Invited Speaker at the International Congress of Mathematicians in Berlin. In 2012 he became a fellow of the American Mathematical Society. He was given honorary doctorates from the University of Parma in 1998, the University of Florida in 2002, the University of Waterloo in 2004, SASTRA University in Kumbakonam, India, in 2012, and the University of Illinois at Urbana–Champaign in 2014. Publications Selected Works of George E Andrews (With Commentary) (World Scientific Publishing, 2012) Number Theory (Dover, 1994) The Theory of Partitions (Cambridge University Press, 1998) Integer Partitions (with Eriksson, Kimmo) (Cambridge University Press, 2004) Ramanujan's
https://en.wikipedia.org/wiki/Ferroportin
Ferroportin-1, also known as solute carrier family 40 member 1 (SLC40A1) or iron-regulated transporter 1 (IREG1), is a protein that in humans is encoded by the SLC40A1 gene, and is part of the Ferroportin (Fpn) Family (TC# 2.A.100). Ferroportin is a transmembrane protein that transports iron from the inside of a cell to the outside of the cell. Ferroportin is the only known iron exporter. After dietary iron is absorbed into the cells of the small intestine, ferroportin allows that iron to be transported out of those cells and into the bloodstream. Fpn also mediates the efflux of iron recycled from macrophages resident in the spleen and liver. Ferroportin is regulated by hepcidin, a hormone produced by the liver; hepcidin binds to Fpn and limits its iron-efflux activity, thereby reducing iron delivery to the blood plasma. Therefore, the interaction between Fpn and hepcidin controls systemic iron homeostasis. Structure and function Members of the ferroportin family consist of 400-800 amino acid residues, with a highly conserved histidine at residue position 32 (H32), and exhibit 8-12 putative transmembrane domains. Human Fpn consists of 571 amino acid residues. When H32 is mutated in mice, iron transport activity is impaired. Recent crystal structures generated from a bacterial homologue of ferroportin (from Bdellovibrio bacteriovorus) revealed that the Fpn structure resembles that of major facilitator superfamily (MFS) transporters. The prospective substrate binding site is located at the interface between the N-terminal and C-terminal halves of the protein, and is alternately accessible from either side of the cell membrane, consistent with MFS transporters. Ferroportin-mediated iron efflux is calcium-activated; studies of human Fpn expressed in Xenopus laevis oocytes demonstrated that calcium is a required cofactor for Fpn, but that Fpn does not transport calcium. Thus, Fpn does not function as an iron/calcium antiporter. The thermodynamic driving force for
https://en.wikipedia.org/wiki/Microbial%20intelligence
Microbial intelligence (also known as bacterial intelligence) is the intelligence shown by microorganisms. The concept encompasses complex adaptive behavior shown by single cells, and altruistic or cooperative behavior in populations of like or unlike cells mediated by chemical signalling that induces physiological or behavioral changes in cells and influences colony structures. Complex cells, like protozoa or algae, show remarkable abilities to organize themselves in changing circumstances. Shell-building by amoebae reveals complex discrimination and manipulative skills that are ordinarily thought to occur only in multicellular organisms. Even bacteria can display more complex behavior as a population. These behaviors occur in single species populations, or mixed species populations. Examples are colonies or swarms of myxobacteria, quorum sensing, and biofilms. It has been suggested that a bacterial colony loosely mimics a biological neural network. The bacteria can take inputs in the form of chemical signals, process them and then produce output chemicals to signal other bacteria in the colony. Bacterial communication and self-organization in the context of network theory have been investigated by Eshel Ben-Jacob's research group at Tel Aviv University, which developed a fractal model of bacterial colonies and identified linguistic and social patterns in the colony lifecycle. Examples of microbial intelligence Bacterial Bacterial biofilms can emerge through the collective behavior of thousands or millions of cells. Biofilms formed by Bacillus subtilis can use electric signals (ion transmission) to synchronize growth so that the innermost cells of the biofilm do not starve. Under nutritional stress, bacterial colonies can organize themselves in such a way as to maximize nutrient availability. Bacteria reorganize themselves under antibiotic stress. Bacteria can swap genes (such as genes coding antibiotic resistance) between members of mixed species colonies. Individual cells of
https://en.wikipedia.org/wiki/Social%20influences%20on%20fitness%20behavior
Physical fitness is maintained by a range of physical activities. Physical activity is defined by the World Health Organization as "any bodily movement produced by skeletal muscles that requires energy expenditure." Human factors and social influences are important in starting and maintaining such activities. Social environments can influence motivation and persistence, through pressures towards social conformity. Obesity Obesity is a physical marker of poor health, increasing the likelihood of various diseases. Due to social constructs surrounding health, the belief that being skinny is healthy and discrimination against those perceived to be 'unhealthy', people who are considered overweight or obese on the BMI scale face many social challenges. Challenges range from basic things, such as buying clothes, to pressure from society to change their body and being unable to get a job. This can lead to various problems such as eating disorders, self-esteem issues, and misdiagnosis and improper treatment of physical ailments due to discrimination. People who are obese are also less likely to seek medical care than people who are not obese, even if their weight is caused by medical problems. In both adults and children, obesity can lower mood and self-esteem. Reasons for inactivity In the US, only 26% of adults engage in vigorous leisure-time activity (which includes a sport) or exercise three or more times per week. In an effort to increase adult involvement and decrease the percentage of adult inactivity, the US Department of Health and Human Services has set a national health objective for 2010 that hopes to "Reduce the prevalence of no leisure time activity from more than 25 percent to 20 percent of US adults" (Berlin, Storti, and Brach 1137). In Australia, the Australian Bureau of Statistics found that in 2011/12 adults spent an average of 33 minutes per day doing physical activity with 60% of the population doing less than 30 minutes and fewer than 20% doing an hour
https://en.wikipedia.org/wiki/Emerald%20network
The Emerald network is a network of Areas of Special Conservation Interest to conserve wild flora and fauna and their natural habitats of Europe, which was launched in 1989 by the Council of Europe as part of its work under the Berne Convention on the Conservation of European Wildlife and Natural Habitats that came into force on 1 June 1982. It is to be set up in each Contracting Party or observer state to the convention. The Bern Convention is signed by the 46 member states of the Council of Europe, together with the European Union, Monaco, Burkina Faso, Morocco, Tunisia and Senegal. Algeria, Belarus, Bosnia and Herzegovina, Cape Verde, Vatican City, San Marino and Russia are among non-signatories that have observer status at meetings of the committee. The European Union, as such, is also a Contracting Party to the Bern Convention. In order to fulfil its obligations arising from the convention, particularly in respect of habitat protection, it produced the Habitats Directive in 1992 and subsequently set up the Natura 2000 network. The development of the Emerald Network in Africa has started with the implementation of pilot projects in Burkina Faso, Senegal and Morocco (ongoing). The Emerald Network could also be launched in Tunisia, at the request of the national authorities. See also Biogeographic regions of Europe Ecological network
https://en.wikipedia.org/wiki/Nonlinear%20control
Nonlinear control theory is the area of control theory which deals with systems that are nonlinear, time-variant, or both. Control theory is an interdisciplinary branch of engineering and mathematics that is concerned with the behavior of dynamical systems with inputs, and how to modify the output by changes in the input using feedback, feedforward, or signal filtering. The system to be controlled is called the "plant". One way to make the output of a system follow a desired reference signal is to compare the output of the plant to the desired output, and provide feedback to the plant to modify the output to bring it closer to the desired output. Control theory is divided into two branches. Linear control theory applies to systems made of devices which obey the superposition principle. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems can be solved by powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion. Nonlinear control theory covers a wider class of systems that do not obey the superposition principle. It applies to more real-world systems, because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theory, and describing functions. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system obtained by expanding the nonlinear solution in a series, and then linear techniques can be used. Nonlinear system
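To make the linearization step concrete, here is the standard Jacobian construction (a textbook illustration, not taken from this article): near an equilibrium $x^*$ of $\dot{x} = f(x)$, the deviation $\tilde{x} = x - x^*$ approximately obeys a linear system, to which the frequency-domain tools listed above then apply. The pendulum is the classic instance, linearized by replacing $\sin\theta$ with $\theta$ near $\theta = 0$:

    % Jacobian linearization about an equilibrium f(x*) = 0
    \dot{x} = f(x), \qquad
    \dot{\tilde{x}} \approx A\,\tilde{x}, \qquad
    A = \left.\frac{\partial f}{\partial x}\right|_{x = x^*}, \quad
    \tilde{x} = x - x^* .

    % Example: pendulum, valid near theta = 0
    \ddot{\theta} = -\frac{g}{\ell}\,\sin\theta
    \;\;\longrightarrow\;\;
    \ddot{\theta} \approx -\frac{g}{\ell}\,\theta .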
https://en.wikipedia.org/wiki/Goldenhar%20syndrome
Goldenhar syndrome is a rare congenital defect characterized by incomplete development of the ear, nose, soft palate, lip and mandible, usually on one side of the body. Common clinical manifestations include limbal dermoids, preauricular skin tags and strabismus. It is associated with anomalous development of the first branchial arch and second branchial arch. The term is sometimes used interchangeably with hemifacial microsomia, although this definition is usually reserved for cases without internal organ and vertebrae disruption. It affects between 1 in 3,500 and 1 in 5,600 live births, with a male-to-female ratio of 3:2. Signs and symptoms Chief markers of Goldenhar syndrome are incomplete development of the ear, nose, soft palate, lip, and mandible, usually on one side of the body. Additionally, some patients will have growth issues with internal organs, especially the heart, kidneys and lungs. Typically, the organ will either not be present on one side or will be underdeveloped. While it is more usual for there to be problems on only one side, it has been known for defects to occur bilaterally (approximate incidence 10% of confirmed GS cases). Other problems can include severe scoliosis (twisting of the vertebrae), limbal dermoids and hearing loss (see hearing loss with craniofacial syndromes), and deafness or blindness in one or both ears/eyes. Granulosa cell tumors may be associated as well. Causes The cause of Goldenhar syndrome is largely unknown. However, it is thought to be multifactorial, although there may be a genetic component, which would account for certain familial patterns. It has been suggested that there is a branchial arch development issue late in the first trimester. An increase in Goldenhar syndrome in the children of Gulf War veterans has been suggested, but the difference was shown to be statistically insignificant. Diagnosis No general consensus on the minimal diagnostic criteria exists. The syndrome is characterized by hemifacial micr
https://en.wikipedia.org/wiki/Norton%20SystemWorks
Norton SystemWorks is a discontinued utility software suite by Symantec Corp. It integrates three of Symantec's most popular products – Norton Utilities, Norton CrashGuard and Norton AntiVirus – into one program designed to simplify solving common PC issues. Backup software was added later to high-end editions. SystemWorks was innovative in that it combined several applications into an all-in-one package for managing computer health, thus saving significant costs and time often spent on using different unrelated programs. SystemWorks, which was introduced in 1998, has since inspired a host of competitors such as iolo System Mechanic, McAfee Nuts And Bolts, Badosoft First Aid and many others. Norton SystemWorks for Windows was initially offered alongside Norton Utilities until it replaced it as Symantec's flagship (and only) utility software in 2003. SystemWorks was discontinued in 2009, allowing Norton Utilities to return as Symantec's main utility suite. The Mac edition, lasting only three versions, was discontinued in 2004 to allow Symantec to concentrate its efforts solely on Internet security products for the Mac. Norton NT Tools The precursor of Norton SystemWorks was released in March 1996 for PCs running Windows NT 3.51 or later. It includes Norton AntiVirus Scanner, Norton File Manager (based on Norton Navigator), UNC browser, Norton Fast Find, Norton Zip/Unzip, Norton Folder Synchronization, Folder Compare, Norton System Doctor, System Information, Norton Control Center. Norton Protected Desktop Solution An application suite built similarly to Norton SystemWorks but including a different set of tools to support DOS, Windows 3.1, Windows 95, or Windows NT. Released in July 1998, it includes Norton Software Distribution Utility 2.0, Norton CrashGuard 2.0 for Windows NT, Norton CrashGuard 3.0 for Windows 95, Norton Speed Disk for Windows 95/NT, Norton Disk Doctor for Windows 95/NT, Norton AntiVirus 4.0 for DOS/Windows 3.1, and Norton AntiVirus 4.0 fo
https://en.wikipedia.org/wiki/Incubator%20%28culture%29
An incubator is a device used to grow and maintain microbiological cultures or cell cultures. The incubator maintains optimal temperature, humidity and other conditions such as the CO2 and oxygen content of the atmosphere inside. Incubators are essential for much experimental work in cell biology, microbiology and molecular biology and are used to culture both bacterial and eukaryotic cells. An incubator is made up of a chamber with a regulated temperature. Some incubators also regulate humidity, gas composition, or ventilation within that chamber. The simplest incubators are insulated boxes with an adjustable heater, typically going up to 60 to 65 °C (140 to 150 °F), though some can go slightly higher (generally to no more than 100 °C). The most commonly used temperature both for bacteria such as the frequently used E. coli as well as for mammalian cells is approximately 37 °C (99 °F), as these organisms grow well under such conditions. For other organisms used in biological experiments, such as the budding yeast Saccharomyces cerevisiae, a growth temperature of 30 °C (86 °F) is optimal. More elaborate incubators can also include the ability to lower the temperature (via refrigeration), or the ability to control humidity or CO2 levels. This is important in the cultivation of mammalian cells, where the relative humidity is typically >80% to prevent evaporation and a slightly acidic pH is achieved by maintaining a CO2 level of 5%. History of the laboratory incubator From aiding in hatching chicken eggs to enabling scientists to understand and develop vaccines for deadly viruses, the laboratory incubator has seen numerous applications over the years it has been in use. The incubator has also provided a foundation for medical advances and experimental work in cellular and molecular biology. While many technological advances have occurred since the primitive incubators first used in ancient Egypt and China, the main purpose of the incubator has remained unchanged
https://en.wikipedia.org/wiki/Paul%20%C3%89mile%20Appell
"M. P. Appell" refers to the same person: it stands for Monsieur Paul Appell. Paul Émile Appell (27 September 1855, in Strasbourg – 24 October 1930, in Paris) was a French mathematician and Rector of the University of Paris. Appell polynomials and Appell's equations of motion are named after him, as is rue Paul Appell in the 14th arrondissement of Paris and the minor planet 988 Appella. Life Paul Appell entered the École Normale Supérieure in 1873. He was elected to the French Academy of Sciences in 1892. In 1895, he became a Professor at the École Centrale Paris. Between 1903 and 1920 he was Dean of the Faculty of Science of the University of Paris, then Rector of the University of Paris from 1920 to 1925. Appell was the President of the Société astronomique de France (SAF), the French astronomical society, from 1919 to 1921. His daughter Marguerite Appell (1883–1969), who married the mathematician Émile Borel, is known as a novelist under her pen-name Camille Marbo. Appell was an atheist. He was awarded the Order of the White Eagle. Work He worked first on projective geometry in the line of Chasles, then on algebraic functions, differential equations, and complex analysis. Appell was the editor of the collected works of Henri Poincaré. Jules Drach was co-editor of the first volume. Appell series He introduced a set of four hypergeometric series F1, F2, F3, F4 of two variables, now called Appell series, that generalize Gauss's hypergeometric series. He established the set of partial differential equations of which these functions are solutions, and found formulas and expressions of these series in terms of hypergeometric series of one variable. In 1926, with Professor Joseph-Marie Kampé de Fériet, he authored a treatise on generalized hypergeometric series. Mechanics In mechanics, he proposed an alternative formulation of analytical mechanics known as Appell's equation of motion. He discovered a physical interpretation of the imaginary period of the doubly period
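For reference, the first of these series has the following standard double-series definition (a textbook form from the general literature, not quoted from this article), where $(q)_k$ denotes the Pochhammer symbol; the other three series differ in how the Pochhammer factors couple the two summation indices:

    % Appell's first hypergeometric series of two variables
    F_1(a;\, b_1, b_2;\, c;\, x, y)
      = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty}
        \frac{(a)_{m+n}\,(b_1)_m\,(b_2)_n}{(c)_{m+n}\; m!\; n!}\; x^m y^n ,
    \qquad |x| < 1,\ |y| < 1 .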
https://en.wikipedia.org/wiki/Virtual%20instrumentation
Virtual instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems, called virtual instruments. Traditional hardware instrumentation systems are made up of fixed hardware components, such as digital multimeters and oscilloscopes, that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware; e.g., an analog-to-digital converter can act as the hardware complement of a virtual oscilloscope, and a potentiostat enables frequency response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation. The concept of a synthetic instrument is a subset of the virtual instrument concept. A synthetic instrument is a kind of virtual instrument that is purely software defined. A synthetic instrument performs a specific synthesis, analysis, or measurement function on completely generic, measurement agnostic hardware. Virtual instruments can still have measurement specific hardware, and tend to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instruments is by definition not specific to the measurement, nor is it necessarily (or usually) modular. Leveraging commercially available technologies, such as the PC and the analog-to-digital converter, virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems. The newly updated
https://en.wikipedia.org/wiki/Countercontrol
Countercontrol is a term used by Dr. B.F. Skinner in 1953 as a functional class in the analysis of social behavior. Opposition or resistance to intervention defines countercontrol; however, little systematic research has been conducted to document its occurrence. Skinner also distinguished it from the literature of freedom, which he said did not provide effective countercontrol strategies. The concept was identified as a mechanism to oppose control, such as escape from the controller or waging an attack in order to weaken or destroy the controlling power. For this purpose, Skinner stressed the role of the individual as an instrument of countercontrol, emphasizing the notion of vigilance along with the concepts of freedom and dignity. Behavior Countercontrol can embed itself in both passive and active behavior. An individual may not respond to the demanding interventionist or may completely withdraw from the situation passively. The foundation for countercontrol is that human behavior is both a function of the environment and a source of control over it. Countercontrol originates from the essential behavior-analytic position which states that behavior is always caused or controlled. For Skinner, countercontrol is constituted by the behaviors that determine the behavior of the controller or those who hold authority. Fundamental Control is fundamental in conceptual, experimental and applied behavior analysis, as it is fundamental in all experimental science. To study functional relations in behavior and environment, one must manipulate (control) environmental variables to study their effect on behavior. Countercontrol can be defined as human operant behavior as a response to social aversive control. The individual that is exposed to aversive control may try to oppose controlling attempts through the process of negative reinforcement, such as by escaping, attacking, or passively resisting. Countercontrol is a way in which individuals regain behavioral freedom when f
https://en.wikipedia.org/wiki/Nicotiana%20benthamiana
Nicotiana benthamiana, colloquially known as benth or benthi, is a species of Nicotiana indigenous to Australia. It is a close relative of tobacco. A synonym for this species is Nicotiana suaveolens var. cordifolia, a description given by George Bentham in Flora Australiensis in 1868. This was transferred to Nicotiana benthamiana by Karel Domin in Bibliotheca Botanica (1929), honoring the original author in the specific epithet. History The plant was used by people of Australia as a stimulant, containing nicotine and other alkaloids, before the introduction of commercial tobacco (N. tabacum and N. rustica). Indigenous names for it include tjuntiwari and muntju. It was first collected on the north coast of Australia by Benjamin Bynoe on a voyage of HMS Beagle in 1837. Description The herbaceous plant is found amongst rocks on hills and cliffs throughout the northern regions of Australia. Variable in height and habit, the species may be erect or sprawling. The flowers are white. Research uses N. benthamiana has been used as a model organism in plant research. For example, the leaves are rather frail and can be injured in experiments to study ethylene synthesis. Ethylene is a plant hormone which is secreted, among other situations, after injuries. Using gas chromatography, the quantity of ethylene emitted can be measured. Due to the large number of plant pathogens able to infect it, N. benthamiana is widely used in the field of plant virology. It is also an excellent target plant for agroinfiltration. N. benthamiana has a number of wild strains across Australia, and the laboratory strain is an extremophile originating from a population that has retained a loss-of-function mutation in Rdr1 (RNA-dependent RNA polymerase 1), rendering it hypersusceptible to viruses. Biotechnology N. benthamiana is also a common plant used for "pharming" of monoclonal antibodies and other recombinant proteins; for example, the drug ZMapp was produced using thes
https://en.wikipedia.org/wiki/System%20Management%20Mode
System Management Mode (SMM, sometimes called ring −2 in reference to protection rings) is an operating mode of x86 central processor units (CPUs) in which all normal execution, including the operating system, is suspended. An alternate software system which usually resides in the computer's firmware, or a hardware-assisted debugger, is then executed with high privileges. It was first released with the Intel 386SL. While initially special SL versions were required for SMM, Intel incorporated SMM in its mainline 486 and Pentium processors in 1993. AMD implemented Intel's SMM with the Am386 processors in 1991. It is available in all later microprocessors in the x86 architecture. In the ARM architecture, the Exception Level 3 (EL3) mode is also referred to as Secure Monitor Mode or System Management Mode. Operation SMM is a special-purpose operating mode provided for handling system-wide functions like power management, system hardware control, or proprietary OEM designed code. It is intended for use only by system firmware (BIOS or UEFI), not by applications software or general-purpose systems software. The main benefit of SMM is that it offers a distinct and easily isolated processor environment that operates transparently to the operating system or executive and software applications. In order to achieve transparency, SMM imposes certain rules. SMM can only be entered through an SMI (System Management Interrupt). The processor executes the SMM code in a separate address space (SMRAM) that has to be made inaccessible to other operating modes of the CPU by the firmware. System Management Mode can address up to 4 GB of memory, as in huge real mode. In x86-64 processors, SMM can address more than 4 GB of memory, as in real address mode. Usage Initially, System Management Mode was used for implementing power management and hardware control features like Advanced Power Management (APM). However, BIOS manufacturers and OEMs have relied on SMM for newer functionality like Advanced Configuration a
https://en.wikipedia.org/wiki/Monoicy
Monoicy is a sexual system in haploid plants (mainly bryophytes) where both sperm and eggs are produced on the same gametophyte, in contrast with dioicy, where each gametophyte produces only sperm or eggs but never both. Both monoicous and dioicous gametophytes produce gametes in gametangia by mitosis rather than meiosis, so that sperm and eggs are genetically identical with their parent gametophyte. It has been suggested that monoicy may have benefits in dry habitats where the ability to produce sporophytes is limited due to lack of water. Monoicy is similar to, and often conflated with, monoecy, which applies to seed plants (spermatophytes) and refers to separate male and female cones or flowers on the same plant. Etymology and history The word monoicous and the related forms mon(o)ecious are derived from the Greek μόνος (mónos), single, and οἶκος (oîkos) or οἰκία (oikía), house. The words dioicous and di(o)ecious are derived from οἶκος or οἰκία and δι- (di-), twice, double. ((o)e is the Latin way of transliterating Greek οι, whereas oi is a more straightforward modern way.) Generally, the terms "monoicous" and "dioicous" have been restricted to description of haploid sexuality (gametophytic sexuality), and are thus used primarily to describe bryophytes in which the gametophyte is the dominant generation. Meanwhile, "monoecious" and "dioecious" are used to describe diploid sexuality (sporophytic sexuality), and thus are used to describe tracheophytes (vascular plants) in which the sporophyte is the dominant generation. However, this usage, although precise, is not universal, and "monoecious" and "dioecious" are still used by some bryologists for the gametophyte. Occurrence About 40% of mosses are monoicous. Bryophyte sexuality Bryophytes have life cycles that are gametophyte dominated. The longer-lived, more prominent autotrophic plant is the gametophyte. The sporophyte in mosses and liverworts consists of an unbranched stalk (a seta) bearing a single capsule
https://en.wikipedia.org/wiki/Green-beard%20effect
The green-beard effect is a thought experiment used in evolutionary biology to explain selective altruism among individuals of a species. The idea of a green-beard gene was proposed by William D. Hamilton in his articles of 1964, and got its name from the example used by Richard Dawkins ("I have a green beard and I will be altruistic to anyone else with a green beard") in The Selfish Gene (1976). A green-beard effect occurs when an allele, or a set of linked alleles, produces three expressed (or phenotypic) effects: a perceptible trait (the hypothetical "green beard"); recognition of this trait by others; and preferential treatment of individuals with the trait by others with the trait. The carrier of the gene (or a specific allele) is essentially recognizing copies of the same gene (or a specific allele) in other individuals. Whereas kin selection involves altruism to related individuals who share genes in a non-specific way, green-beard alleles promote altruism toward individuals who share a gene that is expressed by a specific phenotypic trait. Some authors also note that green-beard effects can include "spite" toward individuals lacking the "green-beard" gene. This can have the effect of delineating a subset of organisms within a population characterized by members who show greater cooperation toward each other, thus forming a "clique" that can be advantageous to its members, who are not necessarily kin. A green-beard effect can increase altruism toward green-beard phenotypes, and therefore the allele's presence in a population, even though the genes being helped are not exact copies; all that is required is that they express the three characteristics above. Green-beard alleles are vulnerable to mutations that produce the perceptible trait without the helping behaviour. Altruistic behaviour is paradoxical when viewed in the light of old ideas of evolutionary theory that emphasised the role of competition. The evolution of altruism is better expl
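The logic above can be explored with a toy simulation. The sketch below is my own illustration, not from the source: population size, benefit b, and cost c are arbitrary parameters. Carriers of a hypothetical green-beard allele pay a cost c to confer a benefit b on a randomly chosen fellow carrier; because carriers as a class gain whenever b > c, the allele tends to spread under fitness-proportional reproduction.

    import random

    def generation(pop, b=0.3, c=0.1):
        # pop: list of booleans (True = carries the green-beard allele).
        fitness = [1.0] * len(pop)
        carriers = [i for i, has_allele in enumerate(pop) if has_allele]
        for i in carriers:
            others = [j for j in carriers if j != i]
            if others:
                j = random.choice(others)
                fitness[i] -= c   # cost of helping
                fitness[j] += b   # preferential treatment of a fellow carrier
        # Fitness-proportional reproduction (haploid, asexual).
        return random.choices(pop, weights=fitness, k=len(pop))

    pop = [random.random() < 0.2 for _ in range(500)]
    for _ in range(200):
        pop = generation(pop)
    print(sum(pop) / len(pop))   # allele frequency, typically driven toward 1.0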
https://en.wikipedia.org/wiki/Ray%20%28optics%29
In optics, a ray is an idealized geometrical model of light or other electromagnetic radiation, obtained by choosing a curve that is perpendicular to the wavefronts of the actual light, and that points in the direction of energy flow. Rays are used to model the propagation of light through an optical system, by dividing the real light field up into discrete rays that can be computationally propagated through the system by the techniques of ray tracing. This allows even very complex optical systems to be analyzed mathematically or simulated by computer. Ray tracing uses approximate solutions to Maxwell's equations that are valid as long as the light waves propagate through and around objects whose dimensions are much greater than the light's wavelength. Ray optics or geometrical optics does not describe phenomena such as diffraction, which require wave optics theory. Some wave phenomena such as interference can be modeled in limited circumstances by adding phase to the ray model. Definition A light ray is a line (straight or curved) that is perpendicular to the light's wavefronts; its tangent is collinear with the wave vector. Light rays in homogeneous media are straight. They bend at the interface between two dissimilar media and may be curved in a medium in which the refractive index changes. Geometric optics describes how rays propagate through an optical system. Objects to be imaged are treated as collections of independent point sources, each producing spherical wavefronts and corresponding outward rays. Rays from each object point can be mathematically propagated to locate the corresponding point on the image. A slightly more rigorous definition of a light ray follows from Fermat's principle, which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. Special rays There are many special rays that are used in optical modelling to analyze an optical system. These are defined and described below,
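As a concrete illustration of how a ray-tracing step propagates a ray across an interface between dissimilar media, the sketch below (my own example, not from the article) applies Snell's law in vector form in two dimensions; d is the unit incident direction, n the unit surface normal pointing toward the incident side, and n1, n2 the refractive indices.

    import math

    def refract(d, n, n1, n2):
        # Refract unit direction d at a surface with unit normal n.
        # Returns the refracted unit direction, or None on total
        # internal reflection.
        r = n1 / n2
        cos_i = -(d[0] * n[0] + d[1] * n[1])     # cosine of the incidence angle
        sin2_t = r * r * (1.0 - cos_i * cos_i)   # Snell: sin(t) = r * sin(i)
        if sin2_t > 1.0:
            return None                          # total internal reflection
        cos_t = math.sqrt(1.0 - sin2_t)
        return (r * d[0] + (r * cos_i - cos_t) * n[0],
                r * d[1] + (r * cos_i - cos_t) * n[1])

    # A ray going straight down hits a horizontal air-to-glass interface:
    print(refract((0.0, -1.0), (0.0, 1.0), 1.0, 1.5))  # (0.0, -1.0), no bending at normal incidence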
https://en.wikipedia.org/wiki/Carnegie%20Mellon%20University%20Usable%20Privacy%20and%20Security%20Laboratory
The Carnegie Mellon University Usable Privacy and Security Laboratory (CUPS) was established in the Spring of 2004 to bring together Carnegie Mellon University researchers working on a diverse set of projects related to understanding and improving the usability of privacy and security software and systems. The privacy and security research community has become increasingly aware that usability problems severely impact the effectiveness of mechanisms designed to provide security and privacy in software systems. Indeed, one of the four grand research challenges in information security and assurance identified by the Computing Research Association in 2003 is: "Give end-users security controls they can understand and privacy they can control for the dynamic, pervasive computing environments of the future." This is the challenge that CUPS strives to address. CUPS is affiliated with Carnegie Mellon CyLab and has members from the Engineering and Public Policy Department, the School of Computer Science, the Electrical and Computer Engineering Department, the Heinz College, and the Department of Social and Decision Sciences. It is directed by Lorrie Cranor. Projects P3P and computer-readable privacy policies Two members of the CUPS Lab are members of the W3C P3P Working Group, working on developing the P3P 1.1 specification. In the fall of 2005, AT&T gave the lab the rights to the source code and trademarks surrounding Privacy Bird, its P3P user agent. Privacy Bird is currently maintained and distributed by the lab. In the summer of 2005, the lab made available to the public a "P3P-enabled search engine", known as Privacy Finder. It allowed a user to reorder search results based on whether each site complied with his or her privacy preferences. This information was gleaned from P3P policies found on the web sites. Since 2012, Privacy Finder has been "temporarily out of service", with no indication of when service would be restored. Additionally, the lab archives web sites
https://en.wikipedia.org/wiki/Patrick%27s%20test
Patrick's test or FABER test is performed to evaluate pathology of the hip joint or the sacroiliac joint. The test is performed by having the tested leg flexed and the thigh abducted and externally rotated. If pain is elicited on the ipsilateral side anteriorly, it is suggestive of a hip joint disorder on the same side. If pain is elicited on the contralateral side posteriorly around the sacroiliac joint, it is suggestive of pain mediated by dysfunction in that joint. History Patrick's test is named after the American neurologist Hugh Talbot Patrick. See also Gaenslen's test Physical medicine and rehabilitation
https://en.wikipedia.org/wiki/Rho%28D%29%20immune%20globulin
Rho(D) immune globulin (RhIG) is a medication used to prevent RhD isoimmunization in mothers who are RhD negative and to treat idiopathic thrombocytopenic purpura (ITP) in people who are Rh positive. It is often given both during and following pregnancy. It may also be used when RhD-negative people are given RhD-positive blood. It is given by injection into muscle or a vein. A single dose lasts 12 weeks. It is made from human blood plasma. Common side effects include fever, headache, pain at the site of injection, and red blood cell breakdown. Other side effects include allergic reactions, kidney problems, and a very small risk of viral infections. In those with ITP, the amount of red blood cell breakdown may be significant. Use is safe with breastfeeding. Rho(D) immune globulin is made up of antibodies to the antigen Rho(D) present on some red blood cells. It is believed to work by blocking a person's immune system from recognizing this antigen. Rho(D) immune globulin came into medical use in the 1960s, following the pioneering work of John G. Gorman. In 1980, Gorman shared the Lasker-DeBakey Clinical Medical Research Award for pioneering work on the rhesus blood group system. RhIG is on the World Health Organization's List of Essential Medicines. Medical uses In a pregnancy where the mother is RhD negative and the father is RhD positive, the probability of the fetus having RhD-positive blood depends on whether the father is homozygous for RhD (i.e., both RhD alleles are present) or heterozygous (i.e., only one RhD allele is present). If the father is homozygous, the fetus will necessarily be RhD positive, as the father will necessarily pass on an RhD-positive allele. If the father is heterozygous, there is a 50% chance that the fetus will be RhD positive, as he will randomly pass on either the RhD-positive allele or not. If a fetus is RhD positive and the mother is RhD negative, the mother is at risk of RhD alloimmunization, where the mother mounts an immune response against the RhD antigen on fetal red blood cells
https://en.wikipedia.org/wiki/Irregular%20matrix
An irregular matrix, or ragged matrix, is a matrix that has a different number of elements in each row. Ragged matrices are not used in linear algebra, since standard matrix transformations cannot be performed on them, but they are useful in computing as arrays, which in this context are called jagged arrays. Irregular matrices are typically stored using Iliffe vectors. For example, an irregular matrix might have four rows of lengths 3, 2, 1, and 4 (a concrete instance appears in the sketch below). See also Regular matrix (disambiguation) Empty matrix Sparse matrix
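In most programming languages an irregular matrix is represented directly as a jagged array, i.e., an array of row arrays of differing lengths, which is essentially an Iliffe vector since the outer array holds references to the rows. A minimal sketch (the example values are my own):

    # A jagged array: each row may have a different length.
    ragged = [
        [1, 3, 1],
        [2, 4],
        [5],
        [6, 2, 0, 9],
    ]

    for i, row in enumerate(ragged):
        print(f"row {i}: length {len(row)}, sum {sum(row)}")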
https://en.wikipedia.org/wiki/Moore%20method
The Moore method is a deductive manner of instruction used in advanced mathematics courses. It is named after Robert Lee Moore, a famous topologist who first used a stronger version of the method at the University of Pennsylvania when he began teaching there in 1911 (Zitarelli, 2004). The way the course is conducted varies from instructor to instructor, but the content of the course is usually presented in whole or in part by the students themselves. Instead of using a textbook, the students are given a list of definitions and, based on these, theorems which they are to prove and present in class, leading them through the subject material. The Moore method typically limits the amount of material that a class is able to cover, but its advocates claim that it induces a depth of understanding that listening to lectures cannot give. The original method F. Burton Jones, a student of Moore and a practitioner of his method, described it as follows: The students were forbidden to read any book or article about the subject. They were even forbidden to talk about it outside of class. Hersh and John-Steiner (1977) claim that "this method is reminiscent of a well-known, old method of teaching swimming called 'sink or swim'". Quotations "That student is taught the best who is told the least." Moore, quoted in Parker (2005: vii). "I hear, I forget. I see, I remember. I do, I understand." (Chinese proverb that was a favorite of Moore's. Quoted in Halmos, P.R. (1985) I want to be a mathematician: an automathography. Springer-Verlag: 258)
https://en.wikipedia.org/wiki/Artificial%20reproduction
Artificial reproduction is the re-creation of life by means other than natural ones. It involves the building of new life following human plans and projects. Examples include artificial selection, artificial insemination, in vitro fertilization, artificial wombs, artificial cloning, and kinematic replication. Artificial reproduction is one aspect of artificial life. Artificial reproduction falls into two classes according to its capacity to be self-sufficient: non-assisted reproductive technology and assisted reproductive technology. Cutting plants' stems and placing them in compost is a form of assisted artificial reproduction, xenobots are an example of a more autonomous type of reproduction, while the artificial womb presented in the film The Matrix illustrates a hypothetical non-assisted technology. The idea of artificial reproduction has led to various technologies. Theology Humans have aspired to create life since time immemorial. Most theologies and religions have conceived this possibility as exclusive to deities. Christian religions consider the possibility of artificial reproduction, in most cases, as heretical and sinful. Philosophy Although ancient Greek philosophy raised the possibility that man could imitate the creative capacity of nature, it was thought that, were this possible, human beings would reproduce things just as nature does, and vice versa, nature would make the things that man makes in the same way man does. Aristotle, for example, wrote that if nature made tables, it would make them just as men do. In other words, if nature were to create a table, such a table would look like a human-made table. Similarly, Descartes envisioned the human body, and nature, as a machine. Cartesian philosophy continues to see a perfect mirror between nature and the artificial. However, Kant revolutionized this old idea by criticizing such naturalism. Kant pedagogically wrote: Humans are not instructed by nature but rather use nature as raw
https://en.wikipedia.org/wiki/Distributed%20algorithm
A distributed algorithm is an algorithm designed to run on computer hardware constructed from interconnected processors. Distributed algorithms are used in different application areas of distributed computing, such as telecommunications, scientific computing, distributed information processing, and real-time process control. Standard problems solved by distributed algorithms include leader election, consensus, distributed search, spanning tree generation, mutual exclusion, and resource allocation. Distributed algorithms are a sub-type of parallel algorithm, typically executed concurrently, with separate parts of the algorithm being run simultaneously on independent processors, and having limited information about what the other parts of the algorithm are doing. One of the major challenges in developing and implementing distributed algorithms is successfully coordinating the behavior of the independent parts of the algorithm in the face of processor failures and unreliable communications links. The choice of an appropriate distributed algorithm to solve a given problem depends on both the characteristics of the problem, and characteristics of the system the algorithm will run on such as the type and probability of processor or link failures, the kind of inter-process communication that can be performed, and the level of timing synchronization between separate processes. Standard problems Atomic commit An atomic commit is an operation where a set of distinct changes is applied as a single operation. If the atomic commit succeeds, it means that all the changes have been applied. If there is a failure before the atomic commit can be completed, the "commit" is aborted and no changes will be applied. Algorithms for solving the atomic commit problem include the two-phase commit protocol and the three-phase commit protocol. Consensus Consensus algorithms try to solve the problem of a number of processes agreeing on a common decision. More precisely, a Consensus protoco
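To make the atomic commit discussion concrete, here is a toy sketch of a two-phase commit coordinator (my own illustration; a real protocol must also handle participant timeouts, crash recovery, and durable logging, all omitted here).

    def two_phase_commit(participants):
        # participants: objects with prepare() -> bool, commit(), abort().
        # Phase 1 (voting): ask every participant to prepare.
        votes = [p.prepare() for p in participants]
        # Phase 2 (completion): commit only if all voted yes.
        if all(votes):
            for p in participants:
                p.commit()
            return True
        for p in participants:
            p.abort()
        return False

    class Node:
        def __init__(self, name, ok=True):
            self.name, self.ok = name, ok
        def prepare(self):
            return self.ok                  # vote yes or no
        def commit(self):
            print(self.name, "committed")
        def abort(self):
            print(self.name, "aborted")

    print(two_phase_commit([Node("a"), Node("b")]))            # True
    print(two_phase_commit([Node("a"), Node("b", ok=False)]))  # False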
https://en.wikipedia.org/wiki/Circular%20convolution
Circular convolution, also known as cyclic convolution, is a special case of periodic convolution, which is the convolution of two periodic functions that have the same period. Periodic convolution arises, for example, in the context of the discrete-time Fourier transform (DTFT). In particular, the DTFT of the product of two discrete sequences is the periodic convolution of the DTFTs of the individual sequences. And each DTFT is a periodic summation of a continuous Fourier transform function. Although DTFTs are usually continuous functions of frequency, the concepts of periodic and circular convolution are also directly applicable to discrete sequences of data. In that context, circular convolution plays an important role in maximizing the efficiency of a certain kind of common filtering operation. Definitions The periodic convolution of two T-periodic functions, $h_T(t)$ and $x_T(t)$, can be defined as $\int_{t_0}^{t_0+T} h_T(\tau)\, x_T(t-\tau)\, d\tau$, where $t_0$ is an arbitrary parameter. An alternative definition, in terms of the notation of normal linear or aperiodic convolution, follows from expressing $h_T(t)$ and $x_T(t)$ as periodic summations of aperiodic components $h$ and $x$, i.e.: $h_T(t) \triangleq \sum_{k=-\infty}^{\infty} h(t - kT)$. Then: $\int_{t_0}^{t_0+T} h_T(\tau)\, x_T(t-\tau)\, d\tau = (x * h_T)(t) = (x_T * h)(t)$. Both forms can be called periodic convolution. The term circular convolution arises from the important special case of constraining the non-zero portions of both $h$ and $x$ to the interval $[0, T]$. Then the periodic summation becomes a periodic extension, which can also be expressed as a circular function: $x_T(t) = x(t \bmod T)$ for any real $t$. And the limits of integration reduce to the length of function $h$: $(x * h_T)(t) = \int_0^T h(\tau)\, x_T(t - \tau)\, d\tau$. Discrete sequences Similarly, for discrete sequences, and a parameter N, we can write a circular convolution of aperiodic functions $h$ and $x$ as: $(x *_N h)[n] = \sum_{m=-\infty}^{\infty} x[m]\, h_N[n - m]$, where $h_N[n] \triangleq \sum_{k=-\infty}^{\infty} h[n - kN]$. This function is N-periodic. It has at most N unique values. For the special case that the non-zero extent of both x and h are ≤ N, it is reducible to matrix multiplication where the kernel of the integral transform is a circulant matrix. Example A case of great practical interest is illustrated in the figure. The
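For discrete sequences, the N-point circular convolution can be computed either directly from the definition above or via the discrete Fourier transform, since circular convolution in the time domain corresponds to pointwise multiplication in the frequency domain. A short NumPy sketch (the example data are my own):

    import numpy as np

    def circular_convolve(x, h):
        # N-point circular convolution of two length-N sequences.
        N = len(x)
        return np.array([sum(x[m] * h[(n - m) % N] for m in range(N))
                         for n in range(N)])

    x = np.array([1.0, 2.0, 0.0, 1.0])
    h = np.array([1.0, 1.0, 0.0, 0.0])

    direct = circular_convolve(x, h)
    via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
    print(direct)                        # [2. 3. 2. 1.]
    print(np.allclose(direct, via_fft))  # True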
https://en.wikipedia.org/wiki/Dynamic%20energy%20budget%20theory
The dynamic energy budget (DEB) theory is a formal metabolic theory which provides a single quantitative framework to dynamically describe the aspects of metabolism (energy and mass budgets) of all living organisms at the individual level, based on assumptions about energy uptake, storage, and utilization of various substances. The DEB theory adheres to stringent thermodynamic principles, is motivated by universally observed patterns, is non-species specific, and links different levels of biological organization (cells, organisms, and populations) as prescribed by the implications of energetics. Models based on the DEB theory have been successfully applied to over 1,000 species, with real-life applications ranging from conservation and aquaculture to general ecology and ecotoxicology (see also the Add-my-pet collection). The theory is contributing to the theoretical underpinning of the emerging field of metabolic ecology. The explicitness of the assumptions and the resulting predictions enable testing against a wide variety of experimental results at the various levels of biological organization. The theory explains many general observations, such as the body size scaling relationships of certain physiological traits, and provides a theoretical underpinning to the widely used method of indirect calorimetry. Several popular empirical models are special cases of the DEB model, or very close numerical approximations. Theoretical background The theory presents simple mechanistic rules that describe the uptake and allocation of energy (and nutrients) and the consequences for physiological organization throughout an organism's life cycle, including the relationships of energetics with aging and effects of toxicants. Assumptions of the DEB theory are delineated in an explicit way, the approach clearly distinguishes mechanisms associated with intra‐ and interspecific variation in metabolic rates, and equations for energy flows are mathematically derived following the princ
https://en.wikipedia.org/wiki/Flucytosine
Flucytosine, also known as 5-fluorocytosine (5-FC), is an antifungal medication. It is specifically used, together with amphotericin B, for serious Candida infections and cryptococcosis. It may be used by itself or with other antifungals for chromomycosis. Flucytosine is used by mouth and by injection into a vein. Common side effects include bone marrow suppression, loss of appetite, diarrhea, vomiting, and psychosis. Anaphylaxis and other allergic reactions occasionally occur. It is unclear if use in pregnancy is safe for the baby. Flucytosine is in the fluorinated pyrimidine analogue family of medications. It works by being converted into fluorouracil inside the fungus, which impairs its ability to make protein. Flucytosine was first made in 1957. It is on the World Health Organization's List of Essential Medicines. As of 2016, in the United States the medication cost about US$2,000 per day, while in the United Kingdom it is about US$22 per day. It is not available in much of the third world. Medical uses Flucytosine by mouth is used for the treatment of serious infections caused by susceptible strains of Candida or Cryptococcus neoformans. It can also be used for the treatment of chromomycosis (chromoblastomycosis), if susceptible strains cause the infection. Flucytosine must not be used as a sole agent in life-threatening fungal infections due to its relatively weak antifungal effects and the fast development of resistance, but rather in combination with amphotericin B and/or azole antifungals such as fluconazole or itraconazole. Minor infections such as candidal cystitis may be treated with flucytosine alone. In some countries, treatment with slow intravenous infusions for no more than a week is also a therapeutic option, particularly if the disease is life-threatening. Serious fungal infections may occur in those who are immunocompromised. These people benefit from combination therapy including flucytosine, but the incidence of side-effects of a combination therapy,
https://en.wikipedia.org/wiki/Beta-2%20microglobulin
β2 microglobulin (B2M) is a component of MHC class I molecules. MHC class I molecules have α1, α2, and α3 proteins which are present on all nucleated cells (excluding red blood cells). In humans, the β2 microglobulin protein is encoded by the B2M gene. Structure and function β2 microglobulin lies beside the α3 chain on the cell surface. Unlike α3, β2 has no transmembrane region. Directly above β2 (that is, further away from the cell) lies the α1 chain, which itself is next to the α2. β2 microglobulin associates not only with the alpha chain of MHC class I molecules, but also with class I-like molecules such as CD1 (5 genes in humans), MR1, the neonatal Fc receptor (FcRn), and Qa-1 (a form of alloantigen). Nevertheless, the β2 microglobulin gene is outside of the MHC (HLA) locus, on a different chromosome. An additional function is association with the HFE protein, together regulating the expression of hepcidin in the liver; hepcidin targets the iron transporter ferroportin on the basolateral membrane of enterocytes and on the cell membrane of macrophages for degradation, resulting in decreased iron uptake from food and decreased iron release from recycled red blood cells in the MPS (mononuclear phagocyte system), respectively. Loss of this function causes iron excess and hemochromatosis. In a cytomegalovirus infection, a viral protein binds to β2 microglobulin, preventing assembly of MHC class I molecules and their transport to the plasma membrane. Mouse models deficient for the β2 microglobulin gene have been engineered. These mice demonstrate that β2 microglobulin is necessary for cell surface expression of MHC class I and stability of the peptide-binding groove. In fact, in the absence of β2 microglobulin, very limited amounts of MHC class I (classical and non-classical) molecules can be detected on the surface (bare lymphocyte syndrome or BLS). In the absence of MHC class I, CD8+ T cells cannot develop. (CD8+ T cells are a subset of T cells involved in the development
https://en.wikipedia.org/wiki/Weak%20formulation
Weak formulations are important tools for the analysis of mathematical equations that permit the transfer of concepts of linear algebra to solve problems in other fields such as partial differential equations. In a weak formulation, equations or conditions are no longer required to hold absolutely (and this is not even well defined) and have instead weak solutions only with respect to certain "test vectors" or "test functions". In a strong formulation, the solution space is constructed such that these equations or conditions are already fulfilled. The Lax–Milgram theorem, named after Peter Lax and Arthur Milgram who proved it in 1954, provides weak formulations for certain systems on Hilbert spaces. General concept Let $V$ be a Banach space, let $V'$ be the dual space of $V$, let $A\colon V \to V'$, and let $f \in V'$. A vector $u \in V$ is a solution of the equation $Au = f$ if and only if for all $v \in V$, $\langle Au, v\rangle = \langle f, v\rangle$. Here, $v$ is called a test vector (in general) or a test function (if $V$ is a function space). To bring this into the generic form of a weak formulation, find $u \in V$ such that $a(u, v) = f(v)$ for all $v \in V$, by defining the bilinear form $a(u, v) := \langle Au, v\rangle$. Example 1: linear system of equations Now, let $V = \mathbb{R}^n$ and $A\colon V \to V$ be a linear mapping. Then, the weak formulation of the equation $Au = f$ involves finding $u \in V$ such that for all $v \in V$ the following equation holds: $\langle Au, v\rangle = \langle f, v\rangle$, where $\langle\cdot,\cdot\rangle$ denotes an inner product. Since $A$ is a linear mapping, it is sufficient to test with basis vectors, and we get $\langle Au, e_i\rangle = \langle f, e_i\rangle$ for $i = 1, \ldots, n$. Actually, expanding $u = \sum_{j=1}^n u_j e_j$, we obtain the matrix form of the equation $\mathbf{A}\mathbf{u} = \mathbf{f}$, where $a_{ij} = \langle A e_j, e_i\rangle$ and $f_i = \langle f, e_i\rangle$. The bilinear form associated to this weak formulation is $a(u, v) := v^{\mathsf T} \mathbf{A} u$. Example 2: Poisson's equation To solve Poisson's equation $-\nabla^2 u = f$ on a domain $\Omega$ with $u = 0$ on its boundary, and to specify the solution space $V$ later, one can use the $L^2$ scalar product $\langle u, v\rangle = \int_\Omega u v \, dx$ to derive the weak formulation. Then, testing with differentiable functions $v$ yields $-\int_\Omega (\nabla^2 u) v \, dx = \int_\Omega f v \, dx$. The left side of this equation can be made more symmetric by integration by parts using Green's identity and assuming that $v = 0$ on the boundary: $\int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f v \, dx$. This is what is usually called the weak formulation of Poisson's equation. Functions in the solution space must b
https://en.wikipedia.org/wiki/Cyberwarfare
Cyberwarfare is the use of cyber attacks against an enemy state, causing comparable harm to actual warfare and/or disrupting vital computer systems. Some intended outcomes could be espionage, sabotage, propaganda, manipulation or economic warfare. There is significant debate among experts regarding the definition of cyberwarfare, and even whether such a thing exists. One view is that the term is a misnomer since no cyber attacks to date could be described as a war. An alternative view is that it is a suitable label for cyber attacks which cause physical damage to people and objects in the real world. Many countries, including the United States, United Kingdom, Russia, China, Israel, Iran, and North Korea, have active cyber capabilities for offensive and defensive operations. As states explore the use of cyber operations and combine capabilities, the likelihood of physical confrontation and violence playing out as a result of, or as part of, a cyber operation increases. However, cyber operations matching the scale and protracted nature of war are unlikely, so ambiguity remains. The first instance of kinetic military action used in response to a cyber-attack resulting in the loss of human life was observed on 5 May 2019, when the Israel Defense Forces targeted and destroyed a building associated with an ongoing cyber-attack. Definition There is ongoing debate over how cyberwarfare should be defined and no absolute definition is widely agreed upon. While the majority of scholars, militaries, and governments use definitions that refer to state and state-sponsored actors, other definitions may include non-state actors, such as terrorist groups, companies, political or ideological extremist groups, hacktivists, and transnational criminal organizations, depending on the context of the work. Examples of definitions proposed by experts in the field are as follows. Raymond Charles Parks and David P. Duggan focused on analyzing cyberwarfare in terms of computer networks and pointed out that "Cy
https://en.wikipedia.org/wiki/Ferromagnetic%20resonance
Ferromagnetic resonance, or FMR, is coupling between an electromagnetic wave and the magnetization of a medium through which it passes. This coupling induces a significant loss of power of the wave. The power is absorbed by the precessing magnetization (Larmor precession) of the material and lost as heat. For this coupling to occur, the frequency of the incident wave must be equal to the precession frequency of the magnetization (Larmor frequency), and the polarization of the wave must match the orientation of the magnetization. This effect can be used for various applications such as spectroscopic techniques or the design of microwave devices. The FMR spectroscopic technique is used to probe the magnetization of ferromagnetic materials. It is a standard tool for probing spin waves and spin dynamics. FMR is very broadly similar to electron paramagnetic resonance (EPR), and also somewhat similar to nuclear magnetic resonance (NMR), except that FMR probes the sample magnetization resulting from the magnetic moments of dipolar-coupled but unpaired electrons, while NMR probes the magnetic moment of atomic nuclei that are screened by the atomic or molecular orbitals surrounding such nuclei of non-zero nuclear spin. Ferromagnetic resonance is also the basis of various high-frequency electronic devices, such as resonance isolators or circulators. History Ferromagnetic resonance was experimentally discovered by V. K. Arkad'yev when he observed the absorption of UHF radiation by ferromagnetic materials in 1911. A qualitative explanation of FMR, along with an explanation of Arkad'yev's results, was offered by Ya. G. Dorfman in 1923, when he suggested that the optical transitions due to Zeeman splitting could provide a way to study ferromagnetic structure. A 1935 paper published by Lev Landau and Evgeny Lifshitz predicted the existence of ferromagnetic resonance of the Larmor precession, which was independently verified in experiments by J. H. E. Griffiths (UK) and E
https://en.wikipedia.org/wiki/Materials%20informatics
Materials informatics is a field of study that applies the principles of informatics and data science to materials science and engineering to improve the understanding, use, selection, development, and discovery of materials. The term "materials informatics" is frequently used interchangeably with "data science", "machine learning", and "artificial intelligence" by the community. This is an emerging field whose goal is high-speed and robust acquisition, management, analysis, and dissemination of diverse materials data, in order to greatly reduce the time and risk required to develop, produce, and deploy new materials, which generally takes longer than 20 years. This field of endeavor is not limited to some traditional understandings of the relationship between materials and information. Some more narrow interpretations include combinatorial chemistry, process modeling, materials databases, materials data management, and product life cycle management. Materials informatics is at the convergence of these concepts, but also transcends them and has the potential to achieve greater insights and deeper understanding by applying lessons learned from data gathered on one type of material to others. By gathering appropriate metadata, the value of each individual data point can be greatly expanded. Databases Databases are essential for any informatics research and applications. In materials informatics many databases exist containing both empirical data obtained experimentally and theoretical data obtained computationally. Big data that can be used for machine learning is particularly difficult to obtain for experimental data due to the lack of a standard for reporting data and the variability in the experimental environment. This lack of big data has led to a growing effort in developing machine learning techniques that utilize extremely small data sets. On the other hand, large uniform databases of theoretical density functional theory (DFT) calculations exist
https://en.wikipedia.org/wiki/De%20novo%20mutation
A de novo mutation (DNM) is any mutation or alteration in the genome of an individual organism (human, animal, plant, microbe, etc.) that was not inherited from its parents. This type of mutation arises spontaneously during DNA replication in cell division. De novo mutations, by definition, are present in the affected individual but absent from both biological parents' genomes. These mutations can occur in any cell of the offspring, but those in the germ line (eggs or sperm) can be passed on to the next generation. In most cases, such a mutation has little or no effect on the affected organism due to the redundancy and robustness of the genetic code. However, in rare cases, it can have notable and serious effects on overall health, physical appearance, and other traits. Disorders that most commonly involve de novo mutations include cri-du-chat syndrome, 1p36 deletion syndrome, genetic cancer syndromes, and certain forms of autism, among others. Rate The rate at which de novo mutations occur is not static and can vary among different organisms and even among individuals. In humans, the average number of spontaneous mutations (not present in the parents) an infant has in its genome is approximately 43.86 DNMs. Various factors can influence this rate. For instance, a study in September 2019 by the University of Utah Health revealed that certain families have a higher spontaneous mutation rate than average. This finding indicates that the rate of de novo mutation can have a hereditary component, suggesting that it may "run in the family". Additionally, the age of parents, particularly the paternal age, can significantly impact the rate of de novo mutations. Older parents, especially fathers, tend to have a higher risk of having children with de novo mutations due to the higher number of cell divisions in the male germ line as men age. In genetic counselling, parents are often told that after having a first child with a condition caused by a de
https://en.wikipedia.org/wiki/Photoinduced%20charge%20separation
Photoinduced charge separation is the process of an electron in an atom or molecule being excited to a higher energy level by the absorption of a photon and then leaving the atom or molecule to free space, or to a nearby electron acceptor. Rutherford model An atom consists of a positively charged nucleus surrounded by bound electrons. The nucleus consists of uncharged neutrons and positively charged protons. Electrons are negatively charged. In the early part of the twentieth century Ernest Rutherford suggested that the electrons orbited the dense central nucleus in a manner analogous to planets orbiting the Sun. The centripetal force required to keep the electrons in orbit was provided by the Coulomb force of the protons in the nucleus acting upon the electrons, just as the gravitational force of the Sun acting on a planet provides the centripetal force necessary to keep the planet in orbit. This model, although appealing, doesn't hold true in the real world. According to classical electrodynamics, an accelerating charge radiates energy (as in synchrotron radiation), so an orbiting electron would steadily lose orbital energy and spiral inward toward the nucleus. Once the electron spiralled into the nucleus, it would combine with a proton to form a neutron, and the atom would cease to exist. This model is clearly wrong. Bohr model In 1913, Niels Bohr refined the Rutherford model by stating that the electrons existed in discrete quantized states called energy levels. This meant that the electrons could only occupy orbits at certain energies. The laws of quantum physics apply here, and they do not comply with the laws of classical Newtonian mechanics. An electron which is stationary and completely free from the atom has an energy of 0 joules (or 0 electronvolts). An electron which is described as being at the "ground state" has a (ne
https://en.wikipedia.org/wiki/List%20of%20input%20methods%20for%20Unix%20platforms
This is intended as a non-exhaustive list of input methods for Unix platforms. An input method is a means of entering characters and glyphs that have a corresponding encoding in a character set. See the input method page for more information.
https://en.wikipedia.org/wiki/Heron%27s%20fountain
Heron's fountain is a hydraulic machine invented by the 1st century AD inventor, mathematician, and physicist Hero of Alexandria. Heron studied the pressure of air and steam, described the first steam engine, and built toys that would spurt water, one of them known as Heron's fountain. Various versions of Heron's fountain are used today in physics classes as a demonstration of principles of hydraulics and pneumatics. Construction In the following description, call the 3 containers: (A) Top: basin (B) Middle: water supply (C) Bottom: air supply And three pipes: P1 (on the left in the picture) from a hole in the bottom of basin (A) to the bottom of air supply container (C) P2 (on the right in the picture) from the top of the air supply container (C) to the top of the water supply container (B) P3 (in the middle of the picture) from the bottom of the water supply container (B), up through the bottom of the basin (A) to a height above the basin's rim. The fountain issues upwards through this pipe. The maximum height of P3 pipe depends on the height between B and C (see below). Container A can be closed and airtight, but it is not necessary. B and C, however, must be airtight and resistant to atmospheric pressure. Plastic bottles suffice, but glass containers work better. Balloons do not work because they cannot hold pressure without deforming. The fountain works in the following way: The energy for moving the water ultimately comes from the water in A descending into C. This means the water in B can rise into A only as much as it falls from A to C. Water falling from A down to C through pipe P1 builds up pressure in the bottom container; this pressure is proportional to the height difference between A and C. Pressure is transmitted by the air through pipe P2 into the water supply B, and pushes the water up into pipe P3. Water moving up pipe P3 replaces water falling from A into C, closing the loop. These principles explain the construction: The a
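Under idealized assumptions (incompressible water, no friction losses, air acting only as a pressure transmitter), the maximum jet height follows from a static pressure balance. The derivation below is my own summary of that standard argument, with $z_A$, $z_B$, $z_C$ the water-surface heights of the basin, the water supply, and the air supply, $p_0$ atmospheric pressure, and $\rho$ the density of water:

    p_{\text{air}} = p_0 + \rho g (z_A - z_C)                        % set by the water column in P1
    \rho g h = p_{\text{air}} - p_0 \;\Rightarrow\; h = z_A - z_C    % rise of water in P3 above B's surface
    h - (z_A - z_B) = z_B - z_C                                      % jet height measured above the basin A

So, ideally, the fountain rises above the basin by exactly the height difference between the water supply B and the air supply C, consistent with the note above that the maximum height of the P3 pipe depends on the height between B and C.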
https://en.wikipedia.org/wiki/String%20art
String art, or pin and thread art, is characterized by an arrangement of colored thread strung between points to form geometric patterns or representational designs such as a ship's sails, sometimes with other artist material comprising the remainder of the work. Thread, wire, or string is wound around a grid of nails hammered into a velvet-covered wooden board. Though straight lines are formed by the string, the slightly different angles and metric positions at which the strings intersect give the appearance of Bézier curves (as in the mathematical concept of the envelope of a family of straight lines). Quadratic Bézier curves are obtained from strings based on two intersecting segments. Other forms of string art include Spirelli, which is used for cardmaking and scrapbooking, and curve stitching, in which string is stitched through holes. String art has its origins in the 'curve stitch' activities invented by Mary Everest Boole at the end of the 19th century to make mathematical ideas more accessible to children. It was popularised as a decorative craft in the late 1960s through kits and books. A computational form of string art that can produce photo-realistic artwork was introduced by Petros Vrellis in 2016. Gallery See also Bézier curve Envelope (mathematics) N-connectedness
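The envelope effect is easy to reproduce numerically: connecting the point (0, i) to (n − i, 0) for successive i yields straight chords whose envelope is a parabola, the classic curve-stitch figure. A small sketch using matplotlib (my own example; any plotting library would do):

    import matplotlib.pyplot as plt

    n = 20  # number of "pins" along each axis
    for i in range(1, n):
        # Each chord joins a pin on the y-axis to a pin on the x-axis.
        plt.plot([0, n - i], [i, 0], color="steelblue", linewidth=0.8)

    plt.gca().set_aspect("equal")
    plt.title("Curve stitching: straight chords whose envelope is a parabola")
    plt.show()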
https://en.wikipedia.org/wiki/Diamond-square%20algorithm
The diamond-square algorithm is a method for generating heightmaps for computer graphics. It is a slightly better algorithm than the three-dimensional implementation of the midpoint displacement algorithm, which produces two-dimensional landscapes. It is also known as the random midpoint displacement fractal, the cloud fractal or the plasma fractal, because of the plasma effect produced when applied. The idea was first introduced by Fournier, Fussell and Carpenter at SIGGRAPH in 1982. The diamond-square algorithm starts with a two-dimensional grid, then randomly generates terrain height from four seed values arranged in a grid of points so that the entire plane is covered in squares. Description The diamond-square algorithm begins with a two-dimensional square array of width and height $2^n + 1$. The four corner points of the array must first be set to initial values. The diamond and square steps are then performed alternately until all array values have been set. The diamond step: For each square in the array, set the midpoint of that square to be the average of the four corner points plus a random value. The square step: For each diamond in the array, set the midpoint of that diamond to be the average of the four corner points plus a random value. Each random value is multiplied by a scale constant, which decreases with each iteration by a factor of $2^{-h}$, where h is a value between 0.0 and 1.0 (lower values produce rougher terrain). During the square steps, points located on the edges of the array will have only three adjacent values set, rather than four. There are a number of ways to handle this complication - the simplest being to take the average of just the three adjacent values. Another option is to 'wrap around', taking the fourth value from the other side of the array. When used with consistent initial corner values, this method also allows generated fractals to be stitched together without discontinuities. Visualization The image below shows the
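A compact reference implementation of the two alternating steps, following the description above (pure Python; the grid size is 2**n + 1 and the roughness parameter h behaves as described, with edge points averaging only their three in-bounds neighbours, the simplest of the options named):

    import random

    def diamond_square(n, h=0.8, seed=42):
        # Return a (2**n + 1) x (2**n + 1) heightmap as a list of lists.
        random.seed(seed)
        size = 2 ** n + 1
        g = [[0.0] * size for _ in range(size)]
        for r in (0, size - 1):                  # seed the four corners
            for c in (0, size - 1):
                g[r][c] = random.uniform(-1.0, 1.0)
        step, scale = size - 1, 1.0
        while step > 1:
            half = step // 2
            # Diamond step: midpoint of each square gets the corner average.
            for r in range(half, size, step):
                for c in range(half, size, step):
                    avg = (g[r - half][c - half] + g[r - half][c + half] +
                           g[r + half][c - half] + g[r + half][c + half]) / 4.0
                    g[r][c] = avg + random.uniform(-scale, scale)
            # Square step: midpoints of diamonds; edge points average
            # only their in-bounds neighbours.
            for r in range(0, size, half):
                for c in range((r + half) % step, size, step):
                    vals = [g[rr][cc]
                            for rr, cc in ((r - half, c), (r + half, c),
                                           (r, c - half), (r, c + half))
                            if 0 <= rr < size and 0 <= cc < size]
                    g[r][c] = sum(vals) / len(vals) + random.uniform(-scale, scale)
            scale *= 2 ** (-h)                   # shrink the random range each pass
            step = half
        return g

    terrain = diamond_square(4)                  # a 17 x 17 heightmap
    print(len(terrain), len(terrain[0]))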
https://en.wikipedia.org/wiki/ISO%2031-7
ISO 31-7 is the part of international standard ISO 31 that defines names and symbols for quantities and units related to acoustics. It is superseded by ISO 80000-8.
https://en.wikipedia.org/wiki/ISO%2031-11
ISO 31-11:1992 was the part of international standard ISO 31 that defines mathematical signs and symbols for use in physical sciences and technology. It was superseded in 2009 by ISO 80000-2:2009, which was subsequently revised in 2019 as ISO 80000-2:2019. Its definitions include the following: Mathematical logic Sets Miscellaneous signs and symbols Operations Functions Exponential and logarithmic functions Circular and hyperbolic functions Complex numbers Matrices Coordinate systems Vectors and tensors Special functions See also Mathematical symbols Mathematical notation
https://en.wikipedia.org/wiki/Metacarpophalangeal%20joint
The metacarpophalangeal joints (MCP) are situated between the metacarpal bones and the proximal phalanges of the fingers. These joints are of the condyloid kind, formed by the reception of the rounded heads of the metacarpal bones into shallow cavities on the proximal ends of the proximal phalanges. Being condyloid, they allow the movements of flexion, extension, abduction, adduction and circumduction (see anatomical terms of motion) at the joint. Structure Ligaments Each joint has: palmar ligaments of metacarpophalangeal articulations collateral ligaments of metacarpophalangeal articulations Dorsal surfaces The dorsal surfaces of these joints are covered by the expansions of the extensor tendons, together with some loose areolar tissue which connects the deep surfaces of the tendons to the bones. Function The movements which occur in these joints are flexion, extension, adduction, abduction, and circumduction; the movements of abduction and adduction are very limited, and cannot be performed while the fingers form a fist. The muscles of flexion and extension are as follows: Clinical significance Arthritis of the MCP is a distinguishing feature of rheumatoid arthritis, as opposed to the distal interphalangeal joint in osteoarthritis. Other animals In many quadrupeds, particularly horses and other larger animals, the metacarpophalangeal joint is referred to as the "fetlock". This term is translated literally as "foot-lock". In fact, although the term fetlock does not specifically apply to other species' metacarpophalangeal joints (for instance, humans), the "second" or "mid-finger" knuckle of the human hand does anatomically correspond to the fetlock on larger quadrupeds. For lack of a better term, the shortened name may seem more practical.
https://en.wikipedia.org/wiki/Majority%20logic%20decoding
In error detection and correction, majority logic decoding is a method to decode repetition codes, based on the assumption that the largest number of occurrences of a symbol was the transmitted symbol. Theory In a binary alphabet made of $\{0, 1\}$, if an $(n, 1)$ repetition code is used, then each input bit is mapped to the code word as a string of $n$ replicated input bits. Generally $n$ is an odd number. The repetition code can detect up to $\lfloor n/2 \rfloor$ transmission errors. Decoding errors occur when more than these transmission errors occur. Thus, assuming bit-transmission errors are independent, the probability of error for a repetition code is given by $P_e = \sum_{k = \lfloor n/2 \rfloor + 1}^{n} \binom{n}{k} \epsilon^k (1 - \epsilon)^{n-k}$, where $\epsilon$ is the error probability over the transmission channel. Algorithm Assumption: the code word is $(x_1, \ldots, x_n)$, where $n$ is an odd number. Calculate the Hamming weight $w_H$ of the received repetition code. If $w_H < n/2$, decode the code word to be all 0's. If $w_H > n/2$, decode the code word to be all 1's. This algorithm is a boolean function in its own right, the majority function. Example In a $(5, 1)$ repetition code, if R = [1 0 1 1 0], then $w_H(R) = 3$ and $3 > 5/2$, so R' = [1 1 1 1 1]. Hence the transmitted message bit was 1.
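The decoding rule is a one-liner in practice; this sketch reproduces the worked example from the text:

    def majority_decode(received):
        # Decode one bit from an n-bit repetition codeword (n odd).
        weight = sum(received)              # Hamming weight of the word
        return 1 if weight > len(received) / 2 else 0

    print(majority_decode([1, 0, 1, 1, 0]))  # 1, as in the example above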
https://en.wikipedia.org/wiki/Accelerated%20aging
Accelerated aging is testing that uses aggravated conditions of heat, humidity, oxygen, sunlight, vibration, etc. to speed up the normal aging processes of items. It is used to help determine the long-term effects of expected levels of stress within a shorter time, usually in a laboratory by controlled standard test methods. It is used to estimate the useful lifespan of a product or its shelf life when actual lifespan data is unavailable. This occurs with products that have not existed long enough to have gone through their useful lifespan: for example, a new type of car engine or a new polymer for replacement joints. Physical testing or chemical testing is carried out by subjecting the product to representative levels of stress for long time periods, unusually high levels of stress used to accelerate the effects of natural aging, or levels of stress that intentionally force failures (for further analysis). Mechanical parts are run at very high speed, far in excess of what they would receive in normal usage. Polymers are often kept at elevated temperatures, in order to accelerate chemical breakdown. Environmental chambers are often used. Also, the device or material under test can be exposed to rapid (but controlled) changes in temperature, humidity, pressure, strain, etc. For example, cycles of heat and cold can simulate the effect of day and night for a few hours or minutes. Library and archival preservation science Accelerated aging is also used in library and archival preservation science. In this context, a material, usually paper, is subjected to extreme conditions in an effort to speed up the natural aging process. Usually, the extreme conditions consist of elevated temperature, but tests making use of concentrated pollutants or intense light also exist. These tests may be used for several purposes. To predict the long-term effects of particular conservation treatments. In such a test, treated and untreated papers are both subjected to a sin
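For thermally accelerated tests in particular, the degree of acceleration is commonly estimated with an Arrhenius model, a standard reliability-engineering tool that is not stated in this excerpt but fits the elevated-temperature testing it describes:

    AF = \exp\!\left[ \frac{E_a}{k_B} \left( \frac{1}{T_{\text{use}}} - \frac{1}{T_{\text{test}}} \right) \right]

where $AF$ is the acceleration factor, $E_a$ the activation energy of the dominant degradation reaction, $k_B$ Boltzmann's constant, and $T_{\text{use}}$, $T_{\text{test}}$ the absolute temperatures in service and in the test chamber.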
https://en.wikipedia.org/wiki/Duality%20%28optimization%29
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem. Therefore, the solution to the primal is an upper bound to the solution of the dual, and the solution of the dual is a lower bound to the solution of the primal. This fact is called weak duality. In general, the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition. This fact is called strong duality. Dual problem Usually the term "dual problem" refers to the Lagrangian dual problem but other dual problems are used – for example, the Wolfe dual problem and the Fenchel dual problem. The Lagrangian dual problem is obtained by forming the Lagrangian of a minimization problem by using nonnegative Lagrange multipliers to add the constraints to the objective function, and then solving for the primal variable values that minimize the original objective function. This solution gives the primal variables as functions of the Lagrange multipliers, which are called dual variables, so that the new problem is to maximize the objective function with respect to the dual variables under the derived constraints on the dual variables (including at least the nonnegativity constraints). In general, given two dual pairs of separated locally convex spaces $(X, X^*)$ and $(Y, Y^*)$ and the function $f\colon X \to \mathbb{R} \cup \{+\infty\}$, we can define the primal problem as finding $\hat{x}$ such that $f(\hat{x}) = \inf_{x \in X} f(x)$. In other words, if $\hat{x}$ exists, $f(\hat{x})$ is the minimum of the function $f$ and the infimum (greatest lower bound) of the function is attained. If there are constraint conditions, th
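As a standard concrete instance (my own illustrative addition, not drawn from this excerpt), the Lagrangian dual of a linear program takes the familiar symmetric form:

    \text{Primal:}\; \min_x \; c^{\mathsf T} x \quad \text{s.t.}\; Ax \ge b,\; x \ge 0
    \qquad
    \text{Dual:}\; \max_y \; b^{\mathsf T} y \quad \text{s.t.}\; A^{\mathsf T} y \le c,\; y \ge 0

Weak duality is then the statement that $b^{\mathsf T} y \le c^{\mathsf T} x$ for every feasible pair $(x, y)$.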
https://en.wikipedia.org/wiki/Test%20cross
Under the law of dominance in genetics, an individual expressing a dominant phenotype could contain either two copies of the dominant allele (homozygous dominant) or one copy of each dominant and recessive allele (heterozygous dominant). By performing a test cross, one can determine whether the individual is heterozygous or homozygous dominant. In a test cross, the individual in question is bred with another individual that is homozygous for the recessive trait, and the offspring of the test cross are examined. Since the homozygous recessive individual can only pass on recessive alleles, the allele the individual in question passes on determines the phenotype of the offspring. Thus, this test yields 2 possible situations: If any of the offspring produced express the recessive trait, the individual in question is heterozygous for the dominant allele. If all of the offspring produced express the dominant trait, the individual in question is homozygous for the dominant allele. History The first uses of test crosses were in Gregor Mendel's experiments in plant hybridization. While studying the inheritance of dominant and recessive traits in pea plants, he explains that the "signification" (now termed zygosity) of an individual for a dominant trait is determined by the expression patterns of the following generation. Rediscovery of Mendel's work in the early 1900s led to an explosion of experiments employing the principles of test crosses. From 1908 to 1911, Thomas Hunt Morgan conducted test crosses while determining the inheritance pattern of a white eye-colour mutation in Drosophila. These test cross experiments became hallmarks in the discovery of sex-linked traits. Applications in model organisms Test crosses have a variety of applications. Common animal organisms, called model organisms, where test crosses are often used include Caenorhabditis elegans and Drosophila melanogaster. Basic procedures for performing test crosses in these organisms are provided below
https://en.wikipedia.org/wiki/Three-point%20cross
In genetics, a three-point cross is used to determine the loci of three genes in an organism's genome. An individual heterozygous for three mutations is crossed with a homozygous recessive individual, and the phenotypes of the progeny are scored. The two most common phenotypes that result are the parental gametes; the two least common phenotypes that result come from a double crossover in gamete formation. By comparing the parental and double-crossover phenotypes, the geneticist can determine which gene is located between the others on the chromosome. The recombinant frequency is the ratio of non-parental phenotypes to total individuals. It is expressed as a percentage, which is equivalent to the number of map units (or centiMorgans) between two genes. For example, if 100 out of 1000 individuals display the phenotype resulting from a crossover between genes a and b, then the recombination frequency is 10 percent and genes a and b are 10 map units apart on the chromosome. If the recombination frequency is greater than 50 percent, it means that the genes are unlinked - they are either located on different chromosomes or are sufficiently distant from each other on the same chromosome. Any recombination frequency greater than 50 percent is expressed as exactly 50 percent because unlinked genes are as likely as not to be separated during gamete formation.
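The map-unit arithmetic from the example translates directly into code; a short sketch (the function name is my own):

    def map_units(recombinant_count, total_offspring):
        # Recombination frequency in percent, capped at 50 for unlinked genes.
        rf = 100.0 * recombinant_count / total_offspring
        return min(rf, 50.0)

    print(map_units(100, 1000))   # 10.0 map units (centiMorgans), as in the text
    print(map_units(620, 1000))   # 50.0, i.e. effectively unlinked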
https://en.wikipedia.org/wiki/Spinner%20%28cell%20culture%29
A spinner is a type of bioreactor which features an impeller, stirrer, or similar device to agitate the contents (usually a mixture of cells, medium, and products such as proteins that can be harvested). The vessels are usually made out of glass or stainless steel, with ports to accommodate sensors, medium input, or gas flow. Spinner-type vessels are used for mammalian or plant cell culture. They are adequate for cell suspensions and attachment-dependent cell types.
https://en.wikipedia.org/wiki/Hull%20number
A hull number is a serial identification number given to a boat or ship. For the military, a lower number implies an older vessel. For civilian use, the HIN is used to trace the boat's history. The precise usage varies by country and type. United States usage Civilian use For civilian craft manufactured in the United States, the hull number is given to the vessel when it is built and forms part of the hull identification number, which uniquely identifies the vessel and must be permanently affixed to the hull in at least two places. A Hull Identification Number (HIN) is a unique set of 12 characters, similar to the Vehicle Identification Number which is found on automobiles. In 1972, the United States Coast Guard was asked to create a standardized format for HINs to allow for better tracking of accidents and the history of boats. This HIN format is as follows: The first three characters consist of the Manufacturers Index Code (MIC) and should only be letters. The following five characters are the unique serial number assigned by the manufacturer, and can be a series of letters and/or numbers with the exception of the letters O, I, and Q (which can easily be mistaken for numbers). The last four characters determine the model and certification year of the boat. The HIN may be found on the aft of the vessel in the uppermost right corner. Also, the HIN may be stated on the title, registration, and insurance documents. United States military The United States Navy, United States Coast Guard, and United States National Oceanic and Atmospheric Administration employ hull numbers in conjunction with a hull classification symbol to uniquely identify vessels and to aid identification. A particular combination of hull classification and hull number is never reused and therefore provides a means to uniquely identify a particular ship. For example, there have been at least eight vessels named Enterprise, but CV-6 uniquely identifies the World War II aircraft carrier from all others. For convenience, the
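The 12-character format described above translates directly into a validity check. A sketch (the regular expression encodes only the constraints named here; real HIN validation has further rules, and the sample values are made up):

    import re

    # 3-letter MIC, 5-character serial without I, O, or Q,
    # then 4 characters for the model and certification year.
    HIN_PATTERN = re.compile(r"^[A-Z]{3}[A-HJ-NPR-Z0-9]{5}[A-Z0-9]{4}$")

    def looks_like_hin(hin):
        return bool(HIN_PATTERN.fullmatch(hin.upper()))

    print(looks_like_hin("ABC12345D404"))  # True  (hypothetical example)
    print(looks_like_hin("AB!12345D404"))  # False (invalid character)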
https://en.wikipedia.org/wiki/Earth%20systems%20engineering%20and%20management
Earth systems engineering and management (ESEM) is a discipline used to analyze, design, engineer and manage complex environmental systems. It entails a wide range of subject areas including anthropology, engineering, environmental science, ethics and philosophy. At its core, ESEM looks to "rationally design and manage coupled human–natural systems in a highly integrated and ethical fashion". ESEM is a newly emerging area of study that has taken root at the University of Virginia, Cornell and other universities throughout the United States, and at the Centre for Earth Systems Engineering Research (CESER) at Newcastle University in the United Kingdom. Founders of the discipline are Braden Allenby and Michael Gorman. Introduction to ESEM For centuries, humans have utilized the earth and its natural resources to advance civilization and develop technology. "A principal result of the Industrial Revolution and associated changes in human demographics, technology systems, cultures, and economic systems has been the evolution of an Earth in which the dynamics of major natural systems are increasingly dominated by human activity". In many ways, ESEM views the earth as a human artifact. "In order to maintain continued stability of both natural and human systems, we need to develop the ability to rationally design and manage coupled human–natural systems in a highly integrated and ethical fashion: an Earth Systems Engineering and Management (ESEM) capability". ESEM has been developed by a small number of individuals, one of particular note being Braden Allenby. Allenby holds that the foundation upon which ESEM is built is the notion that "the Earth, as it now exists, is a product of human design". On this view, there are no longer any natural systems left in the world: "there are no places left on Earth that don't fall under humanity's shadow". "So the question is not, as some might wish, whether we should begin ESEM, because we have been doing it for a long time, albeit unintentionally.
https://en.wikipedia.org/wiki/Corporation%20for%20Education%20Network%20Initiatives%20in%20California
The Corporation for Education Network Initiatives in California (CENIC) is a nonprofit corporation formed in 1997 to provide high-performance, high-bandwidth networking services to California universities and research institutions. Through this corporation, representatives from all segments of California's K-20 public education combine their networking resources toward the operation, deployment, and maintenance of the California Research and Education Network, or CalREN. Today, CalREN operates over 8,000 miles of fiber optic cable and serves more than 20 million users. History Beginning in the mid-1980s, research universities were served by a National Science Foundation (NSF) funded network, NSFNet. This funding ended in 1995, however, as the NSF believed that the newly established commercial Internet could meet the needs of these institutions. A model for wide-area networking began to emerge in the early 1990s, separating regional network infrastructure from national or international "backbone" infrastructure. Regional networks would connect to one or more "Internet exchange points", where traffic would be sent to or received from one or more backbone networks. When NSFNet ceased operation, this new network structure carried both research and commercial traffic. Researchers at major universities soon began to complain that service from the commercial Internet was inadequate. This led to discussion of a separate network, funded by and for research universities, and ultimately to the establishment of Internet2. The Internet2 backbone would have only two connection points in California. At the same time, officials at the University of California, USC, Caltech, Stanford, and the California State University system (CSU) began discussing how to connect their institutions to the proposed new Internet2 network. They recognized that the key to a comprehensive information technology strategy was the development of a cohesive and seamless statewide, high-speed, advanced service
https://en.wikipedia.org/wiki/Ant%E2%80%93fungus%20mutualism
The ant–fungus mutualism is a symbiosis seen between certain ant and fungal species, in which ants actively cultivate fungus, much as humans farm crops, as a food source. There is evidence of only two instances in which this form of agriculture evolved in ants, resulting in a dependence on fungi for food: in the attine ants and in some ants of the genus Megalomyrmex. In some species, the ants and fungi are dependent on each other for survival. This type of codependency is prevalent among herbivores that rely on plant material for nutrition; the fungus's ability to convert plant material into a food source accessible to its host makes it an ideal partner. The leafcutter ant is a well-known example of this symbiosis. Leafcutter ant species can be found from southern South America up to the United States. However, ants are not the only ground-dwelling arthropods to have developed symbioses with fungi: a similar mutualism is noted in termites of the subfamily Macrotermitinae, which are widely distributed throughout the Old World tropics, with the highest diversity in Africa. Overview Fungus-growing ants actively propagate, nurture, and defend Lepiotaceae and other lineages of basidiomycete fungus. In return, the fungus provides nutrients for the ants, which may accumulate in specialized hyphal tips known as "gongylidia". These growths are synthesized from plant substrates and are rich in lipids and carbohydrates. In some advanced genera, the queen ant may take a pellet of the fungus with her when she leaves to start a new colony. There are three castes of female worker ants in Attini colonies, all of which participate in foraging plant matter to feed the fungal cultivar. The lowest caste, the minor workers, is the smallest in size but the largest in number, and is primarily responsible for maintaining the fungal cultivar for the rest of the colony. The symbiosis between basidiomycete fungi and attine ants involves the fungal pathogen, Esco
https://en.wikipedia.org/wiki/Jackson%20network
In queueing theory, a discipline within the mathematical theory of probability, a Jackson network (sometimes Jacksonian network) is a class of queueing network where the equilibrium distribution is particularly simple to compute, as the network has a product-form solution. It was the first significant development in the theory of networks of queues, and generalising and applying the ideas of the theorem to search for similar product-form solutions in other networks has been the subject of much research, including ideas used in the development of the Internet. The networks were first identified by James R. Jackson, and his paper was reprinted in the journal Management Science's 'Ten Most Influential Titles of Management Science's First Fifty Years'. Jackson was inspired by the work of Burke and Reich, though Jean Walrand notes "product-form results … [are] a much less immediate result of the output theorem than Jackson himself appeared to believe in his fundamental paper". An earlier product-form solution was found by R. R. P. Jackson for tandem queues (a finite chain of queues where each customer must visit each queue in order) and cyclic networks (a loop of queues where each customer must visit each queue in order). A Jackson network consists of a number of nodes, where each node represents a queue whose service rate can be both node-dependent (different nodes have different service rates) and state-dependent (service rates change depending on queue lengths). Jobs travel among the nodes following a fixed routing matrix. All jobs at each node belong to a single "class": they follow the same service-time distribution and the same routing mechanism, so there is no notion of priority in serving the jobs, and all jobs at each node are served on a first-come, first-served basis. Jackson networks where a finite population of jobs travel around a closed network also have a product-form solution, described by the Gordon–Newell theorem. Necessary condition
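To make the product-form solution concrete, here is a minimal Python sketch for the simplest open case: single-server nodes with state-independent service rates. The arrival rates, service rates and routing matrix are invented for illustration. The traffic equations λ = γ + λP are solved first; the equilibrium probability of a state (n_1, ..., n_m) is then a product of independent geometric terms (1 − ρ_i)ρ_i^{n_i}, with ρ_i = λ_i/μ_i.

```python
import numpy as np

# A small open Jackson network with single-server nodes and
# state-independent service rates. All numbers here are invented.
gamma = np.array([1.0, 0.5, 0.0])   # external arrival rate at each node
mu = np.array([4.0, 3.0, 5.0])      # service rate at each node
P = np.array([                       # routing matrix: P[i, j] is the
    [0.0, 0.5, 0.3],                 # probability a job leaving node i
    [0.2, 0.0, 0.4],                 # goes next to node j (row sums are
    [0.1, 0.1, 0.0],                 # below 1; the remainder exits)
])

# Traffic equations lambda = gamma + lambda @ P, i.e. lambda (I - P) = gamma.
lam = gamma @ np.linalg.inv(np.eye(3) - P)
rho = lam / mu
assert (rho < 1).all(), "network must be stable (rho_i < 1 at every node)"

def stationary_prob(state):
    """Product-form equilibrium probability of the state (n_1, ..., n_m)."""
    n = np.asarray(state)
    return np.prod((1 - rho) * rho ** n)

print("throughputs:", lam)
print("utilisations:", rho)
print("pi(2, 1, 0) =", stationary_prob((2, 1, 0)))
```

The striking point, which is Jackson's theorem, is that the joint distribution factorises as if each node were an independent M/M/1 queue fed at rate λ_i, even though the internal flows between nodes are generally not Poisson.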