https://en.wikipedia.org/wiki/Species%20discovery%20curve
In ecology, the species discovery curve (also known as a species accumulation curve or collector's curve) is a graph recording the cumulative number of species of living things recorded in a particular environment as a function of the cumulative effort expended searching for them (usually measured in person-hours). It is related to, but not identical with, the species-area curve. The species discovery curve will necessarily be increasing, and will normally be negatively accelerated (that is, its rate of increase will slow down). Plotting the curve gives a way of estimating the number of additional species that will be discovered with further effort. This is usually done by fitting some kind of functional form to the curve, either by eye or by using non-linear regression techniques. Commonly used functional forms include the logarithmic function and the negative exponential function. The advantage of the negative exponential function is that it tends to an asymptote which equals the number of species that would be discovered if infinite effort is expended. However, some theoretical approaches imply that the logarithmic curve may be more appropriate, implying that though species discovery will slow down with increasing effort, it will never entirely cease, so there is no asymptote, and if infinite effort was expended, an infinite number of species would be discovered. An example in which one would not expect the function to asymptote is in the study of genetic sequences where new mutations and sequencing errors may lead to infinite variants. The first theoretical investigation of the species-discovery process was in a classic paper by Fisher, Corbet and Williams (1943), which was based on a large collection of butterflies made in Malaya. Theoretical statistical work on the problem continues, see for example the recent paper by Chao and Shen (2004). The theory is linked to that of Zipf's law. The same approach is used in many other fields. For example, in e
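The curve-fitting step described above can be sketched with non-linear least squares. The two functional forms are the ones named in the text; the survey numbers below are synthetic, generated from the negative exponential model itself rather than real field data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "survey": cumulative species generated from the negative
# exponential model S(t) = S_max * (1 - exp(-b t)) with S_max=80, b=0.05.
effort = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)  # person-hours
species = 80.0 * (1.0 - np.exp(-0.05 * effort))

def neg_exp(t, s_max, b):
    # Tends to the asymptote s_max: species expected under infinite effort.
    return s_max * (1.0 - np.exp(-b * t))

def log_form(t, a, b):
    # Logarithmic form: discovery slows but never ceases; no asymptote.
    return a * np.log1p(b * t)

p_exp, _ = curve_fit(neg_exp, effort, species, p0=[60.0, 0.1])
p_log, _ = curve_fit(log_form, effort, species, p0=[20.0, 1.0])
print("estimated asymptote S_max:", p_exp[0])   # ~80 for this synthetic data
print("logarithmic fit (a, b):", p_log)
```

With real survey data the choice between the two forms matters: only the negative exponential yields a finite estimate of the total species count.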
https://en.wikipedia.org/wiki/Inverse%20dynamics
Inverse dynamics is an inverse problem. It commonly refers to either inverse rigid body dynamics or inverse structural dynamics. Inverse rigid-body dynamics is a method for computing forces and/or moments of force (torques) based on the kinematics (motion) of a body and the body's inertial properties (mass and moment of inertia). Typically it uses link-segment models to represent the mechanical behaviour of interconnected segments, such as the limbs of humans or animals or the joint extensions of robots, where given the kinematics of the various parts, inverse dynamics derives the minimum forces and moments responsible for the individual movements. In practice, inverse dynamics computes these internal moments and forces from measurements of the motion of limbs and external forces such as ground reaction forces, under a special set of assumptions. Applications The fields of robotics and biomechanics constitute the major application areas for inverse dynamics. Within robotics, inverse dynamics algorithms are used to calculate the torques that a robot's motors must deliver to make the robot's end-point move in the way prescribed by its current task. The "inverse dynamics problem" for robotics was solved by Eduardo Bayo in 1987. This solution calculates how each of the numerous electric motors that control a robot arm must move to produce a particular action. Humans can perform very complicated and precise movements, such as controlling the tip of a fishing rod well enough to cast the bait accurately. Before the arm moves, the brain calculates the necessary movement of each muscle involved and tells the muscles what to do as the arm swings. In the case of a robot arm, the "muscles" are the electric motors which must turn by a given amount at a given moment. Each motor must be supplied with just the right amount of electric current, at just the right time. Researchers can predict the motion of a robot arm if they know how the motors will move. This is known as the forward dynamics problem.
https://en.wikipedia.org/wiki/Phototube
A phototube or photoelectric cell is a type of gas-filled or vacuum tube that is sensitive to light. Such a tube is more correctly called a 'photoemissive cell' to distinguish it from photovoltaic or photoconductive cells. Phototubes were previously more widely used but are now replaced in many applications by solid state photodetectors. The photomultiplier tube is one of the most sensitive light detectors, and is still widely used in physics research. Operating principles Phototubes operate according to the photoelectric effect: Incoming photons strike a photocathode, knocking electrons out of its surface, which are attracted to an anode. Thus current is dependent on the frequency and intensity of incoming photons. Unlike photomultiplier tubes, no amplification takes place, so the current through the device is typically of the order of a few microamperes. The light wavelength range over which the device is sensitive depends on the material used for the photoemissive cathode. A caesium-antimony cathode gives a device that is very sensitive in the violet to ultra-violet region with sensitivity falling off to blindness to red light. Caesium on oxidised silver gives a cathode that is most sensitive to infra-red to red light, falling off towards blue, where the sensitivity is low but not zero. Vacuum devices have a near constant anode current for a given level of illumination relative to anode voltage. Gas-filled devices are more sensitive, but the frequency response to modulated illumination falls off at lower frequencies compared to the vacuum devices. The frequency response of vacuum devices is generally limited by the transit time of the electrons from cathode to anode. Applications One major application of the phototube was the reading of optical sound tracks for projected films. Phototubes were used in a variety of light-sensing applications until some were superseded by photoresistors and photodiodes.
https://en.wikipedia.org/wiki/Degree%20distribution
In the study of graphs and networks, the degree of a node in a network is the number of connections it has to other nodes and the degree distribution is the probability distribution of these degrees over the whole network. Definition The degree of a node in a network (sometimes referred to incorrectly as the connectivity) is the number of connections or edges the node has to other nodes. If a network is directed, meaning that edges point in one direction from one node to another node, then nodes have two different degrees, the in-degree, which is the number of incoming edges, and the out-degree, which is the number of outgoing edges. The degree distribution P(k) of a network is then defined to be the fraction of nodes in the network with degree k. Thus if there are n nodes in total in a network and nk of them have degree k, we have P(k) = nk/n. The same information is also sometimes presented in the form of a cumulative degree distribution, the fraction of nodes with degree smaller than k, or even the complementary cumulative degree distribution, the fraction of nodes with degree greater than or equal to k (that is, 1 − C, if one takes C to be the cumulative degree distribution). Observed degree distributions The degree distribution is very important in studying both real networks, such as the Internet and social networks, and theoretical networks. The simplest network model, for example, the (Erdős–Rényi model) random graph, in which each pair of the n nodes is independently connected with probability p, has a binomial distribution of degrees k: P(k) = C(n − 1, k) p^k (1 − p)^(n−1−k) (or Poisson in the limit of large n, if the average degree is held fixed). Most networks in the real world, however, have degree distributions very different from this. Most are highly right-skewed, meaning that a large majority of nodes have low degree but a small number, known as "hubs", have high degree. Some networks, notably the Internet, the World Wide Web, and some social networks were argu
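The definition P(k) = nk/n can be computed directly from an edge list; the five-node undirected graph below is a made-up example:

```python
from collections import Counter

# Made-up undirected graph on five nodes, given as an edge list.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
nodes = {v for e in edges for v in e}

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

n = len(nodes)
counts = Counter(degree[v] for v in nodes)   # n_k: number of nodes of degree k
P = {k: counts[k] / n for k in sorted(counts)}
print(P)   # {1: 0.2, 2: 0.6, 3: 0.2}
```

Summing P(k) over all k gives 1, as it must for a probability distribution; for a directed graph one would keep two counters, one per edge endpoint.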
https://en.wikipedia.org/wiki/Progressive%20contextualization
Progressive contextualization (PC) is a scientific method pioneered and developed by Andrew P. Vayda and research team between 1979 and 1984. The method was developed to help understand cause of damage and destruction of forest and land during the New Order Regime in Indonesia, as well as practical ethnography. Vayda proposed the Progressive contextualization method due to his dissatisfaction with several conventional anthropological methods to describe accurately and quickly cases of illegal logging, land destruction and the network of actor-investor protecting the actions, as well as various consequences detrimental to the environment and social life. The essence of this method is to track and assess: what the actor (actor-based) or network of certain actors (actor-based network) does in a certain location and time the series of consequences (intended or unintended) that result from what the actors and/or networks do, in a time and space that can be different from the original time and space, as long as it is in accordance with the interest of the research and the available time. Therefore, the PC method does not have to be bound to a certain research place and time pre-determined in the research design. It rejects the assumption of ecological and socio-cultural homogeneity. Instead, it focuses on diversity and it looks at how different individuals and groups operate in and adapt to their total environments through a variety of behaviors, technologies, organizations, structures and beliefs. Due attention to context in the elucidation of actions and consequences may often mean having to deal with precisely the kind of factors and processes often scanted or denied by holistic approaches: the loose, transient, and contingent interactions, the disarticulating processes, and the movements of people, resources, and ideas across whatever boundaries that ecosystems, societies, and cultures are thought to have — Vayda, 1986 Based on such a premise and through the pr
https://en.wikipedia.org/wiki/Activity%20%28UML%29
An activity in Unified Modeling Language (UML) is a major task that must take place in order to fulfill an operation contract. The Student Guide to Object-Oriented Development defines an activity as a sequence of activities that make up a process. Activities can be represented in activity diagrams. An activity can represent: The invocation of an operation. A step in a business process. An entire business process. Activities can be decomposed into subactivities, until at the bottom we find atomic actions. The underlying conception of an activity has changed between UML 1.5 and UML 2.0. In UML 2.0 an activity is no longer based on the state-chart; rather, it is based on a Petri-net-like coordination mechanism. There, the activity represents user-defined behavior coordinating actions. Actions in turn are pre-defined (UML offers a series of actions for this).
https://en.wikipedia.org/wiki/Ultradian%20rhythm
In chronobiology, an ultradian rhythm is a recurrent period or cycle repeated throughout a 24-hour day. In contrast, circadian rhythms complete one cycle daily, while infradian rhythms such as the human menstrual cycle have periods longer than a day. The Oxford English Dictionary's definition of Ultradian specifies that it refers to cycles with a period shorter than a day but longer than an hour. The descriptive term ultradian is used in sleep research in reference to the 90–120 minute cycling of the sleep stages during human sleep. There is a circasemidian rhythm in body temperature and cognitive function which is technically ultradian. However, this appears to be the first harmonic of the circadian rhythm of each and not an endogenous rhythm with its own rhythm generator. Other ultradian rhythms include blood circulation, blinking, pulse, hormonal secretions such as growth hormone, heart rate, thermoregulation, micturition, bowel activity, nostril dilation, appetite, and arousal. Ultradian rhythms of appetite require antiphasic release of neuropeptide Y (NPY) and corticotropin-releasing hormone (CRH), stimulating and inhibiting appetite ultradian rhythms. Recently, ultradian rhythms of arousal lasting approximately 4 hours were attributed to the dopaminergic system in mammals. When the dopaminergic system is perturbed either by use of drugs or by genetic disruption, these 4-hour rhythms can lengthen significantly into the infradian (> 24 h) range, sometimes even lasting for days (> 110 h) when methamphetamines are provided. Ultradian mood states in bipolar disorder cycle much faster than rapid cycling; the latter is defined as four or more mood episodes in one year, sometimes occurring within a few weeks. Ultradian mood cycling is characterized by cycles shorter than 24 hours. See also Circadian Rhythm
https://en.wikipedia.org/wiki/Feshbach%20resonance
In physics, a Feshbach resonance can occur upon collision of two slow atoms, when they temporarily stick together forming an unstable compound with short lifetime (so-called resonance). It is a feature of many-body systems in which a bound state is achieved if the coupling(s) between at least one internal degree of freedom and the reaction coordinates, which lead to dissociation, vanish. The opposite situation, when a bound state is not formed, is a shape resonance. It is named after Herman Feshbach, a physicist at MIT. Feshbach resonances have become important in the study of cold-atom systems, including Fermi gases and Bose–Einstein condensates (BECs). In the context of scattering processes in many-body systems, the Feshbach resonance occurs when the energy of a bound state of an interatomic potential is equal to the kinetic energy of a colliding pair of atoms. In experimental settings, Feshbach resonances provide a way to vary the interaction strength between atoms in the cloud by changing the scattering length, asc, of elastic collisions. For atomic species that possess these resonances (like K39 and K40), it is possible to vary the interaction strength by applying a uniform magnetic field. Among many uses, this tool has served to explore the transition from a BEC of fermionic molecules to the BCS regime of weakly interacting fermion pairs in Fermi clouds. For BECs, Feshbach resonances have been used to study a spectrum of systems from non-interacting ideal Bose gases to the unitary regime of interactions. Introduction Consider a general quantum scattering event between two particles. In this reaction, there are two reactant particles denoted by A and B, and two product particles denoted by A' and B'. For the case of a reaction (such as a nuclear reaction), we may denote this scattering event by A + B → A' + B'. The combination of the species and quantum states of the two reactant particles before or after the scattering event is referred to as a reaction channel.
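The magnetic-field tuning of the scattering length mentioned above is commonly parametrized by a single-resonance formula, a(B) = a_bg(1 − Δ/(B − B0)). The sketch below uses illustrative numbers for a_bg, B0 and Δ, not the values of any particular species:

```python
# Common single-resonance parametrization (illustrative constants, not a
# specific species): a(B) = a_bg * (1 - Delta / (B - B0)).
def scattering_length(B, a_bg=100.0, B0=200.0, Delta=10.0):
    """Scattering length (in units of a_bg) at magnetic field B [G]."""
    return a_bg * (1.0 - Delta / (B - B0))

for B in (190.0, 199.0, 201.0, 250.0):
    print(B, scattering_length(B))
# a diverges as B -> B0, changes sign across the resonance,
# and crosses zero at B = B0 + Delta.
```

The divergence and sign change of a near B0 are what let experimenters dial interactions from strongly repulsive through non-interacting to strongly attractive.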
https://en.wikipedia.org/wiki/Stokes%20operators
The Stokes operators are the quantum mechanical operators corresponding to the classical Stokes parameters. These matrix operators are identical to the Pauli matrices. External links Stokes operators, angular momentum and radiation phase.
https://en.wikipedia.org/wiki/Alvy%20Ray%20Smith
Alvy Ray Smith III (born September 8, 1943) is an American computer scientist who co-founded Lucasfilm's Computer Division and Pixar, participating in the 1980s and 1990s expansion of computer animation into feature film. Education In 1965 Alvy Smith received his bachelor's degree in electrical engineering from New Mexico State University (NMSU). He created his first computer graphic in 1965 at NMSU. In 1970 he received a Ph.D. in computer science from Stanford University, with a dissertation on cellular automata theory jointly supervised by Michael A. Arbib, Edward J. McCluskey, and Bernard Widrow. Career His first art show was at the Stanford Coffeehouse. From 1969 to 1973 he was an associate professor of Electrical Engineering and Computer Science at New York University, under chairman Herbert Freeman, one of the earliest computer graphics researchers. He taught briefly at the University of California, Berkeley in 1974. While at Xerox PARC in 1974, Smith worked with Richard Shoup on SuperPaint, one of the first computer raster graphics editor, or 'paint', programs. Smith's major contribution to this software was the creation of the HSV color space, also known as HSB. He created his first computer animations on the SuperPaint system. In 1975 Smith joined the new Computer Graphics Laboratory at New York Institute of Technology (NYIT), where he was given the job title "Information Quanta". There, working alongside a traditional cel animation studio, he met Ed Catmull and several core personnel of Pixar. Smith worked on a series of newer paint programs, including Paint3, the first true-color raster graphics editor. As part of this work he co-invented the concept of the alpha channel. He was also the programmer and collaborator on Ed Emshwiller's animation Sunstone, included in the collection of the Museum of Modern Art in New York. Smith worked at NYIT until 1979 and then briefly at the Jet Propulsion Laboratory with Jim Blinn on the Carl Sagan Cosmos: A Person
https://en.wikipedia.org/wiki/Fixed-point%20index
In mathematics, the fixed-point index is a concept in topological fixed-point theory, and in particular Nielsen theory. The fixed-point index can be thought of as a multiplicity measurement for fixed points. The index can be easily defined in the setting of complex analysis: Let f(z) be a holomorphic mapping on the complex plane, and let z0 be a fixed point of f. Then the function f(z) − z is holomorphic, and has an isolated zero at z0. We define the fixed-point index of f at z0, denoted i(f, z0), to be the multiplicity of the zero of the function f(z) − z at the point z0. In real Euclidean space, the fixed-point index is defined as follows: If x0 is an isolated fixed point of f, then let g be the function defined by g(x) = (f(x) − x) / ||f(x) − x||. Then g has an isolated singularity at x0, and maps the boundary of some deleted neighborhood of x0 to the unit sphere. We define i(f, x0) to be the Brouwer degree of the mapping induced by g on some suitably chosen small sphere around x0. The Lefschetz–Hopf theorem The importance of the fixed-point index is largely due to its role in the Lefschetz–Hopf theorem, which states: Σx ∈ Fix(f) i(f, x) = Λf, where Fix(f) is the set of fixed points of f, and Λf is the Lefschetz number of f. Since the quantity on the left-hand side of the above is clearly zero when f has no fixed points, the Lefschetz–Hopf theorem trivially implies the Lefschetz fixed-point theorem. Notes
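The Brouwer-degree definition above can be probed numerically in the complex plane: the index at z0 is the winding number of g(z) = f(z) − z around 0 on a small circle about z0. This sketch assumes the fixed point is isolated and the circle is small enough to avoid other zeros of g:

```python
import cmath

def fixed_point_index(f, z0, radius=1e-3, n=2000):
    """Winding number of g(z) = f(z) - z around 0 on a small circle about z0."""
    total = 0.0
    prev = cmath.phase(f(z0 + radius) - (z0 + radius))
    for k in range(1, n + 1):
        z = z0 + radius * cmath.exp(2j * cmath.pi * k / n)
        cur = cmath.phase(f(z) - z)
        d = cur - prev
        # Unwrap the jump across the -pi/pi branch cut of phase().
        if d > cmath.pi:
            d -= 2 * cmath.pi
        elif d < -cmath.pi:
            d += 2 * cmath.pi
        total += d
        prev = cur
    return round(total / (2 * cmath.pi))

print(fixed_point_index(lambda z: z * z, 0.0))       # f(z)-z has a simple zero: index 1
print(fixed_point_index(lambda z: z + z ** 3, 0.0))  # zero of multiplicity 3: index 3
```

For holomorphic f this winding number agrees with the multiplicity of the zero of f(z) − z, matching the complex-analytic definition in the text.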
https://en.wikipedia.org/wiki/Evolutionary%20graph%20theory
Evolutionary graph theory is an area of research lying at the intersection of graph theory, probability theory, and mathematical biology. Evolutionary graph theory is an approach to studying how topology affects evolution of a population. That the underlying topology can substantially affect the results of the evolutionary process is seen most clearly in a paper by Erez Lieberman, Christoph Hauert and Martin Nowak. In evolutionary graph theory, individuals occupy vertices of a weighted directed graph and the weight wij of an edge from vertex i to vertex j denotes the probability of i replacing j. The weight corresponds to the biological notion of fitness where fitter types propagate more readily. One property studied on graphs with two types of individuals is the fixation probability, which is defined as the probability that a single, randomly placed mutant of type A will replace a population of type B. According to the isothermal theorem, a graph has the same fixation probability as the corresponding Moran process if and only if it is isothermal, i.e. the sum of all weights that lead into a vertex is the same for all vertices. Thus, for example, a complete graph with equal weights describes a Moran process. The fixation probability in this case is ρ = (1 − 1/r) / (1 − 1/r^N), where r is the relative fitness of the invading type and N is the number of vertices. Graphs can be classified into amplifiers of selection and suppressors of selection. If the fixation probability of a single advantageous mutation is higher than the fixation probability of the corresponding Moran process then the graph is an amplifier, otherwise a suppressor of selection. One example of a suppressor of selection is a linear process where only vertex i−1 can replace vertex i (but not the other way around). In this case the fixation probability is 1/N, since this is the probability that the mutation arises in the first vertex, which will eventually replace all the other ones. Since 1/N < ρ for all r greater than 1, this graph is by definition a suppressor of selection.
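The Moran fixation probability quoted above, ρ = (1 − 1/r)/(1 − 1/r^N), is easy to tabulate:

```python
# Fixation probability of a single mutant of relative fitness r in a Moran
# process (equivalently, any isothermal graph) on N vertices.
def fixation_probability(r, N):
    if r == 1.0:
        return 1.0 / N                      # neutral limit
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** N)

for r in (0.9, 1.0, 1.1, 2.0):
    print(r, fixation_probability(r, 100))
# The suppressor example in the text (the one-way linear chain) fixes the
# mutant with probability 1/N = 0.01 regardless of r.
```

For r > 1 and large N the formula approaches 1 − 1/r, so even a strongly advantageous mutant on a well-mixed graph fixes with probability well below 1.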
https://en.wikipedia.org/wiki/Image%20moment
In image processing, computer vision and related fields, an image moment is a certain particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation. Image moments are useful to describe objects after segmentation. Simple properties of the image which are found via image moments include area (or total intensity), its centroid, and information about its orientation. Raw moments For a 2D continuous function f(x,y) the moment (sometimes called "raw moment") of order (p + q) is defined as Mpq = ∫∫ x^p y^q f(x,y) dx dy for p,q = 0,1,2,... Adapting this to a scalar (greyscale) image with pixel intensities I(x,y), raw image moments Mij are calculated by Mij = Σx Σy x^i y^j I(x,y). In some cases, this may be calculated by considering the image as a probability density function, i.e., by dividing the above by M00. A uniqueness theorem (Hu [1962]) states that if f(x,y) is piecewise continuous and has nonzero values only in a finite part of the xy plane, moments of all orders exist, and the moment sequence (Mpq) is uniquely determined by f(x,y). Conversely, (Mpq) uniquely determines f(x,y). In practice, the image is summarized with functions of a few lower order moments. Examples Simple image properties derived via raw moments include: Area (for binary images) or sum of grey level (for greytone images): M00. Centroid: (x̄, ȳ) = (M10/M00, M01/M00). Central moments Central moments are defined as μpq = ∫∫ (x − x̄)^p (y − ȳ)^q f(x,y) dx dy, where x̄ = M10/M00 and ȳ = M01/M00 are the components of the centroid. If ƒ(x, y) is a digital image, then the previous equation becomes μpq = Σx Σy (x − x̄)^p (y − ȳ)^q f(x,y). The central moments of order up to 3 are: μ00 = M00, μ10 = 0, μ01 = 0, μ11 = M11 − x̄M01, μ20 = M20 − x̄M10, μ02 = M02 − ȳM01, with analogous expressions for μ21, μ12, μ30 and μ03. It can be shown that: μpq = Σm Σn (p choose m)(q choose n)(−x̄)^(p−m) (−ȳ)^(q−n) Mmn. Central moments are translational invariant. Examples Information about image orientation can be derived by first using the second order central moments to construct a covariance matrix: μ′20 = μ20/μ00 = M20/M00 − x̄², μ′02 = μ02/μ00 = M02/M00 − ȳ², and μ′11 = μ11/μ00 = M11/M00 − x̄ȳ. The covariance matrix of the image is then cov[I(x,y)] = [μ′20, μ′11; μ′11, μ′02]. The eigenvectors of this matrix correspond to the major and minor axes of the image intensity, so the orientation can thus be extracted from the angle
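The raw moments, centroid and second-order central moments above can be computed directly on a pixel grid; the 5×5 binary test image below is made up:

```python
import numpy as np

# Made-up 5x5 binary image: a 3x3 bright block.
img = np.zeros((5, 5))
img[1:4, 2:5] = 1.0

y, x = np.mgrid[:img.shape[0], :img.shape[1]]

def raw_moment(i, j):
    # M_ij = sum_x sum_y x^i y^j I(x, y)
    return float((x ** i * y ** j * img).sum())

M00 = raw_moment(0, 0)          # area of the binary blob
xbar = raw_moment(1, 0) / M00   # centroid x
ybar = raw_moment(0, 1) / M00   # centroid y

def central_moment(p, q):
    # mu_pq = sum (x - xbar)^p (y - ybar)^q I(x, y)
    return float(((x - xbar) ** p * (y - ybar) ** q * img).sum())

mu20, mu02, mu11 = central_moment(2, 0), central_moment(0, 2), central_moment(1, 1)
# Orientation from the second-order central moments (covariance matrix).
theta = 0.5 * np.arctan2(2 * mu11 / M00, (mu20 - mu02) / M00)
print(M00, (xbar, ybar), theta)   # 9.0 (3.0, 2.0) 0.0
```

For this symmetric square blob μ11 = 0 and μ20 = μ02, so the orientation angle is 0; an elongated or tilted blob would give a non-zero θ.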
https://en.wikipedia.org/wiki/Pathovar
A pathovar is a bacterial strain or set of strains with the same or similar characteristics, that is differentiated at infrasubspecific level from other strains of the same species or subspecies on the basis of distinctive pathogenicity to one or more plant hosts. Pathovars are named as a ternary or quaternary addition to the species binomial name; for example, the bacterium that causes citrus canker, Xanthomonas axonopodis, has several pathovars with different host ranges, X. axonopodis pv. citri being one of them; the abbreviation 'pv.' means pathovar. The type strains of pathovars are pathotypes, which are distinguished from the types (holotype, neotype, etc.) of the species to which the pathovar belongs. See also Infraspecific names in botany Phytopathology Trinomen, infraspecific names in zoology (subspecies only)
https://en.wikipedia.org/wiki/Differential%20space%E2%80%93time%20code
Differential space–time codes are ways of transmitting data in wireless communications. They are forms of space–time code that do not need to know the channel impairments at the receiver in order to be able to decode the signal. They are usually based on space–time block codes, and transmit one block-code from a set in response to a change in the input signal. The differences among the blocks in the set are designed to allow the receiver to extract the data with good reliability. The first differential space-time block code was disclosed by Vahid Tarokh and Hamid Jafarkhani.
https://en.wikipedia.org/wiki/Goertzel%20algorithm
The Goertzel algorithm is a technique in digital signal processing (DSP) for efficient evaluation of the individual terms of the discrete Fourier transform (DFT). It is useful in certain practical applications, such as recognition of dual-tone multi-frequency signaling (DTMF) tones produced by the push buttons of the keypad of a traditional analog telephone. The algorithm was first described by Gerald Goertzel in 1958. Like the DFT, the Goertzel algorithm analyses one selectable frequency component from a discrete signal. Unlike direct DFT calculations, the Goertzel algorithm applies a single real-valued coefficient at each iteration, using real-valued arithmetic for real-valued input sequences. For covering a full spectrum (except when used on a continuous stream of data, where coefficients are reused for subsequent calculations, giving a computational complexity equivalent to that of a sliding DFT), the Goertzel algorithm has a higher order of complexity than fast Fourier transform (FFT) algorithms, but for computing a small number of selected frequency components, it is more numerically efficient. The simple structure of the Goertzel algorithm makes it well suited to small processors and embedded applications. The Goertzel algorithm can also be used "in reverse" as a sinusoid synthesis function, which requires only 1 multiplication and 1 subtraction per generated sample. The algorithm The main calculation in the Goertzel algorithm has the form of a digital filter, and for this reason the algorithm is often called a Goertzel filter. The filter operates on an input sequence x(n) in a cascade of two stages with a parameter ω0, giving the frequency to be analysed, normalised to radians per sample. The first stage calculates an intermediate sequence s(n): s(n) = x(n) + 2cos(ω0) s(n − 1) − s(n − 2). The second stage applies the following filter to s(n), producing the output sequence y(n): y(n) = s(n) − e^(−jω0) s(n − 1). The first filter stage can be observed to be a second-order IIR filter with a direct-form structure. This particular structure has the property tha
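A minimal sketch of the two-stage filter above, with the second stage folded into the usual real-valued power computation (the squared magnitude of y(N − 1)); the test tone and bin numbers are illustrative:

```python
import math

def goertzel(samples, k_over_n):
    """Power |X|^2 of one DFT component at normalized frequency k/N."""
    omega = 2.0 * math.pi * k_over_n
    coeff = 2.0 * math.cos(omega)
    s_prev = s_prev2 = 0.0
    # First stage: s(n) = x(n) + 2 cos(w0) s(n-1) - s(n-2), real arithmetic only.
    for xn in samples:
        s = xn + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Second stage folded into a power estimate:
    # |y|^2 = s(N-1)^2 + s(N-2)^2 - 2 cos(w0) s(N-1) s(N-2)
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2

N = 64
tone = [math.cos(2.0 * math.pi * 8 * n / N) for n in range(N)]  # tone at bin 8
print(goertzel(tone, 8 / N))   # large power at the matching bin, (N/2)^2 = 1024
print(goertzel(tone, 5 / N))   # essentially zero at a non-matching bin
```

A DTMF detector simply runs eight such filters in parallel, one per keypad frequency, and thresholds the resulting powers.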
https://en.wikipedia.org/wiki/Coulomb%20blockade
In mesoscopic physics, a Coulomb blockade (CB), named after Charles-Augustin de Coulomb's electrical force, is the decrease in electrical conductance at small bias voltages of a small electronic device comprising at least one low-capacitance tunnel junction. Because of the CB, the conductance of a device may not be constant at low bias voltages, but may disappear for biases under a certain threshold, i.e. no current flows. Coulomb blockade can be observed by making a device very small, like a quantum dot. When the device is small enough, electrons inside the device will create a strong Coulomb repulsion preventing other electrons from flowing. Thus, the device will no longer follow Ohm's law and the current-voltage relation of the Coulomb blockade looks like a staircase. Even though the Coulomb blockade can be used to demonstrate the quantization of the electric charge, it remains a classical effect and its main description does not require quantum mechanics. However, when few electrons are involved and an external static magnetic field is applied, Coulomb blockade provides the ground for a spin blockade (like Pauli spin blockade) and valley blockade, which include quantum mechanical effects due to spin and orbital interactions respectively between the electrons. The devices can comprise either metallic or superconducting electrodes. If the electrodes are superconducting, Cooper pairs (with a charge of minus two elementary charges, −2e) carry the current. In the case that the electrodes are metallic or normal-conducting, i.e. neither superconducting nor semiconducting, electrons (with a charge of −e) carry the current. In a tunnel junction The following section is for the case of tunnel junctions with an insulating barrier between two normal conducting electrodes (NIN junctions). The tunnel junction is, in its simplest form, a thin insulating barrier between two conducting electrodes. According to the laws of classical electrodynamics, no current can flow through an insulat
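The blockade threshold can be estimated from the junction's charging energy, E_C = e²/2C; the capacitance below is an illustrative order of magnitude for a quantum-dot-sized device, not a measured value:

```python
# Charging energy of a small tunnel junction; the capacitance is an
# illustrative order of magnitude for a quantum-dot-sized device.
e = 1.602176634e-19    # elementary charge [C]
k_B = 1.380649e-23     # Boltzmann constant [J/K]

C = 1e-18              # hypothetical junction capacitance: 1 aF
E_C = e ** 2 / (2 * C) # energy cost of adding one extra electron
V_th = e / (2 * C)     # bias threshold below which current is blocked

print("E_C  =", E_C, "J")
print("V_th =", V_th, "V")
print("observable only for T well below", E_C / k_B, "K")
```

The comparison E_C versus k_B·T is the standard observability condition: thermal fluctuations must not supply enough energy to lift an electron over the charging barrier.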
https://en.wikipedia.org/wiki/Principles%20of%20Mathematical%20Logic
Principles of Mathematical Logic is the 1950 American translation of the 1938 second edition of David Hilbert's and Wilhelm Ackermann's classic text Grundzüge der theoretischen Logik, on elementary mathematical logic. The 1928 first edition thereof is considered the first elementary text clearly grounded in the formalism now known as first-order logic (FOL). Hilbert and Ackermann also formalized FOL in a way that subsequently achieved canonical status. FOL is now a core formalism of mathematical logic, and is presupposed by contemporary treatments of Peano arithmetic and nearly all treatments of axiomatic set theory. The 1928 edition included a clear statement of the Entscheidungsproblem (decision problem) for FOL, and also asked whether that logic was complete (i.e., whether all semantic truths of FOL were theorems derivable from the FOL axioms and rules). The former problem was answered in the negative first by Alonzo Church and independently by Alan Turing in 1936. The latter was answered affirmatively by Kurt Gödel in 1929. In its description of set theory, mention is made of Russell's paradox and the Liar paradox (page 145). Contemporary notation for logic owes more to this text than it does to the notation of Principia Mathematica, long popular in the English speaking world. Notes
https://en.wikipedia.org/wiki/Meron%20%28physics%29
A meron or half-instanton is a Euclidean space-time solution of the Yang–Mills field equations. It is a singular non-self-dual solution of topological charge 1/2. The instanton is believed to be composed of two merons. A meron can be viewed as a tunneling event between two Gribov vacua. In that picture, the meron is an event which starts from vacuum, then a Wu–Yang monopole emerges, which then disappears again to leave the vacuum in another Gribov copy. See also BPST instanton Dyon Instanton Monopole Sphaleron
https://en.wikipedia.org/wiki/Wu%E2%80%93Yang%20monopole
The Wu–Yang monopole was the first solution (found in 1968 by Tai Tsun Wu and Chen Ning Yang) to the Yang–Mills field equations. It describes a magnetic monopole which is pointlike and has a potential which behaves like 1/r everywhere. See also Meron Dyon Instanton Wu–Yang dictionary Notes
https://en.wikipedia.org/wiki/Nitrosomonas
Nitrosomonas is a genus of Gram-negative bacteria, belonging to the Betaproteobacteria. It is one of the five genera of ammonia-oxidizing bacteria and, as an obligate chemolithoautotroph, uses ammonia (NH3) as an energy source and carbon dioxide (CO2) as a carbon source in the presence of oxygen. Nitrosomonas are important in the global biogeochemical nitrogen cycle, since they increase the bioavailability of nitrogen to plants, and in denitrification, which is important for the release of nitrous oxide, a powerful greenhouse gas. This microbe is photophobic, and usually generates a biofilm matrix, or forms clumps with other microbes, to avoid light. Nitrosomonas can be divided into six lineages: the first one includes the species Nitrosomonas europaea, Nitrosomonas eutropha, Nitrosomonas halophila, and Nitrosomonas mobilis. The second lineage comprises the species Nitrosomonas communis, N. sp. I and N. sp. II, while the third lineage includes only Nitrosomonas nitrosa. The fourth lineage includes the species Nitrosomonas ureae and Nitrosomonas oligotropha, and the fifth and sixth lineages include the species Nitrosomonas marina, N. sp. III, Nitrosomonas estuarii and Nitrosomonas cryotolerans. Morphology All species included in this genus have ellipsoidal or rod-shaped cells containing extensive intracytoplasmic membranes that appear as flattened vesicles. Most species are motile, with a flagellum located in the polar region of the bacillus. Three basic morphological types of Nitrosomonas have been described: short-rod Nitrosomonas, rod Nitrosomonas, and Nitrosomonas with pointed ends. Nitrosomonas species differ in cell size and shape: N. europaea cells are short rods with pointed ends, measuring (0.8–1.1 × 1.0–1.7) µm; motility has not been observed. N. eutropha cells are rod- to pear-shaped with one or both ends pointed, measuring (1.0–1.3 × 1.6–2.3) µm; they are motile. N. halophila cells have a coccoid shap
https://en.wikipedia.org/wiki/Snapshot%20%28computer%20storage%29
In computer systems, a snapshot is the state of a system at a particular point in time. The term was coined as an analogy to that in photography. Rationale A full backup of a large data set may take a long time to complete. On multi-tasking or multi-user systems, there may be writes to that data while it is being backed up. This prevents the backup from being atomic and introduces a version skew that may result in data corruption. For example, if a user moves a file into a directory that has already been backed up, then that file would be completely missing on the backup media, since the backup operation had already taken place before the addition of the file. Version skew may also cause corruption with files which change their size or contents underfoot while being read. One approach to safely backing up live data is to temporarily disable write access to data during the backup, either by stopping the accessing applications or by using the locking API provided by the operating system to enforce exclusive read access. This is tolerable for low-availability systems (on desktop computers and small workgroup servers, on which regular downtime is acceptable). High-availability 24/7 systems, however, cannot bear service stoppages. To avoid downtime, high-availability systems may instead perform the backup on a snapshot—a read-only copy of the data set frozen at a point in time—and allow applications to continue writing to their data. Most snapshot implementations are efficient and can create snapshots in O(1). In other words, the time and I/O needed to create the snapshot does not increase with the size of the data set; by contrast, the time and I/O required for a direct backup is proportional to the size of the data set. In some systems once the initial snapshot is taken of a data set, subsequent snapshots copy the changed data only, and use a system of pointers to reference the initial snapshot. This method of pointer-based snapshots consumes less disk capacity tha
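The pointer-based, copy-on-write scheme described above can be sketched in a few lines. This is an illustrative model, not any real storage system's implementation: blocks live in a dict, taking a snapshot merely freezes the current overlay of changed blocks (O(1) in the data-set size), and reads resolve through the chain of overlays back to the base image.

```python
# Illustrative sketch of pointer-based snapshots (copy-on-write).
# All names here are hypothetical, for demonstration only.

class SnapshotStore:
    def __init__(self, nblocks):
        self.base = {i: b"\x00" for i in range(nblocks)}  # initial data set
        self.overlay = {}      # blocks written since the last snapshot
        self.snapshots = []    # frozen overlays, oldest first

    def write(self, block, data):
        self.overlay[block] = data

    def read(self, block):
        # newest data wins: live overlay, then snapshots (newest first), then base
        if block in self.overlay:
            return self.overlay[block]
        for snap in reversed(self.snapshots):
            if block in snap:
                return snap[block]
        return self.base[block]

    def take_snapshot(self):
        # O(1) in data size: freeze the overlay, start a fresh one
        self.snapshots.append(self.overlay)
        self.overlay = {}
        return len(self.snapshots) - 1   # snapshot index

    def read_at_snapshot(self, snap_index, block):
        # read-only view of the data set as of that snapshot
        for snap in reversed(self.snapshots[: snap_index + 1]):
            if block in snap:
                return snap[block]
        return self.base[block]
```

A backup can then walk `read_at_snapshot` for a frozen, consistent view while applications keep writing through `write`.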
https://en.wikipedia.org/wiki/Alarm%20signal
In animal communication, an alarm signal is an antipredator adaptation in the form of signals emitted by social animals in response to danger. Many primates and birds have elaborate alarm calls for warning conspecifics of approaching predators. For example, the alarm call of the blackbird is a familiar sound in many gardens. Other animals, like fish and insects, may use non-auditory signals, such as chemical messages. Visual signs such as the white tail flashes of many deer have been suggested as alarm signals; they are less likely to be received by conspecifics, so have tended to be treated as a signal to the predator instead. Different calls may be used for predators on the ground or from the air. Often, the animals can tell which member of the group is making the call, so that they can disregard those of little reliability. Evidently, alarm signals promote survival by allowing the receivers of the alarm to escape from the source of peril; this can evolve by kin selection, assuming the receivers are related to the signaller. However, alarm calls can increase individual fitness, for example by informing the predator it has been detected. Alarm calls are often high-frequency sounds because these sounds are harder to localize. Selective advantage The cost/benefit tradeoff of alarm-calling behaviour has sparked much debate among evolutionary biologists seeking to explain the occurrence of such apparently "self-sacrificing" behaviour. The central question is this: "If the ultimate purpose of any animal behaviour is to maximize the chances that an organism's own genes are passed on, with maximum fruitfulness, to future generations, why would an individual deliberately risk destroying itself (its entire genome) for the sake of saving others (other genomes)?". Altruism Some scientists have used the evidence of alarm-calling behaviour to challenge the theory that "evolution works only/primarily at the level of the gene and of the gene's 'interest' in pa
https://en.wikipedia.org/wiki/Sardine%20run
The KwaZulu-Natal sardine run of southern Africa occurs from May through July when billions of sardines – or more specifically the Southern African pilchard Sardinops sagax – spawn in the cool waters of the Agulhas Bank and move northward along the east coast of South Africa. Their sheer numbers create a feeding frenzy along the coastline. The run, containing millions of individual sardines, occurs when a current of cold water heads north from the Agulhas Bank up to Mozambique where it then leaves the coastline and goes further east into the Indian Ocean. In terms of biomass, researchers estimate the sardine run could rival East Africa's great wildebeest migration. However, little is known of the phenomenon. It is believed that the water temperature has to drop below 21 °C in order for the migration to take place. In 2003, the sardines failed to 'run' for the third time in 23 years. While 2005 saw a good run, 2006 marked another non-run. The shoals are often more than 7 km long, 1.5 km wide and 30 metres deep and are clearly visible from spotter planes or from the surface. Sardines group together when they are threatened. This instinctual behaviour is a defence mechanism, as lone individuals are more likely to be eaten than when in large groups. Causes The sardine run is still poorly understood from an ecological point of view. There have been various hypotheses, sometimes contradictory, that try to explain why and how the run occurs. A recent interpretation of the causes is that the sardine run is most likely a seasonal reproductive migration of a genetically distinct subpopulation of sardine that moves along the coast from the eastern Agulhas Bank to the coast of KwaZulu-Natal in most years if not in every year. Genomic and transcriptomic data indicate that the sardines participating in the run originate from South Africa's cool-temperate Atlantic coast. These are attracted to temporary cold-water upwelling off the south-east coast, and eventually find them
https://en.wikipedia.org/wiki/Lip
The lips are a horizontal pair of soft appendages attached to the jaws and are the most visible part of the mouth of many animals, including humans. Vertebrate lips are soft, movable and serve to facilitate the ingestion of food (e.g. suckling and gulping) and the articulation of sound and speech. Human lips are also a somatosensory organ, and can be an erogenous zone when used in kissing and other acts of intimacy. Structure The upper and lower lips are referred to as the "Labium superius oris" and "Labium inferius oris", respectively. The juncture where the lips meet the surrounding skin of the mouth area is the vermilion border, and the typically reddish area within the borders is called the vermilion zone. The vermilion border of the upper lip is known as the cupid's bow. The fleshy protuberance located in the center of the upper lip is a tubercle known by various terms including the procheilon (also spelled prochilon), the "tuberculum labii superioris", and the "labial tubercle". The vertical groove extending from the procheilon to the nasal septum is called the philtrum. The skin of the lip, with three to five cellular layers, is very thin compared to typical face skin, which has up to 16 layers. With light skin color, the lip skin contains fewer melanocytes (cells which produce melanin pigment, which give skin its color). Because of this, the blood vessels appear through the skin of the lips, which leads to their notable red coloring. With darker skin color this effect is less prominent, as in this case the skin of the lips contains more melanin and thus is visually darker. The skin of the lip forms the border between the exterior skin of the face, and the interior mucous membrane of the inside of the mouth. The lip skin is not hairy and does not have sweat glands. Therefore, it does not have the usual protection layer of sweat and body oils which keep the skin smooth, inhibit pathogens, and regulate warmth. For these reasons, the lips dry out faste
https://en.wikipedia.org/wiki/E0%20%28cipher%29
E0 is a stream cipher used in the Bluetooth protocol. It generates a sequence of pseudorandom numbers and combines it with the data using the XOR operator. The key length may vary, but is generally 128 bits. Description At each iteration, E0 generates a bit using four shift registers of differing lengths (25, 31, 33, 39 bits) and two internal states, each 2 bits long. At each clock tick, the registers are shifted and the two states are updated with the current state, the previous state and the values in the shift registers. Four bits are then extracted from the shift registers and added together. The algorithm XORs that sum with the value in the 2-bit register. The first bit of the result is output for the encoding. E0 is divided into three parts: Payload key generation Keystream generation Encoding The setup of the initial state in Bluetooth uses the same structure as the random bit stream generator. We are thus dealing with two combined E0 algorithms. An initial 132-bit state is produced at the first stage using four inputs (the 128-bit key, the 48-bit Bluetooth address and the 26-bit master counter). The output is then processed by a polynomial operation and the resulting key goes through the second stage, which generates the stream used for encoding. The key has a variable length, but is always a multiple of 2 (between 8 and 128 bits). 128-bit keys are generally used. These are stored in the second stage's shift registers. 200 pseudorandom bits are then produced by 200 clock ticks, and the last 128 bits are inserted into the shift registers. This is the stream generator's initial state. Cryptanalysis Several attacks and attempts at cryptanalysis of E0 and the Bluetooth protocol have been made, and a number of vulnerabilities have been found. In 1999, Miia Hermelin and Kaisa Nyberg showed that E0 could be broken in 2^64 operations (instead of 2^128), if 2^64 bits of output are known. This type of attack was subsequently improved by Kishan Chand Gupta an
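The general structure — shift registers producing a keystream that is XORed with the data — can be illustrated with a deliberately simplified sketch. This is not E0: real E0 combines four LFSRs of lengths 25/31/33/39 with a 2-bit finite-state combiner, whereas this toy uses a single 16-bit LFSR with arbitrary taps chosen only for illustration.

```python
# Simplified stream-cipher sketch (NOT the Bluetooth E0 algorithm):
# one 16-bit Fibonacci LFSR generating a keystream XORed with the data.

def lfsr_keystream(state, taps, nbits):
    """Yield nbits pseudorandom bits from a 16-bit Fibonacci LFSR."""
    for _ in range(nbits):
        yield state & 1                    # output bit
        fb = 0
        for t in taps:                     # feedback = XOR of tapped bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << 15)  # shift, insert feedback at the top

def xor_cipher(data, key16):
    # taps are illustrative, not taken from any specification
    ks = lfsr_keystream(key16, taps=(0, 2, 3, 5), nbits=8 * len(data))
    out = bytearray(data)
    for i in range(len(out)):
        byte = 0
        for b in range(8):                 # pack 8 keystream bits into a byte
            byte |= next(ks) << b
        out[i] ^= byte                     # XOR keystream with the data
    return bytes(out)
```

Because XOR is its own inverse, applying `xor_cipher` twice with the same key recovers the plaintext, exactly as in E0's encoding step.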
https://en.wikipedia.org/wiki/Lattice%20multiplication
Lattice multiplication, also known as the Italian method, Chinese method, Chinese lattice, gelosia multiplication, sieve multiplication, shabakh, diagonally or Venetian squares, is a method of multiplication that uses a lattice to multiply two multi-digit numbers. It is mathematically identical to the more commonly used long multiplication algorithm, but it breaks the process into smaller steps, which some practitioners find easier to use. The method had already arisen by medieval times, and has been used for centuries in many different cultures. It is still being taught in certain curricula today. Method A grid is drawn up, and each cell is split diagonally. The two multiplicands of the product to be calculated are written along the top and right side of the lattice, respectively, with one digit per column across the top for the first multiplicand (the number written left to right), and one digit per row down the right side for the second multiplicand (the number written top-down). Then each cell of the lattice is filled in with the product of its column and row digits. As an example, consider the multiplication of 58 by 213. After writing the multiplicands on the sides, consider each cell, beginning with the top left cell. In this case, the column digit is 5 and the row digit is 2. Write their product, 10, in the cell, with the digit 1 above the diagonal and the digit 0 below the diagonal (see picture for Step 1). If the simple product lacks a digit in the tens place, simply fill in the tens place with a 0. After all the cells are filled in this manner, the digits in each diagonal are summed, working from the bottom right diagonal to the top left. Each diagonal sum is written where the diagonal ends. If the sum contains more than one digit, the value of the tens place is carried into the next diagonal (see Step 2). Numbers are filled to the left and to the bottom of the grid, and the answer is the numbers read off down (on the left) and across (on the bottom).
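The grid-and-diagonals procedure above translates directly into code: each cell holds the tens and units of one digit product, and each diagonal corresponds to one decimal place value, summed with carries from the bottom-right. A sketch:

```python
# Lattice multiplication of two non-negative integers, following the
# procedure described above (cells split into tens/units, diagonals summed).

def lattice_multiply(a, b):
    xs = [int(d) for d in str(a)]   # digits across the top
    ys = [int(d) for d in str(b)]   # digits down the right side
    cols, rows = len(xs), len(ys)
    # diag[k] accumulates all cell halves whose place value is 10**k
    diag = [0] * (cols + rows)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            tens, units = divmod(x * y, 10)   # split the cell on its diagonal
            place = (cols - 1 - j) + (rows - 1 - i)
            diag[place] += units              # below-diagonal half
            diag[place + 1] += tens           # above-diagonal half
    # sum each diagonal from the bottom right, carrying into the next
    carry, digits = 0, []
    for total in diag:
        carry, digit = divmod(total + carry, 10)
        digits.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        digits.append(digit)
    return int("".join(map(str, reversed(digits))))
```

For the worked example in the text, `lattice_multiply(58, 213)` gives 12354, matching 58 × 213.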
https://en.wikipedia.org/wiki/Dual%20format
Dual format is a technique used to allow software for two systems which would normally require different disk formats to be recorded on the same floppy disk. In the late 1980s, the term was used to refer to disks that could be used to boot either an Amiga or Atari ST computer. The first track of the disk was specially laid out to contain an Amiga and an Atari ST boot sector at the same time, by fooling each operating system into thinking that the track resolved into the format it expected. The technique was used for some commercially available games, and also for the disks covermounted on ST/Amiga Format magazine. Other games came on Amiga and PC dual-format disks, or even "tri-format" disks, which contained the Amiga, Atari ST and PC versions of the game. Most dual and tri-format disks were implemented using technology developed by Rob Computing. Later, the term was used for disks containing both Windows and Macintosh versions. Examples Action Fighter (Amiga/PC dual-format disk) Lethal Xcess - Wings of Death II (Amiga/Atari ST dual-format disks) Monster Business (Amiga/Atari ST dual-format disk) Populous: The Promised Lands (Amiga/Atari ST dual-format disk) Rick Dangerous (Amiga/PC dual-format disk) Rick Dangerous 2 (Amiga/PC dual-format disk) Stone Age (Amiga/Atari ST dual-format disk) Street Fighter (Amiga/PC dual-format disk) StarGlider 2 (Amiga/Atari ST dual-format disk) 3D Pool (Amiga/Atari ST/PC tri-format disk) Stunt Car Racer (Amiga/PC dual-format disk) Bionic Commando (Amiga/PC dual-format disk) Carrier Command (Amiga/PC dual-format disk) Blasteroids (Amiga/PC dual-format disk) E-Motion (Amiga/PC dual-format disk) Indiana Jones and the Last Crusade Action (Amiga/PC dual-format disk) Out Run (Amiga/PC dual-format disk) World Class Leader Board (Amiga/PC dual-format disk) International Soccer Challenge (Amiga/PC dual-format disk) MicroProse Soccer (Amiga/PC dual-format disk) See also
https://en.wikipedia.org/wiki/Drag%20%28physics%29
In fluid dynamics, drag (sometimes called fluid resistance) is a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid. This can exist between two fluid layers (or surfaces) or between a fluid and a solid surface. Unlike other resistive forces, such as dry friction, which are nearly independent of velocity, the drag force depends on velocity. Drag force is proportional to the velocity for low-speed flow and the squared velocity for high speed flow, where the distinction between low and high speed is measured by the Reynolds number. Drag forces always tend to decrease fluid velocity relative to the solid object in the fluid's path. Examples Examples of drag include the component of the net aerodynamic or hydrodynamic force acting opposite to the direction of movement of a solid object such as cars (automobile drag coefficient), aircraft and boat hulls; or acting in the same geographical direction of motion as the solid, as for sails attached to a down wind sail boat, or in intermediate directions on a sail depending on points of sail. In the case of viscous drag of fluid in a pipe, drag force on the immobile pipe decreases fluid velocity relative to the pipe. In the physics of sports, the drag force is necessary to explain the motion of balls, javelins, arrows and frisbees and the performance of runners and swimmers. Types Types of drag are generally divided into the following categories: form drag or pressure drag due to the size and shape of a body skin friction drag or viscous drag due to the friction between the fluid and a surface which may be the outside of an object or inside such as the bore of a pipe The effect of streamlining on the relative proportions of skin friction and form drag is shown for two different body sections, an airfoil, which is a streamlined body, and a cylinder, which is a bluff body. Also shown is a flat plate illustrating the effect that orientation has on the relative proportions o
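The two drag regimes mentioned above — force proportional to velocity at low Reynolds number, to velocity squared at high Reynolds number — can be made concrete for a sphere in air. The formulas (Stokes' law and the standard drag equation) are textbook results; the fluid constants and the sphere drag coefficient below are typical illustrative values.

```python
# Drag on a sphere in air in the two regimes:
#   Stokes drag      F = 6*pi*mu*r*v        (low Reynolds number, F ~ v)
#   quadratic drag   F = 0.5*rho*Cd*A*v^2   (high Reynolds number, F ~ v^2)
import math

RHO_AIR = 1.225    # kg/m^3, air density at sea level
MU_AIR = 1.81e-5   # Pa*s, dynamic viscosity of air (~15 C)

def reynolds(v, d):
    """Reynolds number for flow speed v (m/s) past diameter d (m)."""
    return RHO_AIR * v * d / MU_AIR

def stokes_drag(v, r):
    """Stokes' law: drag linear in v, valid for Re << 1."""
    return 6 * math.pi * MU_AIR * r * v

def quadratic_drag(v, r, cd=0.47):
    """Drag equation: quadratic in v; Cd ~ 0.47 for a smooth sphere."""
    area = math.pi * r ** 2
    return 0.5 * RHO_AIR * cd * area * v ** 2
```

Doubling the speed doubles the Stokes drag but quadruples the quadratic drag, which is why the distinction between the regimes matters for everything from settling dust (Re << 1) to a thrown ball (Re >> 1).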
https://en.wikipedia.org/wiki/Boomerang%20attack
In cryptography, the boomerang attack is a method for the cryptanalysis of block ciphers based on differential cryptanalysis. The attack was published in 1999 by David Wagner, who used it to break the COCONUT98 cipher. The boomerang attack has allowed new avenues of attack for many ciphers previously deemed safe from differential cryptanalysis. Refinements on the boomerang attack have been published: the amplified boomerang attack, and the rectangle attack. Due to the similarity of a Merkle–Damgård construction with a block cipher, this attack may also be applicable to certain hash functions such as MD5. The attack The boomerang attack is based on differential cryptanalysis. In differential cryptanalysis, an attacker exploits how differences in the input to a cipher (the plaintext) can affect the resultant difference at the output (the ciphertext). A high probability "differential" (that is, an input difference that will produce a likely output difference) is needed that covers all, or nearly all, of the cipher. The boomerang attack allows differentials to be used which cover only part of the cipher. The attack attempts to generate a so-called "quartet" structure at a point halfway through the cipher. For this purpose, say that the encryption action, E, of the cipher can be split into two consecutive stages, E0 and E1, so that E(M) = E1(E0(M)), where M is some plaintext message. Suppose we have two differentials for the two stages; say, Δ → Δ* for E0, and ∇ → ∇* for E1^−1 (the decryption action of E1). The basic attack proceeds as follows: Choose a random plaintext P and calculate P' = P ⊕ Δ. Request the encryptions of P and P' to obtain C = E(P) and C' = E(P') Calculate D = C ⊕ ∇ and D' = C' ⊕ ∇ Request the decryptions of D and D' to obtain Q = E^−1(D) and Q' = E^−1(D') Compare Q and Q'; when the differentials hold, Q ⊕ Q' = Δ. Application to specific ciphers One attack on KASUMI, a block cipher used in 3GPP, is a related-key rectangle attack which breaks the full eight rounds of the cipher faster than exhaustive search (Biham et al., 2005). The attack req
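The quartet bookkeeping can be demonstrated with a toy cipher. For illustration only, both halves E0 and E1 below are affine over XOR (a rotation plus a key XOR), so every differential holds with probability 1 and the boomerang relation Q ⊕ Q' = Δ always closes; in a real attack the halves are nonlinear and the differentials hold only with some probability, but the steps are identical.

```python
# Toy boomerang quartet: E = E1(E0(.)) with XOR-affine halves, so all
# differentials hold with probability 1. Keys are arbitrary toy values.
MASK = 0xFFFF
def rotl(x, n): return ((x << n) | (x >> (16 - n))) & MASK

K0, K1 = 0x3A7C, 0x91E4                  # illustrative round keys
def e0(x):     return rotl(x, 3) ^ K0    # first half of the cipher
def e1(x):     return rotl(x, 7) ^ K1    # second half
def e(x):      return e1(e0(x))          # full encryption E
def e1_inv(y): return rotl(y ^ K1, 9)    # rotl 9 undoes rotl 7 (16-bit)
def e0_inv(y): return rotl(y ^ K0, 13)   # rotl 13 undoes rotl 3
def e_inv(y):  return e0_inv(e1_inv(y))  # full decryption E^-1

def boomerang_quartet(p, delta, nabla):
    p2 = p ^ delta                   # 1. plaintext pair with difference delta
    c, c2 = e(p), e(p2)              # 2. request both encryptions
    d, d2 = c ^ nabla, c2 ^ nabla    # 3. shift both ciphertexts by nabla
    q, q2 = e_inv(d), e_inv(d2)      # 4. request both decryptions
    return q ^ q2                    # 5. equals delta when differentials hold
```

Running `boomerang_quartet` with any plaintext returns the original input difference Δ, which is exactly the "boomerang comes back" check an attacker uses to confirm a right quartet.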
https://en.wikipedia.org/wiki/Perfect%20fluid
In physics, a perfect fluid or ideal fluid is a fluid that can be completely characterized by its rest frame mass density and isotropic pressure p. Real fluids are "sticky" and contain (and conduct) heat. Perfect fluids are idealized models in which these possibilities are neglected. Specifically, perfect fluids have no shear stresses, viscosity, or heat conduction. Quark–gluon plasma is the closest known substance to a perfect fluid. In space-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form where U is the 4-velocity vector field of the fluid and where is the metric tensor of Minkowski spacetime. In time-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form where U is the 4-velocity of the fluid and where is the metric tensor of Minkowski spacetime. This takes on a particularly simple form in the rest frame where is the energy density and is the pressure of the fluid. Perfect fluids admit a Lagrangian formulation, which allows the techniques used in field theory, in particular, quantization, to be applied to fluids. Perfect fluids are used in general relativity to model idealized distributions of matter, such as the interior of a star or an isotropic universe. In the latter case, the equation of state of the perfect fluid may be used in Friedmann–Lemaître–Robertson–Walker equations to describe the evolution of the universe. In general relativity, the expression for the stress–energy tensor of a perfect fluid is written as where U is the 4-velocity vector field of the fluid and where is the inverse metric, written with a space-positive signature. See also Equation of state Ideal gas Fluid solutions in general relativity Potential flow
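The stress–energy expressions referred to in the text appear to have been lost in extraction. The standard forms, reconstructed here with units chosen so that c = 1 (ρ the rest-frame energy density, p the pressure), are:

```latex
% Space-positive metric signature (-, +, +, +):
T^{\mu\nu} = (\rho + p)\, U^\mu U^\nu + p\, \eta^{\mu\nu}

% Time-positive metric signature (+, -, -, -):
T^{\mu\nu} = (\rho + p)\, U^\mu U^\nu - p\, \eta^{\mu\nu}

% In the rest frame (space-positive case) this reduces to
T^{\mu\nu} = \operatorname{diag}(\rho,\ p,\ p,\ p)

% In general relativity, with g^{\mu\nu} the inverse metric (space-positive):
T^{\mu\nu} = (\rho + p)\, U^\mu U^\nu + p\, g^{\mu\nu}
```

Here U is the 4-velocity of the fluid and η the Minkowski metric, as in the surrounding text; restoring factors of c replaces ρ + p by ρ + p/c².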
https://en.wikipedia.org/wiki/Prewellordering
In set theory, a prewellordering on a set is a preorder on (a transitive and reflexive relation on ) that is strongly connected (meaning that any two points are comparable) and well-founded in the sense that the induced relation defined by is a well-founded relation. Prewellordering on a set A prewellordering on a set is a homogeneous binary relation on that satisfies the following conditions: Reflexivity: for all Transitivity: if and then for all Total/Strongly connected: or for all for every non-empty subset there exists some such that for all This condition is equivalent to the induced strict preorder defined by and being a well-founded relation. A homogeneous binary relation on is a prewellordering if and only if there exists a surjection into a well-ordered set such that for all if and only if Examples Given a set the binary relation on the set of all finite subsets of defined by if and only if (where denotes the set's cardinality) is a prewellordering. Properties If is a prewellordering on then the relation defined by is an equivalence relation on and induces a wellordering on the quotient The order-type of this induced wellordering is an ordinal, referred to as the length of the prewellordering. A norm on a set is a map from into the ordinals. Every norm induces a prewellordering; if is a norm, the associated prewellordering is given by Conversely, every prewellordering is induced by a unique regular norm (a norm is regular if, for any and any there is such that ). Prewellordering property If is a pointclass of subsets of some collection of Polish spaces, closed under Cartesian product, and if is a prewellordering of some subset of some element of then is said to be a -prewellordering of if the relations and are elements of where for is said to have the prewellordering property if every set in admits a -prewellordering. The prewellordering property is related to the stronger
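The relation symbols in the definition above were lost in extraction. A reconstruction of the standard definition, writing ≤ for the prewellordering on X:

```latex
% Reflexivity:
x \leq x \quad \text{for all } x \in X
% Transitivity:
(x \leq y \ \wedge\ y \leq z) \implies x \leq z
% Totality (strong connectedness):
x \leq y \ \vee\ y \leq x \quad \text{for all } x, y \in X
% Well-foundedness: the induced strict relation
x < y \iff \neg (y \leq x)
% is well-founded; equivalently, every non-empty Y \subseteq X contains an
% element y with y \leq z for all z \in Y.
% Equivalent characterization: there is a surjection onto a well-ordered set,
\varphi : X \to W, \qquad x \leq y \iff \varphi(x) \leq \varphi(y)
% Example: on the finite subsets of a set, A \leq B \iff |A| \leq |B|
% is a prewellordering. A norm \varphi : X \to \mathrm{Ord} induces the
% prewellordering x \leq_\varphi y \iff \varphi(x) \leq \varphi(y).
```

This reconstruction follows the standard descriptive-set-theoretic definitions and is consistent with the prose that survives in the extract (the equivalence via a surjection onto a well-ordered set, the cardinality example, and the norm-induced prewellordering).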
https://en.wikipedia.org/wiki/Quantum%20programming
Quantum programming is the process of designing or assembling sequences of instructions, called quantum circuits, using gates, switches, and operators to manipulate a quantum system for a desired outcome or results of a given experiment. Quantum circuit algorithms can be implemented on integrated circuits, conducted with instrumentation, or written in a programming language for use with a quantum computer or a quantum processor. With quantum processor based systems, quantum programming languages help express quantum algorithms using high-level constructs. The field is deeply rooted in the open-source philosophy and as a result most of the quantum software discussed in this article is freely available as open-source software. Quantum computers, such as those based on the KLM protocol, a linear optical quantum computing (LOQC) model, use quantum algorithms (circuits) implemented with electronics, integrated circuits, instrumentation, sensors, and/or by other physical means. Other circuits designed for experimentation related to quantum systems can be instrumentation and sensor based. Quantum instruction sets Quantum instruction sets are used to turn higher level algorithms into physical instructions that can be executed on quantum processors. Sometimes these instructions are specific to a given hardware platform, e.g. ion traps or superconducting qubits. cQASM cQASM, also known as common QASM, is a hardware-agnostic quantum assembly language which guarantees the interoperability between all the quantum compilation and simulation tools. It was introduced by the QCA Lab at TUDelft. Quil Quil is an instruction set architecture for quantum computing that first introduced a shared quantum/classical memory model. It was introduced by Robert Smith, Michael Curtis, and William Zeng in A Practical Quantum Instruction Set Architecture. Many quantum algorithms (including quantum teleportation, quantum error correction, simulation, and optimization algorithms) require
https://en.wikipedia.org/wiki/European%20Association%20for%20Theoretical%20Computer%20Science
The European Association for Theoretical Computer Science (EATCS) is an international organization with a European focus, founded in 1972. Its aim is to facilitate the exchange of ideas and results among theoretical computer scientists as well as to stimulate cooperation between the theoretical and the practical community in computer science. The major activities of the EATCS are: Organization of ICALP, the International Colloquium on Automata, Languages and Programming; Publication of the Bulletin of the EATCS; Publication of a series of monographs and texts on theoretical computer science; Publication of the journal Theoretical Computer Science; Publication of the journal Fundamenta Informaticae. EATCS Award Each year, the EATCS Award is awarded in recognition of a distinguished career in theoretical computer science. The first award went to Richard Karp in 2000; the complete list of the winners is given below: Presburger Award Starting in 2010, the European Association for Theoretical Computer Science (EATCS) confers each year at the ICALP conference the Presburger Award to a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers. The award is named after Mojżesz Presburger, who accomplished his path-breaking work on the decidability of the theory of addition (which today is called Presburger arithmetic) as a student in 1929. The complete list of the winners is given below: EATCS Fellows The EATCS Fellows Program has been established by the Association to recognize outstanding EATCS Members for their scientific achievements in the field of Theoretical Computer Science. The Fellow status is conferred by the EATCS Fellows-Selection Committee upon a person having a track record of intellectual and organizational leadership within the EATCS community. Fellows are expected to be “model citizens” of the TCS commun
https://en.wikipedia.org/wiki/Connection-oriented%20communication
In telecommunications and computer networking, connection-oriented communication is a communication protocol where a communication session or a semi-permanent connection is established before any useful data can be transferred. The established connection ensures that data is delivered in the correct order to the upper communication layer. The alternative is called connectionless communication, such as the datagram mode communication used by Internet Protocol (IP) and User Datagram Protocol, where data may be delivered out of order, since different network packets are routed independently and may be delivered over different paths. Connection-oriented communication may be implemented with a circuit switched connection, or a packet-mode virtual circuit connection. In the latter case, it may use either a transport layer virtual circuit protocol such as the TCP protocol, allowing data to be delivered in order although the lower-layer switching is connectionless, or it may be a data link layer or network layer switching mode, where all data packets belonging to the same traffic stream are delivered over the same path, and traffic flows are identified by some connection identifier, reducing the overhead of routing decisions on a packet-by-packet basis for the network. Connection-oriented protocol services are often, but not always, reliable network services that provide acknowledgment after successful delivery and automatic repeat request functions in case of missing or corrupted data. Asynchronous Transfer Mode, Frame Relay and MPLS are examples of connection-oriented, unreliable protocols. SMTP is an example of a connection-oriented protocol in which, if a message is not delivered, an error report is sent to the sender, which makes SMTP a reliable protocol. Because they can keep track of a conversation, connection-oriented protocols are sometimes described as stateful. Circuit switching Circuit switched communication, for example the public switched telephone network,
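The session-before-data pattern is visible in any TCP program: the client's connect and the server's accept complete the handshake before a single application byte moves, and the byte stream then arrives in order. A minimal loopback sketch (error handling omitted; this illustrates the API flow, not a production server):

```python
# Minimal connection-oriented exchange over TCP on loopback:
# connection established first (accept/connect), then ordered data transfer.
import socket
import threading

def run_server(ready, port_box):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))           # OS picks a free port
    srv.listen(1)
    port_box.append(srv.getsockname()[1])
    ready.set()                          # tell the client we are listening
    conn, _ = srv.accept()               # session established here
    data = conn.recv(1024)
    conn.sendall(data.upper())           # echo back, transformed
    conn.close()
    srv.close()

ready, port_box = threading.Event(), []
t = threading.Thread(target=run_server, args=(ready, port_box))
t.start()
ready.wait()

cli = socket.create_connection(("127.0.0.1", port_box[0]))  # handshake
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
t.join()
print(reply)   # b'HELLO'
```

With UDP (SOCK_DGRAM) the same exchange would need no accept/connect step at all, which is precisely the connectionless alternative described above.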
https://en.wikipedia.org/wiki/Air%20suspension
Air suspension is a type of vehicle suspension powered by an electric or engine-driven air pump or compressor. This compressor pumps the air into a flexible bellows, usually made from textile-reinforced rubber. Unlike hydropneumatic suspension, which offers many similar features, air suspension does not use pressurized liquid, but pressurized air. The air pressure inflates the bellows, and raises the chassis from the axle. Overview Air suspension is used in place of conventional steel springs in heavy vehicle applications such as buses and trucks, and in some passenger cars. It is widely used on semi trailers and trains (primarily passenger trains). The purpose of air suspension is to provide a smooth, constant ride quality, but in some cases is used for sports suspension. Modern electronically controlled systems in automobiles and light trucks almost always feature self-leveling along with raising and lowering functions. Although traditionally called air bags or air bellows, the correct term is air spring (although these terms are also used to describe just the rubber bellows element with its end plates). History On 7 January 1901 the British engineer Archibald Sharp patented a method for making a seal allowing pneumatic or hydraulic apparatus described as a "rolling mitten seal", and on 11 January 1901 he applied for a patent for the use of the device to provide air suspension on bicycles. Further developments using this 1901 seal followed. A company called Air Springs Ltd started producing the A.S.L. motorcycle in 1909. This was unusual in having pneumatic suspension at front and rear - rear suspension being unusual in any form of motorcycle at that time. The suspension units were similar to the normal girder forks with the spring replaced by a telescopic air unit which could be pressurised to suit the rider. Production of the motorcycles ceased in 1914. On 22 January 1901 an American, William W. Humphreys, patented an idea - a 'Pneumatic Spring for Vehicle
https://en.wikipedia.org/wiki/Cornering%20force
Cornering force or side force is the lateral (i.e., parallel to wheel axis) force produced by a vehicle tire during cornering. Cornering force is generated by tire slip and is proportional to slip angle at low slip angles. The rate at which cornering force builds up is described by relaxation length. Slip angle describes the deformation of the tire contact patch, and this deflection of the contact patch deforms the tire in a fashion akin to a spring. As with deformation of a spring, deformation of the tire contact patch generates a reaction force in the tire; the cornering force. Integrating the force generated by every tread element along the contact patch length gives the total cornering force. Although the term, "tread element" is used, the compliance in the tire that leads to this effect is actually a combination of sidewall deflection and deflection of the rubber within the contact patch. The exact ratio of sidewall compliance to tread compliance is a factor in tire construction and inflation pressure. Because the tire deformation tends to reach a maximum behind the center of the contact patch, by a distance known as pneumatic trail, it tends to generate a torque about a vertical axis known as self aligning torque. The diagram is misleading because the reaction force would appear to be acting in the wrong direction. It is simply a matter of convention to quote positive cornering force as acting in the opposite direction to positive tire slip so that calculations are simplified, since a vehicle cornering under the influence of a cornering force to the left will generate a tire slip to the right. The same principles can be applied to a tire being deformed longitudinally, or in a combination of both longitudinal and lateral directions. The behaviour of a tire under combined longitudinal and lateral deformation can be described by a traction circle. See also Camber thrust Lateral force variation Circle of forces Skidpad
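The low-slip-angle proportionality described above is usually written as a linear tire model, F_y = −C_α·α, where C_α is the cornering stiffness. A sketch, with an illustrative stiffness value (the number below is a typical order of magnitude for a passenger-car tire, not a measured figure):

```python
# Linear tire model for small slip angles: cornering force proportional
# to slip angle via the cornering stiffness C_alpha (illustrative value).
import math

C_ALPHA = 50_000.0   # N/rad, assumed cornering stiffness

def cornering_force(slip_angle_rad, c_alpha=C_ALPHA):
    """Lateral force at small slip angles. Sign convention as in the
    text: positive force acts opposite to positive tire slip."""
    return -c_alpha * slip_angle_rad

def self_aligning_torque(slip_angle_rad, pneumatic_trail_m=0.03):
    """Torque about the vertical axis: cornering force times the
    pneumatic trail (trail value is illustrative)."""
    return cornering_force(slip_angle_rad) * pneumatic_trail_m
```

The model is only valid in the linear region; at larger slip angles the force saturates and eventually falls off, which is where the traction-circle description mentioned above takes over.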
https://en.wikipedia.org/wiki/Aeronomy
Aeronomy is the scientific study of the upper atmosphere of the Earth and corresponding regions of the atmospheres of other planets. It is a branch of both atmospheric chemistry and atmospheric physics. Scientists specializing in aeronomy, known as aeronomers, study the motions and chemical composition and properties of the Earth's upper atmosphere and regions of the atmospheres of other planets that correspond to it, as well as the interaction between upper atmospheres and the space environment. In atmospheric regions aeronomers study, chemical dissociation and ionization are important phenomena. History The mathematician Sydney Chapman introduced the term aeronomy to describe the study of the Earth's upper atmosphere in 1946 in a letter to the editor of Nature entitled "Some Thoughts on Nomenclature." The term became official in 1954 when the International Union of Geodesy and Geophysics adopted it. "Aeronomy" later also began to refer to the study of the corresponding regions of the atmospheres of other planets. Branches Aeronomy can be divided into three main branches: terrestrial aeronomy, planetary aeronomy, and comparative aeronomy. Terrestrial aeronomy Terrestrial aeronomy focuses on the Earth's upper atmosphere, which extends from the stratopause to the atmosphere's boundary with outer space and is defined as consisting of the mesosphere, thermosphere, and exosphere and their ionized component, the ionosphere. Terrestrial aeronomy contrasts with meteorology, which is the scientific study of the Earth's lower atmosphere, defined as the troposphere and stratosphere. Although terrestrial aeronomy and meteorology once were completely separate fields of scientific study, cooperation between terrestrial aeronomers and meteorologists has grown as discoveries made since the early 1990s have demonstrated that the upper and lower atmospheres have an impact on one another's physics, chemistry, and biology. Terrestrial aeronomers study atmospheric tides and upper-
https://en.wikipedia.org/wiki/The%20World%20%28book%29
The World, also called Treatise on the Light (French title: Traité du monde et de la lumière), is a book by René Descartes (1596–1650). Written between 1629 and 1633, it contains a nearly complete version of his philosophy, from method, to metaphysics, to physics and biology. Descartes espoused mechanical philosophy, a form of natural philosophy popular in the 17th century. He thought everything physical in the universe to be made of tiny "corpuscles" of matter. Corpuscularianism is closely related to atomism. The main difference was that Descartes maintained that there could be no vacuum, and all matter was constantly swirling to prevent a void as corpuscles moved through other matter. The World presents a corpuscularian cosmology in which swirling vortices explain, among other phenomena, the creation of the Solar System and the circular motion of planets around the Sun. The World rests on the heliocentric view, first explicated in Western Europe by Copernicus. Descartes delayed the book's release upon news of the Roman Inquisition's conviction of Galileo for "suspicion of heresy" and sentencing to house arrest. Descartes discussed his work on the book, and his decision not to release it, in letters with another philosopher, Marin Mersenne. Some material from The World was revised for publication as Principia philosophiae or Principles of Philosophy (1644), a Latin textbook at first intended by Descartes to replace the Aristotelian textbooks then used in universities. In the Principles the heliocentric tone was softened slightly with a relativist frame of reference. The last chapter of The World was published separately as De Homine (On Man) in 1662. The rest of The World was finally published in 1664, and the entire text in 1677. The void and particles in nature Before Descartes begins to describe his theories in physics, he introduces the reader to the idea that there is no relationship between our sensations and what creates these sensations, thereby cast
https://en.wikipedia.org/wiki/Ordinal%20arithmetic
In the mathematical field of set theory, ordinal arithmetic describes the three usual operations on ordinal numbers: addition, multiplication, and exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set that represents the result of the operation or by using transfinite recursion. Cantor normal form provides a standardized way of writing ordinals. In addition to these usual ordinal operations, there are also the "natural" arithmetic of ordinals and the nimber operations. Addition The union of two disjoint well-ordered sets S and T can be well-ordered. The order-type of that union is the ordinal that results from adding the order-types of S and T. If two well-ordered sets are not already disjoint, then they can be replaced by order-isomorphic disjoint sets, e.g. replace S by {0} × S and T by {1} × T. This way, the well-ordered set S is written "to the left" of the well-ordered set T, meaning one defines an order on S ∪ T in which every element of S is smaller than every element of T. The sets S and T themselves keep the ordering they already have. The definition of addition α + β can also be given by transfinite recursion on β: α + 0 = α; α + S(β) = S(α + β), where S denotes the successor function; and α + β = sup{α + γ : γ < β} when β is a limit ordinal. Ordinal addition on the natural numbers is the same as standard addition. The first transfinite ordinal is ω, the set of all natural numbers, followed by ω + 1, ω + 2, etc. The ordinal ω + ω is obtained by two copies of the natural numbers ordered in the usual fashion and the second copy completely to the right of the first. Writing 0' < 1' < 2' < ... for the second copy, ω + ω looks like 0 < 1 < 2 < 3 < ... < 0' < 1' < 2' < ... This is different from ω because in ω only 0 does not have a direct predecessor while in ω + ω the two elements 0 and 0' do not have direct predecessors. Properties Ordinal addition is, in general, not commutative. For example, 3 + ω = ω, since the order relation for 3 + ω is 0 < 1 < 2 < 0' < 1' < 2' < ..., which can be relabeled to ω; in contrast, ω + 3 ≠ ω, since ω + 3 has a largest element.
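The non-commutativity of ordinal addition can be made concrete with a small sketch that represents ordinals below ω^ω in Cantor normal form as descending lists of (exponent, coefficient) pairs. The representation and the helper name are illustrative choices, not a standard library:

```python
def ordinal_add(a, b):
    """Add two ordinals given in Cantor normal form.

    An ordinal is a list of (exponent, coefficient) pairs with strictly
    decreasing exponents, e.g. omega = [(1, 1)] and 3 = [(0, 3)].
    Rule: terms of a with exponent below b's leading exponent are
    absorbed by b; a term with a matching exponent merges coefficients.
    """
    if not b:
        return list(a)
    e, c = b[0]
    head = [t for t in a if t[0] > e]       # terms of a that survive
    same = [t for t in a if t[0] == e]      # term of a that merges
    if same:
        head.append((e, same[0][1] + c))
    else:
        head.append((e, c))
    return head + list(b[1:])

omega = [(1, 1)]
three = [(0, 3)]
print(ordinal_add(three, omega))  # [(1, 1)]          -> 3 + omega = omega
print(ordinal_add(omega, three))  # [(1, 1), (0, 3)]  -> omega + 3 != omega
```

The first result mirrors the relabeling argument in the text: three finite elements placed before a copy of ω disappear into it, while three elements placed after ω remain as a tail.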
https://en.wikipedia.org/wiki/Finite%20morphism
In algebraic geometry, a finite morphism between two affine varieties X, Y is a dense regular map f: X → Y which induces an inclusion k[Y] ↪ k[X] between their coordinate rings, such that k[X] is integral over k[Y]. This definition can be extended to quasi-projective varieties: a regular map f: X → Y between quasiprojective varieties is finite if any point y ∈ Y has an affine neighbourhood V such that f−1(V) is affine and f: f−1(V) → V is a finite map (in view of the previous definition, because it is between affine varieties). Definition by schemes A morphism f: X → Y of schemes is a finite morphism if Y has an open cover by affine schemes Vi = Spec Bi such that for each i, f−1(Vi) is an open affine subscheme Ui = Spec Ai, and the restriction of f to Ui, which induces a ring homomorphism Bi → Ai, makes Ai a finitely generated module over Bi. One also says that X is finite over Y. In fact, f is finite if and only if for every open affine subscheme V = Spec B in Y, the inverse image of V in X is affine, of the form Spec A, with A a finitely generated B-module. For example, for any field k, the morphism Spec k[x] → Spec k[y] induced by y ↦ xn is a finite morphism since k[x] ≅ k[y] ⊕ k[y]·x ⊕ ⋯ ⊕ k[y]·xn−1 as k[y]-modules. Geometrically, this is obviously finite since this is a ramified n-sheeted cover of the affine line which degenerates at the origin. By contrast, the inclusion of A1 − 0 into A1 is not finite. (Indeed, the Laurent polynomial ring k[y, y−1] is not finitely generated as a module over k[y].) This restricts our geometric intuition to surjective families with finite fibers. Properties of finite morphisms The composition of two finite morphisms is finite. Any base change of a finite morphism f: X → Y is finite. That is, if g: Z → Y is any morphism of schemes, then the resulting morphism X ×Y Z → Z is finite. This corresponds to the following algebraic statement: if A and C are (commutative) B-algebras, and A is finitely generated as a B-module, then the tensor product A ⊗B C is finitely generated as a C-module. Indeed, the generators can be taken to be the elements ai ⊗ 1, where ai are the given generators
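The finiteness of the ramified n-sheeted cover of the affine line comes down to a free-module decomposition, which can be written out explicitly. This is a standard computation, sketched here under the assumption that the cover is given by y = x^n:

```latex
% k[x] viewed as a module over k[y] via y = x^n:
k[x] \;\cong\; \bigoplus_{i=0}^{n-1} k[y]\cdot x^{i},
\qquad\text{since every } f \in k[x] \text{ is uniquely }
f(x) = \sum_{i=0}^{n-1} g_i\!\left(x^{n}\right) x^{i}
\text{ with } g_i \in k[y].
```

Splitting a polynomial by the residue of each monomial's degree mod n gives the n components, so k[x] is free of rank n over k[y], hence finitely generated.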
https://en.wikipedia.org/wiki/Cirsium%20arvense
Cirsium arvense is a perennial species of flowering plant in the family Asteraceae, native throughout Europe and western Asia, northern Africa and widely introduced elsewhere. The standard English name in its native area is creeping thistle. It is also commonly known as Canada thistle and field thistle. The plant is beneficial for pollinators that rely on nectar. A 2016 study in Britain found it to be among the top producers of nectar sugar, ranking second in production per floral unit. Alternative names A number of other names are used in other areas or have been used in the past, including: Canadian thistle, lettuce from hell thistle, California thistle, corn thistle, cursed thistle, field thistle, green thistle, hard thistle, perennial thistle, prickly thistle, setose thistle, small-flowered thistle, way thistle, and stinger-needles. Canada and Canadian thistle are in wide use in the United States, despite being a misleading designation (it is not of Canadian origin). Description Cirsium arvense is a C3 carbon fixation plant. C3 plants originated during the Mesozoic and Paleozoic eras, and tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, and ground water is plentiful. C3 plants lose 97% of the water taken up through their roots to transpiration. Creeping thistle is a herbaceous perennial plant growing up to 150 cm, forming extensive clonal colonies from thickened roots that send up numerous erect shoots during the growing season. It is a ruderal species. Given its adaptive nature, Cirsium arvense is one of the worst invasive weeds worldwide. Comparisons of its gene expression show that the plant has adapted differently to the regions where it has established itself. Differences can be seen in its R-protein-mediated defenses, sensitivities to abiotic stresses, and developmental timing. Taxonomy Cirsium arvense is placed in the subtribe Carduinae, tribe Cardueae of the family Asteraceae. Unlike other sp
https://en.wikipedia.org/wiki/A%20Course%20of%20Modern%20Analysis
A Course of Modern Analysis: an introduction to the general theory of infinite processes and of analytic functions; with an account of the principal transcendental functions (colloquially known as Whittaker and Watson) is a landmark textbook on mathematical analysis written by Edmund T. Whittaker and George N. Watson, first published by Cambridge University Press in 1902. The first edition was Whittaker's alone, but later editions were co-authored with Watson. History Its first, second, third, and the fourth edition were published in 1902, 1915, 1920, and 1927, respectively. Since then, it has continuously been reprinted and is still in print today. A revised, expanded and digitally reset fifth edition, edited by Victor H. Moll, was published in 2021. The book is notable for being the standard reference and textbook for a generation of Cambridge mathematicians including Littlewood and Godfrey H. Hardy. Mary L. Cartwright studied it as preparation for her final honours on the advice of fellow student Vernon C. Morton, later Professor of Mathematics at Aberystwyth University. But its reach was much further than just the Cambridge school; André Weil in his obituary of the French mathematician Jean Delsarte noted that Delsarte always had a copy on his desk. In 1941 the book was included among a "selected list" of mathematical analysis books for use in universities in an article for that purpose published by American Mathematical Monthly. Notable features Some idiosyncratic but interesting problems from an older era of the Cambridge Mathematical Tripos are in the exercises. The book was one of the earliest to use decimal numbering for its sections, an innovation the authors attribute to Giuseppe Peano. Contents Below are the contents of the fourth edition: Part I. The Process of Analysis Part II. The Transcendental Functions Reception Reviews of the first edition George B. Mathews, in a 1903 review article published in The Mathematical Gazette opens by saying t
https://en.wikipedia.org/wiki/Product%20term
In Boolean logic, a product term is a conjunction of literals, where each literal is either a variable or its negation. Examples Examples of product terms include: A · B, A · ¬B · ¬C, and ¬A. Origin The terminology comes from the similarity of AND to multiplication as in the ring structure of Boolean rings. Minterms For a boolean function of n variables x1, …, xn, a product term in which each of the n variables appears once (in either its complemented or uncomplemented form) is called a minterm. Thus, a minterm is a logical expression of n variables that employs only the complement operator and the conjunction operator.
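Since a minterm fixes each of the n variables as complemented or uncomplemented, there are exactly 2^n minterms, and they can be enumerated mechanically. The naming convention below (x1, x2, … with ¬ for complement and · for conjunction) is an illustrative choice:

```python
from itertools import product

def minterms(n):
    """Yield all 2**n minterms of n variables as strings.

    Each variable appears exactly once, either complemented ('¬x1')
    or uncomplemented ('x1'), joined by '·' for conjunction.
    """
    for bits in product([0, 1], repeat=n):
        yield "·".join(f"x{i + 1}" if b else f"¬x{i + 1}"
                       for i, b in enumerate(bits))

print(list(minterms(2)))  # ['¬x1·¬x2', '¬x1·x2', 'x1·¬x2', 'x1·x2']
```

Each bit pattern corresponds to the unique input assignment on which that minterm evaluates to true, which is why sums of minterms can express any boolean function.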
https://en.wikipedia.org/wiki/Keyhole%20Markup%20Language
Keyhole Markup Language (KML) is an XML notation for expressing geographic annotation and visualization within two-dimensional maps and three-dimensional Earth browsers. KML was developed for use with Google Earth, which was originally named Keyhole Earth Viewer. It was created by Keyhole, Inc, which was acquired by Google in 2004. KML became an international standard of the Open Geospatial Consortium in 2008. Google Earth was the first program able to view and graphically edit KML files, but other projects such as Marble have added KML support. Structure The KML file specifies a set of features (place marks, images, polygons, 3D models, textual descriptions, etc.) that can be displayed on maps in geospatial software implementing the KML encoding. Every place has a longitude and a latitude. Other data can make a view more specific, such as tilt, heading, or altitude, which together define a "camera view" along with a timestamp or timespan. KML shares some of the same structural grammar as Geography Markup Language (GML). Some KML information cannot be viewed in Google Maps or Mobile. KML files are very often distributed as KMZ files, which are zipped KML files with a .kmz extension. The contents of a KMZ file are a single root KML document (notionally "doc.kml") and optionally any overlays, images, icons, and COLLADA 3D models referenced in the KML including network-linked KML files. The root KML document by convention is a file named "doc.kml" at the root directory level, which is the file loaded upon opening. By convention the root KML document is at root level and referenced files are in subdirectories (e.g. images for overlay). An example KML document is:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>New York City</name>
      <description>New York City</description>
      <Point>
        <coordinates>-74.006393,40.714172,0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>

The MIME type associ
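Because KML is ordinary XML, the example document above can be read with any XML parser. The sketch below uses Python's standard library and assumes only the KML 2.2 namespace shown in the example:

```python
import xml.etree.ElementTree as ET

KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>New York City</name>
      <description>New York City</description>
      <Point>
        <coordinates>-74.006393,40.714172,0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>"""

# KML elements live in the OGC namespace, so queries must qualify names.
NS = {"kml": "http://www.opengis.net/kml/2.2"}

root = ET.fromstring(KML)
for placemark in root.findall(".//kml:Placemark", NS):
    name = placemark.findtext("kml:name", namespaces=NS)
    coords = placemark.findtext(".//kml:coordinates", namespaces=NS)
    lon, lat, alt = (float(v) for v in coords.split(","))
    print(name, lon, lat)  # New York City -74.006393 40.714172
```

Note the longitude-first ordering inside <coordinates>, which differs from the latitude-first convention of many mapping UIs.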
https://en.wikipedia.org/wiki/Mass%20ratio
In aerospace engineering, mass ratio is a measure of the efficiency of a rocket. It describes how much more massive the vehicle is with propellant than without; that is, the ratio of the rocket's wet mass (vehicle plus contents plus propellant) to its dry mass (vehicle plus contents). A more efficient rocket design requires less propellant to achieve a given goal, and would therefore have a lower mass ratio; however, for any given efficiency a higher mass ratio typically permits the vehicle to achieve higher delta-v. The mass ratio is a useful quantity for back-of-the-envelope rocketry calculations: it is an easy number to derive either from the propellant mass fraction or from rocket and propellant mass, and therefore serves as a handy bridge between the two. It is also useful for getting an impression of the size of a rocket: while two rockets with mass fractions of, say, 92% and 95% may appear similar, the corresponding mass ratios of 12.5 and 20 clearly indicate that the latter system requires much more propellant. Typical multistage rockets have mass ratios in the range from 8 to 20. The Space Shuttle, for example, has a mass ratio around 16. Derivation The definition arises naturally from Tsiolkovsky's rocket equation: Δv = ve ln(m0/m1), where Δv is the desired change in the rocket's velocity, ve is the effective exhaust velocity (see specific impulse), m0 is the initial mass (rocket plus contents plus propellant), and m1 is the final mass (rocket plus contents). This equation can be rewritten in the following equivalent form: m0/m1 = e^(Δv/ve). The fraction on the left-hand side of this equation is the rocket's mass ratio by definition. This equation indicates that a Δv of x times the exhaust velocity requires a mass ratio of e^x. For instance, for a vehicle to achieve a Δv of 2.5 times its exhaust velocity would require a mass ratio of e^2.5 (approximately 12.2). One could say that a "velocity ratio" of Δv/ve requires a mass ratio of e^(Δv/ve). Alternative definition Sutton defines the mass ratio inversely, as the ratio of final mass to initial mass (m1/m0). In this case the values
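The exponential relation and the mass-fraction bridge mentioned above are easy to check numerically; the figures below are the ones used in the text:

```python
import math

def mass_ratio(delta_v, exhaust_velocity):
    """Mass ratio m0/m1 required for a given delta-v, from the
    rearranged Tsiolkovsky rocket equation m0/m1 = e^(dv/ve)."""
    return math.exp(delta_v / exhaust_velocity)

# A delta-v of 2.5 times the exhaust velocity:
print(round(mass_ratio(2.5, 1.0), 1))  # 12.2

# Bridge to propellant mass fraction: fraction = 1 - 1/mass_ratio.
print(round(1 - 1 / 12.5, 2))  # 0.92  (mass ratio 12.5 -> 92% fraction)
print(round(1 - 1 / 20.0, 2))  # 0.95  (mass ratio 20   -> 95% fraction)
```

This makes the point in the text quantitative: mass fractions of 92% and 95% look close, but they sit on the steep part of the exponential, where small fraction changes mean large propellant differences.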
https://en.wikipedia.org/wiki/Mason%27s%20mark
A mason's mark is an engraved symbol often found on dressed stone in buildings and other public structures. In stonemasonry Regulations issued in Scotland in 1598 by James VI's Master of Works, William Schaw, stated that on admission to the guild, every mason had to enter his name and his mark in a register. There are three types of marks used by stonemasons. Banker marks were made on stones before they were sent to be used by the walling masons. These marks served to identify the banker mason who had prepared the stones to their paymaster. This system was employed only when the stone was paid for by measure, rather than by time worked. For example, the 1306 contract between Richard of Stow, mason, and the Dean and Chapter of Lincoln Cathedral, specified that the plain walling would be paid for by measure, and indeed banker marks are found on the blocks of walling in this cathedral. Conversely, the masons responsible for walling the eastern parts of Exeter Cathedral were paid by the week, and consequently few banker marks are found on this part of the cathedral. Banker marks make up the majority of masons' marks, and are generally what are meant when the term is used without further specification. Assembly marks were used to ensure the correct installation of important pieces of stonework. For example, the stones on the window jambs in the chancel of North Luffenham church in Rutland are each marked with a Roman numeral, directing the order in which the stones were to be installed. Quarry marks were used to identify the source of a stone, or occasionally its quality. In Freemasonry Freemasonry, a fraternal order that uses an analogy to stonemasonry for much of its structure, also makes use of marks. A Freemason who takes the degree of Mark Master Mason will be asked to create his own Mark, as a type of unique signature or identifying badge. Some of these can be quite elaborate. Gallery of mason's marks See also Benchmark (surveying) Builder's signature
https://en.wikipedia.org/wiki/Adaptec
Adaptec, Inc., was a computer storage company and remains a brand for computer storage products. The company was an independent firm from 1981 to 2010, at which point it was acquired by PMC-Sierra, which itself was later acquired by Microsemi, which itself was later acquired by Microchip Technology. History Larry Boucher, Wayne Higashi, and Bernard Nieman founded Adaptec in 1981. At first, Adaptec focused on devices with Parallel SCSI interfaces. Popular host bus adapters included the 154x/15xx ISA family, the 2940 PCI family, and the 29160/-320 family. Their cross-platform ASPI was an early API for accessing and integrating non-disk devices like tape drives, scanners and optical disks. With advancements in technology, RAID functions were added while interfaces evolved to PCIe and SAS. Adaptec made a number of acquisitions in the mid-1990s to expand their reach in the SCSI peripheral market. In March 1993, they acquired Trantor Systems Ltd. of Fremont, California, for $10 million. In July 1995, they acquired Future Domain Corporation of Irvine, California, for $25 million. On May 10, 2010, PMC-Sierra, Inc. and Adaptec, Inc. announced they had entered into a definitive agreement of PMC-Sierra acquiring Adaptec's channel storage business on May 8, 2010, which included Adaptec's RAID storage product line, the Adaptec brand, a global value added reseller customer base, board logistics capabilities, and SSD cache performance solutions. The transaction was expected to close in approximately 30 days, subject to customary closing conditions. Following the sale, Adaptec would retain its Aristos ASIC technology business, certain real estate assets, more than 200 patents, and approximately $400 million in cash and marketable securities. On June 8, 2010, PMC-Sierra and Adaptec announced the completion of the acquisition. PMC-Sierra renamed the channel storage business "Adaptec by PMC". PMC-Sierra was in turn acquired by Microsemi in January 2016. The old Adaptec, Inc. cha
https://en.wikipedia.org/wiki/Ballblazer
Ballblazer is a futuristic sports game created by Lucasfilm Games and published in 1985 by Epyx. Along with Rescue on Fractalus!, it was one of the initial pair of releases from Lucasfilm Games. Ballblazer was developed and first published for the Atari 8-bit family. The principal creator and programmer was David Levine. The game was called Ballblaster during development; some pirated versions bear this name. It was ported to the Apple II, ZX Spectrum, Amstrad CPC, Commodore 64, and MSX. Atari 5200 and Atari 7800 ports were published by Atari Corporation. A version for the Famicom was released by Pony Canyon. Gameplay Ballblazer is a simple one-on-one sports-style game bearing similarities to basketball and soccer. Each side is represented by a craft called a "rotofoil", which can be controlled by either a human player or a computer-controlled "droid" with ten levels of difficulty. The game allows for human vs. human, human vs. droid, and droid vs. droid matches. The basic objective of the game is to score points by either firing or carrying a floating ball into the opponent's goal. The game takes place on a flat, checkerboard playfield, and each player's half of the screen is presented from a first-person perspective. A player can gain possession of the ball by running into it, at which point it is held in a force field in front of the craft. The opponent can attempt to knock the ball away from the player using the fire button, and the player in possession of the ball can also fire the ball toward the goal. When a player does not have possession of the ball, his or her rotofoil automatically turns at 90-degree intervals to face the ball, while possessing the ball turns the player toward the opponent's goal. The goalposts move from side to side at each end of the playfield, and as goals are scored, the goal becomes narrower. Pushing the ball through the goal scores one point, firing the ball through the posts from close range scores two points, and successfu
https://en.wikipedia.org/wiki/Domain-specific%20modeling
Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, such as computer software. It involves systematic use of a domain-specific language to represent the various facets of a system. Domain-specific modeling languages tend to support higher-level abstractions than general-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system. Overview Domain-specific modeling often also includes the idea of code generation: automating the creation of executable source code directly from the domain-specific language models. Being free from the manual creation and maintenance of source code means a domain-specific language can significantly improve developer productivity. The reliability of automatic generation compared to manual coding will also reduce the number of defects in the resulting programs, thus improving quality. Domain-specific modeling differs from earlier code generation attempts in the CASE tools of the 1980s or UML tools of the 1990s. In both of these, the code generators and modeling languages were built by tool vendors. While it is possible for a tool vendor to create a domain-specific language and generators, it is more common for the domain-specific language to be created within one organization. One or a few expert developers create the modeling language and generators, and the rest of the developers use them. Having the modeling language and generator built by the organization that will use them allows a tight fit with their exact domain and a rapid response to changes in the domain. Domain-specific languages can usually cover a range of abstraction levels for a particular domain. For example, a domain-specific modeling language for mobile phones could allow users to specify high-level abstractions for the user interface, as well as lower-level abstractions for storing data such as phone numbers or settings. Likewise, a domain-specific modeling language for financ
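The model-to-code idea can be sketched with a toy generator: a declarative model (here a plain dict describing a phone menu, a purely illustrative schema) is turned into executable source text. Real DSM tools work from richer metamodels and templates, but the pipeline is the same:

```python
# Toy domain-specific model: a phone-app menu (illustrative schema,
# not any real DSM tool's format).
MODEL = {
    "menu": "Settings",
    "items": [
        {"label": "Ringtone", "action": "open_ringtones"},
        {"label": "Volume",   "action": "open_volume"},
    ],
}

def generate(model):
    """Generate Python source for a menu dispatcher from the model."""
    lines = [f"def show_{model['menu'].lower()}_menu(choice):"]
    for item in model["items"]:
        lines.append(f"    if choice == {item['label']!r}:")
        lines.append(f"        return {item['action']!r}")
    lines.append("    return None")
    return "\n".join(lines)

code = generate(MODEL)
namespace = {}
exec(code, namespace)  # compile the generated source into functions
print(namespace["show_settings_menu"]("Volume"))  # open_volume
```

The point of the sketch is the division of labor: a domain expert edits only MODEL, while the generator (written once by the "expert developers" the text mentions) owns all knowledge of the target code's structure.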
https://en.wikipedia.org/wiki/Tissue%20microarray
Tissue microarrays (also TMAs) consist of paraffin blocks in which up to 1000 separate tissue cores are assembled in array fashion to allow multiplex histological analysis. History The major limitations in molecular clinical analysis of tissues include the cumbersome nature of procedures, limited availability of diagnostic reagents and limited patient sample size. The technique of tissue microarray was developed to address these issues. Multi-tissue blocks were first introduced by H. Battifora in 1986 with his so-called "multitumor (sausage) tissue block" and modified in 1990 with its improvement, "the checkerboard tissue block". In 1998, J. Kononen and collaborators developed the current technique, which uses a novel sampling approach to produce tissues of regular size and shape that can be more densely and precisely arrayed. Procedure In the tissue microarray technique, a hollow needle is used to remove tissue cores as small as 0.6 mm in diameter from regions of interest in paraffin-embedded tissues such as clinical biopsies or tumor samples. These tissue cores are then inserted in a recipient paraffin block in a precisely spaced, array pattern. Sections from this block are cut using a microtome, mounted on a microscope slide and then analyzed by any method of standard histological analysis. Each microarray block can be cut into 100–500 sections, which can be subjected to independent tests. Tests commonly employed in tissue microarray include immunohistochemistry, and fluorescent in situ hybridization. Tissue microarrays are particularly useful in analysis of cancer samples. One variation is a frozen tissue array. Use in research The use of tissue microarrays in combination with immunohistochemistry has been a preferred method to study and validate cancer biomarkers in various defined cancer patient cohorts. The possibility to assemble a large number of representative cancer samples from a defined patient cohort that also has a corresponding clini
https://en.wikipedia.org/wiki/Huggies%20Pull-Ups
Pull-Ups is a brand of disposable training pants made under the Huggies brand of baby products. The product was first introduced in 1989 and became popular with the slogan "I'm a big kid now!" The training pants are marketed with purple packaging: boys' designs are blue and currently feature characters from the Disney Junior show Mickey Mouse Funhouse; girls' designs are purple with the Disney Junior show Minnie's Bow-Toons characters. Huggies Pull-Ups variations Huggies Pull-Ups have been distributed in 4 different types, which have remained unchanged since 2011 (not counting the renaming of Wetness Liner). Learning Designs In March 2005, the original Huggies Pull-Ups were renamed Learning Designs after the small pictures that fade when they become wet. Wetness Liner Wetness Liner Pull-Ups Training Pants were first distributed in 2005 as a competitor to the now defunct Pampers Feel 'N Learn. These Pull-Ups were much like Learning Designs Pull-Ups, except that a special liner was added to the Wetness Liner ones. This liner is placed on the inside of the Pull-Up, where the wearer is most likely to wet, and is sensitive to urine. When the Wetness Liner is exposed to urine, it causes the wearer to feel uncomfortable and learn to use the toilet instead of wetting. Wetness Liner Pull-Ups also have the Learning Designs, which likewise fade when the wearer wets the Pull-Up. Cool-Alert name change In 2006, the Wetness Liner Pull-Ups were replaced by Cool-Alert. This variation has remained in use since. Goodnites Goodnites are used to control bedwetting. In 2008, the Goodnites disposable underwear split from the Pull-Ups brand and merged into the Huggies brand. Then in 2011, Goodnites split from the Huggies brand and became a standalone brand with the same name as the product. Night-Time At the same time that Wetness Liner was renamed Cool Alert, Pull-Ups introduced Night Time Pull-Ups. The Night Time Pull-Ups were very much like a regular Pull-Ups pan
https://en.wikipedia.org/wiki/Indium%20gallium%20phosphide
Indium gallium phosphide (InGaP), also called gallium indium phosphide (GaInP), is a semiconductor composed of indium, gallium and phosphorus. It is used in high-power and high-frequency electronics because of its superior electron velocity with respect to the more common semiconductors silicon and gallium arsenide. It is used mainly in HEMT and HBT structures, but also for the fabrication of high efficiency solar cells used for space applications and, in combination with aluminium (AlGaInP alloy), to make high brightness LEDs with orange-red, orange, yellow, and green colors. Some semiconductor devices such as EFluor Nanocrystal use InGaP as their core particle. Indium gallium phosphide is a solid solution of indium phosphide and gallium phosphide. Ga0.5In0.5P is a solid solution of special importance, which is almost lattice matched to GaAs. This allows, in combination with (AlxGa1−x)0.5In0.5P, the growth of lattice matched quantum wells for red emitting semiconductor lasers, e.g. red emitting (650 nm) RCLEDs or VCSELs for PMMA plastic optical fibers. Ga0.5In0.5P is used as the high energy junction on double and triple junction photovoltaic cells grown on GaAs. Recent years have shown GaInP/GaAs tandem solar cells with AM0 (sunlight incidence in space = 1.35 kW/m2) efficiencies in excess of 25%. A different composition of GaInP, lattice matched to the underlying GaInAs, is utilized as the high energy junction of GaInP/GaInAs/Ge triple junction photovoltaic cells. Growth of GaInP by epitaxy can be complicated by the tendency of GaInP to grow as an ordered material, rather than a truly random solid solution (i.e., a mixture). This changes the bandgap and the electronic and optical properties of the material. See also Gallium phosphide Indium(III) phosphide Indium gallium nitride Indium gallium arsenide GaInP/GaAs solar cell
https://en.wikipedia.org/wiki/Star%20Force
Star Force, also released in arcades outside of Japan as Mega Force, is a vertical-scrolling shooter game released in 1984 by Tehkan. Gameplay In the game, the player pilots a starship called the Final Star, while shooting various enemies and destroying enemy structures for points. Unlike later vertical scrolling shooters, like Toaplan's Twin Cobra, the Final Star had only two levels of weapon power and no secondary weapons like missiles and/or bombs. Each stage in the game was named after a letter of the Greek alphabet. In certain versions of the game, there is an additional level called "Infinity" (represented by the infinity symbol) which occurs after Omega, after which the game repeats indefinitely. In the NES version, after defeating the Omega target, the player can see a black screen with Tecmo's logo, announcing the future release of the sequel Super Star Force. After that, the infinity target becomes available and the game repeats the same level and boss without increasing the difficulty. Reception In Japan, Game Machine listed Star Force in its December 1, 1984 issue as the fourteenth most-successful table arcade unit at the time. Legacy Sequels Super Star Force: Jikūreki no Himitsu, released in 1986 for the Japanese Nintendo Famicom. Final Star Force, released for arcades in 1992. Ports and related releases Star Force was ported and published in 1985 by Hudson Soft to both the MSX home computer and the Family Computer (Famicom) in Japan. Sales of the game were promoted through the first nationwide video game competition to be called "a caravan", although it was not the first event of its kind organized by Hudson (they had previously promoted Lode Runner with a similar event). The North American and European versions for the Nintendo Entertainment System (NES) were published two years later, in 1987, with significant revisions, and with Tecmo credited rather than Hudson on the title screen and box art. Although the NES version is immediately rec
https://en.wikipedia.org/wiki/Position%20effect
Position effect is the effect on the expression of a gene when its location in a chromosome is changed, often by translocation. This has been well described in Drosophila with respect to eye color and is known as position effect variegation (PEV). The phenotype is well characterised by unstable expression of a gene that results in the red eye coloration. In the mutant flies the eyes typically have a mottled appearance of white and red sectors. These phenotypes are often due to a chromosomal translocation such that the color gene is now close to a region of heterochromatin. Regions of heterochromatin can spread and influence transcription, which may result in the cessation of gene expression and subsequently, white eye sectors. Position effect is also used to describe the variation of expression exhibited by identical transgenes that insert into different regions of a genome. In this case the difference in expression is often due to enhancers that regulate neighboring genes. These local enhancers can also affect the expression pattern of the transgene. Since each transgenic organism has the transgene in a different location each transgenic organism has the potential for a unique expression pattern.
https://en.wikipedia.org/wiki/Crenation
Crenation (from modern Latin crenatus meaning "scalloped or notched", from popular Latin crena meaning "notch") in botany and zoology, describes an object's shape, especially a leaf or shell, as being round-toothed or having a scalloped edge. The descriptor can apply to objects of different types, including cells, where one mechanism of crenation is the contraction of a cell after exposure to a hypertonic solution, due to the loss of water through osmosis. In a hypertonic environment, the cell has a lower concentration of solutes than the surrounding extracellular fluid, and water diffuses out of the cell by osmosis, causing the cytoplasm to decrease in volume. As a result, the cell shrinks and the cell membrane develops abnormal notchings. Pickling cucumbers and salt-curing of meat are two practical applications of crenation. Plasmolysis is the term which describes plant cells when the cytoplasm shrinks from the cell wall in a hypertonic environment. In plasmolysis, the cell wall stays intact, but the plasma membrane shrinks and the chloroplasts of the plant cell concentrate in the center of the cell. Red blood cells Crenation is also used to describe a feature of red blood cells. These erythrocytes look as if they have projections extending from a smaller central area, like a spiked ball. The crenations may be either large, irregular spicules of acanthocytes, or smaller, more numerous, regularly irregular projections of echinocytes. Acanthocytes and echinocytes may arise from abnormalities of the cell membrane lipids or proteins, or from other disease processes, or as an ex vivo artifact. See also Crenellation Cytorrhysis Hemolysis Plasmolysis
https://en.wikipedia.org/wiki/Precision%20Time%20Protocol
The Precision Time Protocol (PTP) is a protocol used to synchronize clocks throughout a computer network. On a local area network, it achieves clock accuracy in the sub-microsecond range, making it suitable for measurement and control systems. PTP is employed to synchronize financial transactions, mobile phone tower transmissions, sub-sea acoustic arrays, and networks that require precise timing but lack access to satellite navigation signals. The first version of PTP, IEEE 1588-2002, was published in 2002. IEEE 1588-2008, also known as PTP Version 2, is not backward compatible with the 2002 version. IEEE 1588-2019 was published in November 2019 and includes backward-compatible improvements to the 2008 publication. IEEE 1588-2008 includes a profile concept defining PTP operating parameters and options. Several profiles have been defined for applications including telecommunications, electric power distribution and audiovisual. IEEE 802.1AS is an adaptation of PTP for use with Audio Video Bridging and Time-Sensitive Networking. History According to John Eidson, who led the IEEE 1588-2002 standardization effort, "IEEE 1588 is designed to fill a niche not well served by either of the two dominant protocols, NTP and GPS. IEEE 1588 is designed for local systems requiring accuracies beyond those attainable using NTP. It is also designed for applications that cannot bear the cost of a GPS receiver at each node, or for which GPS signals are inaccessible." PTP was originally defined in the IEEE 1588-2002 standard, officially entitled Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems, and published in 2002. In 2008, IEEE 1588-2008 was released as a revised standard; also known as PTP version 2 (PTPv2), it improves accuracy, precision and robustness but is not backward compatible with the original 2002 version. IEEE 1588-2019 was published in November 2019, is informally known as PTPv2.1 and includes backwards-compatible improvements t
https://en.wikipedia.org/wiki/Notation%20in%20probability%20and%20statistics
Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols.

Probability theory

Random variables are usually written in upper case Roman letters: X, Y, etc. Particular realizations of a random variable are written in corresponding lower case letters. For example, x could be a sample corresponding to the random variable X. A cumulative probability is formally written P(X ≤ x) to differentiate the random variable from its realization.

The probability is sometimes written ℙ to distinguish it from other functions and the measure P, so as to avoid having to define "P is a probability", and ℙ(X ∈ A) is short for P({ω ∈ Ω : X(ω) ∈ A}), where Ω is the event space and X is a random variable. The notation Pr(A) is used alternatively.

ℙ(A ∩ B) or ℙ(A, B) indicates the probability that events A and B both occur. The joint probability distribution of random variables X and Y is denoted as P(X, Y), while the joint probability mass function or probability density function is denoted f(x, y) and the joint cumulative distribution function F(x, y).

ℙ(A ∪ B) indicates the probability of either event A or event B occurring ("or" in this case means one or the other or both).

σ-algebras are usually written with uppercase calligraphic letters (e.g. 𝓕 for the set of sets on which we define the probability P).

Probability density functions (pdfs) and probability mass functions are denoted by lowercase letters, e.g. f(x) or fX(x).

Cumulative distribution functions (cdfs) are denoted by uppercase letters, e.g. F(x) or FX(x).

Survival functions or complementary cumulative distribution functions are often denoted by placing an overbar over the symbol for the cumulative, F̄(x) = 1 − F(x), or denoted as S(x).

In particular, the pdf of the standard normal distribution is denoted by φ(z), and its cdf by Φ(z).

Some common operators:
E[X]: expected value of X
Var(X): variance of X
Cov(X, Y): covariance of X and Y
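The standard normal notation above can be made concrete with a short sketch (an illustration, not part of the article): φ and Φ written directly from their formulas, so that P(X ≤ x) = Φ(x) for X ~ N(0, 1).

```python
import math

# Standard normal pdf (phi) and cdf (Phi), from their closed-form
# definitions; Phi uses the error function, Phi(x) = (1 + erf(x/√2))/2.
def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return (1 + math.erf(x / math.sqrt(2))) / 2

# P(X <= 0) = Phi(0) = 0.5 by symmetry of the standard normal.
print(Phi(0.0))             # 0.5
print(round(Phi(1.96), 3))  # 0.975
```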
https://en.wikipedia.org/wiki/William%20Porcher%20Miles
William Porcher Miles (July 4, 1822 – May 11, 1899) was an American politician who was among the ardent states' rights advocates, supporters of slavery, and Southern secessionists who came to be known as the "Fire-Eaters." He is notable for having designed the most popular variant of the Confederate flag, originally rejected as the national flag in 1861 but adopted as a battle flag by the Army of Northern Virginia under General Robert E. Lee, before being incorporated into the designs of later Confederate national flags. Born in South Carolina, he showed little early interest in politics, and his early career included the study of law and a tenure as a mathematics professor at the Charleston College from 1843 to 1855. In the late 1840s, as sectional issues roiled South Carolina politics, Miles began to speak up on sectional issues. He opposed both the Wilmot Proviso and the Compromise of 1850. From then on, Miles would look at any northern efforts to restrict slavery as justification for secession. Miles was elected as mayor of Charleston in 1855 and served in the United States House of Representatives from 1857 until South Carolina seceded, in December 1860. He was a member of the state secession convention and a representative from South Carolina at the Confederate Convention in Montgomery, Alabama, which established the provisional government and constitution for the Confederate States. He represented his state in the Confederate House of Representatives during the American Civil War. Early life Miles was born in Walterboro, South Carolina, to James Saunders Miles and Sarah Bond Worley Miles. His ancestors were French Huguenots and his grandfather, Major Felix Warley, fought in the American Revolution. His primary education came at Southworth School and he later attended Willington Academy where John C. Calhoun had matriculated a generation earlier. Miles enrolled at Charleston College in 1838 where he met future secession advocates James De Bow and William Henry Trescot. Miles graduated in 1842 and in 1843
https://en.wikipedia.org/wiki/Procept
In mathematics education, a procept is an amalgam of three components: a "process" which produces a mathematical "object" and a "symbol" which is used to represent either process or object. It derives from the work of Eddie Gray and David O. Tall. The notion was first published in a paper in the Journal for Research in Mathematics Education in 1994, and is part of the process-object literature. This body of literature suggests that mathematical objects are formed by encapsulating processes, that is to say that the mathematical object 3 is formed by an encapsulation of the process of counting: 1, 2, 3... Gray and Tall's notion of procept improved upon the existing literature by noting that mathematical notation is often ambiguous as to whether it refers to process or object. Examples of such notations are:
3 + 2: refers to the process of adding as well as the outcome of the process.
∑_{n=1}^{∞} a_n: refers to the process of summing an infinite sequence, and to the outcome of the process.
f(x) = 3x + 2: refers to the process of mapping x to 3x + 2 as well as the outcome of that process, the function f.
https://en.wikipedia.org/wiki/Glossary%20of%20probability%20and%20statistics
This glossary of statistics and probability is a list of definitions of terms and concepts used in the mathematical sciences of statistics and probability, their sub-disciplines, and related fields. For additional related terms, see Glossary of mathematics and Glossary of experimental design. See also Notation in probability and statistics Probability axioms Glossary of experimental design List of statistical topics List of probability topics Glossary of areas of mathematics Glossary of calculus
https://en.wikipedia.org/wiki/Perseus%20%28spy%29
Perseus () was the code name of a hypothetical Soviet atomic spy that, if real, would have allegedly breached United States national security by infiltrating Los Alamos National Laboratory during the development of the Manhattan Project, and consequently, would have been instrumental for the Soviets in the development of nuclear weapons. Among researchers of the subject there is some consensus that Perseus was actually a creation of Soviet intelligence. Hypotheses include that "Perseus" was created as a composite of several different spies, as disinformation to distract from specific spies, or may have been invented by the KGB to promote itself to the Soviet leadership to obtain more state funding. There were, however, multiple confirmed Soviet spies on the Manhattan project. They included Theodore Hall, George Koval, Morton Sobell, David Greenglass, Julius and Ethel Rosenberg, Klaus Fuchs, and Harry Gold. Character and history A reference to the profile of a spy who can be identified with Perseus describes them as a young American scientist at the time of the Manhattan Project. If they were real, Perseus would have supposedly been of age to participate in the Spanish Civil War. In the early 1940s Perseus would have been in New York City visiting their sick parents. During their stay in that city, they would have managed to be recruited by the GRU after looking for and contacting Morris Cohen, an American who joined the Communist Party during the Great Depression and worked as a Soviet spy. The meeting with Cohen must have been between September 1941 and July 1942, before Cohen enlisted in the army and left for the Western Front in Europe. Perseus reportedly introduced themself to Cohen as a physicist who had been invited to join the work at Los Alamos research center. By 1942, Perseus was supposedly already working at Los Alamos, as they would have started at least 18 months before German physicist and fellow atomic spy Klaus Fuchs, who joi
https://en.wikipedia.org/wiki/Ky%20Fan%20inequality
In mathematics, there are two different results that share the common name of the Ky Fan inequality. One is an inequality involving the geometric mean and arithmetic mean of two sets of real numbers of the unit interval. The result was published on page 5 of the book Inequalities by Edwin F. Beckenbach and Richard E. Bellman (1961), who refer to an unpublished result of Ky Fan. They mention the result in connection with the inequality of arithmetic and geometric means and Augustin Louis Cauchy's proof of this inequality by forward-backward induction; a method which can also be used to prove the Ky Fan inequality. This Ky Fan inequality is a special case of Levinson's inequality and also the starting point for several generalizations and refinements; some of them are given in the references below. The second Ky Fan inequality is used in game theory to investigate the existence of an equilibrium.

Statement of the classical version
If 0 ≤ xᵢ ≤ ½ for i = 1, ..., n, then

  (∏ᵢ₌₁ⁿ xᵢ)^(1/n) / (∏ᵢ₌₁ⁿ (1 − xᵢ))^(1/n) ≤ (∑ᵢ₌₁ⁿ xᵢ) / (∑ᵢ₌₁ⁿ (1 − xᵢ)),

with equality if and only if x1 = x2 = · · · = xn.

Remark
Let An = (x1 + · · · + xn)/n and Gn = (x1 · · · xn)^(1/n) denote the arithmetic and geometric mean, respectively, of x1, . . ., xn, and let A′n and G′n denote the arithmetic and geometric mean, respectively, of 1 − x1, . . ., 1 − xn. Then the Ky Fan inequality can be written as

  Gn / G′n ≤ An / A′n,

which shows the similarity to the inequality of arithmetic and geometric means given by Gn ≤ An.

Generalization with weights
If xᵢ ∈ [0,½] and γᵢ ∈ [0,1] for i = 1, . . ., n are real numbers satisfying γ1 + . . . + γn = 1, then

  (∏ᵢ₌₁ⁿ xᵢ^γᵢ) / (∏ᵢ₌₁ⁿ (1 − xᵢ)^γᵢ) ≤ (∑ᵢ₌₁ⁿ γᵢxᵢ) / (∑ᵢ₌₁ⁿ γᵢ(1 − xᵢ))

with the convention 0⁰ := 0. Equality holds if and only if either γᵢxᵢ = 0 for all i = 1, . . ., n or all xᵢ > 0 and there exists x ∈ (0,½] such that x = xᵢ for all i = 1, . . ., n with γᵢ > 0. The classical version corresponds to γᵢ = 1/n for all i = 1, . . ., n.

Proof of the generalization
Idea: Apply Jensen's inequality to the strictly concave function

  f(x) = ln x − ln(1 − x) = ln(x / (1 − x)),  x ∈ (0, ½].

Detailed proof: (a) If at least one xᵢ is zero, then the left-hand side of the Ky Fan inequality is zero and the inequality is proved. Equality holds if
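As a quick numeric sanity check of the classical inequality (an illustration, not part of the article), one can sample points in (0, ½] at random and verify that the ratio of geometric means never exceeds the ratio of arithmetic means:

```python
import math
import random

# Arithmetic and geometric mean of a list of positive numbers.
def means(xs):
    n = len(xs)
    a = sum(xs) / n
    g = math.prod(xs) ** (1 / n)
    return a, g

random.seed(0)
for _ in range(1000):
    xs = [random.uniform(1e-6, 0.5) for _ in range(5)]
    a, g = means(xs)                      # A_n, G_n of the x_i
    a2, g2 = means([1 - x for x in xs])   # A'_n, G'_n of the 1 - x_i
    # Ky Fan: G_n / G'_n <= A_n / A'_n (tolerance for float rounding).
    assert g / g2 <= a / a2 + 1e-12
```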
https://en.wikipedia.org/wiki/Decision%20management
Decision management, also known as enterprise decision management (EDM) or business decision management (BDM), entails all aspects of designing, building and managing the automated decision-making systems that an organization uses to manage its interactions with customers, employees and suppliers. Computerization has changed the way organizations are approaching their decision-making because it requires that they automate more decisions, to handle response times and unattended operation required by computerization, and because it has enabled "information-based decisions" – decisions based on analysis of historical behavioral data, prior decisions, and their outcomes. Overview Decision management was described in 2005 as an "emerging important discipline, due to an increasing need to automate high-volume decisions across the enterprise and to impart precision, consistency, and agility in the decision-making process". Decision management is implemented "via the use of rule-based systems and analytic models for enabling high-volume, automated decision making". Organizations seek to improve the value created through each decision by deploying software solutions (generally developed using BRMS and predictive analytics technology) that better manage the tradeoffs between precision or accuracy, consistency, agility, speed or decision latency, and cost of decision-making within organizations. The concept of decision yield, for instance, is an overall metric of how well an organization makes a particular decision, covering five key attributes of decision-making: making more targeted decisions (precision), making them the same way over and over again (consistency), adapting "on-the-fly" (business agility), reducing cost, and improving speed. Organizations are adopting decision management technology and approaches because they need a higher return from previous infrastructure investments, are dealing with increasing business decision complexity, face competi
https://en.wikipedia.org/wiki/Voice%20user%20interface
A voice-user interface (VUI) enables spoken human interaction with computers, using speech recognition to understand spoken commands and answer questions, and typically text to speech to play a reply. A voice command device is a device controlled with a voice user interface. Voice user interfaces have been added to automobiles, home automation systems, computer operating systems, home appliances like washing machines and microwave ovens, and television remote controls. They are the primary way of interacting with virtual assistants on smartphones and smart speakers. Older automated attendants (which route phone calls to the correct extension) and interactive voice response systems (which conduct more complicated transactions over the phone) can respond to the pressing of keypad buttons via DTMF tones, but those with a full voice user interface allow callers to speak requests and responses without having to press any buttons. Newer voice command devices are speaker-independent, so they can respond to multiple voices, regardless of accent or dialectal influences. They are also capable of responding to several commands at once, separating vocal messages, and providing appropriate feedback, accurately imitating a natural conversation. Overview A VUI is the interface to any speech application. Only a short time ago, controlling a machine by simply talking to it was only possible in science fiction. Until recently, this area was considered to be artificial intelligence. However, advances in technologies like text-to-speech, speech-to-text, natural language processing, and cloud services contributed to the mass adoption of these types of interfaces. VUIs have become more commonplace, and people are taking advantage of the value that these hands-free, eyes-free interfaces provide in many situations. VUIs need to respond to input reliably, or they will be rejected and often ridiculed by their users. Designing a good VUI requires interdisciplinary talents of computer scie
https://en.wikipedia.org/wiki/Pruning%20%28morphology%29
The pruning algorithm is a technique used in digital image processing based on mathematical morphology. It is used as a complement to the skeleton and thinning algorithms to remove unwanted parasitic components (spurs). In this case 'parasitic' components refer to branches of a line which are not key to the overall shape of the line and should be removed. These components can often be created by edge detection algorithms or digitization. Common uses for pruning include automatic recognition of hand-printed characters. Often inconsistency in letter writing creates unwanted spurs that need to be eliminated for better characterization.

Mathematical Definition
The standard pruning algorithm will remove all branches shorter than a given number of points. If a parasitic branch is shorter than four points and we run the algorithm with n = 4, the branch will be removed. The second step ensures that the main trunks of each line are not shortened by the procedure.

Structuring Elements
The x in the arrays indicates a "don't care" condition, i.e. the image could have either a 1 or a 0 in that spot.

Step 1: Thinning
Apply this step a given number of times (n) to eliminate any branch with n or fewer pixels.

Step 2: Find End Points
Wherever the structuring elements are satisfied, the center of the 3x3 matrix is considered an end point.

Step 3: Dilate End Points
Perform dilation using a 3x3 matrix (H) consisting of all 1's, and only insert 1's where the original image (A) also had a 1. Perform this for each end point, in all directions, n times.

Step 4: Union of X1 & X3
Take the result from step 1 and union it with step 3 to achieve the final result.

MATLAB Code
%% ---------------
% Pruning
% ---------------
clear; clc;

% Image read in
img = imread('Pruning.tif');
b_img_skel = bwmorph(img, 'skel', 40);
b_img_spur = bwmorph(b_img_skel, 'spur', Inf);

figure('Name', 'Pruning');
subplot(1,2,1);
imshow(b_img_skel);
title(sprintf('Image Skeleton'));
subplot(1,2,2);
imshow(b_img_spur);
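The four steps above can also be sketched in Python with NumPy (an illustrative reimplementation, not the article's MATLAB approach; the eight hit-or-miss masks are the standard pruning structuring elements, with "don't care" encoded as 2):

```python
import numpy as np

# 3x3 pruning masks: 1 = foreground required, 0 = background required,
# 2 = "don't care". E matches the end of a horizontal/vertical segment,
# C the end of a diagonal segment; rotations cover all directions.
E = np.array([[2, 0, 0],
              [1, 1, 0],
              [2, 0, 0]])
C = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])
SES = [np.rot90(E, k) for k in range(4)] + [np.rot90(C, k) for k in range(4)]

def hit_or_miss(img, se):
    """Boolean map of pixels whose 3x3 neighbourhood matches se."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros((h, w), dtype=bool)
    care = se != 2
    for i in range(h):
        for j in range(w):
            out[i, j] = np.array_equal(p[i:i + 3, j:j + 3][care], se[care])
    return out

def dilate(img):
    """Dilation by a 3x3 square of ones."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def prune(img, n):
    img = img.astype(bool)
    # Step 1: thin n times, stripping one pixel off every branch end.
    x1 = img.copy()
    for _ in range(n):
        for se in SES:
            x1 = x1 & ~hit_or_miss(x1, se)
    # Step 2: end points of the thinned image.
    x2 = np.zeros_like(x1)
    for se in SES:
        x2 |= hit_or_miss(x1, se)
    # Step 3: conditionally dilate end points n times inside the original.
    x3 = x2
    for _ in range(n):
        x3 = dilate(x3) & img
    # Step 4: union restores the main trunks.
    return x1 | x3

# Horizontal line with a 2-pixel vertical spur; pruning with n = 2
# removes the spur while the conditional dilation restores the line ends.
img = np.zeros((4, 11), dtype=bool)
img[2, :] = True
img[1, 5] = img[0, 5] = True
result = prune(img, 2)
```

Note that step 1 applies the masks sequentially within each pass, and step 3 uses conditional dilation (intersection with the original image) so the restored trunks cannot grow outside the original shape.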
https://en.wikipedia.org/wiki/Metric%20tensor%20%28general%20relativity%29
In general relativity, the metric tensor (in this context often abbreviated to simply the metric) is the fundamental object of study. The metric captures all the geometric and causal structure of spacetime, being used to define notions such as time, distance, volume, curvature, angle, and separation of the future and the past. In general relativity, the metric tensor plays the role of the gravitational potential in the classical theory of gravitation, although the physical content of the associated equations is entirely different. Gutfreund and Renn say "that in general relativity the gravitational potential is represented by the metric tensor."

Notation and conventions
This article works with a metric signature that is mostly positive (− + + +); see sign convention. The gravitation constant G will be kept explicit. This article employs the Einstein summation convention, where repeated indices are automatically summed over.

Definition
Mathematically, spacetime is represented by a four-dimensional differentiable manifold M and the metric tensor is given as a covariant, second-degree, symmetric tensor on M, conventionally denoted by g. Moreover, the metric is required to be nondegenerate with signature (− + + +). A manifold M equipped with such a metric is a type of Lorentzian manifold.

Explicitly, the metric tensor is a symmetric bilinear form on each tangent space of M that varies in a smooth (or differentiable) manner from point to point. Given two tangent vectors u and v at a point x in M, the metric can be evaluated on u and v to give a real number:

  g_x(u, v) = g_x(v, u) ∈ ℝ.

This is a generalization of the dot product of ordinary Euclidean space. Unlike Euclidean space – where the dot product is positive definite – the metric is indefinite and gives each tangent space the structure of Minkowski space.

Local coordinates and matrix representations
Physicists usually work in local coordinates (i.e. coordinates defined on some local patch of M). In local coordinates x^μ (where μ is an index that runs from 0 to 3) the
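Evaluating the metric on two tangent vectors can be illustrated numerically (a sketch, assuming the flat Minkowski metric in the mostly-positive signature rather than a general curved metric):

```python
import numpy as np

# Flat Minkowski metric with signature (- + + +): g = diag(-1, 1, 1, 1).
# g(u, v) = u^T g v is a real number generalizing the Euclidean dot product.
g = np.diag([-1.0, 1.0, 1.0, 1.0])

u = np.array([1.0, 0.0, 0.0, 0.0])  # timelike tangent vector
v = np.array([0.0, 1.0, 0.0, 0.0])  # spacelike tangent vector

print(u @ g @ u)  # -1.0: indefinite metric gives a negative "squared length"
print(v @ g @ v)  # 1.0
print(u @ g @ v)  # 0.0: u and v are orthogonal with respect to g
```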
https://en.wikipedia.org/wiki/Multicategory
In mathematics (especially category theory), a multicategory is a generalization of the concept of category that allows morphisms of multiple arity. If morphisms in a category are viewed as analogous to functions, then morphisms in a multicategory are analogous to functions of several variables. Multicategories are also sometimes called operads, or colored operads.

Definition
A (non-symmetric) multicategory consists of
a collection (often a proper class) of objects;
for every finite sequence (Xᵢ) of objects (for von Neumann ordinal i ∈ n) and object Y, a set of morphisms from (Xᵢ) to Y; and
for every object X, a special identity morphism (with n = 1) from X to X.
Additionally, there are composition operations: Given a sequence of sequences of objects (X_{j,i}), a sequence of objects (Y_j) for j ∈ m, and an object Z:
if for each j ∈ m, f_j is a morphism from (X_{j,i}) to Y_j; and
g is a morphism from (Y_j) to Z:
then there is a composite morphism g(f_0, …, f_{m−1}) from the concatenation of all the sequences (X_{j,i}) to Z. This must satisfy certain axioms:
If m = 1, Z = Y_0, and g is the identity morphism for Y_0, then g(f_0) = f_0;
if for each j ∈ m, n_j = 1, X_{j,0} = Y_j, and f_j is the identity morphism for Y_j, then g(f_0, …, f_{m−1}) = g; and
an associativity condition: if for each j ∈ m and k ∈ n_j, h_{j,k} is a morphism from (W_{j,k,i}) to X_{j,k}, then g(f_0(h_{0,0}, …), …, f_{m−1}(…, h_{m−1,n_{m−1}−1})) and (g(f_0, …, f_{m−1}))(h_{0,0}, …, h_{m−1,n_{m−1}−1}) are identical morphisms from the concatenation of all the (W_{j,k,i}) to Z.

Comcategories
A comcategory (co-multi-category) is a totally ordered set O of objects, a set A of multiarrows with two functions assigning to each multiarrow its head in O and its ground in O%, where O% is the set of all finite ordered sequences of elements of O. The dual image of a multiarrow f may be summarized by its head and ground. A comcategory C also has a multiproduct with the usual character of a composition operation. C is said to be associative if there holds a multiproduct axiom in relation to this operator. Any multicategory, symmetric or non-symmetric, together with a total-ordering of the object set, can be made into an equivalent comcategory.

A multiorder is a comcategory satisfying the following conditions. There is at most one multiarrow with given head and ground. Each object x has a unit multiarrow. A multiarrow is a unit
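The analogy with functions of several variables can be sketched in Python (a hypothetical encoding for illustration only, not a construction from the article): morphisms are modeled as functions, and multicategorical composition feeds the outputs of f_0, …, f_{m−1} into g.

```python
# Multicategorical composition g(f_0, ..., f_{m-1}) modeled on plain
# functions: each f_j consumes its own block of arguments, and the
# composite consumes the concatenation of all blocks.
def compose(g, *fs):
    arities = [f.__code__.co_argcount for f in fs]

    def composite(*xs):
        outs, k = [], 0
        for f, n in zip(fs, arities):
            outs.append(f(*xs[k:k + n]))
            k += n
        return g(*outs)
    return composite

add = lambda x, y: x + y    # a binary morphism
neg = lambda x: -x          # a unary morphism

# Composite of arity 1 + 2 = 3: h(a, b, c) = add(neg(a), add(b, c)).
h = compose(add, neg, add)
print(h(1, 2, 3))  # 4
```

The identity axioms correspond to composing with `lambda x: x`, which leaves a morphism unchanged on either side.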
https://en.wikipedia.org/wiki/INMOS%20G364%20framebuffer
The G364 framebuffer was a line of graphics adapters using the SGS Thomson INMOS G364 colour video controller, produced by INMOS (known for their transputer and eventually acquired by SGS Thomson and incorporated into STMicroelectronics) in the early 1990s. The G364 included a RAMDAC and a 64-bit interface to VRAM graphical memory to implement a framebuffer, but did not include any hardware-based graphical acceleration other than a hardware cursor function. The G364 was largely similar in design and functionality to the G300 framebuffer, but had a 64-bit VRAM interface instead of the slower 32-bit interface of the lower-price G300. The INMOS G364 is quite similar to the G332 found on the Personal DECstation and Dell PowerLine 450DE/2 DGX Graphics Workstation. Although the G364 was capable of providing comparatively high resolution output (up to 1600×1200 pixels at 8 bits-per-pixel, in many cases) typically achieved only in Unix workstations such as those of Sun Microsystems or SGI, it was not a popular chipset for the personal computer manufacturers of the early 1990s and was not adopted by any major workstation manufacturers. The G364 framebuffer was used in an after-market Amiga graphics card, and as the primary graphics system sold with the MIPS Magnum 4000 series of MIPS-based Windows NT workstations. Amiga cards based on the G364: EGS SPECTRUM 110/24 Rainbow III Visiona Paint (G300) The G332 found use in the State Machine G8 and Computer Concepts Colour Card for the Acorn Archimedes range of personal computers, these providing a secondary framebuffer to which the main display memory was copied periodically, also offering a broader 24-bit palette for all graphics modes including individually programmable colours for 256-colour modes. The capabilities of the G332 were reported as being "almost identical" to the ARM VIDC20 that was announced at the time these adapter cards became available. See also Graphics processing unit
https://en.wikipedia.org/wiki/Coherent%20file%20distribution%20protocol
Coherent File Distribution Protocol (CFDP) is an IETF-documented experimental protocol intended for high-speed one-to-many file transfers. Class 1 is assured delivery, class 2 is blind unassured delivery.
https://en.wikipedia.org/wiki/GameTap
GameTap was an online video game service established by Turner Broadcasting System (TBS) in 2006. It provided users with classic arcade video games and game-related video content. The service was acquired by French online video game service Metaboli in 2008 as a wholly owned subsidiary, with Metaboli aiming to create a global games service. The service remained active until October 2015, when it was shut down by Metaboli. Features GameTap was conceived primarily as an online subscription rental service, competing against mail-based services like GameFly. GameTap offered two subscription levels: a Premium subscription with access to the entire content library, and a Classic subscription with access to older console and arcade games running in emulation. GameTap also sold games via online distribution. GameTap initially offered a limited selection of games for free play without a subscription, but this option was discontinued. Originally, GameTap was designed to offer not only video games, but a complete media hub (GameTap TV), taking advantage of the TBS catalog as well as offering original video content, including the animated series Revisioned: Tomb Raider and new episodes of Space Ghost Coast to Coast. GameTap TV has since been discontinued. Most multiplayer games could be played by two users on the same computer, while many others not originally intended to be played outside of a LAN could be played over the internet by using a VPN client such as Hamachi. A limited number of games were enhanced with an online leaderboard and challenge lobby, adding internet multiplayer to games that previously could only be played face to face. Every Monday GameTap held a leaderboard tournament with a different game each week. GameTap Originals GameTap funded the development of a number of titles, with the games subsequently premiering as GameTap exclusives. Such games include Sam & Max Season One and Myst Online: Uru Live. On February 7, 2007, GameTap announced
https://en.wikipedia.org/wiki/Full%20cycle
In a pseudorandom number generator (PRNG), a full cycle or full period is the behavior of a PRNG over its set of valid states. In particular, a PRNG is said to have a full cycle if, for any valid seed state, the PRNG traverses every valid state before returning to the seed state, i.e., the period is equal to the cardinality of the state space. The restrictions on the parameters of a PRNG for it to possess a full cycle are known only for certain types of PRNGs, such as linear congruential generators and linear-feedback shift registers. There is no general method to determine whether a PRNG algorithm is full-cycle short of exhausting the state space, which may be exponentially large compared to the size of the algorithm's internal state.

Example 1 (in C/C++)
Given a random number seed that is greater than or equal to zero, a total sample size greater than 1, and an increment coprime to the total sample size, a full cycle can be generated with the following logic. Each nonnegative number smaller than the sample size occurs exactly once.

unsigned int seed = 0;
unsigned int sample_size = 3000;
unsigned int generated_number = seed % sample_size;
unsigned int increment = 7;

for (unsigned int iterator = 0; iterator < sample_size; ++iterator)
{
    generated_number = (generated_number + increment) % sample_size;
}

Example 2 (in Python)

# Generator that goes through a full cycle
def cycle(seed: int, sample_size: int, increment: int):
    nb = seed
    for i in range(sample_size):
        nb = (nb + increment) % sample_size
        yield nb

# Example values
seed = 17
sample_size = 100
increment = 13

# Print all the numbers
print(list(cycle(seed, sample_size, increment)))

# Verify that all numbers were generated correctly
assert set(cycle(seed, sample_size, increment)) == set(range(sample_size))

See also Linear congruential generator Xorshift
https://en.wikipedia.org/wiki/CRISPR
CRISPR () (an acronym for clustered regularly interspaced short palindromic repeats) is a family of DNA sequences found in the genomes of prokaryotic organisms such as bacteria and archaea. These sequences are derived from DNA fragments of bacteriophages that had previously infected the prokaryote. They are used to detect and destroy DNA from similar bacteriophages during subsequent infections. Hence these sequences play a key role in the antiviral (i.e. anti-phage) defense system of prokaryotes and provide a form of acquired immunity. CRISPR is found in approximately 50% of sequenced bacterial genomes and nearly 90% of sequenced archaea. Cas9 (or "CRISPR-associated protein 9") is an enzyme that uses CRISPR sequences as a guide to recognize and open up specific strands of DNA that are complementary to the CRISPR sequence. Cas9 enzymes together with CRISPR sequences form the basis of a technology known as CRISPR-Cas9 that can be used to edit genes within the organisms. This editing process has a wide variety of applications including basic biological research, development of biotechnological products, and treatment of diseases. The development of the CRISPR-Cas9 genome editing technique was recognized by the Nobel Prize in Chemistry in 2020 which was awarded to Emmanuelle Charpentier and Jennifer Doudna. History Repeated sequences The discovery of clustered DNA repeats took place independently in three parts of the world. The first description of what would later be called CRISPR is from Osaka University researcher Yoshizumi Ishino and his colleagues in 1987. They accidentally cloned part of a CRISPR sequence together with the "iap" gene (isozyme conversion of alkaline phosphatase) from the genome of Escherichia coli which was their target. The organization of the repeats was unusual. Repeated sequences are typically arranged consecutively, without interspersing different sequences. They did not know the function of the interrupted clustered repeats. In 1993, r
https://en.wikipedia.org/wiki/Thermochromic%20ink
Thermochromic ink (also called thermochromatic ink) is a type of dye that changes color when temperatures increase or decrease. It is often used in the manufacture of toys and product packaging, as well as in thermometers. Thermochromic ink can also turn transparent when heat is applied. One example of this type is found on the corners of some examination mark sheets, where it proves that the sheet has not been edited or photocopied; another appears on certain pizza boxes to show the temperature of the product. Use on packaging can detect temperature history during shipping and indicate proper heating in an oven. Examples On June 20, 2017, the United States Postal Service released the first application of thermochromic ink to postage stamps in its Total Eclipse of the Sun Forever stamp to commemorate the solar eclipse of August 21, 2017. When pressed with a finger, body heat turns the black circle in the center of the stamp into an image of the full moon. The stamp image is a photo of a total solar eclipse seen in Jalu, Libya, on March 29, 2006. The photo was taken by retired NASA astrophysicist Fred Espenak, aka "Mr. Eclipse". See also Thermochromism Security printing Active packaging
https://en.wikipedia.org/wiki/Hydrobiology
Hydrobiology is the science of life and life processes in water. Much of modern hydrobiology can be viewed as a sub-discipline of ecology, but the sphere of hydrobiology includes taxonomy, economic and industrial biology, morphology, and physiology. The one distinguishing aspect is that all fields relate to aquatic organisms. Most work is related to limnology and can be divided into lotic system ecology (flowing waters) and lentic system ecology (still waters). One of the significant areas of current research is eutrophication. Special attention is paid to biotic interactions in plankton assemblages, including the microbial loop, the mechanisms influencing algal blooms, phosphorus load, and lake turnover. Another subject of research is the acidification of mountain lakes. Long-term studies are carried out on changes in the ionic composition of the water of rivers, lakes and reservoirs in connection with acid rain and fertilization. One goal of current research is elucidation of the basic environmental functions of the ecosystem in reservoirs, which are important for water quality management and water supply. Much of the early work of hydrobiologists concentrated on the biological processes utilized in sewage treatment and water purification, especially slow sand filters. Other historically important work sought to provide biotic indices for classifying waters according to the biotic communities that they supported. This work continues to this day in Europe in the development of classification tools for assessing water bodies for the EU water framework directive. A hydrobiology technician conducts field analyses. They identify plants and living species, locate their habitat, and count them. They also identify pollutants and nuisances that can affect the aquatic fauna and flora. They take samples and write reports of their observations for publication. A hydrobiology engineer intervenes more in the process of the study. They define the inte
https://en.wikipedia.org/wiki/Apple%20Interactive%20Television%20Box
The Apple Interactive Television Box (AITB) is a television set-top box developed by Apple Computer (now Apple Inc.) in partnership with a number of global telecommunications firms, including British Telecom and Belgacom. Prototypes of the unit were deployed in large test markets in parts of the United States and Europe in 1994 and 1995, but the product was canceled shortly thereafter and was never mass-produced or marketed. Overview The AITB was designed as an interface between a consumer and an interactive television service. The unit's remote control would allow a user to choose what content would be shown on a connected television, and to seek with fast forward and rewind. In this regard it is similar to a modern satellite receiver or TiVo unit. The box would only pass the user's choices along to a central content server, which streamed the chosen content, rather than serving content itself. There were also plans for game shows, educational material for children, and other forms of content made possible by the interactive qualities of the device. Early conceptual prototypes have an unfinished feel. Near-complete units have high production quality: the internal components often lack prototype indicators, and some units have FCC approval stickers. These facts, along with a full online manual, suggest the product was very near completion before being canceled. Infrastructure Because the machine was designed to be part of a subscription data service, the AITB units are mostly inoperable. The ROM contains only what is required to continue booting from an external hard drive or from its network connection. Many of the prototypes do not appear to even attempt to boot, which likely depends on differences between ROM revisions. The ROM itself contains parts of a downsized System 7.1, enabling it to establish a network connection to the media servers provided by Oracle. The Oracle Media Server (OMS) initially ran on hardware produced by Larry Ellison's nCube Systems company, but was later also ma
https://en.wikipedia.org/wiki/Gorongosa%20National%20Park
Gorongosa National Park is at the southern end of the Great African Rift Valley in the heart of central Mozambique, Southeast Africa. The park comprises the valley floor and parts of the surrounding plateaus. Rivers originating on nearby Mount Gorongosa water the plain. Seasonal flooding and waterlogging of the valley, which is composed of a mosaic of soil types, creates a variety of distinct ecosystems. Grasslands are dotted with patches of acacia trees, savannah, dry forest on sands and seasonally rain-filled pans, and termite hill thickets. The plateaus contain miombo and montane forests and a spectacular rain forest at the base of a series of limestone gorges. This combination of unique features at one time supported some of the densest wildlife populations in all of Africa, including charismatic carnivores, herbivores, and over 500 bird species. But large mammal numbers were reduced by as much as 95% and ecosystems were stressed during the Mozambican Civil War (1977–1992). The Carr Foundation/Gorongosa Restoration Project, a U.S. non-profit organization, has teamed with the Government of Mozambique to protect and restore the ecosystem of Gorongosa National Park and to develop an ecotourism industry to benefit local communities. History Hunting reserve: 1920–1959 The first official act to protect the Gorongosa region, Portuguese Mozambique, came in 1920 when the Mozambique Company ordered 1,000 square km set aside as a hunting reserve for company administrators and their guests. Chartered by the government of Portugal, the Mozambique Company controlled all of central Mozambique between 1891 and 1940. In 1935, Mr. Jose Henriques Coimbra was named warden and Jose Ferreira became the reserve's first guide. That same year the Mozambique Company enlarged the reserve to 3,200 square km to protect habitat for nyala and black rhino, both highly prized hunting trophies. By 1940 the reserve had become so popular that a new headquarters and tourist camp wa
https://en.wikipedia.org/wiki/Change%20of%20variables
In mathematics, a change of variables is a basic technique used to simplify problems in which the original variables are replaced with functions of other variables. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem. Change of variables is an operation that is related to substitution. However these are different operations, as can be seen when considering differentiation (chain rule) or integration (integration by substitution). A very simple example of a useful variable change can be seen in the problem of finding the roots of a sixth-degree polynomial. Sixth-degree polynomial equations are generally impossible to solve in terms of radicals (see Abel–Ruffini theorem). A particular equation of this kind, however, may be written as a quadratic in x^3 (this is a simple case of a polynomial decomposition). Thus the equation may be simplified by defining a new variable u = x^3. Substituting u for x^3 in the polynomial gives a quadratic equation in u with two solutions. The solutions in terms of the original variable are obtained by substituting x^3 back in for u; then, assuming that one is interested only in real solutions, the real roots of the original equation are the real cube roots of the two values of u. Simple example Consider a system of equations in two positive integers x and y. (Source: 1991 AIME) Solving this normally is not very difficult, but it may get a little tedious. However, we can rewrite the second equation in terms of the symmetric combinations x + y and xy. Making substitutions for these two quantities reduces the system to a simpler one. Solving the reduced system gives two ordered pairs. Back-substituting the first ordered pair gives a solution of the original system; back-substituting the second ordered pair gives no solutions. Hence a single pair solves the system. Formal introduction Let X and Y be smooth manifolds and let Φ : X → Y be a C^k-diffeomorphism between them, that is: Φ is a k times continuously differentiable, bijective map from X to Y with a k times continuously differentiable inverse from Y to X.
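The sixth-degree substitution described above can be sketched numerically. The article's explicit polynomial is elided here, so the code assumes, purely for illustration, a polynomial of the form x^6 + b·x^3 + c, which the change of variables u = x^3 turns into the quadratic u^2 + b·u + c; the function name and the example coefficients are ours.

```python
import math

def real_roots_via_substitution(b, c):
    """Real roots of x**6 + b*x**3 + c = 0 using the substitution
    u = x**3, which reduces the sextic to u**2 + b*u + c = 0."""
    disc = b * b - 4 * c
    if disc < 0:
        return []                                   # no real u, hence no real x
    u_roots = [(-b - math.sqrt(disc)) / 2, (-b + math.sqrt(disc)) / 2]
    # Back-substitute u = x**3: each real u has exactly one real cube root.
    return sorted(math.copysign(abs(u) ** (1.0 / 3.0), u) for u in u_roots)

print(real_roots_via_substitution(-9, 8))           # roots of x**6 - 9x**3 + 8
```

The same two-step pattern (solve in the new variable, then invert the substitution) is exactly what the prose above describes.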
https://en.wikipedia.org/wiki/Many-to-many
Many-to-many communication occurs when information is shared between groups. Members of a group receive information from multiple senders. Wikis are a type of many-to-many communication, where multiple editors collaborate to create content that is disseminated among a wide audience. Video conferencing, online gaming, chat rooms, and internet forums are also types of many-to-many communication.
https://en.wikipedia.org/wiki/Anime%20Web%20Turnpike
Anime Web Turnpike (also known as Anipike) was a web directory founded in August 1995 by Jay Fubler Harvey. It served as a large database of links to various anime and manga websites. With well over 40,000 links, it had one of the largest organized collections of anime- and manga-related links. Users could add their own website to the database by setting up a username on the site and adding it to the applicable category. The website also had services such as a community forum, chat room and a magazine. The Anime Broadcasting Network, Inc. acquired the Anime Web Turnpike in 2000 with plans to enhance and expand the site, but multiple technical issues delayed these plans. As of November 2014, the site had gone offline. The site was back online as of July 2016, but with no new posts since 2014. As of March 2021, the website had not been updated. Reception In 1995, the site was mentioned among 101 Internet sites to visit. The site and its creator were featured in the 2003 documentary film Otaku Unite! In 2003, Anime Web Turnpike was ranked the number three "must visit" anime website by the online magazine Animefringe.
https://en.wikipedia.org/wiki/Viridos%20%28company%29
In September 2021, Synthetic Genomics Inc. (SGI), a private company located in La Jolla, California, changed its name to Viridos. The company is focused on the field of synthetic biology, especially harnessing photosynthesis with microalgae to create alternatives to fossil fuels. Viridos designs and builds biological systems to address global sustainability problems. Synthetic biology is an interdisciplinary branch of biology and engineering, combining fields such as biotechnology, evolutionary biology, molecular biology, systems biology, biophysics, computer engineering, and genetic engineering. Synthetic Genomics uses techniques such as software engineering, bioprocessing, bioinformatics, biodiscovery, analytical chemistry, fermentation, cell optimization, and DNA synthesis to design and build biological systems. The company produces or performs research in the fields of sustainable bio-fuels, insect-resistant crops, transplantable organs, targeted medicines, and DNA synthesis instruments, as well as a number of biological reagents. Core markets SGI mainly operates in three end markets: research, bioproduction and applied products. The research segment focuses on genomics solutions for academic and commercial research organizations. The commercial products and services include instrumentation, reagents, DNA synthesis services, and bioinformatics services and software. In 2015, the company launched the BioXP 3200 system, a fully automated benchtop instrument that produces DNA fragments from many different sources for genomic data. The company's efforts in bio-based production are intended to improve both existing production hosts and develop entirely new synthetic production hosts, with the goal of more efficient routes to bioproducts. SGI has a number of commercial as well as research and development stage programs across a variety of industries. Some of these research partnerships include: History Synthetic Genomics was founded in the spring of 2005 by J. Craig
https://en.wikipedia.org/wiki/SOG%20Knife
The SOG Knife was designed for, and issued to, covert Studies and Observations Group personnel during the Vietnam War. It was unmarked and supposedly untraceable to country of origin or manufacture, in order to maintain plausible deniability of covert operators in the event of their death or capture. Design The SOG Knife was designed by Benjamin Baker, the Deputy Chief of the U.S. Counterinsurgency Support Office (CISO). A chrome-moly steel known as SKS-3 was chosen for the blade and hardened to a Rockwell hardness of 55-57. The blade pattern featured a convex false edge on the clip point of a Bowie knife. The stacked leather handle, into which finger grooves were molded, was inspired by a 1920s Marbles Gladstone Skinning Knife owned by Baker. The blade was typically parkerized or blackened to reduce glare. This was done by applying a dark gun-blue finish (similar to those used on guns) to this SK-3 carbon steel knife. The knife was carried in a leather sheath which contained a sharpening steel or whetstone. The first contract was awarded to the Japanese trading company Yogi Shokai, Okinawa, for 1,300 seven-inch blades designated "Knife, indigenous, RECON, blade, w/scabbard & whetstone" at $9.85 each. In 1966, SOG ordered 1,200 sterile knives with six-inch blades and black sheaths, and in March of the following year an additional lot of 3,700 was ordered. This second lot was serial-numbered for accountability purposes and was designated "Knife, indigenous, hunting, blade, w/black sheath and whetstone". Further knives were ordered from Japan Sword, Tokyo as well. The orders were actually fulfilled by a number of knifemakers and, as a result, the various lots had minor differences such as blade bluing color and guard color or shape. Although the SOG office based at Kadena and Yogi Shokai were in Okinawa, it is believed that only a major knifemaking source like Seki could have fulfilled all these orders. In 1986, a company named SOG Specialty Knives bas
https://en.wikipedia.org/wiki/Construction%20set
A construction set is a set of standardized pieces that allow for the construction of a variety of different models. Construction sets are generally marketed as toys. One very popular brand of construction set toys is Lego. Toys Psychological benefits Construction toy play is beneficial for building social skills and trust in others because it acts as a collaborative task in which individuals have to cooperate to finish the task (building an object out of Lego, for example). The effect was found in high school students. For children specifically, children who complete models using toy building blocks have much better spatial ability than children who do not complete such models. Spatial ability also predicts completion of models. Construction toy play is also beneficial for autistic children when both individual and group play with building blocks is incorporated. Autistic children who played with building blocks were motivated to initiate social contact with children their age, were able to maintain and endure contact with those children, and were also able to surpass the barriers of being withdrawn and highly structured. Categories Construction sets can be categorized according to their connection method and geometry: Struts of variable length that are connected to any point along another strut, and at nodes. Tesseract connection points are initially flexible but can be made rigid with the addition of clips. Struts of fixed but multiple lengths that are connected by nodes are good for building space frames, and often have components that allow full rotational freedom. D8h (*228) nodes are used for K'Nex, Tinkertoys, Playskool Pipeworks, Cleversticks and interlocking disks in general. D6h nodes are used for interlocking disks. Ih (*532) nodes are used for Zometool. Panels of varying sizes and shapes are connected by pins or screws perpendicular to the panels, which are good for building linkages such as an Erector Set, Min
https://en.wikipedia.org/wiki/Gell-Mann%E2%80%93Nishijima%20formula
The Gell-Mann–Nishijima formula (sometimes known as the NNG formula) relates the baryon number B, the strangeness S, and the isospin I3 of quarks and hadrons to the electric charge Q. It was originally given by Kazuhiko Nishijima and Tadao Nakano in 1953, and led to the proposal of strangeness as a concept, which Nishijima originally called "eta-charge" after the eta meson. Murray Gell-Mann proposed the formula independently in 1956. The modern version of the formula relates all flavour quantum numbers (isospin up and down, strangeness, charm, bottomness, and topness) with the baryon number and the electric charge. Formula The original form of the Gell-Mann–Nishijima formula is: Q = I3 + (B + S)/2. This equation was originally based on empirical experiments. It is now understood as a result of the quark model. In particular, the electric charge Q of a quark or hadron particle is related to its isospin I3 and its hypercharge Y via the relation: Q = I3 + Y/2. Since the discovery of the charm, top, and bottom quark flavors, this formula has been generalized. It now takes the form: Q = I3 + (B + S + C + B′ + T)/2, where Q is the charge, I3 the third component of the isospin, B the baryon number, and S, C, B′, T are the strangeness, charm, bottomness and topness numbers. Expressed in terms of quark content, these become counts of quarks minus antiquarks of each flavor: each up-type quark (u, c, t) contributes +2/3 to Q and each down-type quark (d, s, b) contributes −1/3. By convention, the flavor quantum numbers (strangeness, charm, bottomness, and topness) carry the same sign as the electric charge of the particle. So, since the strange and bottom quarks have a negative charge, they have flavor quantum numbers equal to −1. And since the charm and top quarks have positive electric charge, their flavor quantum numbers are +1. From a quantum chromodynamics point of view, the Gell-Mann–Nishijima formula and its generalized version can be derived using an approximate SU(3) flavour symmetry, because the charges can be defined using the corresponding conserved Noether currents.
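The generalized relation Q = I3 + (B + S + C + B′ + T)/2 is easy to check against well-known particles; a minimal sketch (the function name is ours):

```python
def gmn_charge(i3, b, s=0, c=0, b_prime=0, t=0):
    """Generalized Gell-Mann-Nishijima formula: electric charge Q from
    isospin I3, baryon number B, and the flavour quantum numbers
    S (strangeness), C (charm), B' (bottomness), T (topness)."""
    return i3 + (b + s + c + b_prime + t) / 2

# Proton (uud):        I3 = +1/2, B = 1          -> Q = +1
# Lambda (uds):        I3 = 0,    B = 1, S = -1  -> Q = 0
# K+ meson (u s-bar):  I3 = +1/2, B = 0, S = +1  -> Q = +1
# Strange quark:       I3 = 0,    B = 1/3, S = -1 -> Q = -1/3
```

The last line also illustrates the sign convention stated above: the strange quark's negative charge matches its flavor quantum number S = −1.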
https://en.wikipedia.org/wiki/Invariants%20of%20tensors
In mathematics, in the fields of multilinear algebra and representation theory, the principal invariants of a second-rank tensor A are the coefficients of its characteristic polynomial p(λ) = det(A − λI), where I is the identity operator and the roots λ of the polynomial are the eigenvalues of A. More broadly, any scalar-valued function f(A) is an invariant of A if and only if f(QAQ^T) = f(A) for all orthogonal Q. This means that a formula expressing an invariant in terms of components, f(A11, A12, ..., A33), will give the same result for all Cartesian bases. For example, even though individual diagonal components of A will change with a change in basis, the sum of the diagonal components will not change. Properties The principal invariants do not change with rotations of the coordinate system (they are objective, or in more modern terminology, satisfy the principle of material frame-indifference) and any function of the principal invariants is also objective. Calculation of the invariants of rank two tensors In a majority of engineering applications, the principal invariants of (rank two) tensors of dimension three are sought, such as those for the right Cauchy–Green deformation tensor. Principal invariants For such tensors, the principal invariants are given by: I1 = tr(A), I2 = ((tr A)^2 − tr(A^2))/2, I3 = det(A). For symmetric tensors, these definitions are reduced. The correspondence between the principal invariants and the characteristic polynomial of a tensor, in tandem with the Cayley–Hamilton theorem, reveals that A^3 − I1 A^2 + I2 A − I3 I = 0, where I is the second-order identity tensor. Main invariants In addition to the principal invariants listed above, it is also possible to introduce the notion of main invariants, which are functions of the principal invariants above. These are the coefficients of the characteristic polynomial of the deviator A′ = A − (tr(A)/3) I, which is traceless. The separation of a tensor into a component that is a multiple of the identity and a traceless component is standard in hydrodynamics, where the former is called isotropic, providing the modified pressure, and the latter is called de
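As a minimal sketch (the function name and data layout are ours), the three principal invariants of a 3×3 tensor can be computed directly from the component formulas I1 = tr(A), I2 = ((tr A)^2 − tr(A^2))/2, I3 = det(A):

```python
def principal_invariants(a):
    """Principal invariants (I1, I2, I3) of a 3x3 rank-two tensor,
    given as a list of three rows of three components."""
    tr = lambda m: m[0][0] + m[1][1] + m[2][2]
    # A @ A, written out so the sketch has no dependencies
    aa = [[sum(a[i][k] * a[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    det = (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
           - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
           + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    i1 = tr(a)
    i2 = 0.5 * (i1 ** 2 - tr(aa))
    return i1, i2, det
```

For a diagonal tensor with entries 1, 2, 3 this returns 6, 11, 6 — the elementary symmetric polynomials of the eigenvalues, as the characteristic-polynomial definition requires — and rotating the basis with any orthogonal Q leaves all three values unchanged.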
https://en.wikipedia.org/wiki/Spatial%20ecology
Spatial ecology studies the ultimate distributional or spatial unit occupied by a species. In a particular habitat shared by several species, each of the species is usually confined to its own microhabitat or spatial niche because two species in the same general territory cannot usually occupy the same ecological niche for any significant length of time. Overview In nature, organisms are neither distributed uniformly nor at random, forming instead some sort of spatial pattern. This is due to various energy inputs, disturbances, and species interactions that result in spatially patchy structures or gradients. This spatial variance in the environment creates diversity in communities of organisms, as well as in the variety of the observed biological and ecological events. The type of spatial arrangement present may suggest certain interactions within and between species, such as competition, predation, and reproduction. On the other hand, certain spatial patterns may also rule out specific ecological theories previously thought to be true. Although spatial ecology deals with spatial patterns, it is usually based on observational data rather than on an existing model. This is because nature rarely follows a set, expected order. To properly research a spatial pattern or population, the spatial extent to which it occurs must be detected. Ideally, this would be accomplished beforehand via a benchmark spatial survey, which would determine whether the pattern or process is on a local, regional, or global scale. This is rare in actual field research, however, due to the lack of time and funding, as well as the ever-changing nature of such widely studied organisms as insects and wildlife. With detailed information about a species' life-stages, dynamics, demography, movement, behavior, etc., models of spatial pattern may be developed to estimate and predict events in unsampled locations. History Most mathematical studies in ecology in the nineteenth century assumed a un
https://en.wikipedia.org/wiki/Monatomic%20ion
A monatomic ion (also called simple ion) is an ion consisting of exactly one atom. If, instead of being monatomic, an ion contains more than one atom, even if these are of the same element, it is called a polyatomic ion. For example, calcium carbonate consists of the monatomic cation Ca2+ and the polyatomic anion CO32−; both pentazenium (N5+) and azide (N3−) are polyatomic as well. A type I binary ionic compound contains a metal that forms only one type of ion. A type II ionic compound contains a metal that forms more than one type of ion, i.e., the same element in different oxidation states. {|class="wikitable" |- ! colspan="2" | Common type I monatomic cations |- | Hydrogen | H+ |- | Lithium | Li+ |- | Sodium | Na+ |- | Potassium | K+ |- | Rubidium | Rb+ |- | Caesium | Cs+ |- | Magnesium | Mg2+ |- | Calcium | Ca2+ |- | Strontium | Sr2+ |- | Barium | Ba2+ |- | Aluminium | Al3+ |- | Silver | Ag+ |- | Zinc | Zn2+ |- |} {|class="wikitable" |- ! colspan="3" | Common type II monatomic cations |- |- | iron(II) | Fe2+ | ferrous |- | iron(III) | Fe3+ | ferric |- | copper(I) | Cu+ | cuprous |- | copper(II) | Cu2+ | cupric |- | cobalt(II) | Co2+ | cobaltous |- | cobalt(III) | Co3+ | cobaltic |- | tin(II) | Sn2+ | stannous |- | tin(IV) | Sn4+ | stannic |} {|class="wikitable" |- ! colspan="2" | Common monatomic anions |- | hydride | H− |- | fluoride | F− |- | chloride | Cl− |- | bromide | Br− |- | iodide | I− |- | oxide | O2− |- | sulfide | S2− |- | nitride | N3− |- | phosphide | P3− |- |}
https://en.wikipedia.org/wiki/Mathematical%20universe%20hypothesis
In physics and cosmology, the mathematical universe hypothesis (MUH), also known as the ultimate ensemble theory, is a speculative "theory of everything" (TOE) proposed by cosmologist Max Tegmark. Description Tegmark's MUH is the hypothesis that our external physical reality is a mathematical structure. That is, the physical universe is not merely described by mathematics, but is mathematics — specifically, a mathematical structure. Mathematical existence equals physical existence, and all structures that exist mathematically exist physically as well. Observers, including humans, are "self-aware substructures (SASs)". In any mathematical structure complex enough to contain such substructures, they "will subjectively perceive themselves as existing in a physically 'real' world". The theory can be considered a form of Pythagoreanism or Platonism in that it proposes the existence of mathematical entities; a form of mathematicism in that it denies that anything exists except mathematical objects; and a formal expression of ontic structural realism. Tegmark claims that the hypothesis has no free parameters and is not observationally ruled out. Thus, he reasons, it is preferred over other theories-of-everything by Occam's Razor. Tegmark also considers augmenting the MUH with a second assumption, the computable universe hypothesis (CUH), which says that the mathematical structure that is our external physical reality is defined by computable functions. The MUH is related to Tegmark's categorization of four levels of the multiverse. This categorization posits a nested hierarchy of increasing diversity, with worlds corresponding to different sets of initial conditions (level 1), physical constants (level 2), quantum branches (level 3), and altogether different equations or mathematical structures (level 4). Criticisms and responses Andreas Albrecht of Imperial College in London called it a "provocative" solution to one of the central problems facing physics. Alth
https://en.wikipedia.org/wiki/Hereditarianism
Hereditarianism is the doctrine or school of thought that heredity plays a significant role in determining human nature and character traits, such as intelligence and personality. Hereditarians believe in the power of genetics to explain human character traits and solve human social and political problems. Hereditarians adopt the view that an understanding of human evolution can extend the understanding of human nature. Overview Social scientist Barry Mehler defines hereditarianism as "the belief that a substantial part of both group and individual differences in human behavioral traits are caused by genetic differences". Hereditarianism is sometimes used as a synonym for biological or genetic determinism, though some scholars distinguish the two terms. When distinguished, biological determinism is used to mean that heredity is the only factor. Supporters of hereditarianism reject this sense of biological determinism for most cases. However, in some cases genetic determinism is true; for example, Matt Ridley describes Huntington's disease as "pure fatalism, undiluted by environmental variability". In other cases, hereditarians would see no role for genes; for example, the condition of "not knowing a word of Chinese" has nothing to do (directly) with genes. Hereditarians point to the heritability of cognitive ability, and the outsized influence that cognitive ability has on life outcomes, as evidence in favor of the hereditarian viewpoint. According to Plomin and von Stumm (2018), "Intelligence is highly heritable and predicts important educational, occupational and health outcomes better than any other trait." Estimates for the heritability of intelligence range from 20% in infancy to 80% in adulthood. History Francis Galton is generally considered the father of hereditarianism. In his book Hereditary Genius (1869), Galton pioneered research on the heredity of intelligence. Galton continued research into the heredity of human behavior in his later works, includ
https://en.wikipedia.org/wiki/Uniformization%20%28set%20theory%29
In set theory, a branch of mathematics, the axiom of uniformization is a weak form of the axiom of choice. It states that if R is a subset of X × Y, where X and Y are Polish spaces, then there is a subset f of R that is a partial function from X to Y, and whose domain (the set of all x such that f(x) exists) equals {x : (x, y) ∈ R for some y}. Such a function is called a uniformizing function for R, or a uniformization of R. To see the relationship with the axiom of choice, observe that R can be thought of as associating, to each element of X, a subset of Y. A uniformization of R then picks exactly one element from each such subset, whenever the subset is non-empty. Thus, allowing arbitrary sets X and Y (rather than just Polish spaces) would make the axiom of uniformization equivalent to the axiom of choice. A pointclass Γ is said to have the uniformization property if every relation in Γ can be uniformized by a partial function in Γ. The uniformization property is implied by the scale property, at least for adequate pointclasses of a certain form. It follows from ZFC alone that Π^1_1 and Σ^1_2 have the uniformization property. It follows from the existence of sufficient large cardinals that Π^1_{2n+1} and Σ^1_{2n+2} have the uniformization property for every natural number n. Therefore, the collection of projective sets has the uniformization property. Every relation in L(R) can be uniformized, but not necessarily by a function in L(R). In fact, L(R) does not have the uniformization property (equivalently, L(R) does not satisfy the axiom of uniformization). (Note: it's trivial that every relation in L(R) can be uniformized in V, assuming V satisfies the axiom of choice. The point is that every such relation can be uniformized in some transitive inner model of V in which the axiom of determinacy holds.)
https://en.wikipedia.org/wiki/Sachindra%20Prasad%20Bose
Sachindra Prasad Bose () (died February 1941) was an Indian independence movement activist and follower of Sir Surendranath Banerjee. He was the son-in-law of the moderate Brahmo leader, Krishna Kumar Mitra. On 4 November 1905, when he was a fourth year student of Ripon College, Calcutta, he took initiative to form the Anti-Circular Society in protest against the circular issued by R. W. Carlyle, then Chief Secretary of the Government of Bengal instructing Magistrates and Collectors to take stern measures against students involved in politics. He became its secretary and Krishna Kumar Mitra became its president. He, along with Kanungo, designed and unfurled the Calcutta Flag on 7 August 1906 in Parsi Bagan Square (Greer Park) in Calcutta, India. In 1908, he was arrested and sent to the Rawalpindi jail. After his release, he worked as the editor of a magazine named Vyavsa O Vanijya.
https://en.wikipedia.org/wiki/The%20Hobbit%20%281982%20video%20game%29
The Hobbit is an illustrated text adventure computer game released in 1982 for the ZX Spectrum home computer and based on the 1937 book The Hobbit, by J. R. R. Tolkien. It was developed at Beam Software by Philip Mitchell and Veronika Megler and published by Melbourne House. It was later converted to most home computers available at the time including the Commodore 64, BBC Micro, and Oric computers. By arrangement with the book publishers, a copy of the book was included with each game sold. The parser was very advanced for the time and used a subset of English called Inglish. When it was released, most adventure games used simple verb-noun parsers (allowing for simple phrases like "get lamp"), but Inglish allowed the player to type advanced sentences such as "ask Gandalf about the curious map then take sword and kill troll with it". The parser was complex and intuitive, introducing pronouns, adverbs ("viciously attack the goblin"), punctuation and prepositions and allowing the player to interact with the game world in ways not previously possible. Gameplay Many locations are illustrated by an image, based on originals designed by Kent Rees. On the tape version, to save space, each image was stored in a compressed format by storing outline information and then flood filling the enclosed areas on the screen. The slow CPU speed meant that it would take up to several seconds for each scene to draw. The disk-based versions of the game used pre-rendered, higher-quality images. The game has an innovative text-based physics system, developed by Veronika Megler. Objects, including the characters in the game, have a calculated size, weight, and solidity. Objects can be placed inside other objects, attached together with rope and damaged or broken. If the main character is sitting in a barrel and this barrel is then picked up and thrown through a trapdoor, the player would go through. Unlike other works of interactive fiction, the game is also in real time, insofar as a p
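The outline-then-flood-fill decompression described above can be sketched in a few lines; the grid representation and function name are ours, and the Spectrum original operated on a 1-bit screen bitmap rather than a colour grid.

```python
from collections import deque

def flood_fill(grid, x, y, fill):
    """Fill the connected region containing grid[y][x] with `fill`,
    stopping at outline cells of any other value. Iterative
    breadth-first fill to avoid deep recursion."""
    target = grid[y][x]
    if target == fill:
        return grid
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == target:
            grid[cy][cx] = fill
            queue.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
    return grid
```

Storing only outlines plus a few seed points and running a fill like this at load time is what made the tape images compact at the cost of the several-second redraw the text mentions.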
https://en.wikipedia.org/wiki/Time%20delay%20and%20integration
A time delay and integration or time delay integration (TDI) charge-coupled device (CCD) is an image sensor for capturing images of moving objects at low light levels. While built on the same underlying CCD technology, in operation it contrasts with staring arrays and line-scan arrays. It works by synchronized mechanical and electronic scanning, so that the signal from dim imaging targets can be integrated on the sensor over longer periods of time. TDI is more an operating mode for CCDs than a separate type of CCD device, although hardware optimizations for the mode are also available. The principle behind TDI (constructive interference between separate observations) is often applicable to other sensor technologies, so that it is comparable to any long-term integrating mode of imaging, such as speckle imaging, adaptive optics, and especially long-exposure astronomical observation.

Detailed operation

TDI devices are perhaps easiest to understand by contrast with better-known types of CCD sensors, of which the staring array is the best known. In a staring array there are hundreds or thousands of adjacent rows of specially engineered semiconductor which accumulate charge in response to light, and, slightly separated in depth from them by insulation, a tightly spaced array of gate electrodes whose electric field can drive the accumulated charge around in a predictable and almost lossless fashion. The image is exposed on the two-dimensional semiconductor surface, and the resulting charge distribution of each line of the image is then moved to the side, to be rapidly and sequentially read out by an electronic read amplifier. When done fast enough, this produces a snapshot of the photon flux applied over the sensor; the readout can proceed in parallel over the several lines and yields a two-dimensional image of the light applied. Along with CMOS detectors which sense the photocharge accumulation pixel by pixel instead
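The shift-and-integrate operation described above can be sketched in a toy simulation, assuming a one-dimensional scene that moves past the sensor by exactly one row per clock (function and variable names are illustrative, not from any device datasheet):

```python
def tdi_readout(scene, stages):
    """Simulate a TDI register with the given number of stages.

    The scene moves one row per clock; charge packets are shifted in
    step with it, so each output sample integrates the same scene
    point once per stage."""
    register = [0.0] * stages              # charge packets on the sensor
    out = []
    for t in range(len(scene) + stages):
        if t >= stages:
            out.append(register[-1])       # packet leaving the register
        register = [0.0] + register[:-1]   # shift in sync with the scene
        for r in range(stages):
            i = t - r                      # scene element over stage r now
            if 0 <= i < len(scene):
                register[r] += scene[i]
    return out


# Each scene point is integrated over all three stages:
print(tdi_readout([1, 2, 3], stages=3))   # [3.0, 6.0, 9.0]
```

With N stages the signal grows N-fold while readout noise is incurred only once per output sample, which is the low-light advantage of the mode.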
https://en.wikipedia.org/wiki/Doctor%20Atomic
Doctor Atomic is an opera by the contemporary American composer John Adams, with a libretto by Peter Sellars. It premiered at the San Francisco Opera on October 1, 2005. The work focuses on how leading figures at Los Alamos dealt with the great stress and anxiety of preparing for the test of the first atomic bomb (the "Trinity" test). In 2007, Jon H. Else made a documentary, Wonders Are Many, about the creation of the opera and the collaboration between Adams and Sellars.

Composition history

The first act takes place about a month before the bomb is to be tested, and the second act is set in the early morning of July 16, 1945 (the day of the test). During the second act, time is shown slowing down for the characters and then snapping back to the clock. The opera ends in the final, prolonged moment before the bomb is detonated. Although the original commission for the opera suggested that U.S. physicist J. Robert Oppenheimer, the "father of the atomic bomb", be fashioned as a 20th-century Doctor Faustus, Adams and Sellars deliberately worked to avoid this characterization. Alice Goodman worked for two years with Adams on the project before leaving; she objected to the characterization of Edward Teller as dictated by the original commission. The work centers on key players in the Manhattan Project, especially Robert Oppenheimer and General Leslie Groves, and also features Kitty Oppenheimer, Robert's wife. Sellars adapted the libretto from primary historical sources. Doctor Atomic is similar in style to the previous Adams operas Nixon in China and The Death of Klinghoffer, both of which explored the characters and personalities of figures involved in historical incidents rather than re-enacting the events themselves.

Libretto

Sellars adapted much of the text for the opera from declassified U.S. government documents and communications among the scientists, government officials, and military personnel who were involved in the project. He also inc
https://en.wikipedia.org/wiki/Convective%20inhibition
Convective inhibition (CIN or CINH) is a numerical measure in meteorology that indicates the amount of energy that will prevent an air parcel from rising from the surface to the level of free convection (LFC). CIN is the amount of energy required to overcome the negatively buoyant energy the environment exerts on an air parcel; in most cases, when CIN exists, it covers a layer from the ground to the LFC. The negative buoyancy arises because the air parcel is cooler (denser) than the air surrounding it, which causes the parcel to accelerate downward. The layer of air dominated by CIN is warmer and more stable than the layers above or below it. Convective inhibition is typically measured when layers of warmer air lie above a particular region of air; warm air above a cooler air parcel prevents the cooler parcel from rising into the atmosphere, creating a stable region of air. Convective inhibition indicates the amount of energy required to force the cooler parcel of air to rise. This energy can come from fronts, surface heating, moistening, mesoscale convergence boundaries such as outflow and sea-breeze boundaries, or orographic lift. Typically, an area with large convective inhibition is considered stable and has very little likelihood of developing a thunderstorm. Conceptually, CIN is the opposite of CAPE. CIN hinders the updrafts necessary to produce convective weather, such as thunderstorms; however, when large amounts of CIN are eroded by heating and moistening before a convective storm, the resulting storm can be more severe than if no CIN had been present. CIN is strengthened by low-altitude dry-air advection and by surface cooling, which causes a small capping inversion to form aloft, allowing the air below to remain stable. Incoming weather fronts and short waves can strengthen or weaken CIN. CIN is calculat
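CIN is conventionally computed by integrating the parcel's (negative) buoyancy from the surface up to the LFC, CIN = ∫ g (Tv,parcel − Tv,env) / Tv,env dz, over the layers where the parcel is cooler than its environment. A minimal sketch, assuming the sounding is already interpolated to shared heights and expressed as virtual temperatures (names are illustrative):

```python
G = 9.81  # gravitational acceleration, m/s^2


def cin(heights_m, t_parcel_k, t_env_k):
    """Trapezoidal integral of negative buoyancy (J/kg) over the profile.

    heights_m: heights of the sounding levels (surface to LFC), m
    t_parcel_k, t_env_k: parcel and environment virtual temperatures, K
    Returns a non-positive number; more negative means stronger inhibition."""
    total = 0.0
    for i in range(len(heights_m) - 1):
        b0 = G * (t_parcel_k[i] - t_env_k[i]) / t_env_k[i]
        b1 = G * (t_parcel_k[i + 1] - t_env_k[i + 1]) / t_env_k[i + 1]
        b = 0.5 * (b0 + b1)            # mean buoyancy in the layer
        if b < 0:                      # only negatively buoyant layers count
            total += b * (heights_m[i + 1] - heights_m[i])
    return total


# A parcel 1 K cooler than a 300 K environment through a 1 km layer:
print(cin([0, 1000], [299, 299], [300, 300]))   # about -32.7 J/kg
```

CAPE is the same integral taken over the positively buoyant layers above the LFC, which is why the two quantities are conceptual opposites.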
https://en.wikipedia.org/wiki/MUSASINO-1
The MUSASINO-1 was one of the earliest electronic digital computers built in Japan. Construction started at the Electrical Communication Laboratories of NTT at Musashino, Tokyo in 1952 and was completed in July 1957; the computer was used until July 1962. Saburo Muroga, a University of Illinois visiting scholar and member of the ILLIAC I team, returned to Japan and oversaw the construction of the MUSASINO-1. Using 519 vacuum tubes and 5,400 parametrons, the MUSASINO-1 had a magnetic-core memory, initially of 32 (later expanded to 256) words. A word was composed of 40 bits, and two instructions could be stored in a single word. Addition was clocked at 1,350 microseconds, multiplication at 6,800 microseconds, and division at 26.1 milliseconds. The MUSASINO-1's instruction set was a superset of the ILLIAC I's, so it could generally run the latter's software. However, many ILLIAC programs used some of the unused bits in instructions to store data, and these bits would be interpreted as different instructions by the MUSASINO-1's control circuitry.

See also

FUJIC
ILLIAC I
List of vacuum-tube computers
https://en.wikipedia.org/wiki/Sergei%20Winogradsky
Sergei Nikolaievich Winogradsky (or Vinohradsky; published under the name Sergius Winogradsky or M. S. Winogradsky; Ukrainian: Сергій Миколайович Виноградський, Russian: Сергей Николаевич Виноградский; 1 September 1856 – 25 February 1953) was a Ukrainian and Russian microbiologist, ecologist, and soil scientist who pioneered the cycle-of-life concept. Winogradsky discovered the first known form of lithotrophy during his research on Beggiatoa in 1887. He reported that Beggiatoa oxidized hydrogen sulfide (H2S) as an energy source and formed intracellular sulfur droplets. This research provided the first example of lithotrophy, but not autotrophy. His later research on nitrifying bacteria reported the first known form of chemoautotrophy, showing how a lithotroph fixes carbon dioxide (CO2) to make organic compounds. He is best known in school science as the inventor of the Winogradsky column, a technique for the study of sediment microbes.

Biography

Winogradsky was born in Kyiv, Russian Empire, to a family of wealthy lawyers. Among his paternal ancestors were Cossack atamans (hetmans, in Ukrainian), and on his mother's side he descended from the hetman family Skoropadsky. In this early stage of his life, Winogradsky was "strictly devoted to the orthodox faith", though he later became irreligious. After graduating from the 2nd Kyiv Gymnasium in 1873, he began studying law, but in 1875 he entered the Imperial Conservatoire of Music in St Petersburg to study piano. After two years of music training, however, he entered the University of Saint Petersburg in 1877 to study chemistry under Nikolai Menshutkin and botany under Andrei Sergeevich Famintzin. He received a diploma in 1881 and stayed at the university for the degree of Master of Science in botany, awarded in 1884. In 1885, he began work at the University of Strasbourg under the renowned botanist Anton de Bary, where his work on sulfur bacteria made his reputation. In 1888, he relocated to Zurich, where he began i
https://en.wikipedia.org/wiki/Quantifier%20elimination
Quantifier elimination is a concept of simplification used in mathematical logic, model theory, and theoretical computer science. Informally, a quantified statement "∃x such that φ(x)" can be viewed as a question "When is there an x such that φ(x)?", and the statement without quantifiers can be viewed as the answer to that question. One way of classifying formulas is by the amount of quantification: formulas with less depth of quantifier alternation are thought of as simpler, with quantifier-free formulas as the simplest. A theory has quantifier elimination if for every formula α there exists another formula α' without quantifiers that is equivalent to it (modulo the theory).

Examples

An example from high-school mathematics says that a single-variable quadratic polynomial has a real root if and only if its discriminant is non-negative:

∃x (a ≠ 0 ∧ ax² + bx + c = 0) ⟺ a ≠ 0 ∧ b² − 4ac ≥ 0

Here the sentence on the left-hand side involves a quantifier ∃x, while the equivalent sentence on the right does not. Examples of theories that have been shown decidable using quantifier elimination are Presburger arithmetic, algebraically closed fields, real closed fields, atomless Boolean algebras, term algebras, dense linear orders, abelian groups, random graphs, as well as many of their combinations, such as Boolean algebra with Presburger arithmetic and term algebras with queues. A quantifier elimination procedure for the theory of the real numbers as an ordered additive group is Fourier–Motzkin elimination; for the theory of the field of real numbers it is the Tarski–Seidenberg theorem. Quantifier elimination can also be used to show that "combining" decidable theories leads to new decidable theories (see the Feferman–Vaught theorem).

Algorithms and decidability

If a theory has quantifier elimination, then a specific question can be addressed: is there a method of determining, for each formula α, a quantifier-free formula α' equivalent to it? If there is such a method, we call it a quantifier elimination algorithm. If there is such an algorithm, then decidability for the theory reduces to deci
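Fourier–Motzkin elimination, the quantifier elimination procedure for the reals as an ordered additive group, can be sketched directly: to eliminate a variable from a system of linear inequalities a·x ≤ b, pair every lower bound on that variable with every upper bound (a minimal illustration; the representation is ad hoc):

```python
from fractions import Fraction


def fourier_motzkin(inequalities, j):
    """Eliminate variable j from a system of inequalities a.x <= b,
    each given as (coefficient list, bound)."""
    upper, lower, rest = [], [], []
    for a, b in inequalities:
        if a[j] > 0:
            upper.append((a, b))      # gives an upper bound on x_j
        elif a[j] < 0:
            lower.append((a, b))      # gives a lower bound on x_j
        else:
            rest.append((a, b))
    out = list(rest)
    for al, bl in lower:
        for au, bu in upper:
            sl = Fraction(1, -al[j])  # scale so x_j coefficients cancel
            su = Fraction(1, au[j])
            a_new = [sl * al[k] + su * au[k] for k in range(len(al))]
            out.append((a_new, sl * bl + su * bu))
    return out


# ∃x (x - y <= 0 ∧ -x <= -1), i.e. ∃x (x >= 1 ∧ x <= y);
# eliminating x yields -y <= -1, the quantifier-free answer "y >= 1"
system = [([1, -1], 0), ([-1, 0], -1)]
print(fourier_motzkin(system, 0))
```

Each elimination step can square the number of inequalities, which is why Fourier–Motzkin is simple but potentially doubly exponential over many variables.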
https://en.wikipedia.org/wiki/Counting%20quantification
A counting quantifier is a mathematical term for a quantifier of the form "there exist at least k elements that satisfy property X". In first-order logic with equality, counting quantifiers can be defined in terms of ordinary quantifiers, so in this context they are a notational shorthand. However, they are interesting in the context of logics such as two-variable logic with counting, which restrict the number of variables in formulas. Also, generalized counting quantifiers that say "there exist infinitely many" are not expressible using a finite number of formulas in first-order logic.

Definition in terms of ordinary quantifiers

Counting quantifiers can be defined recursively in terms of ordinary quantifiers. Let ∃=k x P(x) denote "there exist exactly k elements x satisfying P(x)". Then

∃=0 x P(x) ⟺ ¬∃x P(x)
∃=k+1 x P(x) ⟺ ∃x (P(x) ∧ ∃=k y (y ≠ x ∧ P(y)))

Let ∃≥k x P(x) denote "there exist at least k elements x satisfying P(x)". Then

∃≥0 x P(x) ⟺ ⊤
∃≥k+1 x P(x) ⟺ ∃x (P(x) ∧ ∃≥k y (y ≠ x ∧ P(y)))

See also

Uniqueness quantification
Lindström quantifier
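Over a finite domain the recursive definitions can be executed directly; a small sketch (function names are hypothetical):

```python
def at_least(k, domain, pred):
    """'There exist at least k distinct elements satisfying pred',
    unfolded exactly as in the recursive definition."""
    if k == 0:
        return True                       # "at least 0" is trivially true
    # exists x: P(x) and at-least-(k-1) y with y != x and P(y)
    return any(pred(x) and at_least(k - 1, [y for y in domain if y != x], pred)
               for x in domain)


def exactly(k, domain, pred):
    """'There exist exactly k elements satisfying pred'."""
    if k == 0:
        return not any(pred(x) for x in domain)   # "exactly 0" is "no x with P(x)"
    # exists x: P(x) and exactly-(k-1) y with y != x and P(y)
    return any(pred(x) and exactly(k - 1, [y for y in domain if y != x], pred)
               for x in domain)


even = lambda n: n % 2 == 0
print(at_least(2, range(5), even))   # True  (0, 2, and 4 are even)
print(exactly(3, range(5), even))    # True
```

The exponential blowup of the recursion mirrors why counting quantifiers are a useful shorthand: expanding ∃≥k into plain first-order logic requires k nested quantifiers and disequalities.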
https://en.wikipedia.org/wiki/PGPfone
PGPfone was a secure voice telephony system developed by Philip Zimmermann in 1995. The PGPfone protocol had little in common with Zimmermann's popular PGP email encryption package beyond the name. It used an ephemeral Diffie–Hellman protocol to establish a session key, which was then used to encrypt the stream of voice packets. The two parties compared a short authentication string to detect a man-in-the-middle attack, the most common method of wiretapping secure phones of this type. PGPfone could be used point-to-point (with two modems) over the public switched telephone network, or over the Internet as an early Voice over IP system; in 1996, there were no protocol standards for Voice over IP. Ten years later, Zimmermann released the successor to PGPfone: Zfone and ZRTP, a newer and more secure VoIP protocol based on modern VoIP standards. Zfone builds on the ideas of PGPfone. According to the MIT PGPfone web page, "MIT is no longer distributing PGPfone. Given that the software has not been maintained since 1997, we doubt it would run on most modern systems."

See also

Comparison of VoIP software
Nautilus (secure telephone)
PGP word list
Secure telephone
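The pattern described above (an ephemeral Diffie–Hellman exchange followed by a spoken short authentication string) can be sketched as follows. The group parameters and string length are toy values for illustration only, not PGPfone's actual choices:

```python
import hashlib
import secrets

# Toy group: a Mersenne prime and a small generator. Real systems use
# large standardized DH groups; this choice is NOT secure.
P = 2**127 - 1
G = 3


def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)


def shared_secret(priv, other_pub):
    return pow(other_pub, priv, P)


def short_auth_string(secret, pub_a, pub_b):
    """Both endpoints derive a short string from the session transcript
    and read it aloud; an attacker relaying two separate DH exchanges
    would produce mismatched strings."""
    h = hashlib.sha256(f"{pub_a}:{pub_b}:{secret}".encode()).hexdigest()
    return h[:6]


a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
s_a = shared_secret(a_priv, b_pub)   # Alice's view of the session key
s_b = shared_secret(b_priv, a_pub)   # Bob's view: identical
assert short_auth_string(s_a, a_pub, b_pub) == short_auth_string(s_b, a_pub, b_pub)
```

Comparing the string over the voice channel itself is what lets the scheme skip certificates: a wiretapper cannot force both halves of a relayed exchange to hash to the same value.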