https://en.wikipedia.org/wiki/Chess%20annotation%20symbols
When annotating chess games, commentators frequently use widely recognized annotation symbols. Question marks and exclamation points that denote a move as bad or good are ubiquitous in chess literature. Some publications intended for an international audience, such as the Chess Informant, have a wide range of additional symbols that transcend language barriers. The common symbols for evaluating the merits of a move are "??", "?", "?!", "!?", "!", and "!!". The chosen symbol is appended to the text describing the move (e.g. Re7? or Kh1!?); see Algebraic chess notation. Use of these annotation symbols is subjective, as different annotators use the same symbols differently.

Evaluation symbols

Moves

Move evaluation symbols, by increasing effectiveness of the move:

?? (Blunder)
The double question mark "??" indicates a blunder, a very bad move that severely worsens the player's position. Typical moves that receive double question marks are those that overlook a tactic that wins substantial material or overlook a checkmate. A "??"-worthy move usually results in an immediately lost position. Occasionally, the sign is used for a move that transforms a won position into a draw. Though more common among less experienced players, blunders occur at all levels of play.

? (Mistake)
A single question mark "?" indicates that the annotator thinks that the move is a mistake and that it should not have been played. Mistakes often lead to loss of tempo, material, or otherwise a worsening of the player's position. The nature of a mistake may be more strategic than tactical; in some cases, the move receiving a question mark may be one for which it is difficult to find a refutation. A move that overlooks a forthcoming brilliant combination from the opponent would rarely receive more than one question mark, for example.

?! (Dubious move)
This symbol is similar to the "!?" (below) but usually indicates that the annotator believes the move to be dubious or questionable but to possi
https://en.wikipedia.org/wiki/Unbeatable%20strategy
In biology, the idea of an unbeatable strategy was proposed by W.D. Hamilton in his 1967 paper on sex ratios in Science. In this paper Hamilton discusses sex ratios as strategies in a game, and cites Verner as using this language in his 1965 paper which "claims to show that, given factors causing fluctuations of the population's primary sex ratio, a 1:1 sex-ratio production proves the best overall genotypic strategy". "In the way in which the success of a chosen sex ratio depends on choices made by the co-parasitizing females, this problem resembles certain problems discussed in the "theory of games." In the foregoing analysis a game-like element, of a kind, was present and made necessary the use of the word unbeatable to describe the ratio finally established. This word was applied in just the same sense in which it could be applied to the "minimax" strategy of a zero-sum two-person game. Such a strategy should not, without qualification, be called optimum because it is not optimum against -although unbeaten by- any strategy differing from itself. This is exactly the case with the "unbeatable" sex ratios referred to." Hamilton (1967). "[...] But if, on the contrary, players of such a game were motivated to outscore, they would find that is beaten by a higher ratio, ; the value of which gives its player the greatest possible advantage over the player playing , is found to be given by the relationship and shows to be the unbeatable play." Hamilton (1967). The concept can be traced through R.A. Fisher (1930) to Darwin (1859); see Edwards (1998). Hamilton did not explicitly define the term "unbeatable strategy" or apply the concept beyond the evolution of sex-ratios, but the idea was very influential. George R. Price generalised the verbal argument, which was then formalised mathematically by John Maynard Smith, into the evolutionarily stable strategy (ESS).
https://en.wikipedia.org/wiki/Genetic%20load
Genetic load is the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype. The average individual taken from a population with a low genetic load will generally, when grown in the same conditions, have more surviving offspring than the average individual from a population with a high genetic load. Genetic load can also be seen as reduced fitness at the population level compared to what the population would have if all individuals had the reference high-fitness genotype. High genetic load may put a population in danger of extinction.

Fundamentals

Consider $n$ genotypes $A_1, \dots, A_n$, which have the fitnesses $w_1, \dots, w_n$ and frequencies $p_1, \dots, p_n$, respectively. Ignoring frequency-dependent selection, the genetic load $L$ may be calculated as:

$$L = \frac{w_{\max} - \bar{w}}{w_{\max}}$$

where $w_{\max}$ is either some theoretical optimum, or the maximum fitness observed in the population. In calculating the genetic load, $w_{\max}$ must be actually found in at least a single copy in the population, and $\bar{w}$ is the average fitness calculated as the mean of all the fitnesses weighted by their corresponding frequencies:

$$\bar{w} = \sum_{i=1}^{n} p_i w_i$$

where the $i$-th genotype is $A_i$ and has the fitness $w_i$ and frequency $p_i$ respectively.

One problem with calculating genetic load is that it is difficult to evaluate either the theoretically optimal genotype, or the maximally fit genotype actually present in the population. This is not a problem within mathematical models of genetic load, or for empirical studies that compare the relative value of genetic load in one setting to genetic load in another.

Causes

Deleterious mutation

Deleterious mutation load is the main contributing factor to genetic load overall. The Haldane–Muller theorem of mutation–selection balance says that the load depends only on the deleterious mutation rate and not on the selection coefficient. Specifically, relative to an ideal genotype of fitness 1, the mean population fitness is
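The load calculation above is straightforward to mechanize. Here is a minimal Python sketch, using the maximum fitness observed in the population as the reference genotype; the fitness and frequency values are illustrative, not from the text:

# A minimal sketch of the genetic load calculation above, using the maximum
# fitness observed in the population as the reference. The genotype
# fitnesses and frequencies below are illustrative, not from the text.

def genetic_load(fitnesses, frequencies):
    """L = (w_max - w_bar) / w_max, with w_bar the frequency-weighted mean."""
    if abs(sum(frequencies) - 1.0) > 1e-9:
        raise ValueError("frequencies must sum to 1")
    w_max = max(fitnesses)
    w_bar = sum(w * p for w, p in zip(fitnesses, frequencies))
    return (w_max - w_bar) / w_max

# Three genotypes; the fittest is rare, so the load is appreciable.
print(genetic_load([1.0, 0.9, 0.7], [0.1, 0.6, 0.3]))  # 0.15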
https://en.wikipedia.org/wiki/International%20Temperature%20Scale%20of%201990
The International Temperature Scale of 1990 (ITS-90) is an equipment calibration standard specified by the International Committee for Weights and Measures (CIPM) for making measurements on the Kelvin and Celsius temperature scales. It is an approximation of thermodynamic temperature that facilitates the comparability and compatibility of temperature measurements internationally. It defines fourteen calibration points ranging from 13.8033 K to 1,357.77 K (−259.3467 °C to 1,084.62 °C) and is subdivided into multiple temperature ranges which overlap in some instances. ITS-90 is the most recent of a series of International Temperature Scales adopted by the CIPM since 1927. Adopted at the 1989 General Conference on Weights and Measures, it supersedes the International Practical Temperature Scale of 1968 (amended edition of 1975) and the 1976 "Provisional 0.5 K to 30 K Temperature Scale". The Consultative Committee for Thermometry (CCT) has also published several online guidebooks to aid realisations of the ITS-90. The lowest temperature covered by the ITS-90 is 0.65 K. In 2000, the temperature scale was extended further, to 0.9 mK, by the adoption of a supplemental scale, known as the Provisional Low Temperature Scale of 2000 (PLTS-2000). In 2019, the kelvin was redefined. However, the alteration was very slight compared to the ITS-90 uncertainties, and so the ITS-90 remains the recommended practical temperature scale without any significant changes. It is anticipated that the redefinition, combined with improvements in primary thermometry methods, will phase out reliance on the ITS-90 and the PLTS-2000 in the future.

Details

The ITS-90 is designed to represent the thermodynamic (absolute) temperature scale (referencing absolute zero) as closely as possible throughout its range. Many different thermometer designs are required to cover the entire range. These include helium vapor pressure thermometers, helium gas thermometers, standard platinum resistance thermometers (known as SPRTs) and monochromatic radiation thermometers. Although the Kelvin
https://en.wikipedia.org/wiki/Clamshell%20design
A clamshell design is a kind of form factor for electronic devices in the shape of a clamshell. Mobile phones, handheld game consoles, and especially laptops are often designed like clamshells. Clamshell devices are usually made of two sections connected by a hinge, each section containing either a flat panel display or an alphanumeric keyboard/keypad, which can fold into contact together like a bivalve shell. A clamshell mobile phone is sometimes also called a flip phone, especially if the hinge is on the short edge. If the hinge is on a long edge (e.g., Nokia Communicators), the device is more likely to be called just a "clamshell" rather than a flip phone. Generally speaking, the interface components such as keys and display are kept inside the closed clamshell, protecting them from damage and unintentional use while also making the device shorter or narrower so it is easier to carry around. In many cases, opening the clamshell offers more surface area than when the device is closed, allowing interface components to be larger and easier to use than on devices which do not flip open. A disadvantage of the clamshell design is the connecting hinge, which is prone to fatigue or failure.

Etymology

The clamshell form factor is most closely associated with the cell phone market, as Motorola used to have a trademark on the term "flip phone", but the term "flip phone" has become genericized to be used more frequently than "clamshell" in colloquial speech.

History

A "flip phone"-like communication device appears in chapter 3 of Armageddon 2419 A.D., a science fiction novella by Philip Francis Nowlan, which was first published in the August 1928 issue of the pulp magazine Amazing Stories: "Alan took a compact packet about six inches square from a holster attached to her belt and handed it to Wilma. So far as I could see, it had no special receiver for the ear. Wilma merely threw back a lid, as though she was opening a book, and began to talk. The voice that came bac
https://en.wikipedia.org/wiki/Backtracking%20line%20search
In (unconstrained) mathematical optimization, a backtracking line search is a line search method to determine the amount to move along a given search direction. Its use requires that the objective function is differentiable and that its gradient is known.

The method involves starting with a relatively large estimate of the step size for movement along the line search direction, and iteratively shrinking the step size (i.e., "backtracking") until a decrease of the objective function is observed that adequately corresponds to the amount of decrease that is expected, based on the step size and the local gradient of the objective function. A common stopping criterion is the Armijo–Goldstein condition.

Backtracking line search is typically used for gradient descent (GD), but it can also be used in other contexts. For example, it can be used with Newton's method if the Hessian matrix is positive definite.

Motivation

Given a starting position $\mathbf{x}$ and a search direction $\mathbf{p}$, the task of a line search is to determine a step size $\alpha > 0$ that adequately reduces the objective function $f\colon \mathbb{R}^n \to \mathbb{R}$ (assumed $C^1$, i.e. continuously differentiable), i.e., to find a value of $\alpha$ that reduces $f(\mathbf{x} + \alpha\,\mathbf{p})$ relative to $f(\mathbf{x})$. However, it is usually undesirable to devote substantial resources to finding a value of $\alpha$ to precisely minimize $f$. This is because the computing resources needed to find a more precise minimum along one particular direction could instead be employed to identify a better search direction. Once an improved starting point has been identified by the line search, another subsequent line search will ordinarily be performed in a new direction. The goal, then, is just to identify a value of $\alpha$ that provides a reasonable amount of improvement in the objective function, rather than to find the actual minimizing value of $\alpha$.

The backtracking line search starts with a large estimate of $\alpha$ and iteratively shrinks it. The shrinking continues until a value is found that is small enough to provide a decrease in the objective func
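As a concrete illustration, here is a minimal Python sketch of backtracking line search under the Armijo–Goldstein sufficient-decrease test. The starting step alpha0, shrink factor tau, and constant c are typical but illustrative choices, and the quadratic test function is an assumption for demonstration only:

import numpy as np

# A minimal sketch of backtracking line search with the Armijo-Goldstein
# sufficient-decrease test. alpha0, tau, c, and the test function are
# illustrative choices, not prescribed by the text.

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, tau=0.5, c=1e-4):
    """Shrink alpha until f(x + alpha*p) <= f(x) + c*alpha*(grad f(x) . p)."""
    slope = np.dot(grad_f(x), p)  # directional derivative along p
    if slope >= 0:
        raise ValueError("p must be a descent direction")
    alpha, fx = alpha0, f(x)
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= tau  # backtrack: shrink the step size
    return alpha

# Example on a simple quadratic, stepping along the negative gradient.
f = lambda x: np.dot(x, x)
grad = lambda x: 2 * x
x0 = np.array([3.0, -4.0])
p = -grad(x0)
alpha = backtracking_line_search(f, grad, x0, p)
print(alpha, f(x0 + alpha * p))  # 0.5, 0.0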
https://en.wikipedia.org/wiki/Deep-level%20transient%20spectroscopy
Deep-level transient spectroscopy (DLTS) is an experimental tool for studying electrically active defects (known as charge carrier traps) in semiconductors. DLTS establishes fundamental defect parameters and measures their concentration in the material. Some of these parameters are considered defect "fingerprints" used for their identification and analysis. DLTS investigates defects present in a space charge (depletion) region of a simple electronic device. The most commonly used are Schottky diodes or p-n junctions. In the measurement process the steady-state diode reverse polarization voltage is disturbed by a voltage pulse. This voltage pulse reduces the electric field in the space charge region and allows free carriers from the semiconductor bulk to penetrate this region and recharge the defects, causing their non-equilibrium charge state. After the pulse, when the voltage returns to its steady-state value, the defects start to emit trapped carriers due to the thermal emission process. The technique observes the device space charge region capacitance, where the defect charge state recovery causes the capacitance transient. The voltage pulse and the subsequent defect charge state recovery are cycled, allowing the application of different signal-processing methods for defect recharging process analysis.

The DLTS technique has a higher sensitivity than almost any other semiconductor diagnostic technique. For example, in silicon it can detect impurities and defects at a concentration of one part in 10^12 of the material host atoms. This feature, together with the technical simplicity of its design, made it very popular in research labs and semiconductor material production factories. The DLTS technique was pioneered by David Vern Lang at Bell Laboratories in 1974. A US Patent was awarded to Lang in 1975.

DLTS methods

Conventional DLTS

In conventional DLTS the capacitance transients are investigated by using a lock-in amplifier or double box-car averaging technique whe
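The rate-window idea behind the double box-car technique can be illustrated numerically. The Python sketch below assumes a single trap with hypothetical parameters (activation energy, emission prefactor, gate times) and shows that the box-car signal peaks at the temperature where the thermal emission rate matches the rate window set by the two gate times:

import numpy as np

# A minimal sketch of the double-boxcar (rate-window) idea: sample the
# capacitance transient at two gate times t1 < t2 and take the difference
# while scanning temperature. All trap parameters are hypothetical.

k_B = 8.617e-5           # Boltzmann constant, eV/K
E_a = 0.40               # trap activation energy, eV (assumed)
prefactor = 1e10         # emission prefactor ~ sigma*v_th*Nc, 1/(s K^2) (assumed)

def emission_rate(T):
    # Thermal emission rate e_n(T); the T^2 factor models v_th*Nc scaling.
    return prefactor * T**2 * np.exp(-E_a / (k_B * T))

t1, t2 = 1e-3, 2e-3      # boxcar gate times, s
T = np.linspace(150, 300, 600)
e_n = emission_rate(T)
# Normalized transient amplitude difference: S(T) = exp(-e_n t1) - exp(-e_n t2)
S = np.exp(-e_n * t1) - np.exp(-e_n * t2)

T_peak = T[np.argmax(S)]
# The peak occurs where e_n equals the rate window ln(t2/t1)/(t2 - t1).
rate_window = np.log(t2 / t1) / (t2 - t1)
print(f"peak at {T_peak:.1f} K, e_n there = {emission_rate(T_peak):.1f} /s, "
      f"rate window = {rate_window:.1f} /s")  # the two rates agree closely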
https://en.wikipedia.org/wiki/Biollante
Biollante is a rose, human, and Godzilla mutant hybrid kaiju who first appeared in Toho's 1989 film Godzilla vs. Biollante, and has since appeared in numerous licensed video games and comic books. The creature is portrayed as a genetically engineered clone of Godzilla spliced with the genes of a rose and a human. As the character was created during the end of the Cold War and a wane in the concerns over nuclear weapons represented by Godzilla, Biollante was conceived as a symbol of more contemporary controversies regarding genetic engineering.

Overview

Biollante first appears in the 1989 film Godzilla vs. Biollante. After Godzilla's return in 1985, Dr. Genshiro Shiragami attempts to use the monster's cells to genetically enhance various species of plants to create crops resistant to the harsh weather of Saradia, an arid country in the Middle East. His attempts are initially thwarted when a bomb planted by the American organisation Bio-Major destroys his laboratory and kills his daughter Erika. Shiragami splices her DNA with that of a rose, which is nearly destroyed five years later by an earthquake. Hoping to make the rose immortal, he further splices its DNA with that of Godzilla, resulting in the creation of a hybrid mutant he christens Biollante. The creature breaks out of the lab and into Lake Ashi, where it begins calling out to its progenitor Godzilla. Godzilla arrives and incinerates Biollante, whose spores float into the atmosphere. The spores later land near Osaka in the form of a much more Godzilla-like Biollante, who fights Godzilla to a standstill until the latter retreats after being weakened by the Anti-Nuclear Energy Bacteria. Biollante subsequently transforms into spores again and floats into space, with an image of Erika being seen among the spores. The creature makes a brief cameo appearance in Godzilla vs. SpaceGodzilla, where it is speculated that its cells floating in space may have contributed to the creation of the monster SpaceGodzilla. In Godzilla: Mo
https://en.wikipedia.org/wiki/Phantoms%20%28novel%29
Phantoms is a horror novel by American writer Dean Koontz, first published in 1983. The story is a version of the now-debunked urban legend involving a village mysteriously vanishing at Angikuni Lake. The novel includes many literary tips of the hat to the work of H. P. Lovecraft, including the suggestion that the novel's 'Ancient Enemy' is Lovecraft's god Nyarlathotep, also known as the 'Crawling Chaos', and the fact that the character of the air force specialist in potential contact with non-human intelligence is named 'Captain Arkham' (cf. Lovecraft's invention Arkham). Most of these Lovecraftian references were excised from the 1998 film version of Koontz's novel.

Plot summary

Jenny and Lisa Paige, two sisters, return to Jenny's hometown of Snowfield, California, a small ski resort village nestled in the Sierra Nevada Mountains where Jenny works as a doctor, and find no one alive. The few bodies they find are either mutilated or reveal some strange form of death. Finally, increasingly alarmed by the town's mysterious situation, Jenny manages to call police in a neighboring town for help. Together, the women and the police, led by Sheriff Bryce Hammond, are able to request help from the military Biological Investigations Unit. The police manage to find only one clue as to what is causing the town's disappearances and deaths. A victim of whatever was trying to kill him managed to write the name Timothy Flyte on a mirror moments before he was killed. Flyte is a British academic and author of a book, The Ancient Enemy. His book catalogs and describes various mass vanishings of people in different parts of the world over the centuries. It is discovered that the town was built over the hibernating place of one such Enemy, an amoeboid shapeshifting creature. This Ancient Enemy rarely feeds, but when it does, the effects are devastating. It was theorized that the Enemy either caused or aided in the extinction of the dinosaurs, as well
https://en.wikipedia.org/wiki/ALTQ
ALTQ (ALTernate Queueing) is the network scheduler for Berkeley Software Distribution. ALTQ provides queueing disciplines, and other components related to quality of service (QoS), required to realize resource sharing. It is most commonly implemented on BSD-based routers. ALTQ is included in the base distribution of FreeBSD, NetBSD, and DragonFly BSD, and was integrated into the pf packet filter of OpenBSD but later replaced by a new queueing subsystem (it was deprecated with the OpenBSD 5.5 release, and completely removed with 5.6 in 2014).

With ALTQ, packets can be assigned to queues for the purpose of bandwidth control. The scheduler defines the algorithm used to decide which packets get delayed, dropped or sent out immediately. There are five schedulers currently supported in the FreeBSD implementation of ALTQ:

CBQ — Class-Based Queueing. Queues attached to an interface build a tree, thus each queue can have further child queues. Each queue can have a priority and a bandwidth assigned. Priority mainly controls the time packets take to get sent out, while bandwidth primarily affects throughput.

CODEL — Controlled Delay. Attempts to combat bufferbloat.

FAIRQ — Fair Queuing. Attempts to fairly distribute bandwidth among all connections.

HFSC — Hierarchical Fair Service Curve. Queues attached to an interface build a tree, thus each queue can have further child queues. Each queue can have a priority and a bandwidth assigned. Priority mainly controls the time packets take to get sent out, while bandwidth primarily affects throughput.

PRIQ — Priority Queueing. Queues are attached flat to the interface; thus, queues cannot have further child queues. Each queue has a unique priority assigned, ranging from 0 to 15. Packets in the queue with the highest priority are processed first.

See also

Traffic shaping
KAME project
https://en.wikipedia.org/wiki/Tetrasodium%20pyrophosphate
Tetrasodium pyrophosphate, also called sodium pyrophosphate, tetrasodium phosphate or TSPP, is an inorganic compound with the formula Na4P2O7. As a salt, it is a white, water-soluble solid. It is composed of pyrophosphate anion and sodium ions. Toxicity is approximately twice that of table salt when ingested orally. Also known is the decahydrate Na4P2O7·10H2O.

Use

Tetrasodium pyrophosphate is used as a buffering agent, an emulsifier, a dispersing agent, and a thickening agent, and is often used as a food additive. Common foods containing tetrasodium pyrophosphate include chicken nuggets, marshmallows, pudding, crab meat, imitation crab, canned tuna, and soy-based meat alternatives, as well as cat foods and cat treats, where it is used as a palatability enhancer. In toothpaste and dental floss, tetrasodium pyrophosphate acts as a tartar control agent, serving to remove calcium and magnesium from saliva and thus preventing them from being deposited on teeth. Tetrasodium pyrophosphate is used in commercial dental rinses before brushing to aid in plaque reduction. Tetrasodium pyrophosphate is sometimes used in household detergents to prevent similar deposition on clothing, but due to its phosphate content it causes eutrophication of water, promoting algae growth.

Production

Tetrasodium pyrophosphate is produced by the reaction of furnace-grade phosphoric acid with sodium carbonate to form disodium phosphate, which is then heated to 450 °C to form tetrasodium pyrophosphate:

2 Na2HPO4 → Na4P2O7 + H2O
https://en.wikipedia.org/wiki/Wolfe%20conditions
In the unconstrained minimization problem, the Wolfe conditions are a set of inequalities for performing inexact line search, especially in quasi-Newton methods, first published by Philip Wolfe in 1969. In these methods the idea is to find $\min_{\mathbf{x}} f(\mathbf{x})$ for some smooth $f\colon \mathbb{R}^n \to \mathbb{R}$. Each step often involves approximately solving the subproblem $\min_{\alpha} f(\mathbf{x}_k + \alpha \mathbf{p}_k)$, where $\mathbf{x}_k$ is the current best guess, $\mathbf{p}_k \in \mathbb{R}^n$ is a search direction, and $\alpha \in \mathbb{R}$ is the step length. The inexact line searches provide an efficient way of computing an acceptable step length $\alpha$ that reduces the objective function 'sufficiently', rather than minimizing the objective function over $\alpha$ exactly. A line search algorithm can use Wolfe conditions as a requirement for any guessed $\alpha$, before finding a new search direction $\mathbf{p}_k$.

Armijo rule and curvature

A step length $\alpha_k$ is said to satisfy the Wolfe conditions, restricted to the direction $\mathbf{p}_k$, if the following two inequalities hold:

(i) $f(\mathbf{x}_k + \alpha_k \mathbf{p}_k) \le f(\mathbf{x}_k) + c_1 \alpha_k\, \mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k)$,
(ii) $-\mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k + \alpha_k \mathbf{p}_k) \le -c_2\, \mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k)$,

with $0 < c_1 < c_2 < 1$. (In examining condition (ii), recall that to ensure that $\mathbf{p}_k$ is a descent direction, we have $\mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k) < 0$, as in the case of gradient descent, where $\mathbf{p}_k = -\nabla f(\mathbf{x}_k)$, or Newton–Raphson, where $\mathbf{p}_k = -\mathbf{H}^{-1} \nabla f(\mathbf{x}_k)$ with $\mathbf{H}$ positive definite.)

$c_1$ is usually chosen to be quite small while $c_2$ is much larger; Nocedal and Wright give example values of $c_1 = 10^{-4}$ and $c_2 = 0.9$ for Newton or quasi-Newton methods and $c_2 = 0.1$ for the nonlinear conjugate gradient method. Inequality i) is known as the Armijo rule and ii) as the curvature condition; i) ensures that the step length decreases $f$ 'sufficiently', and ii) ensures that the slope has been reduced sufficiently. Conditions i) and ii) can be interpreted as respectively providing an upper and lower bound on the admissible step length values.

Strong Wolfe condition on curvature

Denote a univariate function $\varphi$ restricted to the direction $\mathbf{p}_k$ as $\varphi(\alpha) = f(\mathbf{x}_k + \alpha \mathbf{p}_k)$. The Wolfe conditions can result in a value for the step length that is not close to a minimizer of $\varphi$. If we modify the curvature condition to the following,

(iii) $\big|\mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k + \alpha_k \mathbf{p}_k)\big| \le c_2 \big|\mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k)\big|$,

then i) and iii) together form the so-called strong Wolfe conditions, and force $\alpha_k$ to lie close to a critical point of $\varphi$.

Rationale

The
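Here is a minimal Python sketch for checking whether a candidate step length satisfies the Armijo rule, the curvature condition, and the strong Wolfe variant. The quadratic test function is illustrative, and c1, c2 follow the Nocedal–Wright example values quoted above:

import numpy as np

# A minimal sketch checking the weak and strong Wolfe conditions for a
# candidate step length. The test function is illustrative; c1 and c2 use
# the Nocedal-Wright example values quoted above.

def wolfe_conditions(f, grad_f, x, p, alpha, c1=1e-4, c2=0.9):
    fx, gx = f(x), grad_f(x)
    x_new = x + alpha * p
    slope0 = np.dot(p, gx)                       # < 0 for a descent direction
    slope_new = np.dot(p, grad_f(x_new))
    armijo = f(x_new) <= fx + c1 * alpha * slope0    # condition (i)
    curvature = -slope_new <= -c2 * slope0           # condition (ii)
    strong = abs(slope_new) <= c2 * abs(slope0)      # condition (iii)
    return armijo, curvature, strong

f = lambda x: np.dot(x, x)
grad = lambda x: 2 * x
x = np.array([1.0, 2.0])
p = -grad(x)
for alpha in (1.0, 0.5, 0.1):
    print(alpha, wolfe_conditions(f, grad, x, p, alpha))
# alpha = 1.0 fails the Armijo rule; 0.5 and 0.1 satisfy all three.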
https://en.wikipedia.org/wiki/California%20Games
California Games is a 1987 sports video game originally released by Epyx for the Apple II and Commodore 64, and ported to other home computers and video game consoles. Branching from their Summer Games and Winter Games series, this game consists of a collection of outdoor sports purportedly popular in California. The game was successful and spawned a sequel, California Games II.

Gameplay

The events available vary slightly depending on the platform, but include all of the following:

Half-pipe
Footbag
Surfing (starring Rippin' Rick)
Roller skating
BMX
Flying disc

Development

Several members of the development team moved on to other projects. Chuck Sommerville, the designer of the half-pipe game in California Games, later developed the game Chip's Challenge, while Ken Nicholson, the designer of the footbag game, was the inventor of the technology used in Microsoft's DirectX. Kevin Norman, the designer of the BMX game, went on to found the educational science software company Norman & Globus, makers of the ElectroWiz series of products. The sound design for the original version of California Games was done by Chris Grigg, member of the band Negativland.

Ports

Originally written for the Apple II and Commodore 64, it was eventually ported to Amiga, Apple IIGS, Atari 2600, Atari ST, MS-DOS, Genesis, Amstrad CPC, ZX Spectrum, Nintendo Entertainment System, MSX and Master System. The Atari Lynx version was the pack-in game for the system when it was launched in June 1989. An Atari XE version was planned and contracted out by Atari Corp. to Epyx in 1988 but no code was delivered by the publication deadline.

Reception

California Games was a commercial blockbuster. With more than 300,000 copies sold in the first nine months, it was the most-successful Epyx game, outselling each of the four previous and two subsequent titles in the company's "Games" series. CEO David Shannon Morse said that it was the first Epyx game to appeal equally to boys and girls during playte
https://en.wikipedia.org/wiki/The%20Tower%20King
"The Tower King" is a British comic strip, appearing in titles published by IPC Magazines. The story was published in the anthology Eagle from 27 March to 4 September 1982, written by Alan Hebden, with art by José Ortiz. The story was set in a dystopian London, where society has broken down. Creation While the relaunched Eagle included a mix of photo and conventional picture strips. "The Tower King" was one of the latter. It was written by IPC stalwart Alan Hebden, who had experience writing for Battle Picture Weekly (including creating Major Eazy) and 2000 AD. José Ortiz provided the art; while the strip was in black-and-white, the web offset printing method used for Eagle meant he was able to give the art a grey wash, enhancing the atmosphere and detail. The strip's creators made use of the opportunity by juxtapositioning jarring visual elements, such as historic London landmarks strewn with the rubble of modern buildings, or soldiers in patchwork armour complete with pocket watches and police helmets, armed with both halberds and grenades. Publication history The story debuted in the launch issue of the new Eagle, dated 27 March 1982 and continued until the 4 September 1982 edition - when it was effectively replaced by another Ortiz-drawn strip, "The House of Daemon". In 1998, the rights to the strips created for Eagle – including "House of Daemon" – were purchased from Egmont Publishing by the Dan Dare Corporation. In 2014, Hibernia Books licensed "The Tower King" and produced a collected edition with a foreword by 2000 AD artist Leigh Gallagher, initially in a print run of 200 copies. A second limited run followed in 2017. In 2020, Hibernia produced another short run, along with another run of their collection of "The House of Daemon". Plot Following from a nuclear war, a malfunctioning solar satellite]] bathes the Earth in radiation that makes the production of electricity in any form impossible. Without heating, transport, food or communication and in the
https://en.wikipedia.org/wiki/Dave%20Smith%20%28engineer%29
David Joseph Smith (April 2, 1950 – May 31, 2022) was an American engineer and founder of the synthesizer company Sequential. Smith created the first polyphonic synthesizer with fully programmable memory, the Prophet-5, which had a major impact on the music industry. He also led the development of MIDI, a standard interface protocol for synchronizing electronic instruments and audio equipment. In 2005, Smith was inducted into the Mix Foundation TECnology (Technical Excellence and Creativity) Hall of Fame for the MIDI specification. In 2013, he and the Japanese businessman Ikutaro Kakehashi received a Technical Grammy Award for their contributions to the development of MIDI. Career Smith was born on April 2, 1950, in San Francisco. He had degrees in both Computer Science and Electronic Engineering from UC Berkeley. Sequential Circuits and Prophet-5 He purchased a Minimoog in 1972 and later built his own analog sequencer, founding Sequential Circuits in 1974 and advertising his product for sale in Rolling Stone. By 1977 he was working at Sequential full-time, and later that year he designed the Prophet 5, the world's first microprocessor-based musical instrument and also the first programmable polyphonic synth, an innovation that marked a crucial step forward in synthesizer design and functionality. Sequential went on to become one of the most successful music synthesizer manufacturers of the time. MIDI In 1981 Smith set out to create a standard protocol for communication between electronic musical instruments from different manufacturers worldwide. He presented a paper outlining the idea of a Universal Synthesizer Interface (USI) to the Audio Engineering Society (AES) in 1981 after meetings with Tom Oberheim and Roland founder Ikutaro Kakehashi. After some enhancements and revisions, the new standard was introduced as "Musical Instrument Digital Interface" (MIDI) at the Winter NAMM Show in 1983, when a Sequential Circuits Prophet-600 was successfully connecte
https://en.wikipedia.org/wiki/Photomorphogenesis
In developmental biology, photomorphogenesis is light-mediated development, where plant growth patterns respond to the light spectrum. This is a completely separate process from photosynthesis, where light is used as a source of energy. Phytochromes, cryptochromes, and phototropins are photochromic sensory receptors that restrict the photomorphogenic effect of light to the UV-A, UV-B, blue, and red portions of the electromagnetic spectrum. The photomorphogenesis of plants is often studied by using tightly frequency-controlled light sources to grow the plants. There are at least three stages of plant development where photomorphogenesis occurs: seed germination, seedling development, and the switch from the vegetative to the flowering stage (photoperiodism). Most research on photomorphogenesis is derived from studies involving several kingdoms: Fungi, Monera, Protista, and Plantae.

History

Theophrastus of Eresus (371 to 287 BC) may have been the first to write about photomorphogenesis. He described the different wood qualities of fir trees grown in different levels of light, likely the result of the photomorphogenic "shade-avoidance" effect. In 1686, John Ray wrote "Historia Plantarum", which mentioned the effects of etiolation (growth in the absence of light). Charles Bonnet introduced the term "etiolement" to the scientific literature in 1754 when describing his experiments, commenting that the term was already in use by gardeners.

Developmental stages affected

Seed germination

Light has profound effects on the development of plants. The most striking effects of light are observed when a germinating seedling emerges from the soil and is exposed to light for the first time. Normally the seedling radicle (root) emerges first from the seed, and the shoot appears as the root becomes established. Later, with growth of the shoot (particularly when it emerges into the light) there is increased secondary root formation and branching. In this coordinated progressi
https://en.wikipedia.org/wiki/Cyclic%20number
A cyclic number is an integer for which cyclic permutations of the digits are successive integer multiples of the number. The most widely known is the six-digit number 142857, whose first six integer multiples are:

142857 × 1 = 142857
142857 × 2 = 285714
142857 × 3 = 428571
142857 × 4 = 571428
142857 × 5 = 714285
142857 × 6 = 857142

Details

To qualify as a cyclic number, it is required that consecutive multiples be cyclic permutations. Thus, the number 076923 would not be considered a cyclic number, because even though all cyclic permutations are multiples, they are not consecutive integer multiples:

076923 × 1 = 076923
076923 × 3 = 230769
076923 × 4 = 307692
076923 × 9 = 692307
076923 × 10 = 769230
076923 × 12 = 923076

The following trivial cases are typically excluded:

single digits, e.g.: 5
repeated digits, e.g.: 555
repeated cyclic numbers, e.g.: 142857142857

If leading zeros are not permitted on numerals, then 142857 is the only cyclic number in decimal, due to the necessary structure given in the next section. Allowing leading zeros, the sequence of cyclic numbers begins:

(10^6 − 1) / 7 = 142857 (6 digits)
(10^16 − 1) / 17 = 0588235294117647 (16 digits)
(10^18 − 1) / 19 = 052631578947368421 (18 digits)
(10^22 − 1) / 23 = 0434782608695652173913 (22 digits)
(10^28 − 1) / 29 = 0344827586206896551724137931 (28 digits)
(10^46 − 1) / 47 = 0212765957446808510638297872340425531914893617 (46 digits)
(10^58 − 1) / 59 = 0169491525423728813559322033898305084745762711864406779661 (58 digits)
(10^60 − 1) / 61 = 016393442622950819672131147540983606557377049180327868852459 (60 digits)
(10^96 − 1) / 97 = 010309278350515463917525773195876288659793814432989690721649484536082474226804123711340206185567 (96 digits)

Relation to repeating decimals

Cyclic numbers are related to the recurring digital representations of unit fractions. A cyclic number of length L is the digital representation of 1/(L + 1). Conversely, if the digital period of 1/p (where p is prime) is p − 1, then
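This construction can be verified directly. The Python sketch below builds the cyclic number for a full-reptend prime p as (10^(p−1) − 1)/p, padded with leading zeros, and checks the defining rotation property for p = 7:

# A minimal sketch following the construction above: for a prime p whose
# decimal period is p - 1 (a "full reptend" prime), the digits of
# (10**(p-1) - 1) // p form a cyclic number (with leading zeros allowed).

def decimal_period(p):
    """Multiplicative order of 10 modulo p (p prime, p not 2 or 5)."""
    k, rem = 1, 10 % p
    while rem != 1:
        rem = (rem * 10) % p
        k += 1
    return k

def cyclic_number(p):
    if decimal_period(p) != p - 1:
        raise ValueError(f"{p} is not a full-reptend prime in base 10")
    return str((10**(p - 1) - 1) // p).zfill(p - 1)

print(cyclic_number(7))    # 142857
print(cyclic_number(17))   # 0588235294117647

# Verify the defining property for p = 7: consecutive multiples are rotations,
# i.e. each multiple appears as a substring of the number written twice.
n = cyclic_number(7)
print(all(str(int(n) * k).zfill(len(n)) in n + n for k in range(1, 7)))  # True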
https://en.wikipedia.org/wiki/Electron%20magnetic%20moment
In atomic physics, the electron magnetic moment, or more specifically the electron magnetic dipole moment, is the magnetic moment of an electron resulting from its intrinsic properties of spin and electric charge. The value of the electron magnetic moment (symbol μe) is approximately −9.2847647×10^−24 J⋅T^−1. In units of the Bohr magneton (μB), it is approximately −1.00115965218, a value that has been measured to a relative accuracy of a few parts in 10^13.

Magnetic moment of an electron

The electron is a charged particle with charge −e, where e is the elementary charge. Its angular momentum comes from two types of rotation: spin and orbital motion. From classical electrodynamics, a rotating distribution of electric charge produces a magnetic dipole, so that it behaves like a tiny bar magnet. One consequence is that an external magnetic field exerts a torque on the electron magnetic moment that depends on the orientation of this dipole with respect to the field.

If the electron is visualized as a classical rigid body in which the mass and charge have identical distribution and motion that is rotating about an axis with angular momentum $\mathbf{L}$, its magnetic dipole moment $\boldsymbol{\mu}$ is given by:

$$\boldsymbol{\mu} = \frac{-e}{2 m_{\mathrm{e}}} \mathbf{L}$$

where $m_{\mathrm{e}}$ is the electron rest mass. The angular momentum $\mathbf{L}$ in this equation may be the spin angular momentum, the orbital angular momentum, or the total angular momentum. The ratio between the true spin magnetic moment and that predicted by this model is a dimensionless factor $g_{\mathrm{e}}$, known as the electron $g$-factor:

$$\boldsymbol{\mu}_{\mathrm{S}} = g_{\mathrm{e}} \left( \frac{-e}{2 m_{\mathrm{e}}} \right) \mathbf{S}$$

It is usual to express the magnetic moment in terms of the reduced Planck constant $\hbar$ and the Bohr magneton $\mu_{\mathrm{B}}$:

$$\boldsymbol{\mu}_{\mathrm{S}} = -g_{\mathrm{e}}\, \mu_{\mathrm{B}} \frac{\mathbf{S}}{\hbar}$$

Since the magnetic moment is quantized in units of $\mu_{\mathrm{B}}$, correspondingly the angular momentum is quantized in units of $\hbar$.

Formal definition

Classical notions such as the center of charge and mass are, however, hard to make precise for a quantum elementary particle. In practice the definition used by experimentalists comes from the form factors appearing in the matrix element of the electromagnetic current operator between two on-shell states. Here and a
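As a numerical cross-check of these relations, the Python sketch below computes the Bohr magneton from CODATA constants and expresses the measured electron moment in units of μB; it assumes scipy's physical-constants table exposes the electron moment under the key used here:

import scipy.constants as const

# A numerical cross-check using CODATA values from scipy.constants.
# The Bohr magneton is mu_B = e*hbar/(2*m_e); the dictionary key for the
# measured electron moment is assumed to match scipy's CODATA table.

mu_B = const.e * const.hbar / (2 * const.m_e)
print(f"Bohr magneton: {mu_B:.6e} J/T")      # ~9.274010e-24 J/T

mu_e = const.physical_constants["electron mag. mom."][0]
print(f"mu_e / mu_B = {mu_e / mu_B:.12f}")   # ~ -1.001159652181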
https://en.wikipedia.org/wiki/Invariable%20plane
The invariable plane of a planetary system, also called Laplace's invariable plane, is the plane passing through its barycenter (center of mass) perpendicular to its angular momentum vector.

Solar System

In the Solar System, about 98% of this effect is contributed by the orbital angular momenta of the four jovian planets (Jupiter, Saturn, Uranus, and Neptune). The invariable plane is within 0.5° of the orbital plane of Jupiter, and may be regarded as the weighted average of all planetary orbital and rotational planes.

Terminology and definition

This plane is sometimes called the "Laplacian" or "Laplace plane" or the "invariable plane of Laplace", though it should not be confused with the Laplace plane, which is the plane about which the individual orbital planes of planetary satellites precess. Both derive from the work of (and are at least sometimes named for) the French astronomer Pierre Simon Laplace. The two are equivalent only in the case where all perturbers and resonances are far from the precessing body. The invariable plane is derived from the sum of angular momenta, and is "invariable" over the entire system, while the Laplace plane for different orbiting objects within a system may be different. Laplace called the invariable plane the plane of maximum areas, where the "area" in this case is the product of the radius and its time rate of change, that is, its radial velocity, multiplied by the mass.

Description

The magnitude of the orbital angular momentum vector of a planet is

$$L = m r^2 \omega$$

where $r$ is the orbital radius of the planet (from the barycenter), $m$ is the mass of the planet, and $\omega$ is its orbital angular velocity. That of Jupiter contributes the bulk of the Solar System's angular momentum, 60.3%. Then comes Saturn at 24.5%, Neptune at 7.9%, and Uranus at 5.3%. The Sun forms a counterbalance to all of the planets, so it is near the barycenter when Jupiter is on one side and the other three jovian planets are diametrically opposite on the other side, but the S
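The quoted shares can be roughly reproduced from standard planetary data. The Python sketch below approximates each jovian orbit as circular, so that L = m(2π/T)a², with masses, semi-major axes, and periods taken from standard references; the small differences from the quoted figures come from the circular-orbit approximation and the neglected remaining bodies:

import math

# A rough numerical check of the angular momentum shares quoted above,
# approximating each jovian orbit as circular: L = m * (2*pi/T) * a**2.
# Masses (kg), semi-major axes (m), and periods (yr) are standard values.

YEAR = 3.156e7  # seconds per year
planets = {
    "Jupiter": (1.898e27, 7.785e11, 11.86),
    "Saturn":  (5.683e26, 1.434e12, 29.45),
    "Uranus":  (8.681e25, 2.872e12, 84.02),
    "Neptune": (1.024e26, 4.495e12, 164.8),
}

L = {name: m * (2 * math.pi / (T * YEAR)) * a**2
     for name, (m, a, T) in planets.items()}
total = sum(L.values())
for name, val in sorted(L.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {100 * val / total:.1f}%")
# Prints roughly Jupiter ~61%, Saturn ~25%, Neptune ~8%, Uranus ~5%,
# close to the quoted shares (taken with respect to the whole Solar System).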
https://en.wikipedia.org/wiki/Integrated%20Computer-Aided%20Manufacturing
Integrated Computer-Aided Manufacturing (ICAM) is a US Air Force program that develops tools, techniques, and processes to support manufacturing integration. It influenced the computer-integrated manufacturing (CIM) and computer-aided manufacturing (CAM) project efforts of many companies. The ICAM program was founded in 1976 and managed by the US Air Force at Wright-Patterson as a part of their technology modernization efforts. The program initiated the development of a series of standards for modeling and analysis in management and business improvement, called Integrated Definition (IDEF) methods.

Overview

The USAF ICAM program was founded in 1976 at the US Air Force Materials Laboratory, Wright-Patterson Air Force Base in Ohio by Dennis E. Wisnosky, Dan L. Shunk, and others. In the mid-1970s Joseph Harrington had assisted Wisnosky and Shunk in designing the ICAM program and had broadened the concept of CIM to include the entire manufacturing company. Harrington considered manufacturing a "monolithic function". The ICAM program was visionary in showing that a new approach was necessary to achieve integration in manufacturing firms. Wisnosky and Shunk developed a "wheel" to illustrate the architecture of their ICAM project and to show the various elements that had to work together. Wisnosky and Shunk were among the first to understand the web of interdependencies needed for integration. Their work represents the first major step in shifting the focus of manufacturing from a series of sequential operations to parallel processing.

The ICAM program has spent over $100 million to develop tools, techniques, and processes to support manufacturing integration. The Air Force's ICAM program recognizes the role of data as central to any integration effort. Data must be common and shareable across functions. The concept still remains ahead of its time, because most major companies did not seriously begin to attack the data architecture challenge until well into t
https://en.wikipedia.org/wiki/Agartha
Agartha (sometimes Agartta, Agharti, Agarath, Agarta, Agharta, or Agarttha) is a legendary kingdom that is said to be located on the inner surface of the Earth. It is sometimes related to the belief in a hollow Earth and is a popular subject in esotericism. History The legend of Agartha remained mostly obscure in Europe until Gérard Encausse edited and re-published a detailed 1886 account by the nineteenth-century French occultist Alexandre Saint-Yves d'Alveydre (1842–1909), Mission de l'Inde en Europe, in 1910. After World War I, German occultist groups such as the Thule Society took an interest in Agartha. In his 1922 book, Beasts, Men and Gods, the Polish explorer Ferdynand Ossendowski relates a story which was imparted to him concerning a subterranean kingdom existing inside the Earth. This kingdom is known to a fictional Buddhist society as Agharti. Connections to mythology Agartha is frequently associated or confused with Shambhala which figures prominently in Vajrayana Buddhism and Tibetan Kalachakra teachings and revived in the West by Madame Blavatsky and the Theosophical Society. Theosophists in particular regard Agarthi as a vast complex of caves underneath Tibet inhabited by demi-gods, called asuras. Helena and Nicholas Roerich, whose teachings closely parallel theosophy, see Shambhala's existence as both spiritual and physical. See also Dwarf (mythology) Hades Xibalba Children Who Chase Lost Voices
https://en.wikipedia.org/wiki/Blobotics
Blobotics is a term describing research into chemical-based computer processors based on ions rather than electrons. Andrew Adamatzky, a computer scientist at the University of the West of England, Bristol, used the term in an article in New Scientist on March 28, 2005. The aim is to create 'liquid logic gates' which would be 'infinitely reconfigurable and self-healing'. The process relies on the Belousov–Zhabotinsky reaction, a repeating cycle of three separate sets of reactions. Such a processor could form the basis of a robot which, using artificial sensors, interacts with its surroundings in a way which mimics living creatures. The coining of the term was featured by ABC radio in Australia.
https://en.wikipedia.org/wiki/Institute%20for%20Systems%20Biology
Institute for Systems Biology (ISB) is a non-profit research institution located in Seattle, Washington, United States. ISB concentrates on systems biology, the study of relationships and interactions between various parts of biological systems, and advocates an interdisciplinary approach to biological research.

Goals

Systems biology is the study of biological systems in a holistic manner by integrating data at all levels of the biological information hierarchy, from the global level down to the individual organism and, below that, to the molecular level. The vision of ISB is to integrate these concepts using a cross-disciplinary approach combining the efforts of biologists, chemists, computer scientists, engineers, mathematicians, physicists, and physicians. On its website, ISB has defined four areas of focus:

P4 Medicine - This acronym refers to predictive, preventive, personalized and participatory medicine, which focuses on wellness rather than mere treatment of disease.
Global Health - Use of the systems approach towards the study of infectious diseases, vaccine development, emergence of chronic diseases, and maternal and child health.
Sustainable Environment - Applying systems biology for a better understanding of the role of microbes in the environment and their relation to human health.
Education & Outreach - Knowledge transfer to society through a variety of educational programs and partnerships, including the spin-out of new companies.

Early history

Leroy Hood co-founded the Institute with Alan Aderem and Ruedi Aebersold in 2000. However, the story of how ISB got started actually begins in 1990. Lee Hood was the director of a large molecular biotechnology lab at the California Institute of Technology in Pasadena, and was a key advisor in the Human Genome Project, having overseen development of machines that were instrumental to its later success. The University of Washington (UW), like many other universities, was eager to recruit Hood, but had neither the
https://en.wikipedia.org/wiki/Mutation%E2%80%93selection%20balance
Mutation–selection balance is an equilibrium in the number of deleterious alleles in a population that occurs when the rate at which deleterious alleles are created by mutation equals the rate at which deleterious alleles are eliminated by selection. The majority of genetic mutations are neutral or deleterious; beneficial mutations are relatively rare. The resulting influx of deleterious mutations into a population over time is counteracted by negative selection, which acts to purge deleterious mutations. Setting aside other factors (e.g., balancing selection, and genetic drift), the equilibrium number of deleterious alleles is then determined by a balance between the deleterious mutation rate and the rate at which selection purges those mutations.

Mutation–selection balance was originally proposed to explain how genetic variation is maintained in populations, although several other ways for deleterious mutations to persist are now recognized, notably balancing selection. Nevertheless, the concept is still widely used in evolutionary genetics, e.g. to explain the persistence of deleterious alleles as in the case of spinal muscular atrophy, or, in theoretical models, mutation–selection balance can appear in a variety of ways and has even been applied to beneficial mutations (i.e. balance between selective loss of variation and creation of variation by beneficial mutations).

Haploid population

As a simple example of mutation–selection balance, consider a single locus in a haploid population with two possible alleles: a normal allele A with frequency $1 - q$, and a mutated deleterious allele B with frequency $q$, which has a small relative fitness disadvantage of $s$. Suppose that deleterious mutations from A to B occur at rate $\mu$, and the reverse beneficial mutation from B to A occurs rarely enough to be negligible (e.g. because the mutation rate is so low that $\mu q$ is small). Then, each generation selection eliminates deleterious mutants reducing $q$ by an amount $s q (1 - q)$, while mutation creat
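The balance can be demonstrated with a short simulation. The Python sketch below iterates the selection and mutation steps described above and converges to the standard equilibrium frequency q = μ/s; the parameter values are illustrative:

# A minimal simulation of the haploid mutation-selection balance described
# above: selection against allele B (disadvantage s) balances recurrent
# A -> B mutation (rate mu). The standard equilibrium is q = mu/s;
# the parameter values here are illustrative.

mu = 1e-5   # deleterious mutation rate per generation
s = 1e-2    # selective disadvantage of allele B

q = 0.0     # initial frequency of the deleterious allele B
for generation in range(20000):
    q = q - s * q * (1 - q) / (1 - s * q)  # selection removes B
    q = q + mu * (1 - q)                   # mutation creates new B alleles

print(f"simulated equilibrium: {q:.6e}")   # ~1.0e-3
print(f"predicted mu/s:        {mu / s:.6e}")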
https://en.wikipedia.org/wiki/%C3%89douard%20Goursat
Édouard Jean-Baptiste Goursat (21 May 1858 – 25 November 1936) was a French mathematician, now remembered principally as an expositor for his Cours d'analyse mathématique, which appeared in the first decade of the twentieth century. It set a standard for the high-level teaching of mathematical analysis, especially complex analysis. This text was reviewed by William Fogg Osgood for the Bulletin of the American Mathematical Society. This led to its translation into English by Earle Raymond Hedrick, published by Ginn and Company. Goursat also published texts on partial differential equations and hypergeometric series.

Life

Édouard Goursat was born in Lanzac, Lot. He was a graduate of the École Normale Supérieure, where he later taught and developed his Cours. At that time the topological foundations of complex analysis were still not clarified, with the Jordan curve theorem considered a challenge to mathematical rigour (as it would remain until L. E. J. Brouwer took in hand the approach from combinatorial topology). Goursat's work was considered by his contemporaries, including G. H. Hardy, to be exemplary in facing up to the difficulties inherent in stating the fundamental Cauchy integral theorem properly. For that reason it is sometimes called the Cauchy–Goursat theorem.

Work

Goursat, along with Möbius, Schläfli, Cayley, Riemann, Clifford and others, was one of the 19th century mathematicians who envisioned and explored a geometry of more than three dimensions. He was the first to enumerate the finite groups generated by reflections in four-dimensional space, in 1889. The Goursat tetrahedra are the fundamental domains which generate, by repeated reflections of their faces, uniform polyhedra and their honeycombs which fill three-dimensional space. Goursat recognized that the honeycombs are four-dimensional Euclidean polytopes. He derived a formula for the general displacement in four dimensions preserving the origin, which he recognized as a double rotation in two
https://en.wikipedia.org/wiki/Convex%20optimization
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard. Convex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, finance, statistics (optimal experimental design), and structural optimization, where the approximation concept has proven to be efficient. With recent advancements in computing and optimization algorithms, convex programming is nearly as straightforward as linear programming.

Definition

A convex optimization problem is an optimization problem in which the objective function is a convex function and the feasible set is a convex set. A function $f$ mapping some subset of $\mathbb{R}^n$ into $\mathbb{R} \cup \{\pm\infty\}$ is convex if its domain is convex and for all $\theta \in [0, 1]$ and all $x, y$ in its domain, the following condition holds: $f(\theta x + (1 - \theta) y) \le \theta f(x) + (1 - \theta) f(y)$. A set $S$ is convex if for all members $x, y \in S$ and all $\theta \in [0, 1]$, we have that $\theta x + (1 - \theta) y \in S$.

Concretely, a convex optimization problem is the problem of finding some $x^{\ast} \in C$ attaining $\inf \{ f(x) : x \in C \}$, where the objective function $f$ is convex, as is the feasible set $C$. If such a point exists, it is referred to as an optimal point or solution; the set of all optimal points is called the optimal set. If $f$ is unbounded below over $C$ or the infimum is not attained, then the optimization problem is said to be unbounded. Otherwise, if $C$ is the empty set, then the problem is said to be infeasible.

Standard form

A convex optimization problem is in standard form if it is written as

minimize $f(x)$
subject to $g_i(x) \le 0, \quad i = 1, \dots, m$
and $h_j(x) = 0, \quad j = 1, \dots, p$

where:

$x \in \mathbb{R}^n$ is the optimization variable;
The objective function $f$ is a convex function;
The inequality constraint functions $g_i$, $i = 1, \dots, m$, are convex functions;
The equality constraint functions $h_j$, $j = 1, \dots, p$, are affine transformations, that
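For illustration, here is a small problem in this standard form solved with the third-party cvxpy library (an assumption; the article does not prescribe a solver): a convex quadratic objective with one affine equality constraint and convex inequality constraints:

import cvxpy as cp
import numpy as np

# A minimal sketch of a convex problem in standard form, solved with the
# third-party cvxpy library (an assumption; any convex solver would do):
# minimize a convex quadratic subject to one affine equality constraint
# and nonnegativity (convex inequality) constraints.

x = cp.Variable(2)
objective = cp.Minimize(cp.sum_squares(x - np.array([3.0, 2.0])))
constraints = [cp.sum(x) == 4.0,   # affine equality constraint h(x) = 0
               x >= 0]             # convex inequality constraints g(x) <= 0
problem = cp.Problem(objective, constraints)
problem.solve()
print(problem.status, x.value)     # optimal, approximately [2.5, 1.5]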
https://en.wikipedia.org/wiki/Hunter%20versus%20farmer%20hypothesis
The hunter versus farmer hypothesis is a proposed explanation of the nature of attention-deficit hyperactivity disorder (ADHD) first suggested by radio host Thom Hartmann in his book Attention Deficit Disorder: A Different Perception. This hypothesis proposes that ADHD represents a lack of adaptation of members of hunter-gatherer societies to their transformation into farming societies. Hartmann developed the idea first as a mental model after his own son was diagnosed with ADHD, stating, "It's not hard science, and was never intended to be." A key component of the hypothesis is that the proposed "hyperfocus" aspect of ADHD is a benefit under appropriate circumstances. The hypothesis also explains the distractibility factor in ADHD individuals and their short attention span for subject matter that does not interest the individual (which may or may not trigger hyperfocus), along with various other characteristics such as difficulty adhering to social norms, poor planning and organizing ability, distorted sense of time, impatience, attraction to variety or novelty or excitement, and impulsiveness. It is argued that in the hunter-gatherer cultures that preceded farming societies, hunters needed hyperfocus more than gatherers.

Hypothesis claims

The hunter versus farmer hypothesis proposes that the high frequency of ADHD in contemporary settings "represents otherwise normal behavioral strategies that become maladaptive in such evolutionarily novel environments as the formal school classroom." One example, involving migration in hunter-gatherer societies, is that individuals naturally predisposed to varying expressions of this same gene may have been of value in certain kinds or qualities of social groups. It was also stated that the lack of "hyperfocus" should not be the only dichotomy of "farmers versus hunter-gatherers" that was identified in Hartmann's theory. Hartmann claims that most or all humans were nomadic hunter-gatherers for hundreds of th
https://en.wikipedia.org/wiki/Union%20Mini%C3%A8re%20du%20Haut-Katanga
The Union Minière du Haut-Katanga (French; literally "Mining Union of Upper-Katanga") was a Belgian mining company (with minority British share) which controlled and operated the mining industry in the copperbelt region in the modern-day Democratic Republic of the Congo between 1906 and 1966. Created in 1906, the UMHK was founded as a joint venture of the Belgian Compagnie du Katanga, the Belgian Comité Spécial du Katanga and the British Tanganyika Concessions. The Compagnie du Katanga was a subsidiary of the Compagnie du Congo pour le Commerce et l'Industrie (CCCI), which was controlled by the country's largest conglomerate, the Société Générale de Belgique. With the support of the colonial state, the company was allocated a concession in Katanga. Its primary product was copper, but it also produced tin, cobalt, radium, uranium, zinc, cadmium, germanium, manganese, silver, and gold. UMHK was part of a powerful group of global copper producers. By the start of World War II, the Société Générale controlled 70% of the Congolese economy. Exercising preponderant influence over the Comité spécial, the Société Générale effectively controlled the Union Minière from its inception to 1960. In 1967, the Union Minière du Haut-Katanga reorganized as Union Minière, and in 2001 it became Umicore. Company history Copper Cheap copper has no terrors for the great Mid-African mines of the Union Minière du Haut Katanga, world's biggest producer... Elements in Katanga's strength are: tremendously rich ores; cheap native labor; big production of cobalt and radium (over 82%, of world radium supply) on the side; and, most recent, the newly opened Benguela Railway, which connects Katanga with the Atlantic, saves hundreds of rail miles, thousands of nautical miles for Katanga copper on its long journey to European markets. Copper's Travail, 10 August 1931, Time During its years of operation, the UMHK greatly contributed to the wealth of Belgium, and, to a lesser extent, Katanga—whi
https://en.wikipedia.org/wiki/Vibration%20theory%20of%20olfaction
The vibration theory of smell proposes that a molecule's smell character is due to its vibrational frequency in the infrared range. This controversial theory is an alternative to the more widely accepted docking theory of olfaction (formerly termed the shape theory of olfaction), which proposes that a molecule's smell character is due to a range of weak non-covalent interactions between the molecule and its protein odorant receptor (found in the nasal epithelium), such as electrostatic and Van der Waals interactions as well as H-bonding, dipole attraction, pi-stacking, metal-ion and cation–pi interactions, and hydrophobic effects, in addition to the molecule's conformation.

Introduction

The current vibration theory has recently been called the "swipe card" model, in contrast with "lock and key" models based on shape theory. As proposed by Luca Turin, the odorant molecule must first fit in the receptor's binding site. Then it must have a vibrational energy mode compatible with the difference in energies between two energy levels on the receptor, so electrons can travel through the molecule via inelastic electron tunneling, triggering the signal transduction pathway. The vibration theory is discussed in a popular but controversial book by Chandler Burr. The odor character is encoded in the ratio of activities of receptors tuned to different vibration frequencies, in the same way that color is encoded in the ratio of activities of cone cell receptors tuned to different frequencies of light. An important difference, though, is that the odorant has to be able to become resident in the receptor for a response to be generated. The time an odorant resides in a receptor depends on how strongly it binds, which in turn determines the strength of the response; the odor intensity is thus governed by a similar mechanism to the "lock and key" model. For a pure vibrational theory, the differing odors of enantiomers, which possess identical vibrations, cannot be explained. However, once the link betwe
https://en.wikipedia.org/wiki/Meta-process%20modeling
Meta-process modeling is a type of metamodeling used in software engineering and systems engineering for the analysis and construction of models applicable and useful to some predefined problems. Meta-process modeling supports the effort of creating flexible process models. The purpose of process models is to document and communicate processes and to enhance the reuse of processes. Thus, processes can be better taught and executed. Results of using meta-process models are an increased productivity of process engineers and an improved quality of the models they produce.

Overview

Meta-process modeling focuses on and supports the process of constructing process models. Its main concern is to improve process models and to make them evolve, which, in turn, will support the development of systems. This is important because "processes change with time and so do the process models underlying them. Thus, new processes and models may have to be built and existing ones improved". "The focus has been to increase the level of formality of process models in order to make possible their enactment in process-centred software environments".

A process meta-model is a metamodel, "a description at the type level of a process model. A process model is, thus, an instantiation of a process meta-model. [..] A meta-model can be instantiated several times in order to define various process models. A process meta-model is at the meta-type level with respect to a process."

There exist standards for several domains:

Software engineering: the Software Process Engineering Metamodel (SPEM), which is defined as a UML profile by the Object Management Group.

Topics in metadata modeling

There are different techniques for constructing process models. "Construction techniques used in the information systems area have developed independently of those in software engineering. In information systems, construction techniques exploit the notion of a meta-model and the two principal techniqu
https://en.wikipedia.org/wiki/Process%20modeling
The term process model is used in various contexts. For example, in business process modeling the enterprise process model is often referred to as the business process model. Overview Process models are processes of the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus, has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done in contrast to the process itself which is really what happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development. The goals of a process model are to be: Descriptive Track what actually happens during a process Take the point of view of an external observer who looks at the way a process has been performed and determines the improvements that must be made to make it perform more effectively or efficiently. Prescriptive Define the desired processes and how they should/could/might be performed. Establish rules, guidelines, and behavior patterns which, if followed, would lead to the desired process performance. They can range from strict enforcement to flexible guidance. Explanatory Provide explanations about the rationale of processes. Explore and evaluate the several possible courses of action based on rational arguments. Establish an explicit link between processes and the requirements that the model needs to fulfill. Pre-defines points at which data can be extracted for reporting purposes. Purpose From a theoretical point of view, the meta-process modeling explains the key concepts needed to describe what happens in the development process, on what, when it happens, and why. From an operational point of view, the meta-process mode
https://en.wikipedia.org/wiki/Metamodeling
A metamodel is a model of a model, and metamodeling is the process of generating such metamodels. Thus metamodeling or meta-modeling is the analysis, construction and development of the frames, rules, constraints, models and theories applicable and useful for modeling a predefined class of problems. As its name implies, this concept applies the notions of meta- and modeling in software engineering and systems engineering. Metamodels are of many types and have diverse applications. Overview A metamodel (or surrogate model) is a model of the model, i.e. a simplified model of an actual model of a circuit, system, or software-like entity. A metamodel can be a mathematical relation or algorithm representing input and output relations. A model is an abstraction of phenomena in the real world; a metamodel is yet another abstraction, highlighting properties of the model itself. A model conforms to its metamodel in the way that a computer program conforms to the grammar of the programming language in which it is written. Various types of metamodels include polynomial equations, neural networks, Kriging, etc. "Metamodeling" is the construction of a collection of "concepts" (things, terms, etc.) within a certain domain. Metamodeling typically involves studying the output and input relationships and then fitting the right metamodels to represent that behavior. Common uses for metamodels are: As a schema for semantic data that needs to be exchanged or stored As a language that supports a particular method or process As a language to express additional semantics of existing information As a mechanism to create tools that work with a broad class of models at run time As a schema for modeling and automatically exploring sentences of a language with applications to automated test synthesis As an approximation of a higher-fidelity model for use when reducing time, cost, or computational effort is necessary Because of the "meta" character of metamodeling, both the praxis and theory
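To make the surrogate-model idea above concrete, here is a minimal Python sketch: a costly input–output relation is sampled a few times and a cheap polynomial metamodel is fitted to stand in for it. This is purely illustrative; expensive_model below is a hypothetical stand-in for a real simulator, and the degree-5 polynomial is an arbitrary choice.

import numpy as np

def expensive_model(x: np.ndarray) -> np.ndarray:
    """Pretend circuit/system response that is costly to evaluate."""
    return np.sin(2 * x) + 0.3 * x**2

x_train = np.linspace(0.0, 3.0, 12)           # a handful of "expensive" runs
y_train = expensive_model(x_train)

coeffs = np.polyfit(x_train, y_train, deg=5)  # fit the polynomial metamodel
surrogate = np.poly1d(coeffs)                 # cheap input -> output relation

x_new = np.array([0.7, 1.9, 2.5])
print(surrogate(x_new))                       # fast approximate predictions
print(expensive_model(x_new))                 # reference values for comparison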
https://en.wikipedia.org/wiki/Abstract%20index%20notation
Abstract index notation (also referred to as slot-naming index notation) is a mathematical notation for tensors and spinors that uses indices to indicate their types, rather than their components in a particular basis. The indices are mere placeholders, not related to any basis and, in particular, are non-numerical. Thus it should not be confused with the Ricci calculus. The notation was introduced by Roger Penrose as a way to use the formal aspects of the Einstein summation convention to compensate for the difficulty in describing contractions and covariant differentiation in modern abstract tensor notation, while preserving the explicit covariance of the expressions involved. Let $V$ be a vector space, and $V^*$ its dual space. Consider, for example, an order-2 covariant tensor $h \in V^* \otimes V^*$. Then $h$ can be identified with a bilinear form on $V$. In other words, it is a function of two arguments in $V$ which can be represented as a pair of slots: $h = h(-, -)$. Abstract index notation is merely a labelling of the slots with Latin letters, which have no significance apart from their designation as labels of the slots (i.e., they are non-numerical): $h = h_{ab}$. A tensor contraction (or trace) between two tensors is represented by the repetition of an index label, where one label is contravariant (an upper index corresponding to the factor $V$) and one label is covariant (a lower index corresponding to the factor $V^*$). Thus, for instance, $t_{ab}{}^{b}$ is the trace of a tensor $t = t_{ab}{}^{c}$ over its last two slots. This manner of representing tensor contractions by repeated indices is formally similar to the Einstein summation convention. However, as the indices are non-numerical, it does not imply summation: rather it corresponds to the abstract basis-independent trace operation (or natural pairing) between tensor factors of type $V$ and those of type $V^*$. Abstract indices and tensor spaces A general homogeneous tensor is an element of a tensor product of copies of $V$ and $V^*$, such as $V \otimes V^* \otimes V^* \otimes V \otimes V^*$. Label each factor in this tensor product with a Latin letter
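As a small worked restatement of the contraction rule just described (using the tensor names $h$ and $t$ from the text; the empty-brace idiom ${}$ is the usual LaTeX device for keeping upper and lower indices in distinct slots):

\[ h = h_{ab} \in V^* \otimes V^*, \qquad
   t = t_{ab}{}^{c}, \qquad
   \operatorname{tr}(t) = t_{ab}{}^{b} \]

Here the repeated label $b$ pairs the covariant second slot with the contravariant third slot; no summation over components is implied, only the basis-independent trace.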
https://en.wikipedia.org/wiki/Visual%20radio
Visual radio is a generic term for adding visuals to audio radio broadcasts. Visual Radio is also a trademark for a Nokia product which delivers interactive FM radio over a data connection. Visual Radio Visual Radio is a technology developed by Nokia. Visual Radio is built-in functionality available in an increasing number of phones that are already equipped with analog FM radio. Workings The audio is received via a regular analog FM radio embedded in the phone. A presentation of graphics and text, synchronized to the audio programming, is streamed to the phone over a data connection, and the FM transmission chain is unaffected by the addition of Visual Radio. Limitations On phones with built-in Wi-Fi (tested on Nokia E51, E63, E66, E71, N78, N79, N81, N82, and N95 8GB), the Nokia application does not allow a Wi-Fi access point to be used for the data connection; only GPRS access points are allowed. This allows the possibility of revenue sharing between Nokia, the radio stations and GPRS network operators. Platform components The platform is composed of three parts: A Visual Radio Tool that can be integrated with the radio station's legacy play-out system, so the interactive visual channel created by the radio station's content producer is synchronized with the audio programming; A Visual Radio server that handles the two-way traffic between the audience and radio stations; A Visual Radio client application on the mobile phone, that displays the interactive visual channel and takes care of user interaction. The Visual Radio concept was created by Nokia and the platform was originally offered to radio stations and operators globally by HP. Since October 2007, Nokia has been collaborating with RCS Inc., of New York, whose Selector music scheduling system is used by thousands of radio stations around the world. RCS produces the second-generation version of the Visual Radio platform and also markets a similar product for the Internet (and most other digita
https://en.wikipedia.org/wiki/Head%20%28watercraft%29
The head (pl. heads) is a ship's toilet. The name derives from sailing ships in which the toilet area for the regular sailors was placed at the head or bow of the ship. Design In sailing ships, the toilet was placed in the bow somewhat above the water line with vents or slots cut near the floor level allowing normal wave action to wash out the facility. Only the captain had a private toilet near his quarters, at the stern of the ship in the quarter gallery. The plans of 18th-century naval ships do not reveal the construction of toilet facilities when the ships were first built. The Journal of Aaron Thomas aboard HMS Lapwing in the Caribbean Sea in the 1790s records that a canvas tube was attached, presumably by the ship's sailmaker, to a superstructure beside the bowsprit near the figurehead, ending just above the normal waterline. In many modern boats, the heads look similar to seated flush toilets but use a system of valves and pumps that brings sea water into the toilet and pumps the waste out through the hull (in place of the more normal cistern and plumbing trap) to a drain. In small boats the pump is often hand operated. The cleaning mechanism is easily blocked if too much toilet paper or other fibrous material is put down the pan. Submarine heads face the problem that at greater depths higher water pressure makes it harder to pump the waste out through the hull. As a result, early systems could be complicated, with the head fitted to the United States Navy S-class submarine being described as almost taking an engineer to operate. Making a mistake resulted in waste or seawater being forcibly expelled back into the hull of the submarine. This caused the loss of . The toilet on the World War I British E-class submarine was considered so poor by the captain of that he preferred the crew to wait to relieve themselves until the submarine surfaced at night. As a result, many submarines only used the heads as an extra storage space for provisions. Aboard sail
https://en.wikipedia.org/wiki/NIS%2B
NIS+ is a directory service developed by Sun Microsystems to replace its older 'NIS' (Network Information Service). It is designed to eliminate the need for duplication across many computers of configuration data such as user accounts, host names and addresses, printer information and NFS disk mounts on individual systems, instead using a central repository on a master server, simplifying system administration. NIS+ client software has been ported to other Unix and Unix-like platforms. Prior to the release of Solaris 9 in 2002, Sun announced its intent to remove NIS+ from Solaris in a future release and now recommends that customers instead use an LDAP-based lookup scheme. NIS+ was present in Solaris 9 and 10 (although both releases include tools to migrate NIS+ data to an LDAP server) and it has been removed from Solaris 11. NIS vs. NIS+ NIS and NIS+ are similar only in purpose and name; otherwise, they are completely different implementations. They differ in the following ways: NIS+ is hierarchical. NIS+ is based around Secure RPC (servers must authenticate clients and vice versa). NIS+ may be replicated (replicas are read-only). NIS+ implements permissions on directories, tables, columns and rows. NIS+ also implements permissions on operations, such as being able to use to transfer changed data from a master to a replica. The problem of managing network information In the 1970s, when computers were expensive and networks consisted of a small number of nodes, administering network information was manageable, and a centralized system was not needed. As computers became cheaper and networks grew larger, it became increasingly difficult to maintain separate copies of network configurations on individual systems. For example, when a new user was added to the network, the following files would need to be updated on every existing system: Likewise, would have needed updating every time a new group was added and would have needed updating every time
https://en.wikipedia.org/wiki/Manifold%20vacuum
Manifold vacuum, or engine vacuum, in an internal combustion engine is the difference in air pressure between the engine's intake manifold and Earth's atmosphere. Manifold vacuum is an effect of a piston's movement on the induction stroke and the choked flow through a throttle in the intake manifold of an engine. It is a measure of the amount of restriction of airflow through the engine, and hence of the unused power capacity in the engine. In some engines, the manifold vacuum is also used as an auxiliary power source to drive engine accessories and for the crankcase ventilation system. Manifold vacuum should not be confused with Venturi vacuum, which is an effect exploited in carburetors to establish a pressure difference roughly proportional to mass airflow and to maintain a somewhat constant air/fuel ratio. It is also used in light airplanes to provide airflow for pneumatic gyroscopic instruments. Overview The rate of airflow through an internal combustion engine is an important factor determining the amount of power the engine generates. Most gasoline engines are controlled by limiting that flow with a throttle that restricts intake airflow, while a diesel engine is controlled by the amount of fuel supplied to the cylinder, and so has no "throttle" as such. Manifold vacuum is present in all naturally aspirated engines that use throttles (including carbureted and fuel injected gasoline engines using the Otto cycle or the two-stroke cycle; diesel engines do not have throttle plates). The mass flow through the engine is the product of the rotation rate of the engine, the displacement of the engine, and the density of the intake stream in the intake manifold. In most applications the rotation rate is set by the application (engine speed in a vehicle or machinery speed in other applications). The displacement is dependent on the engine geometry, which is generally not adjustable while the engine is in use (although a handful of models do have this feature, see
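A rough back-of-the-envelope sketch of the mass-flow relation just described (flow as the product of rotation rate, displacement, and manifold air density). The figures below (a 2.0 L four-stroke engine, 0.7 kg/m^3 part-throttle density) are illustrative assumptions, not values from the article, and volumetric efficiency is ignored for simplicity.

def intake_mass_flow(rpm: float, displacement_l: float, rho_kg_m3: float) -> float:
    """Approximate intake mass flow (kg/s) for a four-stroke engine.

    A four-stroke engine draws one displacement volume per two revolutions,
    so complete intake cycles occur at rpm/2 per minute.
    """
    intake_cycles_per_s = (rpm / 2) / 60
    volume_m3 = displacement_l / 1000
    return intake_cycles_per_s * volume_m3 * rho_kg_m3

# Part throttle: high vacuum means low manifold density, hence low mass flow.
print(intake_mass_flow(rpm=2500, displacement_l=2.0, rho_kg_m3=0.7))  # ~0.029 kg/s
# Wide-open throttle: density near ambient (~1.2 kg/m^3), hence more flow.
print(intake_mass_flow(rpm=2500, displacement_l=2.0, rho_kg_m3=1.2))  # ~0.050 kg/s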
https://en.wikipedia.org/wiki/Swept-plane%20display
Swept-plane display is a structure from motion technique with which one can create the optical illusion of a volume of light, due to the persistence of vision property of human visual perception. The principle is to have a 2D lighted surface sweep in a circle, creating a volume. The image on the 2D surface changes as the surface rotates. The lighted surface needs to be translucent.
https://en.wikipedia.org/wiki/Riparian%20forest
A riparian forest or riparian woodland is a forested or wooded area of land adjacent to a body of water such as a river, stream, pond, lake, marshland, estuary, canal, sink or reservoir. Etymology The term riparian comes from the Latin word ripa, 'river bank'; technically it only refers to areas adjacent to flowing bodies of water such as rivers, streams, sloughs and estuaries. However, the terms riparian forest and riparian zone have come to include areas adjacent to non-flowing bodies of water such as ponds, lakes, playas and reservoirs. Characteristics Riparian forests are subject to frequent inundation. Riparian forests help control sediment, reduce the damaging effects of flooding and aid in stabilizing stream banks. Riparian zones are transition zones between an upland terrestrial environment and an aquatic environment. Organisms found in this zone are adapted to periodic flooding. Many not only tolerate it, but require it in order to maintain health and complete their life cycles. Threats Threats to riparian forests: Cleared for agricultural use because of the good soil quality Historically, trees used as wood fuel for steamships, steam locomotives, etc. Urban development (housing, roads, malls, etc.) Grazing Mining Disrupted hydrology, such as dams and levees, which reduces the amount and/or frequency of flooding Invasive species See also Bosque Gallery forest Management of Pacific Northwest riparian forests Riparian zone Tugay Swamp Oak Forests
https://en.wikipedia.org/wiki/Lockstep%20%28computing%29
Lockstep systems are fault-tolerant computer systems that run the same set of operations at the same time in parallel. The redundancy (duplication) allows error detection and error correction: the output from lockstep operations can be compared to determine whether there has been a fault if there are at least two systems (dual modular redundancy), and the error can be automatically corrected if there are at least three systems (triple modular redundancy), via majority vote. The term "lockstep" originates from army usage, where it refers to synchronized walking, in which marchers walk as closely together as physically practical. To run in lockstep, each system is set up to progress from one well-defined state to the next well-defined state. When a new set of inputs reaches the system, it processes them, generates new outputs and updates its state. This set of changes (new inputs, new outputs, new state) is considered to define that step, and must be treated as an atomic transaction; in other words, either all of it happens, or none of it happens, but not something in between. Sometimes a timeshift (delay) is set between systems, which increases the detection probability of errors induced by external influences (e.g. voltage spikes, ionizing radiation, or in situ reverse engineering). Lockstep memory Some vendors, including Intel, use the term lockstep memory to describe a multi-channel memory layout in which cache lines are distributed between two memory channels, so one half of the cache line is stored in a DIMM on the first channel, while the second half goes to a DIMM on the second channel. By combining the single error correction and double error detection (SECDED) capabilities of two ECC-enabled DIMMs in a lockstep layout, their single-device data correction (SDDC) nature can be extended into double-device data correction (DDDC), providing protection against the failure of any single memory chip. Downsides of Intel's lockstep memory layout are the reduction
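A minimal sketch of the detect/correct distinction described above (illustrative only, not a real fault-tolerant runtime): the same step function runs on replicated state, outputs are compared, and with three replicas a majority vote corrects a disagreeing one.

from collections import Counter

def step(state: int, inp: int) -> tuple[int, int]:
    """One well-defined state transition: returns (new_state, output)."""
    new_state = (state + inp) & 0xFFFFFFFF
    return new_state, new_state ^ inp

def lockstep(states: list[int], inp: int) -> tuple[list[int], int]:
    results = [step(s, inp) for s in states]
    outputs = [out for _, out in results]
    if len(set(outputs)) == 1:
        return [s for s, _ in results], outputs[0]       # all replicas agree
    if len(states) >= 3:                                 # TMR: majority vote
        majority, votes = Counter(outputs).most_common(1)[0]
        if votes > len(states) // 2:
            # Re-synchronize any faulty replica to the majority state.
            good = next(s for s, o in results if o == majority)
            return [good] * len(states), majority
    raise RuntimeError("fault detected, no majority")    # DMR can only detect

states = [0, 0, 0]                  # triple modular redundancy
states, out = lockstep(states, 7)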
https://en.wikipedia.org/wiki/Wollaston%20prism
A Wollaston prism is an optical device, invented by William Hyde Wollaston, that manipulates polarized light. It separates light into two separate linearly polarized outgoing beams with orthogonal polarization. The two beams will be polarized according to the optical axis of the two right angle prisms. The Wollaston prism consists of two orthogonal prisms of birefringent material—typically a uniaxial material such as calcite. These prisms are cemented together on their base (traditionally with Canada balsam) to form two right triangle prisms with perpendicular optic axes. Outgoing light beams diverge from the prism as ordinary and extraordinary rays due to the differences in the indexes of refraction, with the angle of divergence determined by the prisms' wedge angle and the wavelength of the light. Commercial prisms are available with divergence angles from less than 1° to about 45°. See also Other types of polarizing prisms
https://en.wikipedia.org/wiki/MIC-1
The MIC-1 is a processor architecture invented by Andrew S. Tanenbaum to use as a simple but complete example in his teaching book Structured Computer Organization. It consists of a very simple control unit that runs microcode from a 512-word store. The Micro-Assembly Language (MAL) is engineered to allow simple writing of an IJVM interpreter, and the source code for such an interpreter can be found in the book. Hardware Data path The data path is the core of the MIC-1. It contains 32-bit registers, buses, an ALU and a shifter. Buses There are 2 main buses of 32 lines (or 32 bits) each: B bus: connected to the output of the registers and to the input of the ALU. C bus: connected to the output of the shifter and to the input of the registers. Registers Registers are selected by 2 control lines: one to enable the B bus and the other to enable the C bus. The B bus can be enabled by just one register at a time, since the transfer of data from 2 registers at the same time would make this data inconsistent. In contrast, the C bus can be enabled by more than 1 register at the same time; as a matter of fact, the current value present in the C bus can be written to more than 1 register without problems. The reading and writing operations are carried out in 1 clock cycle. The MBR register is a read-only register, and it contains 2 control lines. Since it is an 8-bit register, its output is connected to the least significant 8 bits of the B bus. It can be set to provide its output in 2 ways: 2's complement (MBR): all the remaining 24 bits of the B bus are set to 1 if the value is negative, or set to 0 if it is positive (sign extension). Without complement (MBRU): the remaining 24 bits (of 32 total) are set to 0. ALU The ALU (or arithmetic logic unit) has the following input, output and control lines: 2 32-bit input lines: one for the B bus and one for the bus that is connected directly to the H register. 1 32-bit output line, whi
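A small sketch (not from the book) of the two MBR read-out modes just described: an 8-bit memory byte is placed on the 32-bit B bus either sign-extended (MBR) or zero-extended (MBRU).

def mbr(byte: int) -> int:
    """Sign-extend: copy bit 7 into the upper 24 bits (two's complement)."""
    byte &= 0xFF
    return byte | 0xFFFFFF00 if byte & 0x80 else byte

def mbru(byte: int) -> int:
    """Zero-extend: the upper 24 bits are forced to 0."""
    return byte & 0xFF

assert mbr(0x7F) == 0x0000007F    # +127 stays +127
assert mbr(0x80) == 0xFFFFFF80    # -128 as a 32-bit two's complement pattern
assert mbru(0x80) == 0x00000080   # unsigned 128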
https://en.wikipedia.org/wiki/Sedentary%20lifestyle
Sedentary lifestyle is a lifestyle type in which one is physically inactive, doing little or no physical movement or exercise. A person living a sedentary lifestyle is often sitting or lying down while engaged in an activity like socializing, watching TV, playing video games, reading or using a mobile phone or computer for much of the day. A sedentary lifestyle contributes to poor health quality, disease, and many preventable causes of death. Sitting time is a common measure of a sedentary lifestyle. A global review representing 47% of the global adult population found that the average person sits down for 4.7 to 6.5 hours a day with the average going up every year. The CDC found that 25.3% of all American adults are physically inactive. Screen time is a term for the amount of time a person spends looking at a screen such as a television, computer monitor, or mobile device. Excessive screen time is linked to negative health consequences. Definition Sedentary behavior is not the same as physical inactivity: sedentary behavior is defined as "any waking behavior characterized by an energy expenditure less than or equal to 1.5 metabolic equivalents (METs), while in a sitting, reclining or lying posture". Spending most waking hours sitting does not necessarily mean that an individual is sedentary, though sitting and lying down most frequently are sedentary behaviors. Esmonde-White defines a sedentary lifestyle as a lifestyle that involves "longer than six hours a day" of sedentary behavior. Health effects Effects of a sedentary work life or lifestyle can be either direct or indirect. One of the most prominent direct effects of a sedentary lifestyle is an increased BMI leading to obesity. A lack of physical activity is one of the leading causes of preventable death worldwide. At least 300,000 premature deaths and $90 billion in direct healthcare costs are caused by obesity and sedentary lifestyle per year in the US alone. The risk is higher among those
https://en.wikipedia.org/wiki/Campus%20network
A campus network, campus area network, corporate area network or CAN is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling etc.) are almost entirely owned by the campus tenant/owner: an enterprise, university, government etc. A campus area network is larger than a local area network but smaller than a metropolitan area network (MAN) or wide area network (WAN). University campuses College or university campus area networks often interconnect a variety of buildings, including administrative buildings, academic buildings, university libraries, campus or student centers, residence halls, gymnasiums, and other outlying structures, like conference centers, technology centers, and training institutes. Early examples include the Stanford University Network at Stanford University, Project Athena at MIT, and the Andrew Project at Carnegie Mellon University. Corporate campuses Much like a university campus network, a corporate campus network serves to connect buildings. Examples of such are the networks at Googleplex and Microsoft's campus. Campus networks are normally interconnected with high speed Ethernet links operating over optical fiber such as gigabit Ethernet and 10 Gigabit Ethernet. Area range The range of a CAN is 1 km to 5 km. If two buildings share the same domain and are connected by a network, that network is considered a CAN. Since a CAN mainly serves corporate campuses, its data links are typically high speed.
https://en.wikipedia.org/wiki/Traffic%20wave
Traffic waves, also called stop waves, ghost jams, traffic snakes or traffic shocks, are traveling disturbances in the distribution of cars on a highway. Traffic waves travel backwards relative to the cars themselves. Relative to a fixed spot on the road the wave can move with, or against the traffic, or even be stationary (when the wave moves away from the traffic with exactly the same speed as the traffic). Traffic waves are a type of traffic jam. A deeper understanding of traffic waves is a goal of the physical study of traffic flow, in which traffic itself can often be seen using techniques similar to those used in fluid dynamics. It is related to the accordion effect. Mitigation It has been said that by knowing how traffic waves are created, drivers can sometimes reduce their effects by increasing vehicle headways and reducing the use of brakes, ultimately alleviating traffic congestion for everyone in the area. In other models, however, increasing headway diminishes the capacity of the travel lanes and thus increases congestion; this claim is in turn disputed by the observation that similar principles apply to herding sheep through gates, where, via human intervention, solitons are diminished simply by slapping "stuck sheep" and holding back aggressive sheep. In funnelling sheep through gates it can be determined how much intervention is needed to curb bottlenecks. Similar principles can be applied to human traffic streams: if each individual had knowledge of the final destination and complete route planning, then traversal along a route would proceed with the full knowledge that any abrupt change from any itinerary causes delays for those about to traverse the same route. History The earliest theoretical model of traffic shock waves was offered by Lighthill and Whitham in 1955. The following year Paul Richards independently published a similar model. Both papers were based on fluid dynamics and the model is known as the Lighthill-Whith
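An illustrative toy simulation (not from the article) of a backward-moving wave: cars on a ring road follow a simple headway-based speed rule, and one car braking briefly creates a disturbance that drifts upstream through the pack. All parameters are arbitrary demonstration values.

N, ROAD = 30, 600.0                      # number of cars, ring length (m)
pos = [i * ROAD / N for i in range(N)]   # evenly spaced starting positions
vel = [15.0] * N                         # initial speeds (m/s)
DT, V_MAX, T_GAP = 0.5, 15.0, 1.5        # time step (s), free speed, time gap

for step in range(400):
    headway = [(pos[(i + 1) % N] - pos[i]) % ROAD for i in range(N)]
    for i in range(N):
        # Target speed grows with headway (5 m allows for car length).
        target = min(V_MAX, max(0.0, (headway[i] - 5.0) / T_GAP))
        if step == 20 and i == 0:
            target = 0.0                 # a single momentary braking event
        vel[i] += 0.5 * (target - vel[i]) * DT   # relax toward target speed
        pos[i] = (pos[i] + vel[i] * DT) % ROAD
    if step % 100 == 0:
        slowest = min(range(N), key=lambda i: vel[i])
        print(f"t={step*DT:5.1f}s  slowest car #{slowest}  v={vel[slowest]:4.1f} m/s")

Watching the index of the slowest car drift to ever-smaller (i.e., upstream) positions over time is the discrete analogue of the wave travelling backwards relative to the cars.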
https://en.wikipedia.org/wiki/Aeroplankton
Aeroplankton (or aerial plankton) are tiny lifeforms that float and drift in the air, carried by wind. Most of the living things that make up aeroplankton are very small to microscopic in size, and many can be difficult to identify because of their tiny size. Scientists collect them for study in traps and sweep nets from aircraft, kites or balloons. The study of the dispersion of these particles is called aerobiology. Aeroplankton is made up mostly of microorganisms, including viruses, about 1,000 different species of bacteria, around 40,000 varieties of fungi, and hundreds of species of protists, algae, mosses, and liverworts that live some part of their life cycle as aeroplankton, often as spores, pollen, and wind-scattered seeds. Additionally, microorganisms are swept into the air from terrestrial dust storms, and an even larger amount of airborne marine microorganisms are propelled high into the atmosphere in sea spray. Aeroplankton deposits hundreds of millions of airborne viruses and tens of millions of bacteria every day on every square meter around the planet. Small, drifting aeroplankton are found everywhere in the atmosphere, reaching concentrations of up to 10⁶ microbial cells per cubic metre. Processes such as aerosolisation and wind transport determine how the microorganisms are distributed in the atmosphere. Air mass circulation globally disperses vast numbers of the floating aerial organisms, which travel across and between continents, creating biogeographic patterns by surviving and settling in remote environments. As well as the colonization of pristine environments, the globetrotting behaviour of these organisms has human health consequences. Airborne microorganisms are also involved in cloud formation and precipitation, and play important roles in the formation of the phyllosphere, a vast terrestrial habitat involved in nutrient cycling. Overview The atmosphere is the least understood biome on Earth despite its critical role as a microbial transpo
https://en.wikipedia.org/wiki/Field%20%28video%29
In video, a field is one of the many still images displayed sequentially to create the impression of motion on the screen. Two fields comprise one video frame. When the fields are displayed on a video monitor they are "interlaced" so that the content of one field will be used on all of the odd-numbered lines on the screen, and the other field will be displayed on the even lines. Converting fields to a still frame image requires a process called deinterlacing, in which the missing lines are duplicated or interpolated to recreate the information that would have been contained in the discarded field. Since each field contains only half of the information of a full frame, however, deinterlaced images do not have the resolution of a full frame. To increase the resolution of video images, new schemes have been created that capture full-frame images for each frame. Video composed of such frames is called progressive scan video. Video shot with a standard video camera format such as S-VHS or Mini-DV is often interlaced when created. In contrast, video shot with a film-based camera is almost always progressive. Free-to-air analog TV was mostly broadcast as interlaced material because the trade-off of spatial resolution for frame rate reduced flickering on Cathode ray tube (CRT) televisions. High-definition digital television (see: HDTV) today can be broadcast terrestrially or distributed through cable systems in either interlaced (1080i) or progressive scan formats (720p or 1080p). Most prosumer camcorders can record in progressive scan formats. In video editing, it is important to know which of the two (odd or even) fields is "dominant". Selecting edit points on the wrong field can result in a "flash" at each edit point, and playing the video fields in reverse order creates a flickering image. See also Federal Standard 1037C: defines the field in interlaced video. Color framing External links All About Video Fields: technical information with emphasis on the programming implica
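A minimal NumPy sketch of the deinterlacing idea described above: a frame is split into its two fields, then a full-height frame is rebuilt from one field by interpolating the missing lines (so-called "bob" deinterlacing). Illustrative only; real deinterlacers are considerably more sophisticated.

import numpy as np

def split_fields(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (even-line field, odd-line field) of a grayscale frame."""
    return frame[0::2], frame[1::2]

def bob_deinterlace(field: np.ndarray) -> np.ndarray:
    """Rebuild a full frame from one field by interpolating missing lines."""
    h, w = field.shape
    out = np.empty((h * 2, w), dtype=field.dtype)
    out[0::2] = field                           # keep the lines we have
    out[1:-1:2] = (field[:-1] + field[1:]) / 2  # interpolate the missing lines
    out[-1] = field[-1]                         # duplicate the final line
    return out

frame = np.random.rand(480, 640)
even, odd = split_fields(frame)                 # two 240-line fields
rebuilt = bob_deinterlace(even)                 # 480 lines, half the detail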
https://en.wikipedia.org/wiki/Miniature%20book
A miniature book is a very small book. Standards for what may be termed a miniature rather than just a small book have changed through time. Today, most collectors consider a book to be miniature only if it is 3 inches or smaller in height, width, and thickness, particularly in the United States. Many collectors consider nineteenth-century and earlier books of 4 inches to fit in the category of miniatures. Books from 3–4 inches in all dimensions are termed macrominiature books. Books less than 1 inch in all dimensions are called microminiature books. Books less than 1/4 inch in all dimensions are known as ultra-microminiature books. History Miniature books stretch back far in history; many collections contain cuneiform tablets stretching back thousands of years, and exquisite medieval Books of Hours. Printers began testing the limits of size not long after the technology of printing began, and around 200 miniature books were printed in the sixteenth century. Exquisite specimens from the 17th century abound. In the 19th century, technological innovations in printing enabled the creation of smaller and smaller type. Fine and popular editions alike grew in number throughout the 19th century in what was considered the golden age for miniature books. While some miniature books are objects of high craft, bound in fine Moroccan leather, with gilt decoration and excellent examples of woodcuts, etchings, and watermarks, others are cheap, disposable, sometimes highly functional items not expected to survive. Today, miniature books are produced both as fine works of craft and as commercial products found in chain bookstores. Miniature books were produced for personal convenience. Miniature books could easily be carried in the pocket of a waistcoat or a woman's reticule. Victorian women used miniature etiquette books to subtly ascertain information on polite behavior in society. Along with etiquette books, Victorian women who had copies of The Little Flirt learned to
https://en.wikipedia.org/wiki/Toxicogenomics
Toxicogenomics is a subdiscipline of pharmacology that deals with the collection, interpretation, and storage of information about gene and protein activity within a particular cell or tissue of an organism in response to exposure to toxic substances. Toxicogenomics combines toxicology with genomics or other high-throughput molecular profiling technologies such as transcriptomics, proteomics and metabolomics. Toxicogenomics endeavors to elucidate the molecular mechanisms involved in the expression of toxicity, and to derive molecular expression patterns (i.e., molecular biomarkers) that predict toxicity or the genetic susceptibility to it. Pharmaceutical research In pharmaceutical research, toxicogenomics is defined as the study of the structure and function of the genome as it responds to adverse xenobiotic exposure. It is the toxicological subdiscipline of pharmacogenomics, which is broadly defined as the study of inter-individual variations in whole-genome or candidate gene single-nucleotide polymorphism maps, haplotype markers, and alterations in gene expression that might correlate with drug responses. Though the term toxicogenomics first appeared in the literature in 1999, it was by that time already in common use within the pharmaceutical industry as its origin was driven by marketing strategies from vendor companies. The term is still not universally accepted, and others have offered alternative terms such as chemogenomics to describe essentially the same field of study. Bioinformatics The nature and complexity of the data (in volume and variability) demands highly developed processes of automated handling and storage. The analysis usually involves a wide array of bioinformatics and statistics, often including statistical classification approaches. Drug discovery In pharmaceutical drug discovery and development, toxicogenomics is used to study possible adverse (i.e. toxic) effects of pharmaceutical drugs in defined model systems in order to draw conclusion
https://en.wikipedia.org/wiki/Cyclic%20alternating%20pattern
The cyclic alternating pattern (abbreviated CAP) is a pattern of two long-lasting, alternating electroencephalogram (EEG) patterns that occur in sleep, as described by Terzano, et al., in 1985. It is a pattern of spontaneous cortical activity, which is ongoing even in the absence of sensory stimulation. It is the reorganization of the sleeping brain challenged by the modification of environmental conditions and it is characterized by periodic abnormal electrocortical activity that recurs at intervals of up to one minute. It is considered "the EEG marker of unstable sleep". CAP does not occur during REM. In Lennox-Gastaut syndrome, CAP modulates the occurrence of clinical seizures and generalized epileptic discharges by means of a gate-control mechanism. CAP is a marker of sleep instability and it is found during non-rapid eye movement sleep. CAP is organized into sequences of successive cycles composed of two phases, A and B. Phase A involves phasic (that is, discontinuous) events. Phase A subtypes of CAP allow adaptive adjustments of ongoing states to internal and external inputs. Phase B refers to background rhythm during CAP. Furthermore, CAP involves cerebral activities and is influenced by autonomic and motor functions. Interaction between CAP and neurovegetative fluctuations and motor events determines the pathophysiology of several sleep disorders and the effect of medication on continuous positive airway pressure (CPAP) treatment (CPAP is used to treat obstructive sleep apnea or OSA). CAP is a marker of NREM instability and is also the "master clock" that accompanies the stage transitions maintained in sleep phases, noted in both the EEG and by autonomic functions through regular fluctuations. CAP is decreased in narcolepsy, multiple system atrophy, in certain cases of drug administration, with CPAP treatment for OSA, and during night-time recovery sleep after prolonged sleep deprivation. There is a relationship present between CAP and arousals that
https://en.wikipedia.org/wiki/Seed%20dispersal
In spermatophyte plants, seed dispersal is the movement, spread or transport of seeds away from the parent plant. Plants have limited mobility and rely upon a variety of dispersal vectors to transport their seeds, including both abiotic vectors, such as the wind, and living (biotic) vectors such as birds. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time. The patterns of seed dispersal are determined in large part by the dispersal mechanism and this has important implications for the demographic and genetic structure of plant populations, as well as migration patterns and species interactions. There are five main modes of seed dispersal: gravity, wind, ballistic, water, and by animals. Some plants are serotinous and only disperse their seeds in response to an environmental stimulus. These modes are typically inferred based on adaptations, such as wings or fleshy fruit. However, this simplified view may ignore complexity in dispersal. Plants can disperse via modes without possessing the typical associated adaptations and plant traits may be multifunctional. Benefits Seed dispersal is likely to have several benefits for different plant species. Seed survival is often higher away from the parent plant. This higher survival may result from the actions of density-dependent seed and seedling predators and pathogens, which often target the high concentrations of seeds beneath adults. Competition with adult plants may also be lower when seeds are transported away from their parent. Seed dispersal also allows plants to reach specific habitats that are favorable for survival, a hypothesis known as directed dispersal. For example, Ocotea endresiana (Lauraceae) is a tree species from Latin America which is dispersed by several species of birds, including the three-wattled bellbird. Male bellbirds perch on dead trees in order to attract mates, and often defecate seeds beneath these perches where the see
https://en.wikipedia.org/wiki/Flexural%20rigidity
Flexural rigidity is defined as the force couple required to bend a fixed non-rigid structure by one unit of curvature, or as the resistance offered by a structure while undergoing bending. Flexural rigidity of a beam Although the moment $M$ and displacement $w$ generally result from external loads and may vary along the length of the beam or rod, the flexural rigidity (defined as $EI$) is a property of the beam itself and is generally constant for prismatic members. However, in cases of non-prismatic members, such as the case of the tapered beams or columns or notched stair stringers, the flexural rigidity will vary along the length of the beam as well. The flexural rigidity, moment, and transverse displacement are related by the following equation along the length of the rod, $x$: $EI \, \frac{d^2 w(x)}{dx^2} = M(x)$, where $E$ is the flexural modulus (in Pa), $I$ is the second moment of area (in m⁴), $w(x)$ is the transverse displacement of the beam at $x$, and $M(x)$ is the bending moment at $x$. The flexural rigidity (stiffness) of the beam is therefore related to both $E$, a material property, and $I$, the physical geometry of the beam. If the material exhibits isotropic behavior then the flexural modulus is equal to the modulus of elasticity (Young's modulus). Flexural rigidity has SI units of Pa·m⁴ (which also equals N·m²). Flexural rigidity of a plate (e.g. the lithosphere) In the study of geology, lithospheric flexure affects the thin lithospheric plates covering the surface of the Earth when a load or force is applied to them. On a geological timescale, the lithosphere behaves elastically (in first approach) and can therefore bend under loading by mountain chains, volcanoes and other heavy objects. Isostatic depression caused by the weight of ice sheets during the last glacial period is an example of the effects of such loading. The flexure of the plate depends on: The plate elastic thickness (usually referred to as effective elastic thickness of the lithosphere). The elastic properties of the plate The applied load or fo
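The excerpt is cut off before any plate formula appears; for reference, the standard textbook expression for the flexural rigidity $D$ of a thin elastic plate (a well-known result, not recovered from the truncated text above) combines exactly the ingredients in that list:

\[ D = \frac{E\,h^3}{12\,(1 - \nu^2)} \]

where $E$ is Young's modulus, $h$ the (effective) elastic thickness, and $\nu$ Poisson's ratio. Note the cubic dependence on thickness: modest changes in effective elastic thickness change the plate's resistance to bending dramatically.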
https://en.wikipedia.org/wiki/Mantel%20test
The Mantel test, named after Nathan Mantel, is a statistical test of the correlation between two matrices. The matrices must be of the same dimension; in most applications, they are matrices of interrelations between the same vectors of objects. The test was first published by Nathan Mantel, a biostatistician at the National Institutes of Health, in 1967. Accounts of it can be found in advanced statistics books (e.g., Sokal & Rohlf 1995). Usage The test is commonly used in ecology, where the data are usually estimates of the "distance" between objects such as species of organisms. For example, one matrix might contain estimates of the genetic distances (i.e., the amount of difference between two different genomes) between all possible pairs of species in the study, obtained by the methods of molecular systematics; while the other might contain estimates of the geographical distance between the ranges of each species to every other species. In this case, the hypothesis being tested is whether the variation in genetics for these organisms is correlated to the variation in geographical distance. Method If there are n objects, and the matrix is symmetrical (so the distance from object a to object b is the same as the distance from b to a) such a matrix contains $n(n-1)/2$ distances. Because distances are not independent of each other – since changing the "position" of one object would change $n-1$ of these distances (the distance from that object to each of the others) – we cannot assess the relationship between the two matrices by simply evaluating the correlation coefficient between the two sets of distances and testing its statistical significance. The Mantel test deals with this problem. The procedure adopted is a kind of randomization or permutation test. The correlation between the two sets of distances is calculated, and this is both the measure of correlation reported and the test statistic on which the test is based. In principle, any correlation coefficient could be
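A minimal sketch of the permutation procedure described above (illustrative, not a validated statistical implementation): correlate the upper triangles of the two distance matrices, then build the null distribution by permuting the rows and columns of one matrix in tandem, which preserves its internal dependence structure.

import numpy as np

def mantel(a: np.ndarray, b: np.ndarray, n_perm: int = 9999, seed: int = 0):
    n = a.shape[0]
    iu = np.triu_indices(n, k=1)              # upper triangle, no diagonal
    r_obs = np.corrcoef(a[iu], b[iu])[0, 1]   # observed correlation
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(a[np.ix_(p, p)][iu], b[iu])[0, 1]
        if r >= r_obs:                        # one-sided test
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)  # permutation p-value

# Example with 10 hypothetical species: geographic vs (noisy) genetic distances.
rng = np.random.default_rng(1)
x = rng.random((10, 2))
geo = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
gen = geo + rng.normal(0, 0.1, geo.shape)
gen = (gen + gen.T) / 2
np.fill_diagonal(gen, 0)
print(mantel(geo, gen))                       # high r, small p expected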
https://en.wikipedia.org/wiki/Temperate%20forest
A temperate forest is a forest found between the tropical and boreal regions, located in the temperate zone. It is the second largest biome on our planet, covering 25% of the world's forest area, only behind the boreal forest, which covers about 33%. These forests cover both hemispheres at latitudes ranging from 25 to 50 degrees, wrapping the planet in a belt similar to that of the boreal forest. Due to its large size spanning several continents, there are several main types: deciduous, coniferous, mixed forest, and rainforest. Climate The climate of a temperate forest is highly variable depending on the location of the forest. For example, Los Angeles and Vancouver, Canada are both considered to be located in a temperate zone; however, Vancouver is located in a temperate rainforest, while Los Angeles has a relatively dry subtropical climate. Types of temperate forest Deciduous They are found in Europe, East Asia, North America, and in some parts of South America. Deciduous forests are composed mainly of broadleaf trees, such as maple and oak, that shed all their leaves during one season. They are typically found in three middle-latitude regions with temperate climates characterized by a winter season and year-round precipitation: eastern North America, western Eurasia and northeastern Asia. Coniferous Coniferous forests are composed of needle-leaved evergreen trees, such as pine or fir. Evergreen forests are typically found in regions with moderate climates. Boreal forests, however, are an exception as they are found in subarctic regions. Coniferous trees often have an advantage over broadleaf trees in harsher environments. Their leaves are typically hardier and longer lived but require more energy to grow. Mixed As the name implies, conifers and broadleaf trees grow in the same area. The main trees found in these forests in North America and Eurasia include fir, oak, ash, maple, birch, beech, poplar, elm and pine. Other plant species may include magnolia,
https://en.wikipedia.org/wiki/Electron%20backscatter%20diffraction
Electron backscatter diffraction (EBSD) is a scanning electron microscopy (SEM) technique used to study the crystallographic structure of materials. EBSD is carried out in a scanning electron microscope equipped with an EBSD detector comprising at least a phosphorescent screen, a compact lens and a low-light camera. In this configuration, the SEM incident beam hits the tilted sample. As backscattered electrons leave the sample, they interact with the crystal's periodic atomic lattice planes and diffract according to Bragg's law at various scattering angles before reaching the phosphor screen forming Kikuchi patterns (EBSPs). EBSD spatial resolution depends on many factors, including the nature of the material under study and the sample preparation. Thus, EBSPs can be indexed to provide information about the material's grain structure, grain orientation, and phase at the micro-scale. EBSD is applied to impurity and defect studies, plastic deformation, and statistical analysis for average misorientation, grain size, and crystallographic texture. EBSD can also be combined with energy-dispersive X-ray spectroscopy (EDS), cathodoluminescence (CL), and wavelength-dispersive X-ray spectroscopy (WDS) for advanced phase identification and materials discovery. The change and degradation in electron backscatter patterns (EBSPs) provide information about lattice distortion in the diffracting volume. Pattern degradation (i.e., diffuse quality) can be used to assess the level of plasticity. Changes in the EBSP zone axis position can be used to measure the residual stress and small lattice rotations. EBSD can also provide information about the density of geometrically necessary dislocations (GNDs). However, the lattice distortion is measured relative to a reference pattern (EBSP0). The choice of reference pattern affects the measurement precision; e.g., a reference pattern deformed in tension will directly reduce the tensile strain magnitude derived from a high-resolution map
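A small sketch (not from the article) of the Bragg condition the diffracting lattice planes satisfy, n·λ = 2·d·sin(θ), showing why EBSD Kikuchi bands correspond to very small Bragg angles: the electron wavelength at typical SEM voltages is only a few picometres. The 20 kV accelerating voltage and ~0.314 nm plane spacing below are illustrative values.

import math

H = 6.62607015e-34      # Planck constant (J s)
M_E = 9.1093837015e-31  # electron rest mass (kg)
E_CH = 1.602176634e-19  # elementary charge (C)
C = 2.99792458e8        # speed of light (m/s)

def electron_wavelength(kv: float) -> float:
    """Relativistically corrected de Broglie wavelength (m) at kv kilovolts."""
    ev = kv * 1e3 * E_CH
    return H / math.sqrt(2 * M_E * ev * (1 + ev / (2 * M_E * C**2)))

def bragg_angle(d_m: float, kv: float, n: int = 1) -> float:
    """First-order Bragg angle in degrees for plane spacing d_m (metres)."""
    return math.degrees(math.asin(n * electron_wavelength(kv) / (2 * d_m)))

print(electron_wavelength(20.0))           # ~8.6e-12 m at 20 kV
print(bragg_angle(d_m=3.14e-10, kv=20.0))  # ~0.8 degrees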
https://en.wikipedia.org/wiki/Hyperfunction
In mathematics, hyperfunctions are generalizations of functions, as a 'jump' from one holomorphic function to another at a boundary, and can be thought of informally as distributions of infinite order. Hyperfunctions were introduced by Mikio Sato in 1958 in Japanese, (1959, 1960 in English), building upon earlier work by Laurent Schwartz, Grothendieck and others. Formulation A hyperfunction on the real line can be conceived of as the 'difference' between one holomorphic function defined on the upper half-plane and another on the lower half-plane. That is, a hyperfunction is specified by a pair (f, g), where f is a holomorphic function on the upper half-plane and g is a holomorphic function on the lower half-plane. Informally, the hyperfunction is what the difference $f - g$ would be at the real line itself. This difference is not affected by adding the same holomorphic function to both f and g, so if h is a holomorphic function on the whole complex plane, the hyperfunctions (f, g) and (f + h, g + h) are defined to be equivalent. Definition in one dimension The motivation can be concretely implemented using ideas from sheaf cohomology. Let $\mathcal{O}$ be the sheaf of holomorphic functions on $\mathbb{C}$. Define the hyperfunctions on the real line as the first local cohomology group: $\mathcal{B}(\mathbb{R}) = H^1_{\mathbb{R}}(\mathbb{C}, \mathcal{O})$. Concretely, let $\mathbb{C}^+$ and $\mathbb{C}^-$ be the upper half-plane and lower half-plane respectively. Then $\mathbb{C} \setminus \mathbb{R} = \mathbb{C}^+ \cup \mathbb{C}^-$, so $\mathcal{B}(\mathbb{R}) = \left[ \mathcal{O}(\mathbb{C}^+) \oplus \mathcal{O}(\mathbb{C}^-) \right] / \mathcal{O}(\mathbb{C})$. Since the zeroth cohomology group of any sheaf is simply the global sections of that sheaf, we see that a hyperfunction is a pair of holomorphic functions one each on the upper and lower complex halfplane modulo entire holomorphic functions. More generally one can define $\mathcal{B}(U)$ for any open set $U \subseteq \mathbb{R}$ as the quotient $\mathcal{O}(\tilde{U} \setminus U) / \mathcal{O}(\tilde{U})$ where $\tilde{U} \subseteq \mathbb{C}$ is any open set with $\tilde{U} \cap \mathbb{R} = U$. One can show that this definition does not depend on the choice of $\tilde{U}$, giving another reason to think of hyperfunctions as "boundary values" of holomorphic functions. Examples If f is any holomorphic function on the whole complex plane, then the restriction of f to the real axis is a hyperf
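A standard worked example (not recovered from the truncated text above) is the Dirac delta, which arises as the jump of the single function $-1/(2\pi i z)$ across the real line:

\[ \delta(x) \;=\; \Bigl( -\tfrac{1}{2\pi i z},\; -\tfrac{1}{2\pi i z} \Bigr),
   \qquad\text{i.e.}\qquad
   \delta(x) \;=\; \frac{1}{2\pi i}\left( \frac{1}{x - i0} - \frac{1}{x + i0} \right) \]

consistent with the Sokhotski–Plemelj formula $1/(x \mp i0) = \mathrm{P}(1/x) \pm i\pi\delta(x)$: the same holomorphic function is used on both half-planes, and the delta is precisely the mismatch of its boundary values along $\mathbb{R}$.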
https://en.wikipedia.org/wiki/Numerical%20method
In numerical analysis, a numerical method is a mathematical tool designed to solve numerical problems. The implementation of a numerical method with an appropriate convergence check in a programming language is called a numerical algorithm. Mathematical definition Let $F(x, y) = 0$ be a well-posed problem, i.e. $F : X \times Y \to \mathbb{R}$ is a real or complex functional relationship, defined on the cross-product of an input data set $X$ and an output data set $Y$, such that exists a locally lipschitz function $g : X \to Y$ called resolvent, which has the property that for every root $(x, y)$ of $F$, $y = g(x)$. We define numerical method for the approximation of $F(x, y) = 0$, the sequence of problems $\{ M_n \}_{n \in \mathbb{N}} = \{ F_n(x_n, y_n) = 0 \}_{n \in \mathbb{N}}$ with $F_n : X_n \times Y_n \to \mathbb{R}$, $x_n \in X_n$ and $y_n \in Y_n$ for every $n \in \mathbb{N}$. The problems of which the method consists need not be well-posed. If they are, the method is said to be stable or well-posed. Consistency Necessary conditions for a numerical method to effectively approximate $F(x, y) = 0$ are that $x_n \to x$ and that $F_n$ behaves like $F$ when $n \to \infty$. So, a numerical method is called consistent if and only if the sequence of functions $\{ F_n \}$ pointwise converges to $F$ on the set $S$ of its solutions: $\lim_{n \to \infty} F_n(x, y) = F(x, y)$ for all $(x, y) \in S$. When $F_n = F$ on $S$ the method is said to be strictly consistent. Convergence Denote by $\ell_n$ a sequence of admissible perturbations of $x \in X$ for some numerical method $M$ (i.e. $x + \ell_n \in X_n$) and with $y_n(x + \ell_n) \in Y_n$ the value such that $F_n(x + \ell_n, y_n(x + \ell_n)) = 0$. A condition which the method has to satisfy to be a meaningful tool for solving the problem $F(x, y) = 0$ is convergence: $\lim_{n \to \infty} y_n(x + \ell_n) = y = g(x)$ for every sequence of admissible perturbations. One can easily prove that the point-wise convergence of the resolvents $\{ g_n \}$ to $g$ implies the convergence of the associated method.
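A concrete instance of these definitions (a standard illustration, not from the article): Newton's method for $F(c, y) = y^2 - c = 0$, i.e. approximating $y = g(c) = \sqrt{c}$, wrapped with the convergence check that, per the opening sentence, turns the method into a numerical algorithm.

def newton_sqrt(c: float, tol: float = 1e-12, max_iter: int = 100) -> float:
    """Approximate sqrt(c) by Newton iteration with a convergence check."""
    if c < 0:
        raise ValueError("input must satisfy c >= 0")
    y = max(c, 1.0)                    # any positive starting guess works
    for _ in range(max_iter):
        y_next = 0.5 * (y + c / y)     # one Newton step for y**2 - c = 0
        if abs(y_next - y) < tol:      # convergence check
            return y_next
        y = y_next
    raise RuntimeError("no convergence within max_iter iterations")

print(newton_sqrt(2.0))   # 1.4142135623...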
https://en.wikipedia.org/wiki/Mystery%20House
Mystery House is an adventure game released by On-Line Systems in 1980. It was designed, written and illustrated by Roberta Williams, and programmed by Ken Williams for the Apple II. Mystery House is the first graphical adventure game and the first game produced by On-Line Systems, the company which would evolve into Sierra On-Line. It is one of the earliest horror video games. Plot The game starts near an abandoned Victorian mansion. The player is soon locked inside the house with no other option than to explore. The mansion contains many interesting rooms and seven other people: Tom, a plumber; Sam, a mechanic; Sally, a seamstress; Dr. Green, a surgeon; Joe, a grave-digger; Bill, a butcher; Daisy, a cook. Initially, the player has to search the house in order to find a hidden cache of jewels. Soon, dead bodies (of the other people) begin appearing and it is obvious there is a murderer on the loose in the house. The player must discover who it is or become the next victim. Development and release At the end of the 1970s, Ken Williams sought to set up a company for enterprise software for the market-dominating Apple II computer. One day, he took a teletype terminal to his house to work on the development of an accounting program. Looking through a catalog, he found a game called Colossal Cave Adventure. He bought the game and introduced it to his wife, Roberta, and they both played through it. They began to search for something similar but found the market underdeveloped. Roberta decided that she could write her own, and conceived of the plot for Mystery House, taking inspiration from Agatha Christie's novel And Then There Were None. She was also inspired by the board game Clue, which helped to break her out from a linear structure to the game. Recognizing that though she knew some programming, she needed someone else to code the game, she convinced her husband to help her. Ken agreed and borrowed his brother's Apple II computer to write the game on. Ken sugges
https://en.wikipedia.org/wiki/Boundary%20element%20method
The boundary element method (BEM) is a numerical computational method of solving linear partial differential equations which have been formulated as integral equations (i.e. in boundary integral form). It is applied in many areas of engineering and science, including fluid mechanics, acoustics, electromagnetics (where the technique is known as the method of moments, abbreviated MoM), fracture mechanics, and contact mechanics. Mathematical basis The integral equation may be regarded as an exact solution of the governing partial differential equation. The boundary element method attempts to use the given boundary conditions to fit boundary values into the integral equation, rather than values throughout the space defined by a partial differential equation. Once this is done, in the post-processing stage, the integral equation can then be used again to calculate numerically the solution directly at any desired point in the interior of the solution domain. BEM is applicable to problems for which Green's functions can be calculated. These usually involve fields in linear homogeneous media. This places considerable restrictions on the range and generality of problems to which boundary elements can usefully be applied. Nonlinearities can be included in the formulation, although they will generally introduce volume integrals which then require the volume to be discretised before solution can be attempted, removing one of the most often cited advantages of BEM. A useful technique for treating the volume integral without discretising the volume is the dual-reciprocity method. The technique approximates part of the integrand using radial basis functions (local interpolating functions) and converts the volume integral into boundary integral after collocating at selected points distributed throughout the volume domain (including the boundary). In the dual-reciprocity BEM, although there is no need to discretize the volume into meshes, unknowns at chosen points inside the solution domain are involved in the linear algebraic equ
https://en.wikipedia.org/wiki/Perceptual%20control%20theory
Perceptual control theory (PCT) is a model of behavior based on the properties of negative feedback control loops. A control loop maintains a sensed variable at or near a reference value by means of the effects of its outputs upon that variable, as mediated by physical properties of the environment. In engineering control theory, reference values are set by a user outside the system. An example is a thermostat. In a living organism, reference values for controlled perceptual variables are endogenously maintained. Biological homeostasis and reflexes are simple, low-level examples. The discovery of mathematical principles of control introduced a way to model a negative feedback loop closed through the environment (circular causation), which spawned perceptual control theory. It differs fundamentally from some models in behavioral and cognitive psychology that model stimuli as causes of behavior (linear causation). PCT research is published in experimental psychology, neuroscience, ethology, anthropology, linguistics, sociology, robotics, developmental psychology, organizational psychology and management, and a number of other fields. PCT has been applied to design and administration of educational systems, and has led to a psychotherapy called the method of levels. Principles and differences from other theories The perceptual control theory is deeply rooted in biological cybernetics, systems biology and control theory and the related concept of feedback loops. Unlike some models in behavioral and cognitive psychology it sets out from the concept of circular causality. It shares, therefore, its theoretical foundation with the concept of plant control, but it is distinct from it by emphasizing the control of the internal representation of the physical world. The plant control theory focuses on neuro-computational processes of movement generation, once a decision for generating the movement has been taken. PCT spotlights the embeddedness of agents in their environment
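A toy negative feedback loop in the spirit described above (illustrative only, not software from the PCT literature): the "organism" controls its perception of a variable against an internal reference, while the environment injects a disturbance. All gains and the draught scenario are arbitrary demonstration choices.

def run_control_loop(reference: float = 20.0, gain: float = 5.0,
                     steps: int = 50) -> float:
    output = 0.0
    perception = 0.0
    for t in range(steps):
        disturbance = -3.0 if t > 25 else 0.0  # e.g. a cold draught begins
        env = output + disturbance             # environment mediates the output
        perception = env                       # sensing (here: perfect)
        error = reference - perception         # compare to internal reference
        output += 0.1 * gain * error           # leaky integration of the error
    return perception                          # settles near the reference

print(run_control_loop())                      # ~20.0 despite the disturbance

The point of the demonstration is the circular causation emphasized above: the output does not "respond to a stimulus" but continuously cancels whatever disturbance the environment contributes, so the controlled perception stays near the endogenous reference value.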
https://en.wikipedia.org/wiki/Fodor%27s%20lemma
In mathematics, particularly in set theory, Fodor's lemma states the following: If $\kappa$ is a regular, uncountable cardinal, $S$ is a stationary subset of $\kappa$, and $f:S\rightarrow\kappa$ is regressive (that is, $f(\alpha)<\alpha$ for any $\alpha\in S$, $\alpha\neq 0$) then there is some $\gamma$ and some stationary $S_0\subseteq S$ such that $f(\alpha)=\gamma$ for any $\alpha\in S_0$. In modern parlance, the nonstationary ideal is normal. The lemma was first proved by the Hungarian set theorist Géza Fodor in 1956. It is sometimes also called "The Pressing Down Lemma". Proof We can assume that $0\notin S$ (by removing 0, if necessary). If Fodor's lemma is false, for every $\alpha<\kappa$ there is some club set $C_\alpha$ such that $C_\alpha\cap f^{-1}(\alpha)=\emptyset$. Let $C=\Delta_{\alpha<\kappa}C_\alpha$. The club sets are closed under diagonal intersection, so $C$ is also club and therefore there is some $\alpha\in S\cap C$. Then $\alpha\in C_\beta$ for each $\beta<\alpha$, and so there can be no $\beta<\alpha$ such that $f(\alpha)=\beta$, so $f(\alpha)\geq\alpha$, a contradiction. Fodor's lemma also holds for Thomas Jech's notion of stationary sets as well as for the general notion of stationary set. Fodor's lemma for trees Another related statement, also known as Fodor's lemma (or Pressing-Down-lemma), is the following: For every non-special tree $T$ and regressive mapping $f:T\rightarrow T$ (that is, $f(t)<t$, with respect to the order on $T$, for every $t\in T$), there is a non-special subtree $S\subseteq T$ on which $f$ is constant.
https://en.wikipedia.org/wiki/Stationary%20set
In mathematics, specifically set theory and model theory, a stationary set is a set that is not too small in the sense that it intersects all club sets and is analogous to a set of non-zero measure in measure theory. There are at least three closely related notions of stationary set, depending on whether one is looking at subsets of an ordinal, or subsets of something of given cardinality, or a powerset. Classical notion If $\kappa$ is a cardinal of uncountable cofinality, $S\subseteq\kappa$, and $S$ intersects every club set in $\kappa$, then $S$ is called a stationary set. If a set is not stationary, then it is called a thin set. This notion should not be confused with the notion of a thin set in number theory. If $S$ is a stationary set and $C$ is a club set, then their intersection $S\cap C$ is also stationary. This is because if $D$ is any club set, then $C\cap D$ is a club set, thus $(S\cap C)\cap D=S\cap(C\cap D)$ is nonempty. Therefore, $S\cap C$ must be stationary. See also: Fodor's lemma The restriction to uncountable cofinality is in order to avoid trivialities: Suppose $\kappa$ has countable cofinality. Then $S\subseteq\kappa$ is stationary in $\kappa$ if and only if $\kappa\setminus S$ is bounded in $\kappa$. In particular, if the cofinality of $\kappa$ is $\omega$, then any two stationary subsets of $\kappa$ have stationary intersection. This is no longer the case if the cofinality of $\kappa$ is uncountable. In fact, suppose $\kappa$ is moreover regular and $S\subseteq\kappa$ is stationary. Then $S$ can be partitioned into $\kappa$ many disjoint stationary sets. This result is due to Solovay. If $\kappa$ is a successor cardinal, this result is due to Ulam and is easily shown by means of what is called an Ulam matrix. H. Friedman has shown that for every countable successor ordinal $\beta$, every stationary subset of $\omega_1$ contains a closed subset of order type $\beta$. Jech's notion There is also a notion of stationary subset of $[X]^\lambda$, for $\lambda$ a cardinal and a set $X$ such that $|X|\geq\lambda$, where $[X]^\lambda$ is the set of subsets of $X$ of cardinality $\lambda$: $[X]^\lambda=\{Y\subseteq X:|Y|=\lambda\}$. This notion is due to Thomas Jech. As before, $S$ is stationary if and only if it meets every club, where a club subset of $[X]^\lambda$ is a set unbounded under $\subseteq$ and closed under unions of chains of length at most $\lambda$.
https://en.wikipedia.org/wiki/Diagonal%20intersection
Diagonal intersection is a term used in mathematics, especially in set theory. If $\delta$ is an ordinal number and $\langle X_\alpha\mid\alpha<\delta\rangle$ is a sequence of subsets of $\delta$, then the diagonal intersection, denoted by $\Delta_{\alpha<\delta}X_\alpha$, is defined to be $\{\beta<\delta\mid\beta\in\bigcap_{\alpha<\beta}X_\alpha\}$. That is, an ordinal $\beta$ is in the diagonal intersection $\Delta_{\alpha<\delta}X_\alpha$ if and only if it is contained in the first $\beta$ members of the sequence. This is the same as $\bigcap_{\alpha<\delta}([0,\alpha]\cup X_\alpha)$, where the closed interval from 0 to $\alpha$ is used to avoid restricting the range of the intersection. See also Club filter Club set Fodor's lemma
https://en.wikipedia.org/wiki/Club%20filter
In mathematics, particularly in set theory, if $\kappa$ is a regular uncountable cardinal then the filter of all sets containing a club subset of $\kappa$ is a $\kappa$-complete filter closed under diagonal intersection called the club filter. To see that this is a filter, note that $\kappa$ itself belongs to it, since $\kappa$ is both closed and unbounded (see club set). If $x$ is in the club filter then any subset of $\kappa$ containing $x$ is also in it, since $x$, and therefore anything containing it, contains a club set. It is a $\kappa$-complete filter because the intersection of fewer than $\kappa$ club sets is a club set. To see this, suppose $\langle C_i\rangle_{i<\alpha}$ is a sequence of club sets where $\alpha<\kappa$. Obviously $C=\bigcap_{i<\alpha}C_i$ is closed, since any sequence which appears in $C$ appears in every $C_i$ and therefore its limit is also in every $C_i$. To show that it is unbounded, take some $\beta<\kappa$. Let $\langle\beta_{1,i}\rangle$ be an increasing sequence with $\beta_{1,1}>\beta$ and $\beta_{1,i}\in C_i$ for every $i<\alpha$. Such a sequence can be constructed, since every $C_i$ is unbounded. Since $\alpha<\kappa$ and $\kappa$ is regular, the limit of this sequence is less than $\kappa$. We call it $\beta_2$, and define a new sequence $\langle\beta_{2,i}\rangle$ similar to the previous sequence. We can repeat this process, getting a sequence of sequences $\langle\beta_{j,i}\rangle$ where each element of a sequence is greater than every member of the previous sequences. Then for each $i<\alpha$, $\langle\beta_{j,i}\rangle_j$ is an increasing sequence contained in $C_i$ and all these sequences have the same limit (the limit of $\langle\beta_j\rangle$). This limit is then contained in every $C_i$ and therefore in $C$, and is greater than $\beta$. To see that the club filter is closed under diagonal intersection, let $\langle C_i\rangle$, $i<\kappa$, be a sequence of club sets, and let $C=\Delta_{i<\kappa}C_i$. To show $C$ is closed, suppose $S\subseteq C\cap\alpha$ for some $\alpha<\kappa$ and $\bigcup S=\alpha$. Then for each $\gamma\in S$, $\gamma\in C_\beta$ for all $\beta<\gamma$; given $\beta<\alpha$, the elements of $S$ above $\beta$ lie in $C_\beta$ and are unbounded in $\alpha$. Since each $C_\beta$ is closed, $\alpha\in C_\beta$ for all $\beta<\alpha$, so $\alpha\in C$. To show $C$ is unbounded, let $\beta<\kappa$, and define a sequence $\xi_i$, $i<\omega$, as follows: $\xi_0=\beta$, and $\xi_{i+1}$ is the minimal element of $\bigcap_{\gamma<\xi_i}C_\gamma$ such that $\xi_{i+1}>\xi_i$. Such an element exists since, by the above, the intersection of $\xi_i$ club sets is club. Then $\xi=\bigcup_{i<\omega}\xi_i>\beta$, and for each $\gamma<\xi$ a tail of the sequence $\langle\xi_i\rangle$ lies in $C_\gamma$, so $\xi\in C_\gamma$ by closedness; hence $\xi\in C$.
https://en.wikipedia.org/wiki/Electric%20power%20quality
Electric power quality is the degree to which the voltage, frequency, and waveform of a power supply system conform to established specifications. Good power quality can be defined as a steady supply voltage that stays within the prescribed range, steady AC frequency close to the rated value, and smooth voltage curve waveform (which resembles a sine wave). In general, it is useful to consider power quality as the compatibility between what comes out of an electric outlet and the load that is plugged into it. The term is used to describe electric power that drives an electrical load and the load's ability to function properly. Without the proper power, an electrical device (or load) may malfunction, fail prematurely or not operate at all. There are many ways in which electric power can be of poor quality, and many more causes of such poor quality power. The electric power industry comprises electricity generation (AC power), electric power transmission and ultimately electric power distribution to an electricity meter located at the premises of the end user of the electric power. The electricity then moves through the wiring system of the end user until it reaches the load. The complexity of the system to move electric energy from the point of production to the point of consumption combined with variations in weather, generation, demand and other factors provide many opportunities for the quality of supply to be compromised. While "power quality" is a convenient term for many, it is the quality of the voltage—rather than power or electric current—that is actually described by the term. Power is simply the flow of energy, and the current demanded by a load is largely uncontrollable. Introduction The quality of electrical power may be described as a set of values of parameters, such as: Continuity of service (whether the electrical power is subject to voltage drops or overages below or above a threshold level thereby causing blackouts or brownouts) Variation i
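As a toy illustration of conformance checking (the tolerance band below is an arbitrary example of mine, not a figure from any standard), one can test whether a sampled AC waveform's RMS voltage stays within a band around the rated value:

    import math

    # Toy power-quality check: is the RMS of one sampled cycle within
    # +/-10% of the rated value? (Band chosen purely for illustration.)
    rated_rms = 230.0                      # volts
    samples_per_cycle = 100
    amplitude = rated_rms * math.sqrt(2)   # peak of an ideal sine at rated RMS

    samples = [amplitude * math.sin(2 * math.pi * k / samples_per_cycle)
               for k in range(samples_per_cycle)]

    rms = math.sqrt(sum(v * v for v in samples) / len(samples))
    within_band = 0.9 * rated_rms <= rms <= 1.1 * rated_rms
    print(round(rms, 1), within_band)      # 230.0 True for the clean sine

A sag, swell, or distorted waveform fed into the same check would push the computed RMS outside the band, which is the kind of voltage-quality criterion the article describes.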
https://en.wikipedia.org/wiki/Strain%20%28chemistry%29
In chemistry, a molecule experiences strain when its chemical structure undergoes some stress which raises its internal energy in comparison to a strain-free reference compound. The internal energy of a molecule consists of all the energy stored within it. A strained molecule has an additional amount of internal energy which an unstrained molecule does not. This extra internal energy, or strain energy, can be likened to a compressed spring. Much like a compressed spring must be held in place to prevent release of its potential energy, a molecule can be held in an energetically unfavorable conformation by the bonds within that molecule. Without the bonds holding the conformation in place, the strain energy would be released. Summary Thermodynamics The equilibrium of two molecular conformations is determined by the difference in Gibbs free energy of the two conformations. From this energy difference, the equilibrium constant for the two conformations can be determined. If there is a decrease in Gibbs free energy from one state to another, this transformation is spontaneous and the lower energy state is more stable. A highly strained, higher energy molecular conformation will spontaneously convert to the lower energy molecular conformation. Enthalpy and entropy are related to Gibbs free energy through the equation (at a constant temperature): $\Delta G = \Delta H - T\Delta S$. Enthalpy is typically the more important thermodynamic function for determining a more stable molecular conformation. While there are different types of strain, the strain energy associated with all of them is due to the weakening of bonds within the molecule. Since enthalpy is usually more important, entropy can often be ignored. This is not always the case; if the difference in enthalpy is small, entropy can have a larger effect on the equilibrium. For example, n-butane has two possible conformations, anti and gauche. The anti conformation is more stable by 0.9 kcal mol⁻¹. We would expect that butane is roughly 82% anti at room temperature.
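That estimate can be checked with a short calculation (a back-of-the-envelope sketch, treating the 0.9 kcal mol⁻¹ difference as an effective free-energy difference at $T=298\ \mathrm{K}$, with $R\approx 1.99\times10^{-3}$ kcal mol⁻¹ K⁻¹):

\[ K=e^{-\Delta G^\circ/RT}=\exp\!\left(\frac{0.9}{(1.99\times10^{-3})(298)}\right)\approx 4.6, \]

so the anti:gauche ratio is about 4.6:1 and the anti fraction is $K/(K+1)\approx 0.82$, i.e. roughly 82%.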
https://en.wikipedia.org/wiki/Van%20der%20Waals%20strain
Van der Waals strain is strain resulting from Van der Waals repulsion, which arises when two substituents in a molecule approach each other with a distance less than the sum of their Van der Waals radii. Van der Waals strain is also called Van der Waals repulsion and is related to steric hindrance. One of the most common forms of this strain is that between eclipsing hydrogens in alkanes. In rotational and pseudorotational mechanisms In molecules whose vibrational mode involves a rotational or pseudorotational mechanism (such as the Berry mechanism or the Bartell mechanism), Van der Waals strain can cause significant differences in potential energy, even between molecules with identical geometry. PF₅, for example, has significantly lower potential energy than PCl₅. Despite their identical trigonal bipyramidal molecular geometry, the higher electron count of chlorine as compared to fluorine causes a potential energy spike as the molecule enters its intermediate state in the mechanism and the substituents draw nearer to each other. See also Van der Waals force Van der Waals molecule Van der Waals radius Van der Waals surface
https://en.wikipedia.org/wiki/Human%20Behavior%20and%20Evolution%20Society
The Human Behavior and Evolution Society (HBES) is an interdisciplinary, international society of researchers, primarily from the social and biological sciences, who use modern evolutionary theory to help discover human nature — including evolved emotional, cognitive and sexual adaptations. It was founded on October 29, 1988, at the University of Michigan. The official academic journal of the society is Evolution and Human Behavior, and the society has held annual conferences since 1989. The membership is broadly international, and consists of scholars from many fields, such as psychology, anthropology, medicine, law, philosophy, biology, economics and sociology. Despite the diversity, HBES members "all speak the common language of Darwinism." Presidents The following individuals have served as presidents of HBES: W.D. Hamilton (1988-1989) Randy Nesse (1989-1991) Martin Daly (1991-1993) Napoleon Chagnon (1993-1995) Dick Alexander (1995-1997) Margo Wilson (1997-1999) John Tooby (1999-2001) Bill Irons (2001-2003) Bobbi Low (2003-2005) David Buss (2005-2007) Steve Gangestad (2007-2009) Pete Richerson (2009-2011) Randy Thornhill (2011-2013) Mark Flinn (2013-2015) Elizabeth Cashdan (2015-2017) Rob Kurzban (2017-2018) Doug Kenrick (2018-2019) Leda Cosmides (2019-2021) David Schmitt (2021-) See also Dual inheritance theory Evolutionary developmental psychology Evolutionary psychology FOXP2 and human evolution Human behavioral ecology
https://en.wikipedia.org/wiki/Register%20file
A register file is an array of processor registers in a central processing unit (CPU). Register banking is the method of using a single name to access multiple different physical registers depending on the operating mode. Modern integrated circuit-based register files are usually implemented by way of fast static RAMs with multiple ports. Such RAMs are distinguished by having dedicated read and write ports, whereas ordinary multiported SRAMs will usually read and write through the same ports. The instruction set architecture of a CPU will almost always define a set of registers which are used to stage data between memory and the functional units on the chip. In simpler CPUs, these architectural registers correspond one-for-one to the entries in a physical register file (PRF) within the CPU. More complicated CPUs use register renaming, so that the mapping of which physical entry stores a particular architectural register changes dynamically during execution. The register file is part of the architecture and visible to the programmer, as opposed to the concept of transparent caches. Register-bank switching Register files may be grouped together as register banks. A processor may have more than one register bank. ARM processors have both banked and unbanked registers. While all modes always share the same physical registers for the first eight general-purpose registers, R0 to R7, the physical register which the banked registers, R8 to R14, point to depends on the operating mode the processor is in. Notably, Fast Interrupt Request (FIQ) mode has its own bank of registers for R8 to R12, with the architecture also providing a private stack pointer (R13) for every interrupt mode. x86 processors use context switching and fast interrupts to switch between the instruction decoder, GPRs and register files, if there is more than one, before the instruction is issued, but this exists only on processors that support superscalar execution. However, context switching is a totall
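To illustrate the ARM-style banking just described, here is a small Python model (a didactic sketch of my own, not vendor code): one architectural register name resolves to different physical storage depending on the current mode, with R8-R12 rebanked only in FIQ mode and R13/R14 kept per mode.

    # Toy model of ARM-style register banking: an architectural register
    # name maps to different physical storage depending on the mode.
    class BankedRegisterFile:
        def __init__(self):
            self.regs = {}          # (bank, index) -> value
            self.mode = "usr"

        def _bank(self, index):
            # R0-R7 are shared by all modes; R8-R12 are rebanked in FIQ;
            # R13/R14 get a private copy per mode (sketch simplification).
            if index <= 7:
                return "shared"
            if 8 <= index <= 12:
                return "fiq" if self.mode == "fiq" else "shared"
            return self.mode

        def write(self, index, value):
            self.regs[(self._bank(index), index)] = value

        def read(self, index):
            return self.regs.get((self._bank(index), index), 0)

    rf = BankedRegisterFile()
    rf.write(8, 111)        # usr-mode R8
    rf.mode = "fiq"
    rf.write(8, 222)        # FIQ-mode R8 lands in a separate physical register
    rf.mode = "usr"
    print(rf.read(8))       # 111: the usr value was never disturbed

This is why FIQ handlers can use R8-R12 without saving and restoring them: the interrupted mode's values live in different physical registers.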
https://en.wikipedia.org/wiki/IBM%207090/94%20IBSYS
IBSYS is the discontinued tape-based operating system that IBM supplied with its IBM 709, IBM 7090 and IBM 7094 computers. A similar operating system (but with several significant differences), also called IBSYS, was provided with IBM 7040 and IBM 7044 computers. IBSYS was based on FORTRAN Monitor System (FMS) and (more likely) Bell Labs' "BESYS" rather than the SHARE Operating System. IBSYS directly supported several older language processors on the $EXECUTE card: 9PAC, FORTRAN and IBSFAP. Newer language processors ran under IBJOB. IBM later provided similar facilities for the 7040/7044 as IBM 7040/7044 Operating System (16K/32K) 7040-PR-150 and for the IBM 1410/IBM 7010 as IBM 1410/7010 Operating System 1410-PR-155. IBSYS System Supervisor IBSYS itself is a resident monitor program that reads control card images placed between the decks of program and data cards of individual jobs. An IBSYS control card begins with a "$" in column 1, immediately followed by a Control Name that selects the various IBSYS utility programs needed to set up and run the job. These card deck images are usually read from magnetic tapes prepared offline, not directly from the card reader. IBJOB Processor The IBJOB Processor is a subsystem that runs under the IBSYS System Supervisor. It reads control cards that request, e.g., compilation or execution. The languages supported include COBOL, Commercial Translator (COMTRAN), Fortran IV (IBFTC) and Macro Assembly Program (IBMAP). See also University of Michigan Executive System Timeline of operating systems Further reading Noble, A. S., Jr., "Design of an integrated programming and operating system", IBM Systems Journal, June 1963. "The present paper considers the underlying design concepts of IBSYS/IBJOB, an integrated programming and operating system. The historical background and over-all structure of the system are discussed. Flow of jobs through the IBJOB processor, as controlled by the monitor, is also described." "IBM 7090/7094
https://en.wikipedia.org/wiki/Effector%20cell
In cell biology, an effector cell is any of various types of cell that actively responds to a stimulus and effects some change (brings it about). Examples of effector cells include: The muscle, gland or organ cell capable of responding to a stimulus at the terminal end of an efferent nerve fiber Plasma cell, an effector B cell in the immune system Effector T cells, T cells that actively respond to a stimulus Cytokine-induced killer cells, strongly productive cytotoxic effector cells that are capable of lysing tumor cells Microglia, a glial effector cell that reconstructs the central nervous system after a bone marrow transplant Fibroblast, a cell that is most commonly found within connective tissue Mast cell, the primary effector cell involved in the development of asthma Cytokine-induced killer cells as effector cells As effector cells, cytokine-induced killer cells can recognize infected or malignant cells even when antibodies and major histocompatibility complex (MHC) are not available. This allows a quick immune reaction to take place. Cytokine-induced killer (CIK) cells are important because harmful cells that do not contain MHC cannot be traced and removed by other immune cells. CIK cells are being studied intensely as a possible treatment for cancer and some types of viral infections. CIK cells respond to lymphokines by lysing tumorous cells that are resistant to NK cells or LAK cell activity. CIK cells show a large amount of cytotoxic potential against various types of tumors. Side effects of CIK cells are also considered very minor. In a few cases, CIK cell treatment led to the complete disappearance of tumor burdens, extended periods of survival, and improved quality of life, even if the cancerous tumor cells were in advanced stages. At the moment, the exact mechanism of tumor recognition in CIK cells is not completely understood. Fibroblasts as effector cells Fibroblasts are types of cells that form the extracellular matrix and collagen.
https://en.wikipedia.org/wiki/Cycle%20graph%20%28algebra%29
In group theory, a subfield of abstract algebra, a group cycle graph illustrates the various cycles of a group and is particularly useful in visualizing the structure of small finite groups. A cycle is the set of powers of a given group element $a$, where $a^n$, the $n$-th power of an element $a$, is defined as the product of $a$ multiplied by itself $n$ times. The element $a$ is said to generate the cycle. In a finite group, some non-zero power of $a$ must be the group identity, $e$; the lowest such power is the order of the cycle, the number of distinct elements in it. In a cycle graph, the cycle is represented as a polygon, with the vertices representing the group elements, and the connecting lines indicating that all elements in that polygon are members of the same cycle. Cycles Cycles can overlap, or they can have no element in common but the identity. The cycle graph displays each interesting cycle as a polygon. If $a$ generates a cycle of order 6 (or, more shortly, has order 6), then $a^6=e$. Then the set of powers of $a^2$, $\{a^2,a^4,e\}$, is a cycle, but this is really no new information. Similarly, $a^5$ generates the same cycle as $a$ itself. So, only the primitive cycles need be considered, namely those that are not subsets of another cycle. Each of these is generated by some primitive element, $a$. Take one point for each element of the original group. For each primitive element, connect $e$ to $a$, $a$ to $a^2$, ..., $a^{n-1}$ to $a^n$, etc., until $e$ is reached. The result is the cycle graph. When $a^2=e$, $a$ has order 2 (is an involution), and is connected to $e$ by two edges. Except when the intent is to emphasize the two edges of the cycle, it is typically drawn as a single line between the two elements. Properties As an example of a group cycle graph, consider the dihedral group Dih₄. The multiplication table for this group is shown on the left, and the cycle graph is shown on the right with $e$ specifying the identity element. Notice the cycle $\{e,a,a^2,a^3\}$ in the multiplication table, with
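The construction just described is easy to mechanise. The following Python sketch (illustrative only; the example group $\mathbb{Z}_6$ under addition mod 6 is my own choice) computes each element's cycle and keeps only the primitive ones, i.e. those not properly contained in another cycle:

    # Cycle-graph data for a small finite group: Z6 under addition mod 6,
    # with identity 0.
    n = 6
    op = lambda x, y: (x + y) % n

    def cycle(a):
        """Powers of a: e, a, a*a, ... until the identity recurs."""
        powers, x = [0], a
        while x != 0:
            powers.append(x)
            x = op(x, a)
        return powers

    cycles = [cycle(a) for a in range(1, n)]
    # Primitive cycles: those whose element set is not a proper subset
    # of another cycle's element set.
    primitive = {}
    for c in cycles:
        if not any(set(c) < set(d) for d in cycles):
            primitive[frozenset(c)] = c   # dedupe cycles sharing the same elements
    print(list(primitive.values()))       # [[0, 1, 2, 3, 4, 5]]: Z6 draws as one hexagon

Here the cycles generated by 2, 3 and 4 are absorbed into the single 6-cycle generated by 1 (or 5), matching the remark above that subsets of another cycle add no new information.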
https://en.wikipedia.org/wiki/Turing%20jump
In computability theory, the Turing jump or Turing jump operator, named for Alan Turing, is an operation that assigns to each decision problem $X$ a successively harder decision problem $X'$ with the property that $X'$ is not decidable by an oracle machine with an oracle for $X$. The operator is called a jump operator because it increases the Turing degree of the problem $X$. That is, the problem $X'$ is not Turing-reducible to $X$. Post's theorem establishes a relationship between the Turing jump operator and the arithmetical hierarchy of sets of natural numbers. Informally, given a problem, the Turing jump returns the set of Turing machines that halt when given access to an oracle that solves that problem. Definition The Turing jump of $X$ can be thought of as an oracle to the halting problem for oracle machines with an oracle for $X$. Formally, given a set $X$ and a Gödel numbering $\varphi_i^X$ of the $X$-computable functions, the Turing jump $X'$ of $X$ is defined as $X'=\{i\mid\varphi_i^X(i)\ \text{is defined}\}$. The $n$th Turing jump $X^{(n)}$ is defined inductively by $X^{(0)}=X$ and $X^{(n+1)}=(X^{(n)})'$. The $\omega$ jump $X^{(\omega)}$ of $X$ is the effective join of the sequence of sets $X^{(n)}$ for $n\in\mathbb{N}$: $X^{(\omega)}=\{p_i^k\mid k\in X^{(i)}\}$, where $p_i$ denotes the $i$th prime. The notation $0'$ or $\emptyset'$ is often used for the Turing jump of the empty set. It is read zero-jump or sometimes zero-prime. Similarly, $0^{(n)}$ is the $n$th jump of the empty set. For finite $n$, these sets are closely related to the arithmetic hierarchy, a connection made precise by Post's theorem. The jump can be iterated into transfinite ordinals: there are jump operators $X^{(\alpha)}$ for sets of natural numbers when $\alpha$ is an ordinal that has a code in Kleene's $\mathcal{O}$ (regardless of code, the resulting jumps are the same by a theorem of Spector); in particular, the sets $0^{(\alpha)}$ for $\alpha<\omega_1^{CK}$, where $\omega_1^{CK}$ is the Church–Kleene ordinal, are closely related to the hyperarithmetic hierarchy. Beyond $\omega_1^{CK}$, the process can be continued through the countable ordinals of the constructible universe, using Jensen's work on fine structure theory of Gödel's L. The concept has also been generalized to extend to uncountable regular cardinals. Examples The Tur
https://en.wikipedia.org/wiki/Treatise%20on%20Invertebrate%20Paleontology
The Treatise on Invertebrate Paleontology (or TIP), published by the Geological Society of America and the University of Kansas Press, is a definitive multi-authored work of some 50 volumes, written by more than 300 paleontologists, and covering every phylum, class, order, family, and genus of fossil and extant (still living) invertebrate animals. The prehistoric invertebrates are described as to their taxonomy, morphology, paleoecology, stratigraphic and paleogeographic range. However, taxa with no fossil record whatsoever have just a very brief listing. Publication of the decades-long Treatise on Invertebrate Paleontology is a work in progress, and therefore it is not yet complete: For example, there is no volume yet published regarding the post-Paleozoic era caenogastropods (a molluscan group including the whelk and periwinkle). Furthermore, every so often, previously published volumes of the Treatise are revised. Evolution of the project Raymond C. Moore, the project's founder and first editor, originally envisioned this Treatise in invertebrate paleontology as comprising just three large volumes, and totaling only three thousand pages. The project began with work on a few, mostly slim volumes in which a single senior specialist in a distinct field of invertebrate paleozoology would summarize one particular group. As a result, each publication became a comprehensive compilation of everything known at that time for each group. Examples of this stage of the project are Part G. Bryozoa, by Ray S. Bassler (the first volume, published in 1953), and Part P. Arthropoda Part 2, the Chelicerata by Alexander Petrunkevitch (1955/1956). Around 1959 or 1960, as more and larger invertebrate groups were being addressed, the incompleteness of the then-current state of affairs became apparent. So several senior editors of the Treatise started major research programs to fill in the evident gaps. Consequently, the succeeding volumes, while still maintaining the original fo
https://en.wikipedia.org/wiki/Bappir
Bappir was a Sumerian twice-baked barley bread that was primarily used in ancient Mesopotamian beer brewing. Historical research done at Anchor Brewing Co. in 1989 (documented in Charlie Papazian's Home Brewer's Companion) reconstructed a bread made from malted barley and barley flour with honey, spices and water and baked until hard enough to store for long periods of time; the finished product was probably crumbled and mixed with water, malt and either dates or honey and allowed to ferment for a few days, producing a somewhat sweet brew. It seems to have been drunk flat, without bottling or conditioning, through a straw in the manner that yerba mate is drunk now. It is thought that bappir was seldom baked with the intent of being eaten; its storage qualities made it a good candidate for an emergency ration in times of scarcity, but its primary use seems to have been beer-making. A modern interpretation of Sumerian bappir bread was brewed and bottled in 2016 by Anchorbrew. See also Ninkasi, the Sumerian goddess of beer Biscotti, a similarly twice-baked modern bread that is often eaten as a sweet course with wine or coffee
https://en.wikipedia.org/wiki/Oppenheimer%E2%80%93Phillips%20process
The Oppenheimer–Phillips process or strip reaction is a type of deuteron-induced nuclear reaction. In this process the neutron half of an energetic deuteron (a stable isotope of hydrogen with one proton and one neutron) fuses with a target nucleus, transmuting the target to a heavier isotope while ejecting a proton. An example is the nuclear transmutation of carbon-12 to carbon-13. The process allows a nuclear interaction to take place at lower energies than would be expected from a simple calculation of the Coulomb barrier between a deuteron and a target nucleus. This is because, as the deuteron approaches the positively charged target nucleus, it experiences a charge polarization where the "proton-end" faces away from the target and the "neutron-end" faces towards the target. The fusion proceeds when the binding energy of the neutron and the target nucleus exceeds the binding energy of the deuteron itself; the proton formerly in the deuteron is then repelled from the new, heavier, nucleus. History An explanation of this effect was published by J. Robert Oppenheimer and Melba Phillips in 1935, considering experiments with the Berkeley cyclotron showing that some elements became radioactive under deuteron bombardment. Mechanism During the O-P process, the deuteron's positive charge is spatially polarized, and collects preferentially at one end of the deuteron's density distribution, nominally, the "proton end". As the deuteron approaches the target nucleus, the positive charge is repelled by the electrostatic field until, assuming the incident energy is not sufficient for it to surmount the barrier, the "proton end" approaches to a minimum distance having climbed the Coulomb barrier as far as it can. If the "neutron end" is close enough for the strong nuclear force, which only operates over very short distances, to exceed the repulsive electrostatic force on the "proton end", fusion of a neutron with the target nucleus may begin. The reaction proceeds as follow
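Written out in standard nuclear notation for the carbon example mentioned above, the stripping reaction is

\[ {}^{12}_{\ 6}\mathrm{C}+{}^{2}_{1}\mathrm{H}\ \longrightarrow\ {}^{13}_{\ 6}\mathrm{C}+{}^{1}_{1}\mathrm{H}, \]

in which only the deuteron's neutron is captured: the target's mass number rises by one while its atomic number is unchanged, and the ejected proton carries away the excess energy.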
https://en.wikipedia.org/wiki/Molecular%20sieve
A molecular sieve is a material with pores (very small holes) of uniform size. These pore diameters are similar in size to small molecules, and thus large molecules cannot enter or be adsorbed, while smaller molecules can. As a mixture of molecules migrates through the stationary bed of porous, semi-solid substance referred to as a sieve (or matrix), the components of the highest molecular weight (which are unable to pass into the molecular pores) leave the bed first, followed by successively smaller molecules. Some molecular sieves are used in size-exclusion chromatography, a separation technique that sorts molecules based on their size. Other molecular sieves are used as desiccants (some examples include activated charcoal and silica gel). The pore diameter of a molecular sieve is measured in ångströms (Å) or nanometres (nm). According to IUPAC notation, microporous materials have pore diameters of less than 2 nm (20 Å) and macroporous materials have pore diameters of greater than 50 nm (500 Å); the mesoporous category thus lies in the middle with pore diameters between 2 and 50 nm (20–500 Å). Materials Molecular sieves can be microporous, mesoporous, or macroporous material. Microporous material (<2 nm) Zeolites (aluminosilicate minerals, not to be confused with aluminium silicate) Zeolite LTA: 3–4 Å Porous glass: 10 Å (1 nm), and up Active carbon: 0–20 Å (0–2 nm), and up Clays Montmorillonite intermixes Halloysite (endellite): Two common forms are found, when hydrated the clay exhibits a 1 nm spacing of the layers and when dehydrated (meta-halloysite) the spacing is 0.7 nm. Halloysite naturally occurs as small cylinders which average 30 nm in diameter with lengths between 0.5 and 10 micrometres. Mesoporous material (2–50 nm) Silicon dioxide (used to make silica gel): 24 Å (2.4 nm) Macroporous material (>50 nm) Macroporous silica, 200–1000 Å (20–100 nm) Applications Molecular sieves are often utilized in the petroleum industry, especially for dryin
https://en.wikipedia.org/wiki/Digital%20access%20carrier%20system
Digital access carrier system (DACS) is the name used by British Telecom (BT Group plc) in the United Kingdom for a 0+2 pair gain system. Usage For almost as long as telephones have been a common feature in homes and offices, telecommunication companies have regularly been faced with a situation where demand in a particular street or area exceeds the number of physical copper pairs available from the pole to the exchange. Until the early 1980s, this situation was often dealt with by providing shared or 'party' lines, which were connected to multiple customers. This raised privacy problems since any subscriber connected to the line could listen to (or indeed, interrupt) another subscriber's call. With advances in the size, price, and reliability of electronic equipment, it eventually became possible to provide two normal subscriber lines over one copper pair, eliminating the need for party lines. The more modern ISDN technology based digital systems that perform this task are known in Britain by the generic name 'DACS'. DACS works by digitising the analogue signal and sending the combined digital information for both lines over the same copper pair between the exchange and the pole. The cost of the DACS equipment is significantly less than the cost of installing additional copper pairs. Overview The DACS system consists of three main parts: The exchange unit (EU), which connects multiple pairs of analogue lines to their corresponding single digital lines. One Telspec EU rack connects as many as 80 analogue lines over 40 digital copper pairs. The copper pair between the exchange and the remote unit, carrying the digital signal between the exchange unit and the remote unit. The remote unit (RU), which connects two analogue customer lines to one digital copper pair. The RUs are usually to be found on poles within a few hundred metres of the subscribers' homes or businesses. Advantages Because it uses a digital signal along most of the distance between subscrib
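The pair-gain idea, two digitised voice channels sharing one copper pair, comes down to interleaving the two sample streams. Here is a minimal Python sketch of such time-division multiplexing (purely illustrative; it does not reproduce the actual DACS line protocol or framing):

    # Two digitised subscriber lines share one pair by interleaving samples.
    line_a = [101, 102, 103]          # PCM samples from subscriber A
    line_b = [201, 202, 203]          # PCM samples from subscriber B

    def multiplex(a, b):
        """Exchange unit: merge two sample streams onto one digital link."""
        combined = []
        for sa, sb in zip(a, b):
            combined += [sa, sb]      # alternate A and B time slots
        return combined

    def demultiplex(stream):
        """Remote unit: split the shared stream back into two lines."""
        return stream[0::2], stream[1::2]

    link = multiplex(line_a, line_b)          # [101, 201, 102, 202, 103, 203]
    assert demultiplex(link) == (line_a, line_b)

The exchange unit and remote unit in the text above play exactly these two roles, with the single copper pair carrying the combined stream between them.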
https://en.wikipedia.org/wiki/Hund%27s%20rules
In atomic physics, Hund's rules refers to a set of rules that German physicist Friedrich Hund formulated around 1925, which are used to determine the term symbol that corresponds to the ground state of a multi-electron atom. The first rule is especially important in chemistry, where it is often referred to simply as Hund's Rule. The three rules are: For a given electron configuration, the term with maximum multiplicity has the lowest energy. The multiplicity is equal to $2S+1$, where $S$ is the total spin angular momentum for all electrons. The multiplicity is also equal to the number of unpaired electrons plus one. Therefore, the term with lowest energy is also the term with maximum $S$ and maximum number of unpaired electrons. For a given multiplicity, the term with the largest value of the total orbital angular momentum quantum number $L$ has the lowest energy. For a given term, in an atom with outermost subshell half-filled or less, the level with the lowest value of the total angular momentum quantum number $J$ (for the operator $\mathbf{J}=\mathbf{L}+\mathbf{S}$) lies lowest in energy. If the outermost shell is more than half-filled, the level with the highest value of $J$ is lowest in energy. These rules specify in a simple way how usual energy interactions determine which term includes the ground state. The rules assume that the repulsion between the outer electrons is much greater than the spin–orbit interaction, which is in turn stronger than any other remaining interactions. This is referred to as the LS coupling regime. Full shells and subshells do not contribute to the quantum numbers for total $S$, the total spin angular momentum, and for $L$, the total orbital angular momentum. It can be shown that for full orbitals and suborbitals both the residual electrostatic energy (repulsion between electrons) and the spin–orbit interaction can only shift all the energy levels together. Thus when determining the ordering of energy levels in general only the outer valence electrons must be considered. Rule 1 Due t
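A standard worked example (nitrogen; not drawn from the truncated text above) shows the three rules in action. Nitrogen's ground configuration is $1s^2\,2s^2\,2p^3$, so only the three $2p$ electrons count. Rule 1 puts one spin-up electron in each of $m_l=+1,0,-1$, giving

\[ S=\tfrac{3}{2},\qquad 2S+1=4,\qquad L=\sum m_l=0,\qquad J=|L-S|=\tfrac{3}{2}, \]

where rule 2 is vacuous here (maximal spin forces $L=0$) and rule 3 selects $J=|L-S|$ because the $2p$ subshell is exactly half-filled, counting as "half-filled or less". The ground state is therefore the term ${}^{4}S_{3/2}$.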
https://en.wikipedia.org/wiki/Static%20spacetime
In general relativity, a spacetime is said to be static if it does not change over time and is also irrotational. It is a special case of a stationary spacetime: the geometry of a stationary spacetime does not change in time, but it can rotate. Thus, the Kerr solution provides an example of a stationary spacetime that is not static; the non-rotating Schwarzschild solution is an example that is static. Formally, a spacetime is static if it admits a global, non-vanishing, timelike Killing vector field which is irrotational, i.e., whose orthogonal distribution is involutive. (Note that the leaves of the associated foliation are necessarily space-like hypersurfaces.) Thus, a static spacetime is a stationary spacetime satisfying this additional integrability condition. These spacetimes form one of the simplest classes of Lorentzian manifolds. Locally, every static spacetime looks like a standard static spacetime, which is a Lorentzian warped product $\mathbb{R}\times_f S$ with a metric of the form $g=-f^2\,dt^2+g_S$, where $\mathbb{R}$ is the real line, $g_S$ is a (positive definite) metric and $f$ is a positive function on the Riemannian manifold $S$. In such a local coordinate representation the Killing field $K$ may be identified with $\partial_t$, and $S$, the manifold of $K$-trajectories, may be regarded as the instantaneous 3-space of stationary observers. If $\lambda$ is the square of the norm of the Killing vector field, $\lambda=g(K,K)$, both $\lambda$ and $g_S$ are independent of time (in fact $\lambda=-f^2$). It is from the latter fact that a static spacetime obtains its name, as the geometry of the space-like slice $S$ does not change over time. Examples of static spacetimes The (exterior) Schwarzschild solution. de Sitter space (the portion of it covered by the static patch). Reissner–Nordström space. The Weyl solution, a static axisymmetric solution of the Einstein vacuum field equations discovered by Hermann Weyl. Examples of non-static spacetimes In general, "almost all" spacetimes will not be static. Some explicit examples include: Spherically symmetric spacet
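To make the warped-product form concrete, the exterior Schwarzschild solution listed above can be written explicitly in standard static form (in geometrised units with $G=c=1$):

\[ ds^2=-\Big(1-\frac{2M}{r}\Big)\,dt^2+\Big(1-\frac{2M}{r}\Big)^{-1}dr^2+r^2\big(d\theta^2+\sin^2\theta\,d\varphi^2\big),\qquad r>2M, \]

with $f^2=1-2M/r$ and $g_S$ the metric on the constant-$t$ slices. Nothing in the metric depends on $t$, and there are no $dt\,dx^i$ cross terms; the absence of such cross terms is exactly the irrotationality that separates static metrics from merely stationary ones like Kerr.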
https://en.wikipedia.org/wiki/Asymptotically%20flat%20spacetime
An asymptotically flat spacetime is a Lorentzian manifold in which, roughly speaking, the curvature vanishes at large distances from some region, so that at large distances, the geometry becomes indistinguishable from that of Minkowski spacetime. While this notion makes sense for any Lorentzian manifold, it is most often applied to a spacetime standing as a solution to the field equations of some metric theory of gravitation, particularly general relativity. In this case, we can say that an asymptotically flat spacetime is one in which the gravitational field, as well as any matter or other fields which may be present, become negligible in magnitude at large distances from some region. In particular, in an asymptotically flat vacuum solution, the gravitational field (curvature) becomes negligible at large distances from the source of the field (typically some isolated massive object such as a star). Intuitive significance The condition of asymptotic flatness is analogous to similar conditions in mathematics and in other physical theories. Such conditions say that some physical field or mathematical function is asymptotically vanishing in a suitable sense. In general relativity, an asymptotically flat vacuum solution models the exterior gravitational field of an isolated massive object. Therefore, such a spacetime can be considered as an isolated system: a system in which exterior influences can be neglected. Indeed, physicists rarely imagine a universe containing a single star and nothing else when they construct an asymptotically flat model of a star. Rather, they are interested in modeling the interior of the star together with an exterior region in which gravitational effects due to the presence of other objects can be neglected. Since typical distances between astrophysical bodies tend to be much larger than the diameter of each body, we often can get away with this idealization, which usually helps to greatly simplify the construction and analysis of
https://en.wikipedia.org/wiki/Paleopedological%20record
The paleopedological record is, essentially, the fossil record of soils. The paleopedological record consists chiefly of paleosols buried by flood sediments, or preserved at geological unconformities, especially plateau escarpments or sides of river valleys. Other fossil soils occur in areas where volcanic activity has covered the ancient soils. Problems of recognition After burial, soil fossils tend to be altered by various chemical and physical processes. These include: Decomposition of organic matter that was once present in the old soil. This hinders the recognition of vegetation that was in the soil when it was present. Oxidation of iron from Fe²⁺ to Fe³⁺ by O₂ as the former soil becomes dry and more oxygen enters the soil. Drying out of hydrous ferric oxides to anhydrous oxides, again due to the presence of more available O₂ in the dry environment. The keys to recognising fossils of various soils include: Tubular structures that branch and thin irregularly downward or show the anatomy of fossilised root traces Gradational alteration down from a sharp lithological contact like that between land surface and soil horizons Complex patterns of cracks and mineral replacements like those of soil clods (peds) and planar cutans. Classification Soil fossils are usually classified by USDA soil taxonomy. With the exception of some exceedingly old soils which have a clayey, grey-green horizon that is quite unlike any present soil and clearly formed in the absence of O₂, most fossil soils can be classified into one of the twelve orders recognised by this system. This is usually done by means of X-ray diffraction, which allows the various particles within the former soils to be analysed so that it can be seen to which order the soils correspond. Other methods for classifying soil fossils rely on geochemical analysis of the soil material, which allows the minerals in the soil to be identified. This is only useful where large amounts of the ancient soil are avai
https://en.wikipedia.org/wiki/Error-tolerant%20design
An error-tolerant design (or human-error-tolerant design) is one that does not unduly penalize user or human errors. It is the human equivalent of fault-tolerant design that allows equipment to continue functioning in the presence of hardware faults, such as a "limp-in" mode for an automobile electronics unit that would be employed if something like the oxygen sensor failed. Use of behavior shaping constraints to prevent errors Use of forcing functions or behavior-shaping constraints is one technique in error-tolerant design. An example is the interlock or lockout of reverse in the transmission of a moving car. This prevents errors, and prevention of errors is the most effective technique in error-tolerant design. The practice is known as poka-yoke in Japan, where it was introduced by Shigeo Shingo as part of the Toyota Production System. Mitigation of the effects of errors The next most effective technique in error-tolerant design is the mitigation or limitation of the effects of errors after they have been made. An example is a checking or confirmation function such as an "Are you sure" dialog box with the harmless option preselected in computer software for an action that could have severe consequences if made in error, such as deleting or overwriting files (although the consequence of inadvertent file deletion has been reduced from the DOS days by a concept like the trash can in Mac OS, which has been introduced in most GUI interfaces). Adding too great a mitigating factor can in some circumstances become a hindrance: where the confirmation becomes mechanical, it may become detrimental. For example, if a prompt is asked for every file in a batch delete, one may be tempted to simply agree to each prompt, even if a file is deleted accidentally. Another example is Google's use of spell checking on searches performed through their search engine. The spell checking minimises the problems caused by incorrect spelling by not only highlighting the error to th
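The "harmless option preselected" pattern is easy to show in code. A minimal Python sketch (illustrative; the function and prompt are invented here, not taken from any particular system):

    # Confirmation with the harmless choice as the default: a bare Enter,
    # the most likely mechanical response, does NOT trigger the destructive action.
    def confirm_delete(filename):
        answer = input(f"Delete {filename}? [y/N] ")   # capital N marks the default
        return answer.strip().lower() == "y"           # anything but explicit "y" means "no"

    if confirm_delete("report.txt"):
        print("deleting...")
    else:
        print("kept the file")

Requiring an explicit "y" means a habitual, unthinking keypress errs on the safe side, which is precisely the mitigation strategy described above.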
https://en.wikipedia.org/wiki/Birkhoff%27s%20theorem%20%28electromagnetism%29
In physics, in the context of electromagnetism, Birkhoff's theorem concerns spherically symmetric static solutions of Maxwell's field equations of electromagnetism. The theorem is due to George D. Birkhoff. It states that any spherically symmetric solution of the source-free Maxwell equations is necessarily static. Pappas (1984) gives two proofs of this theorem, using Maxwell's equations and Lie derivatives. It is a limiting case of Birkhoff's theorem (relativity) by taking the flat metric without backreaction. Derivation from Maxwell's equations The source-free Maxwell's equations state that $\nabla\cdot\mathbf{E}=0$, $\nabla\cdot\mathbf{B}=0$, $\nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t}$, $\nabla\times\mathbf{B}=\mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}$. Since the fields are spherically symmetric, they depend only on the radial distance $r$ in spherical coordinates. The field is purely radial, as non-radial components cannot be invariant under rotation, which would be necessary for symmetry. Therefore, we can rewrite the fields as $\mathbf{E}=E(r,t)\,\hat{\mathbf{r}}$ and $\mathbf{B}=B(r,t)\,\hat{\mathbf{r}}$. We find that the curls must be zero, since the curl of a purely radial field vanishes: $\nabla\times\big(E(r,t)\,\hat{\mathbf{r}}\big)=\nabla\times\big(B(r,t)\,\hat{\mathbf{r}}\big)=0$. Moreover, we can substitute into the source-free Maxwell equations to find that $\frac{\partial\mathbf{B}}{\partial t}=-\nabla\times\mathbf{E}=0$ and $\mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}=\nabla\times\mathbf{B}=0$. Simply dividing by the constant coefficients, we find that both the magnetic and electric field are static: $\frac{\partial\mathbf{E}}{\partial t}=0$, $\frac{\partial\mathbf{B}}{\partial t}=0$. Derivation using Lie derivatives Defining the electric 1-form $\epsilon=E_i\,dx^i$ and the magnetic 2-form $\beta=\tfrac{1}{2}B_{jk}\,dx^j\wedge dx^k$ on $\mathbb{R}^3$, and using the Hodge star operator $\star$, we can rewrite Maxwell's equations with these forms as $d{\star}\epsilon=0$, $d\beta=0$, $d\epsilon=-\partial_t\beta$, $d{\star}\beta=\mu_0\varepsilon_0\,\partial_t{\star}\epsilon$. The spherical symmetry condition requires that the Lie derivatives of $\epsilon$ and $\beta$ with respect to the vector fields $X$ that generate the rotations are zero: $\mathcal{L}_X\epsilon=\mathcal{L}_X\beta=0$, by the definition of the Lie derivative as the directional derivative along $X$. Therefore, $\epsilon$ is equivalent to itself under rotation, and since $dr$ is, up to multiplication by a function of the radius, the only rotation-invariant 1-form, we can write $\epsilon=E(r,t)\,dr$ for some function $E$; likewise the invariant 2-form gives $\beta=B(r,t)\,{\star}dr$ for a function $B$. Taking the exterior derivative of $\epsilon$, we find by definition that $d\epsilon=\partial_r E\,dr\wedge dr=0$. And using our Maxwell equation that $d\epsilon=-\partial_t\beta$, we get $\partial_t B=0$. Thus, we find that the magnetic field is static. Similarly, using the second rotational invariance equation, $d{\star}\beta=d(B\,dr)=0=\mu_0\varepsilon_0\,\partial_t{\star}\epsilon$, we can find that the electric field is static as well.
https://en.wikipedia.org/wiki/Lumber%20Cartel
The Lumber Cartel was a facetious conspiracy theory popularized on USENET that claimed anti-spammers were secretly paid agents of lumber companies. In November 1997, a participant on news.admin.net-abuse.email posted an essay to the newsgroup describing the conspiracy theory. The reasoning provided in the essay was that certain companies first destroy forests and make paper out of them, which is in turn used to send bulk mail. Since sending e-mail spam does not use paper at all, the essay argued, the lumber companies would want to stop it before it would surpass paper-based bulk mailing, and consequently only those in the pay of the lumber companies would be anti-spam. The rationale was based on disclaimers in certain spam messages that they were using electronic means in order to save paper. The joke eventually led to a club and numerous parody websites, most of which have long since disappeared. Gatherings of anti-spammers on Usenet began to ridicule proponents of this theory, and many participants in news.admin.net-abuse.email chose to dub themselves as members of "the Lumber Cartel" in their signatures, followed immediately by the acronymic disclaimer "TinLC" (There is no Lumber Cartel), reminiscent of the There Is No Cabal catchphrase. People were able to register with a website about the Lumber Cartel and were given a sequential membership number. That number was added to email sig files in news.admin.net-abuse.email and used on personal websites. There was no verification or requirement to receive the membership number. See also Culture jamming
https://en.wikipedia.org/wiki/Cesare%20Arzel%C3%A0
Cesare Arzelà (6 March 1847 – 15 March 1912) was an Italian mathematician who taught at the University of Bologna and is recognized for his contributions in the theory of functions, particularly for his characterization of sequences of continuous functions, generalizing the one given earlier by Giulio Ascoli in the Arzelà–Ascoli theorem. Life He was a pupil of the Scuola Normale Superiore of Pisa, where he graduated in 1869. Arzelà came from a poor household; therefore he could not start his study until 1871, when he studied in Pisa under Enrico Betti and Ulisse Dini. He was working in Florence from 1875, and in 1878 obtained the Chair of Algebra at the University of Palermo. After that he became a professor in 1880 at the University of Bologna in the department of analysis. He conducted research in the field of the theory of functions. His most famous student was Leonida Tonelli. In 1889 he generalized the Ascoli theorem to the Arzelà–Ascoli theorem, an important theorem in the theory of functions. He was a member of the Accademia Nazionale dei Lincei, and of several other academies. See also Total variation
https://en.wikipedia.org/wiki/John%20Wilson%20%28English%20judge%29
Sir John Wilson (6 August 1741, Applethwaite, Westmorland – 18 October 1793, Kendal, Westmorland) was an English mathematician and judge. Wilson's theorem is named after him. Wilson attended school in Staveley, Cumbria before going up to Peterhouse, Cambridge in 1757, where he was a student of Edward Waring. He was Senior Wrangler in 1761. He was later knighted, and became a Fellow of the Royal Society in 1782. He was Judge of Common Pleas from 1786 until his death in 1793. See also Wilson prime
https://en.wikipedia.org/wiki/Method%20of%20loci
The method of loci is a strategy for memory enhancement, which uses visualizations of familiar spatial environments in order to enhance the recall of information. The method of loci is also known as the memory journey, memory palace, journey method, memory spaces, or mind palace technique. This method is a mnemonic device adopted in ancient Roman and Greek rhetorical treatises (in the anonymous Rhetorica ad Herennium, Cicero's De Oratore, and Quintilian's Institutio Oratoria). Many memory contest champions report using this technique to recall faces, digits, and lists of words. The term is most often found in specialised works on psychology, neurobiology, and memory, though it was used in the same general way at least as early as the first half of the nineteenth century in works on rhetoric, logic, and philosophy. John O'Keefe and Lynn Nadel refer to:... "the method of loci", an imaginal technique known to the ancient Greeks and Romans and described by Yates (1966) in her book The Art of Memory as well as by Luria (1969). In this technique the subject memorizes the layout of some building, or the arrangement of shops on a street, or any geographical entity which is composed of a number of discrete loci. When desiring to remember a set of items the subject 'walks' through these loci in their imagination and commits an item to each one by forming an image between the item and any feature of that locus. Retrieval of items is achieved by 'walking' through the loci, allowing the latter to activate the desired items. The efficacy of this technique has been well established (Ross and Lawrence 1968, Crovitz 1969, 1971, Briggs, Hawkins and Crovitz 1970, Lea 1975), as is the minimal interference seen with its use. The items to be remembered in this mnemonic system are mentally associated with specific physical locations. The method relies on memorized spatial relationships to establish order and recollect memorial content. It is also known as the "Journey Method", used fo
https://en.wikipedia.org/wiki/Schools%20Interoperability%20Framework
The Schools Interoperability Framework, Systems Interoperability Framework (UK), or SIF, is a data-sharing open specification for academic institutions from kindergarten through workforce. This specification is being used primarily in the United States, Canada, the UK, Australia, and New Zealand; however, it is increasingly being implemented in India and elsewhere. The specification comprises two parts: an XML specification for modeling educational data which is specific to the educational locale (such as North America, Australia or the UK), and a service-oriented architecture (SOA) based on both direct and brokered RESTful models for sharing that data between institutions, which is international and shared between the locales. SIF is not a product, but an industry initiative that enables diverse applications to interact and share data. SIF was estimated to have been used in more than 48 US states and 6 countries, supporting five million students. The specification was started and is maintained by its specification body, the Schools Interoperability Framework Association, renamed the Access For Learning Community (A4L) in 2015. History Traditionally, the standalone applications used by public school districts have the limitation of data isolation; that is, it is difficult to access and share their data. This often results in redundant data entry, data integrity problems, and inefficient or incomplete reporting. In such cases, a student's information can appear in multiple places but may not be identical, for example, or decision makers may be working with incomplete or inaccurate information. Many district and site technology coordinators also experience an increase in technical support problems from maintaining numerous proprietary systems. SIF was created to solve these issues. The Schools Interoperability Framework (SIF) began as an initiative initially championed by Microsoft to create "a blueprint for educational software interoperability and
https://en.wikipedia.org/wiki/Zanac
Zanac is a shoot 'em up video game developed by Compile and published in Japan by Pony Canyon and in North America by FCI. It was released for the MSX computer, the Family Computer Disk System, the Nintendo Entertainment System, and for the Virtual Console. It was reworked for the MSX2 computer as Zanac EX and for the PlayStation as Zanac X Zanac. Players fly a lone starfighter, dubbed the AFX-6502 Zanac, through twelve levels; their goal is to destroy the System—a part-organic, part-mechanical entity bent on destroying mankind. Zanac was developed by the core developers of Compile, including Masamitsu "Moo" Niitani, Koji "Janus" Teramoto, and Takayuki "Jemini" Hirono. All of these developers went on to make other popular similarly based games such as The Guardian Legend, Blazing Lazers, and the Puyo Puyo series. The game is known for its intense and fast-paced gameplay, level of difficulty, and music which seems to match the pace of the game. It has been praised for its unique adaptive artificial intelligence, in which the game automatically adjusts the difficulty level according to the player's skill level, rate of fire and the ship's current defensive status/capability. Gameplay In Zanac, the player controls the spaceship AFX-6502 Zanac as it flies through various planets, space stations, and outer space and through an armada of enemies comprising the defenses of the game's main antagonist—the "System". The player must fight through twelve levels and destroy the System and its defenses. The objective is to shoot down enemies and projectiles and accumulate points. Players start with three lives, and they lose a life if they get hit by an enemy or projectile. After losing a life, gameplay continues with the player reappearing on the screen, stripped of all previously accumulated power-ups and temporarily invincible for a moment. The game ends when all the player's lives have been lost or after completing the twelfth and fin
https://en.wikipedia.org/wiki/Risk%20of%20infection
Risk of infection is a nursing diagnosis which is defined as "the state in which an individual is at risk to be invaded by an opportunistic or pathogenic agent (virus, fungus, bacteria, protozoa, or other parasite) from endogenous or exogenous sources" and was approved by NANDA in 1986. Although anyone can become infected by a pathogen, patients with this diagnosis are at an elevated risk and extra infection controls should be considered. Endogenous sources The risk of infection depends on a number of endogenous sources. Skin damage from incision, as well as very young or old age, can increase a patient's risk of infection. Examples of risk factors include a compromised immune system secondary to disease, compromised circulation secondary to peripheral vascular disease, compromised skin integrity secondary to surgery, or repeated contact with contagious agents. Assessment The patient should be asked about a history of repeated infections, symptoms of infection, recent travel to high-risk areas, and their immunization history. They should also be assessed for objective signs such as the presence of wounds, fever, or signs of nutritional deficiency. Intervention The specific nursing interventions will depend on the nature and severity of the risk. Patients should be taught how to recognize the signs of infection and how to reduce their risk. Surgery is a frequent risk factor for infection and a physician may prescribe antibiotics prophylactically. Immunization is another common medical intervention for those who are at high risk for infection. Hand washing is the best way to break the chain of infection.
https://en.wikipedia.org/wiki/Nielsen%E2%80%93Thurston%20classification
In mathematics, Thurston's classification theorem characterizes homeomorphisms of a compact orientable surface. William Thurston's theorem completes the work initiated by Jakob Nielsen. Given a homeomorphism f : S → S, there is a map g isotopic to f such that at least one of the following holds: g is periodic, i.e. some power of g is the identity; g preserves some finite union of disjoint simple closed curves on S (in this case, g is called reducible); or g is pseudo-Anosov. The case where S is a torus (i.e., a surface whose genus is one) is handled separately (see torus bundle) and was known before Thurston's work. If the genus of S is two or greater, then S is naturally hyperbolic, and the tools of Teichmüller theory become useful. In what follows, we assume S has genus at least two, as this is the case Thurston considered. (Note, however, that the cases where S has boundary or is not orientable are definitely still of interest.) The three types in this classification are not mutually exclusive, though a pseudo-Anosov homeomorphism is never periodic or reducible. A reducible homeomorphism g can be further analyzed by cutting the surface along the preserved union of simple closed curves Γ. Each of the resulting compact surfaces with boundary is acted upon by some power (i.e. iterated composition) of g, and the classification can again be applied to this homeomorphism. The mapping class group for surfaces of higher genus Thurston's classification applies to homeomorphisms of orientable surfaces of genus ≥ 2, but the type of a homeomorphism only depends on its associated element of the mapping class group Mod(S). In fact, the proof of the classification theorem leads to a canonical representative of each mapping class with good geometric properties. For example: When g is periodic, there is an element of its mapping class that is an isometry of a hyperbolic structure on S. When g is pseudo-Anosov, there is an element of its mapping class that preserves a pair of tr
https://en.wikipedia.org/wiki/Nutritional%20yeast
Nutritional yeast (also known as nooch) is a deactivated yeast, often a strain of Saccharomyces cerevisiae, that is sold commercially as a food product. It is sold in the form of yellow flakes, granules, or powder and can be found in the bulk aisle of most natural food stores. It is popular with vegans and vegetarians and may be used as an ingredient in recipes or as a condiment. It is a significant source of some B-complex vitamins and contains trace amounts of several other vitamins and minerals. Sometimes nutritional yeast is fortified with vitamin B12, another reason it is popular with vegans. Nutritional yeast has a strong flavor that is described as nutty or cheesy, which makes it popular as an ingredient in cheese substitutes. It is often used by vegans in place of cheese in, for example, mashed and fried potatoes or scrambled tofu, or as a topping for popcorn. In Australia, it is sometimes sold as "savoury yeast flakes". In New Zealand, it has long been known as Brufax. Though "nutritional yeast" usually refers to commercial products, inadequately fed prisoners of war have used "home-grown" yeast to prevent vitamin deficiency. Nutritional yeast is a whole-cell inactive yeast that contains both soluble and insoluble parts, which is different from yeast extract. Yeast extract is made by centrifuging inactive nutritional yeast and concentrating the water-soluble yeast cell proteins which are rich in glutamic acid, nucleotides, and peptides, the flavor compounds responsible for umami taste. Commercial production Nutritional yeast is produced by culturing yeast in a nutrient medium for several days. The primary ingredient in the growth medium is glucose, often from either sugarcane or beet molasses. When the yeast is ready, it is killed with heat and then harvested, washed, dried and packaged. The species of yeast used is often a strain of Saccharomyces cerevisiae. The strains are cultured and selected for desirable characteristics and often exhibit a differ
https://en.wikipedia.org/wiki/Olga%20Taussky-Todd
Olga Taussky-Todd (August 30, 1906 – October 7, 1995) was an Austrian and later Czech-American mathematician. She published more than 300 research papers on algebraic number theory, integral matrices, and matrices in algebra and analysis. Early life Olga Taussky was born into a Jewish family in what is now Olomouc, Czech Republic, on August 30, 1906. Her father, Julius David Taussky, was an industrial chemist, and her mother, Ida Pollach, was a housewife. She was the second of three children. Her father preferred that, if his daughters had careers, they be in the arts, but they all went into the sciences. Ilona, three years older than Olga, became a consulting chemist in the glyceride industry, and Hertha, three years younger than Olga, became a pharmacist and later a clinical chemist at Cornell University Medical College in New York City. When Olga was three, the family moved to Vienna, where they lived until the middle of World War I. Later, her father accepted a position as director of a vinegar factory in Linz in Upper Austria. At a young age, Taussky displayed a keen interest in mathematics. After her father died during her last year at school, she worked through the summer at his vinegar factory and was pressured by her family to study chemistry in order to take over his work. Her elder sister, however, qualified in chemistry and took over their father's work instead. In the "Red Vienna" of the day, the Social Democratic Party of Austria encouraged women to pursue higher education, and Taussky enrolled at the University of Vienna in the fall of 1925 to study mathematics. Career Taussky worked first in algebraic number theory, with a doctorate at the University of Vienna supervised by Philipp Furtwängler, a number theorist from Germany. During that time, she attended meetings of the so-called Vienna Circle, the group of philosophers and logicians developing the philosophy of logical positivism. Taussky, like Olga Hahn-Neurath and Rose Rand, was one of
https://en.wikipedia.org/wiki/Beam%20search
In computer science, beam search is a heuristic search algorithm that explores a graph by expanding the most promising nodes in a limited set. Beam search is an optimization of best-first search that reduces its memory requirements. Best-first search is a graph search which orders all partial solutions (states) according to some heuristic. In beam search, however, only a predetermined number of the best partial solutions are kept as candidates. It is thus a greedy algorithm. The term "beam search" was coined by Raj Reddy of Carnegie Mellon University in 1977. Details Beam search uses breadth-first search to build its search tree. At each level of the tree, it generates all successors of the states at the current level, sorting them in increasing order of heuristic cost. However, it stores only a predetermined number of the best states at each level, called the beam width. Only those states are expanded next. The greater the beam width, the fewer states are pruned. With an infinite beam width, no states are pruned and beam search is identical to breadth-first search. The beam width bounds the memory required to perform the search. Since a goal state could potentially be pruned, beam search sacrifices completeness (the guarantee that an algorithm will terminate with a solution, if one exists). Beam search is not optimal (that is, there is no guarantee that it will find the best solution). Uses Beam search is most often used to maintain tractability in large systems with insufficient memory to store the entire search tree. For example, it has been used in many machine translation systems. (The state of the art now primarily uses methods based on neural machine translation.) To select the best translation, each part is processed, and many different ways of translating the words appear. The top-scoring translations according to their sentence structures are kept, and the rest are discarded. The translator then evaluates the translations according to a given criterion,
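To make the level-by-level pruning concrete, the following is a minimal Python sketch of the procedure just described. The parameter names (`successors`, `heuristic`, `is_goal`, `beam_width`) are illustrative choices, not terminology from the source.

```python
import heapq

def beam_search(start, successors, heuristic, is_goal, beam_width):
    """Breadth-first search that keeps only the `beam_width` states
    with the lowest heuristic cost at each level of the tree."""
    if is_goal(start):
        return start
    beam = [start]
    while beam:
        # Generate all successors of every state at the current level.
        candidates = []
        for state in beam:
            for nxt in successors(state):
                if is_goal(nxt):
                    return nxt
                candidates.append(nxt)
        # Prune: keep only the beam_width most promising states.
        # A goal lying among the pruned states is lost, which is why
        # beam search is neither complete nor optimal.
        beam = heapq.nsmallest(beam_width, candidates, key=heuristic)
    return None

# Toy usage: reach 42 from 1, where each state n has successors n + 1
# and 2 * n, and the heuristic is the distance to the goal.
print(beam_search(
    start=1,
    successors=lambda n: [n + 1, 2 * n],
    heuristic=lambda n: abs(42 - n),
    is_goal=lambda n: n == 42,
    beam_width=3,
))  # prints 42
```

With an unbounded `beam_width`, the pruning step keeps every candidate and the procedure reduces to ordinary breadth-first search, matching the statement above.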
https://en.wikipedia.org/wiki/Sperm%20bank
A sperm bank, semen bank, or cryobank is a facility or enterprise which purchases, stores, and sells human semen. The semen is produced and sold by men who are known as sperm donors. The sperm is purchased by or for other persons for the purpose of achieving a pregnancy or pregnancies other than by a sexual partner. Sperm sold by a sperm donor is known as donor sperm. A sperm bank may be a separate entity supplying donor sperm to individuals or to fertility centers or clinics, or it may be a facility run by a clinic or other medical establishment mainly or exclusively for its patients or customers. A pregnancy may be achieved using donor sperm for insemination with outcomes similar to those of sexual intercourse. By using sperm from a donor rather than from the sperm recipient's partner, the process is a form of third-party reproduction. In the 21st century, artificial insemination with donor sperm from a sperm bank is most commonly used by individuals with no male partner, i.e. single women and coupled lesbians. A sperm donor must generally meet specific requirements regarding age and screening for medical history. In the United States, sperm banks are regulated as Human Cell and Tissue or Cell and Tissue Bank Product (HCT/Ps) establishments by the Food and Drug Administration. Many states also have regulations in addition to those imposed by the FDA. In the European Union, a sperm bank must have a license according to the EU Tissue Directive. In the United Kingdom, sperm banks are regulated by the Human Fertilisation and Embryology Authority. General The first sperm banks began as early as 1964 in Iowa, USA, and Tokyo, Japan, and were established as a medical therapeutic approach to support individuals who were infertile. As a result, over 1 million babies were born within 40 years. Sperm banks provide the opportunity for individuals to have a child who otherwise would not be able to conceive naturally. This includes, but is not limited to, single women and same-sex couples.
https://en.wikipedia.org/wiki/Tinnitus%20retraining%20therapy
Tinnitus retraining therapy (TRT) is a form of habituation therapy designed to help people who experience tinnitus—a ringing, buzzing, hissing, or other sound heard when no external sound source is present. Two key components of TRT follow directly from the neurophysiological model of tinnitus: directive counseling aims to help the sufferer reclassify tinnitus into a category of neutral signals, and sound therapy weakens tinnitus-related neuronal activity. The goal of TRT is to allow a person to manage their reaction to their tinnitus: habituating to it and restoring unaffected perception. Neither tinnitus retraining therapy nor any other therapy reduces or eliminates tinnitus itself. An alternative to TRT is tinnitus masking: the use of noise, music, or other environmental sounds to obscure or mask the tinnitus. Hearing aids can partially mask the condition. A review of tinnitus retraining therapy trials indicates that it may be more effective than tinnitus masking. Applicability Not everyone who experiences tinnitus is significantly bothered by it. However, some experience annoyance, anxiety, panic, loss of sleep, or difficulty concentrating. The distress of tinnitus is strongly associated with various psychological factors; the loudness, duration, and other characteristics of the tinnitus symptoms are secondary. TRT may offer real though moderate improvement in tinnitus suffering for adults with moderate-to-severe tinnitus, in the absence of hyperacusis, significant hearing loss, or depression. Not everyone is a good candidate for TRT. Those most likely to have a favorable outcome from TRT are those with lower loudness of tinnitus, higher pitch of tinnitus, shorter duration of tinnitus since onset, lower hearing thresholds (i.e. better hearing), a high Tinnitus Handicap Inventory (THI) score, and a positive attitude toward therapy. Other secondary hearing symptoms Although no studies have been conducted, TRT has been used to treat hyperacusis, misophonia, and phonophobia. Cause P
https://en.wikipedia.org/wiki/Chemical%20biology
Chemical biology is a scientific discipline spanning the fields of chemistry and biology. The discipline involves the application of chemical techniques, analysis, and often small molecules produced through synthetic chemistry to the study and manipulation of biological systems. In contrast to biochemistry, which involves the study of the chemistry of biomolecules and the regulation of biochemical pathways within and between cells, chemical biology deals with chemistry applied to biology (synthesis of biomolecules, simulation of biological systems, etc.). Introduction Some forms of chemical biology attempt to answer biological questions by studying biological systems at the chemical level. In contrast to research using biochemistry, genetics, or molecular biology, where mutagenesis can provide a new version of the organism, cell, or biomolecule of interest, chemical biology probes systems in vitro and in vivo with small molecules that have been designed for a specific purpose or identified on the basis of biochemical or cell-based screening (see chemical genetics). Chemical biology is one of several interdisciplinary sciences that tend to differ from older, reductionist fields and whose goal is to achieve a description of scientific holism. Chemical biology has scientific, historical, and philosophical roots in medicinal chemistry, supramolecular chemistry, bioorganic chemistry, pharmacology, genetics, biochemistry, and metabolic engineering. Systems of interest Enrichment techniques for proteomics Chemical biologists work to improve proteomics through the development of enrichment strategies, chemical affinity tags, and new probes. Samples for proteomics often contain many peptide sequences, and the sequence of interest may be highly represented or of low abundance, which creates a barrier to its detection. Chemical biology methods can reduce sample complexity through selective enrichment using affinity chromatography. This involves targeting a peptide with a di
https://en.wikipedia.org/wiki/Cycling%20probe%20technology
Cycling probe technology (CPT) is a molecular biological technique for detecting specific DNA sequences. CPT operates under isothermal conditions. In some applications, CPT offers an alternative to PCR. However, unlike PCR, CPT does not generate multiple copies of the target DNA itself, and the amplification of the signal is linear, in contrast to the exponential amplification of the target DNA in PCR. CPT uses a sequence-specific chimeric probe which hybridizes to a complementary target DNA sequence and becomes a substrate for RNase H. Cleavage occurs at the RNA internucleotide linkages and results in dissociation of the probe from the target, thereby making the target available for the next probe molecule. Integrated electrokinetic systems have been developed for use in CPT. Probe Cycling probe technology makes use of a chimeric nucleic acid probe to detect the presence of a particular DNA sequence. The chimeric probe consists of an RNA segment sandwiched between two DNA segments. The RNA segment contains 4 contiguous purine nucleotides. The probes should be less than 30 nucleotides in length and designed to minimize intra-probe and inter-probe interactions. Process Cycling probe technology utilizes a cyclic, isothermal process that begins with the hybridization of the chimeric probe with the target DNA. Once hybridized, the probe becomes a suitable substrate for RNase H. RNase H, an endonuclease, cleaves the RNA portion of the probe, resulting in two chimeric fragments. The melting temperature (Tm) of the newly cleaved fragments is lower than that of the original probe. Because the CPT reaction is held isothermally at a temperature above the Tm of the cleaved fragments but below that of the intact probe, the cleaved fragments dissociate from the target DNA. Once dissociated, the target DNA is free to hybridize with a new probe, beginning the cycle again. After the fragments have been cleaved and dissociated, they become detectable. A common strategy for detecting the fragments involves fl
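The probe-design constraints stated above (an RNA segment of four contiguous purines sandwiched between two DNA segments, with a total length under 30 nucleotides) lend themselves to a simple programmatic check. The Python sketch below is purely illustrative; the function name and the single-letter sequence encoding are assumptions, not from the source, and the RNA segment is taken here to be exactly the four purines.

```python
def is_valid_cpt_probe(dna5: str, rna: str, dna3: str) -> bool:
    """Check a chimeric CPT probe against the design rules stated above.

    dna5, dna3: flanking DNA segments (bases A, C, G, T)
    rna:        internal RNA segment (bases A, C, G, U)
    """
    # The RNA segment must be sandwiched between two DNA segments.
    if not dna5 or not dna3:
        return False
    # The RNA segment: 4 contiguous purine nucleotides (A or G).
    if len(rna) != 4 or not all(base in "AG" for base in rna):
        return False
    # The whole probe should be less than 30 nucleotides in length.
    if len(dna5) + len(rna) + len(dna3) >= 30:
        return False
    return True

# Example: a 22-nt probe with an internal poly-A RNA segment.
print(is_valid_cpt_probe("GCTAGCTAG", "AAAA", "CGATCGATC"))  # True
print(is_valid_cpt_probe("GCTAGCTAG", "AUCG", "CGATCGATC"))  # False: U and C are not purines
```

(Screening for the intra-probe and inter-probe interactions also mentioned above would require secondary-structure prediction and is beyond this sketch.)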
https://en.wikipedia.org/wiki/Adaptive%20behavior
Adaptive behavior is behavior that enables a person (usually used in the context of children) to cope in their environment with the greatest success and least conflict with others. It is a term used in the areas of psychology and special education. Adaptive behavior relates to everyday skills or tasks that the "average" person is able to complete, similar to the term life skills. Nonconstructive or disruptive social or personal behaviors can sometimes be used to achieve a constructive outcome. For example, a constant repetitive action could be re-focused on something that creates or builds something; in other words, the behavior can be adapted to something else. In contrast, maladaptive behavior is a type of behavior that is often used to reduce one's anxiety, but the result is dysfunctional and non-productive coping. For example, avoiding situations because of unrealistic fears may initially reduce anxiety, but it is non-productive in alleviating the actual problem in the long term. Maladaptive behavior is frequently used as an indicator of abnormality or mental dysfunction, since its assessment is relatively free from subjectivity. However, many behaviors considered moral can be maladaptive, such as dissent or abstinence. Adaptive behavior reflects an individual's social and practical competence to meet the demands of everyday living. Behavioral patterns change throughout a person's development, across life settings and social constructs, with the evolution of personal values, and with the expectations of others. It is important to assess adaptive behavior in order to determine how well an individual functions in daily life: vocationally, socially, and educationally. Examples A child born with cerebral palsy will most likely have a form of hemiparesis or hemiplegia (the weakening, or loss of use, of one side of the body). In order to adapt to one's environment, the child may use the affected limbs as helpers, in some cases even adapt the use of their mouth and teeth as a tool u