| source | text |
|---|---|
https://en.wikipedia.org/wiki/Texas%20Instruments%20LPC%20Speech%20Chips | The Texas Instruments LPC Speech Chips are a series of speech synthesizer digital signal processor integrated circuits created by Texas Instruments beginning in 1978. They continued to be developed and marketed for many years, though the speech department moved around several times within TI until finally dissolving in late 2001. The rights to the speech-specific subset of the MSP line, the last remaining line of TI speech products as of 2001, were sold to Sensory, Inc. in October 2001.
Theory
Speech data is stored through pitch-excited linear predictive coding (PE-LPC), where words are created by a lattice filter, selectably fed by either an excitation ROM (containing a glottal pulse waveform) or an LFSR (linear-feedback shift register) noise generator. Linear predictive coding achieves a vast reduction in data volume needed to recreate intelligible speech data.
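As a rough illustration of pitch-excited LPC synthesis, the sketch below drives an all-pole lattice filter with either a periodic pulse train (voiced) or random noise standing in for the LFSR (unvoiced). The reflection coefficients, pitch period, and frame length are made-up values; the real chips used fixed-point arithmetic with time-varying, interpolated frame parameters.

```python
import random

def lattice_synthesize(excitation, k):
    """All-pole lattice synthesis filter driven by an excitation signal.

    k -- reflection coefficients (|k[i]| < 1 for stability), playing the
         role of the LPC parameters fed to the chip's lattice filter.
    """
    b = [0.0] * (len(k) + 1)            # delayed backward prediction errors
    out = []
    for x in excitation:
        f = x
        for i in range(len(k) - 1, -1, -1):
            f -= k[i] * b[i]
            b[i + 1] = b[i] + k[i] * f
        b[0] = f                        # output also feeds the first delay
        out.append(f)
    return out

# Illustrative (made-up) parameters: a ten-stage filter, one voiced and one
# unvoiced frame of 256 samples each.
k = [0.6, -0.4, 0.3, -0.2, 0.1, -0.08, 0.06, -0.04, 0.02, -0.01]
pitch_period, frame = 64, 256
voiced = [1.0 if n % pitch_period == 0 else 0.0 for n in range(frame)]   # glottal pulse train
unvoiced = [random.uniform(-0.1, 0.1) for _ in range(frame)]             # noise excitation

samples = lattice_synthesize(voiced, k) + lattice_synthesize(unvoiced, k)
print(len(samples), "samples synthesized")
```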
History
The TMC0280/TMS5100 was the first self-contained LPC speech synthesizer IC ever made. It was designed for Texas Instruments by Larry Brantingham, Paul S. Breedlove, Richard H. Wiggins, and Gene A. Frantz and its silicon was laid out by Larry Brantingham. The chip was designed for the 'Spelling Bee' project at TI, which later became the Speak & Spell. A speech-less 'Spelling B' was released at the same time as the Speak & Spell.
All TI LPC speech chips until the TSP50cxx series used PMOS architecture, and LPC-10 encoding in a special TI-specific format.
Chips in the TI LPC speech series were labeled as TMCxxxx or CDxxxx when used by TI's consumer product division, or labeled as TMS5xxx (later TSP5xxx) when sold to 3rd parties.
TI LPC Speech chip family
1978
TMS5100 (TMC0281, internal TI name is '0280' hence chip is sometimes labeled TMC0280): First LPC speech chip. Used a custom 4-bit serial interface using TMS6100 or TMS6125 mask ROM ICs; used on all non-super versions of the Speak & Spell except for the 1980 UK version, which used the TMC0280/CD2801 below. Publicly sold as TMS5100. It was also |
https://en.wikipedia.org/wiki/Pinocytosis | In cellular biology, pinocytosis, otherwise known as fluid endocytosis and bulk-phase pinocytosis, is a mode of endocytosis in which small molecules dissolved in extracellular fluid are brought into the cell through an invagination of the cell membrane, resulting in their containment within a small vesicle inside the cell. These pinocytotic vesicles then typically fuse with early endosomes to hydrolyze (break down) the particles.
Pinocytosis is variably subdivided into categories depending on the molecular mechanism and the fate of the internalized molecules.
Function
In humans, this process occurs primarily for absorption of fat droplets. In endocytosis the cell plasma membrane extends and folds around desired extracellular material, forming a pouch that pinches off creating an internalized vesicle. The invaginated pinocytosis vesicles are much smaller than those generated by phagocytosis. The vesicles eventually fuse with the lysosome, whereupon the vesicle contents are digested. Pinocytosis involves a considerable investment of cellular energy in the form of ATP.
Pinocytosis and ATP
Pinocytosis is used primarily for clearing extracellular fluids (ECF) and as part of immune surveillance. In contrast to phagocytosis, it generates very small amounts of ATP from the wastes of alternative substances such as lipids (fat). Unlike receptor-mediated endocytosis, pinocytosis is nonspecific in the substances that it transports: the cell takes in surrounding fluids, including all solutes present.
Etymology and pronunciation
The word pinocytosis uses combining forms of pino- + cyto- + -osis, all Neo-Latin from Greek, reflecting píno, "to drink", and cytosis. The term was proposed by W. H. Lewis in 1931.
Non-specific, adsorptive pinocytosis
Non-specific, adsorptive pinocytosis is a form of endocytosis, a process in which small particles are taken in by a cell by splitting off small vesicles from the cell membrane. Cationic proteins bind to the negative cell surface and |
https://en.wikipedia.org/wiki/Vampire%20Killer | Vampire Killer is a platform video game developed and published by Konami for the MSX2 in 1986. It is a parallel version of the original Castlevania, which debuted a month earlier for the Famicom Disk System under the same Japanese title. However, the MSX2 version was localized first in Europe and was published without the Castlevania branding that the franchise would start using abroad in 1987 when the NES version was released in North America (where neither Vampire Killer nor the MSX2 platform were released). It was released on the Wii U's Virtual Console on December 17, 2014 in Japan.
Like in Castlevania, the player controls vampire hunter Simon Belmont, who ventures into Dracula's castle armed with a mystical whip inherited from his father, in order to slay the evil count.
Gameplay
While Vampire Killer shares the same premise, soundtrack, characters and locations as the original Castlevania, the structure of the game and its play mechanics differ significantly from its NES counterpart. Like Castlevania, Vampire Killer consists of 18 stages, with a boss encounter at the end of every third stage. But in contrast to the linear level designs in Castlevania, Vampire Killer features more labyrinth-like stages, requiring the player to seek out the exit to the next stage and find the skeleton key required to unlock it. Due to the hardware limitations of the MSX2, Vampire Killer uses flip screens instead of scrolling. The game can be played with a keyboard or a game controller.
Items and weapons can be obtained by breaking through candle stands and certain walls like in the NES version, and by purchasing them from merchants hidden throughout the castle or by unlocking treasure chests using keys. Simon's default whip can be replaced with one of four weapons: a chain whip, throwing daggers, a battle ax, and a battle cross - the latter two both function like a boomerang and must be retrieved on their return path if the player wishes to preserve them.
|
https://en.wikipedia.org/wiki/Pathogenicity%20island | Pathogenicity islands (PAIs), as termed in 1990, are a distinct class of genomic islands acquired by microorganisms through horizontal gene transfer. Pathogenicity islands are found in both animal and plant pathogens. Additionally, PAIs are found in both gram-positive and gram-negative bacteria. They are transferred through horizontal gene transfer events such as transfer by a plasmid, phage, or conjugative transposon. Therefore, PAIs contribute to microorganisms' ability to evolve.
One species of bacteria may have more than one PAI. For example, Salmonella has at least five.
An analogous genomic structure in rhizobia is termed a symbiosis island.
Properties
Pathogenicity islands (PAIs) are gene clusters incorporated in the genome, chromosomally or extrachromosomally, of pathogenic organisms, but are usually absent from nonpathogenic organisms of the same or closely related species. They may be located on a bacterial chromosome or may be transferred within a plasmid or can be found in bacteriophage genomes. The GC-content and codon usage of pathogenicity islands often differ from those of the rest of the genome, potentially aiding in their detection within a given DNA sequence, unless the donor and recipient of the PAI have similar GC-content.
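As a toy illustration of the GC-content signal mentioned above, the sketch below scans a made-up sequence with a sliding window and flags windows that deviate strongly from the genome-wide average; real island detection also weighs codon usage, flanking repeats, and mobility genes.

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA string."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def gc_scan(genome, window=50, step=10):
    """Yield (start position, GC fraction) over sliding windows."""
    for start in range(0, len(genome) - window + 1, step):
        yield start, gc_content(genome[start:start + window])

# Made-up example: an AT-rich backbone with a GC-rich insert in the middle.
genome = "AT" * 100 + "GC" * 50 + "AT" * 100
baseline = gc_content(genome)
for pos, gc in gc_scan(genome):
    if abs(gc - baseline) > 0.2:        # window stands out against the average
        print(f"window at {pos}: GC = {gc:.2f} (genome average {baseline:.2f})")
```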
PAIs are discrete genetic units flanked by direct repeats, insertion sequences or tRNA genes, which act as sites for recombination into the DNA. Cryptic mobility genes may also be present, indicating the provenance as transduction. PAIs are flanked by direct repeats; the sequence of bases at two ends of the inserted sequence are the same. They carry functional genes, such as integrases, transposases, or parts of insertion sequences, to enable insertion into host DNA. PAIs are often associated with tRNA genes, which target sites for this integration event. They can be transferred as a single unit to new bacterial cells, thus conferring virulence to formerly benign strains.
PAIs, a type of mobile genetic |
https://en.wikipedia.org/wiki/Computer%20experiment | A computer experiment or simulation experiment is an experiment used to study a computer simulation, also referred to as an in silico system. This area includes computational physics, computational chemistry, computational biology and other similar disciplines.
Background
Computer simulations are constructed to emulate a physical system. Because these are meant to replicate some aspect of a system in detail, they often do not yield an analytic solution. Therefore, methods such as discrete event simulation or finite element solvers are used. A computer model is used to make inferences about the system it replicates. For example, climate models are often used because experimentation on an Earth-sized object is impossible.
Objectives
Computer experiments have been employed with many purposes in mind. Some of those include:
Uncertainty quantification: Characterize the uncertainty present in a computer simulation arising from unknowns during the computer simulation's construction.
Inverse problems: Discover the underlying properties of the system from the physical data.
Bias correction: Use physical data to correct for bias in the simulation.
Data assimilation: Combine multiple simulations and physical data sources into a complete predictive model.
Systems design: Find inputs that result in optimal system performance measures.
Computer simulation modeling
Modeling of computer experiments typically uses a Bayesian framework. Bayesian statistics is an interpretation of the field of statistics where all evidence about the true state of the world is explicitly expressed in the form of probabilities. In the realm of computer experiments, the Bayesian interpretation would imply we must form a prior distribution that represents our prior belief on the structure of the computer model. The use of this philosophy for computer experiments started in the 1980s and is nicely summarized by Sacks et al. (1989). While the Bayesian approach is widely used, freque |
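To make the surrogate-modelling idea concrete, here is a minimal numpy sketch of a Gaussian-process emulator fitted to a handful of runs of a toy "simulator". The simulator, design points, and kernel hyper-parameters are all made up for illustration; a real analysis would also estimate hyper-parameters and report predictive uncertainty, not just the posterior mean.

```python
import numpy as np

def expensive_simulator(x):
    """Stand-in for a costly computer code (made up for illustration)."""
    return np.sin(3.0 * x) + 0.5 * x

def rbf(a, b, length=0.4, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# A small space-filling design and the corresponding simulator runs.
X = np.linspace(0.0, 2.0, 6)
y = expensive_simulator(X)

# GP posterior mean at new inputs, with a small jitter for numerical stability.
Xnew = np.linspace(0.0, 2.0, 101)
K = rbf(X, X) + 1e-8 * np.eye(len(X))
mean = rbf(Xnew, X) @ np.linalg.solve(K, y)

print("max |emulator - simulator| on the grid:",
      np.max(np.abs(mean - expensive_simulator(Xnew))))
```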
https://en.wikipedia.org/wiki/Applix%201616 | The Applix 1616 was a kit computer with a Motorola 68000 CPU, produced by a small company called Applix in Sydney, Australia, from 1986 to the early 1990s. It ran a custom multitasking multiuser operating system that was resident in ROM. A version of Minix was also ported to the 1616, as was the MGR Window System. Andrew Morton, designer of the 1616 and one of the founders of Applix, later became the maintainer of the 2.6 version of the Linux kernel.
History
Paul Berger and Andrew Morton formed the Australian company Applix Pty. Ltd. in approximately 1984 to sell a Z80 card they had developed for the Apple IIc that allowed it to run CP/M. This product was not a commercial success, but Paul later proposed they develop a Motorola 68000-based personal computer for sale in kit form.
The project was presented to Jon Fairall, then editor of the Australia and New Zealand electronics magazine Electronics Today International, and in December 1986, the first of four construction articles was published as "Project 1616", with the series concluding in June 1987. In October and November 1987, a disk controller card was also published as "Project 1617".
Over the next decade, about 400 1616s were sold.
Applix Pty. Ltd. was in no way related to the North American company of the same name that produced Applixware.
Hardware
Main board
The main board contains:
a Motorola 68000 running at 7.5 MHz, or a 68010 running at 15 MHz.
512 kibibytes of Dynamic RAM
between 64 kibibytes and 256 kibibytes of ROM
on board bit mapped colour graphics (no "text" mode), with timing provided by a Motorola 6845 CRT controller. The video could produce 320x200 in 16 colours, or 640x200 in a palette of 4 colours out of 16, with a later modification providing a 960x512 monochrome mode. The frame buffer resided in system memory and video refresh provided DRAM refresh cycles. The video output was able to drive CGA, EGA, MGA and multisync monitors.
dual RS-232 serial ports using a Zilog Z8530.
a |
https://en.wikipedia.org/wiki/All%20one%20polynomial | In mathematics, an all one polynomial (AOP) is a polynomial in which all coefficients are one. Over the finite field of order two, conditions for the AOP to be irreducible are known, which allow this polynomial to be used to define efficient algorithms and circuits for multiplication in finite fields of characteristic two. The AOP is a 1-equally spaced polynomial.
Definition
An AOP of degree m has all terms from x^m to x^0 with coefficients of 1, and can be written as
AOP_m(x) = x^m + x^(m−1) + ... + x + 1
or
AOP_m(x) = Σ_{i=0}^{m} x^i
or
AOP_m(x) = (x^(m+1) − 1)/(x − 1).
Thus the roots of the all one polynomial of degree m are all (m+1)th roots of unity other than unity itself.
Properties
Over GF(2) the AOP has many interesting properties, including:
The Hamming weight of the AOP is m + 1, the maximum possible for its degree
The AOP is irreducible if and only if m + 1 is prime and 2 is a primitive root modulo m + 1 (over GF(p) with prime p, it is irreducible if and only if m + 1 is prime and p is a primitive root modulo m + 1)
The only AOP that is a primitive polynomial is x^2 + x + 1.
Despite the fact that the Hamming weight is large, because of the ease of representation and other improvements there are efficient implementations in areas such as coding theory and cryptography.
Over the rational numbers, the AOP is irreducible whenever m + 1 is a prime p, and in these cases it is the pth cyclotomic polynomial.
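The GF(2) irreducibility criterion quoted above (m + 1 prime and 2 a primitive root modulo m + 1) is easy to check directly; the following is a minimal Python sketch, not a library routine.

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def multiplicative_order(a, p):
    """Order of a modulo a prime p, assuming p does not divide a."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

def aop_irreducible_over_gf2(m):
    """Criterion from the text: the degree-m AOP is irreducible over GF(2)
    iff m + 1 is prime and 2 is a primitive root modulo m + 1 (m >= 2)."""
    p = m + 1
    return is_prime(p) and p != 2 and multiplicative_order(2, p) == p - 1

# Degrees up to 40 whose AOP is irreducible over GF(2): 2, 4, 10, 12, 18, 28, 36
print([m for m in range(2, 41) if aop_irreducible_over_gf2(m)])
```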
References
External links
Field (mathematics)
Polynomials |
https://en.wikipedia.org/wiki/Equally%20spaced%20polynomial | An equally spaced polynomial (ESP) is a polynomial used in finite fields, specifically GF(2) (binary).
An s-ESP of degree sm can be written as:
ESP(x) = Σ_{i=0}^{m} x^(si) for s ≥ 1,
or
ESP(x) = x^(sm) + x^(s(m−1)) + ... + x^s + 1.
Properties
Over GF(2) the ESP has many interesting properties, including:
The Hamming weight of the ESP is m + 1.
A 1-ESP is known as an all one polynomial and has additional properties including the above.
References
Field (mathematics)
Polynomials |
https://en.wikipedia.org/wiki/Picture%20Transfer%20Protocol | Picture Transfer Protocol (PTP) is a protocol developed by the International Imaging Industry Association to allow the transfer of images from digital cameras to computers and other peripheral devices without the need of additional device drivers. The protocol has been standardized as ISO 15740.
It is further standardized for USB by the USB Implementers Forum as the still image capture device class. USB is the default network transport media for PTP devices. USB PTP is a common alternative to the USB mass-storage device class (USB MSC), as a digital camera connection protocol. Some cameras support both modes.
Description
PTP specifies a way of creating, transferring and manipulating objects which are typically photographic images such as a JPEG file. While it is common to think of the objects that PTP handles as files, they are abstract entities identified solely by a 32-bit object ID. These objects can, however, have parents and siblings, so that a file-system-like view of device contents can be created.
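As a toy illustration of that file-system-like view, the snippet below rebuilds paths from flat records of (object handle, parent handle, name). The handle values and field layout are invented for the example and are not the actual PTP ObjectInfo format.

```python
# Invented example records: (object handle, parent handle, file name).
# A parent handle of 0 is used here to mean "stored at the root".
objects = [
    (0x00010001, 0x00000000, "DCIM"),
    (0x00010002, 0x00010001, "100MEDIA"),
    (0x00010003, 0x00010002, "IMG_0001.JPG"),
    (0x00010004, 0x00010002, "IMG_0002.JPG"),
]

table = {handle: (parent, name) for handle, parent, name in objects}

def path_of(handle):
    """Build a path by walking parent handles back to the root."""
    parts = []
    while handle:
        parent, name = table[handle]
        parts.append(name)
        handle = parent
    return "/" + "/".join(reversed(parts))

for handle in table:
    print(f"0x{handle:08X} -> {path_of(handle)}")
```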
History
Until the standardization of PTP, digital camera vendors used different proprietary protocols for controlling digital cameras and transferring images to computers and other host devices. The term "Picture Transfer Protocol" and the acronym "PTP" were both coined by Steve Mann, summarizing work on the creation of a Linux-friendly way of transferring pictures to and from home-made wearable computers, at a time when most cameras required the use of Microsoft Windows or Mac OS device drivers to transfer their pictures to a computer.
PTP was originally standardized as PIMA 15470 in 2000, while it was developed by the IT10 committee. Key contributors to the standard included Tim Looney and Tim Whitcher (Eastman Kodak Company) and Eran Steinberg (Fotonation).
Storage
PTP does not specify a way for objects to be stored – it is a communication protocol. Nor does it specify a transport layer. However, it is designed to support existing standards, such as Exif, TIF |
https://en.wikipedia.org/wiki/1089%20%28number%29 | 1089 is the integer after 1088 and before 1090. It is a square number (33 squared), a nonagonal number, a 32-gonal number, a 364-gonal number, and a centered octagonal number. 1089 is the first reverse-divisible number. The next is 2178, and they are the only four-digit numbers that divide their reverse.
In magic
1089 is widely used in magic tricks because it can be "produced" from any two three-digit numbers. This allows it to be used as the basis for a Magician's Choice. For instance, one variation of the book test starts by having the spectator choose any two suitable numbers and then apply some basic maths to produce a single four-digit number. That number is always 1089. The spectator is then asked to turn to page 108 of a book and read the 9th word, which the magician has memorized. To the audience it looks like the number is random, but through manipulation, the result is always the same.
In base 10, the following steps always yield 1089:
Take any three-digit number where the first and last digits differ by more than 1.
Reverse the digits, and subtract the smaller from the larger one.
Add to this result the number produced by reversing its digits.
For example, if the spectator chooses 237 (or 732):
732 − 237 = 495
495 + 594 = 1089
as expected. On the other hand, if the spectator chooses 102 (or 201):
201 − 102 = 99
99 + 99 ≠ 1089
contradicting the rule. However, if we amend the third rule by reading 99 as a three-digit number 099 and take its reverse, we obtain:
201 − 102 = 099
099 + 990 = 1089
as expected.
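A short Python check of the routine just described, with the result of the subtraction read as a three-digit number (leading zero included, per the amended rule):

```python
def trick(n):
    """Apply the 1089 routine to a three-digit number n."""
    rev = lambda x: int(str(x).zfill(3)[::-1])   # keep a leading zero: 99 -> 099 -> 990
    diff = abs(n - rev(n))
    return diff + rev(diff)

# Every three-digit number whose first and last digits differ at all yields 1089.
for n in range(100, 1000):
    if n // 100 != n % 10:
        assert trick(n) == 1089
print(trick(237), trick(201))   # 1089 1089
```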
Explanation
The spectator's 3-digit number can be written as 100 × A + 10 × B + 1 × C, and its reversal as 100 × C + 10 × B + 1 × A, where 1 ≤ A ≤ 9, 0 ≤ B ≤ 9 and 1 ≤ C ≤ 9. Their difference is 99 × (A − C) (For convenience, we assume A > C; if A < C, we first swap A and C.). Note that if A − C is 0, the difference is 0, and we do not get a 3-digit number for the next step. If A − C is 1, the difference is 99. Using a leading 0 gi |
https://en.wikipedia.org/wiki/Speaker%20recognition | Speaker recognition is the identification of a person from characteristics of voices. It is used to answer the question "Who is speaking?" The term voice recognition can refer to speaker recognition or speech recognition. Speaker verification (also called speaker authentication) contrasts with identification, and speaker recognition differs from speaker diarisation (recognizing when the same speaker is speaking).
Recognizing the speaker can simplify the task of translating speech in systems that have been trained on specific voices or it can be used to authenticate or verify the identity of a speaker as part of a security process. Speaker recognition has a history dating back some four decades as of 2019 and uses the acoustic features of speech that have been found to differ between individuals. These acoustic patterns reflect both anatomy and learned behavioral patterns.
Verification versus identification
There are two major applications of speaker recognition technologies and methodologies. If the speaker claims to be of a certain identity and the voice is used to verify this claim, this is called verification or authentication. On the other hand, identification is the task of determining an unknown speaker's identity. In a sense, speaker verification is a 1:1 match where one speaker's voice is matched to a particular template whereas speaker identification is a 1:N match where the voice is compared against multiple templates.
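The 1:1 versus 1:N distinction can be made concrete with a schematic numpy sketch using made-up fixed-length voice embeddings and cosine similarity; real systems derive such templates from acoustic features (for example i-vectors or neural embeddings), and the threshold below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up enrolled templates: one embedding per known speaker.
enrolled = {name: rng.normal(size=64) for name in ["alice", "bob", "carol"]}

# A test utterance embedding, simulated here as a noisy copy of bob's template.
test = enrolled["bob"] + 0.3 * rng.normal(size=64)

# Verification (1:1): compare against the claimed speaker's template only.
claimed = "bob"
accepted = cosine(test, enrolled[claimed]) > 0.7
print("verification as", claimed, "->", "accept" if accepted else "reject")

# Identification (1:N): compare against every enrolled template and take the best.
best = max(enrolled, key=lambda name: cosine(test, enrolled[name]))
print("identified as", best)
```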
From a security perspective, identification is different from verification. Speaker verification is usually employed as a "gatekeeper" in order to provide access to a secure system. These systems operate with the users' knowledge and typically require their cooperation. Speaker identification systems can also be implemented covertly without the user's knowledge to identify talkers in a discussion, alert automated systems of speaker changes, check if a user is already enrolled in a system, etc.
In forensic applications, it is common to fi |
https://en.wikipedia.org/wiki/Focus%20%28geometry%29 | In geometry, focuses or foci (singular: focus) are special points with reference to which any of a variety of curves is constructed. For example, one or two foci can be used in defining conic sections, the four types of which are the circle, ellipse, parabola, and hyperbola. In addition, two foci are used to define the Cassini oval and the Cartesian oval, and more than two foci are used in defining an n-ellipse.
Conic sections
Defining conics in terms of two foci
An ellipse can be defined as the locus of points for which the sum of the distances to two given foci is constant.
A circle is the special case of an ellipse in which the two foci coincide with each other. Thus, a circle can be more simply defined as the locus of points each of which is a fixed distance from a single given focus. A circle can also be defined as the circle of Apollonius, in terms of two different foci, as the locus of points having a fixed ratio of distances to the two foci.
A parabola is a limiting case of an ellipse in which one of the foci is a point at infinity.
A hyperbola can be defined as the locus of points for which the absolute value of the difference between the distances to two given foci is constant.
Defining conics in terms of a focus and a directrix
It is also possible to describe all conic sections in terms of a single focus and a single directrix, which is a given line not containing the focus. A conic is defined as the locus of points for each of which the distance to the focus divided by the distance to the directrix is a fixed positive constant, called the eccentricity e. If 0 < e < 1 the conic is an ellipse, if e = 1 the conic is a parabola, and if e > 1 the conic is a hyperbola. If the distance to the focus is fixed and the directrix is a line at infinity, so the eccentricity is zero, then the conic is a circle.
Defining conics in terms of a focus and a directrix circle
It is also possible to describe all the conic sections as loci of points that are equidistant from a single focus and |
https://en.wikipedia.org/wiki/Thin%20film | A thin film is a layer of material ranging from fractions of a nanometer (monolayer) to several micrometers in thickness. The controlled synthesis of materials as thin films (a process referred to as deposition) is a fundamental step in many applications. A familiar example is the household mirror, which typically has a thin metal coating on the back of a sheet of glass to form a reflective interface. The process of silvering was once commonly used to produce mirrors, while more recently the metal layer is deposited using techniques such as sputtering. Advances in thin film deposition techniques during the 20th century have enabled a wide range of technological breakthroughs in areas such as magnetic recording media, electronic semiconductor devices, integrated passive devices, LEDs, optical coatings (such as antireflective coatings), hard coatings on cutting tools, and for both energy generation (e.g. thin-film solar cells) and storage (thin-film batteries). It is also being applied to pharmaceuticals, via thin-film drug delivery. A stack of thin films is called a multilayer.
In addition to their applied interest, thin films play an important role in the development and study of materials with new and unique properties. Examples include multiferroic materials, and superlattices that allow the study of quantum phenomena.
Nucleation
Nucleation is an important step in growth that helps determine the final structure of a thin film. Many growth methods rely on nucleation control such as atomic-layer epitaxy (atomic layer deposition). Nucleation can be modeled by characterizing surface process of adsorption, desorption, and surface diffusion.
Adsorption and desorption
Adsorption is the interaction of a vapor atom or molecule with a substrate surface. The interaction is characterized by the sticking coefficient, the fraction of incoming species thermally equilibrated with the surface. Desorption reverses adsorption, where a previously adsorbed molecule overcomes the boun |
https://en.wikipedia.org/wiki/Operation%20Wolf | Operation Wolf is a light gun shooter arcade game developed by Taito and released in 1987. It was ported to many home systems.
The game was critically and commercially successful, becoming one of the highest-grossing arcade games of 1988 and winning the Golden Joystick Award for Game of the Year. Operation Wolf popularized military-themed first-person light gun rail shooters and inspired numerous clones, imitators, and others in the genre over the next decade. It spawned four sequels: Operation Thunderbolt (1988), Operation Wolf 3 (1994), Operation Tiger (1998), and Operation Wolf Returns: First Mission (2023).
Gameplay
Assuming the role of Special Forces Operative Roy Adams, the player attempts to rescue five hostages who are being held captive in enemy territory. The game is viewed from a first-person perspective, and is on rails, with the screen scrolling horizontally through the landscape. The game has six stages to advance the story. For example, after the jungle stage is completed, Adams interrogates an enemy soldier and learns the location of the concentration camp and hostages. Each stage has unique objectives and effects on gameplay after completion, all based on rescuing hostages. Game over screens vary depending on situations, such as the player's death or failure to rescue all hostages. Continuing the game restarts the stage. The Nintendo Entertainment System version has multiple endings depending on the number of rescued hostages.
The arcade cabinet has an optical controller resembling an Uzi submachine gun which the player can swivel and elevate, and which vibrates to simulate recoil of gunfire. Pulling the trigger allows fully automatic fire, and pressing the button near the muzzle launches a grenade with a wide blast radius against multiple targets.
To complete each stage, the player must shoot a required number of soldiers and vehicles (trucks, boats, helicopters, armored transports), as indicated by an on-screen counter. The limited ammunition and grenades c |
https://en.wikipedia.org/wiki/Puppy%20Linux | {{Infobox OS
| name = Puppy Linux
| logo = Banner logo Puppy.png
| logo_size = 220px
| screenshot =
| screenshot_size = 260px
| caption = Puppy Linux FossaPup 9.5
| developer = Barry Kauler (original)Larry Short, Mick Amadio and Puppy community (current)
| family = Linux (Unix-like)
| working state = Current
| source model = Primarily open source
| released = 0.1/
| latest release version = (FossaPup64)
| latest release date =
| marketing target = Live CD, Netbooks, older systems and general use
| language =
| update model =
| package manager = Puppy Package Manager
| supported platforms = x86, x86-64, ARM
| kernel type = Linux
| ui = JWM / IceWM + ROX Desktop
| license = GNU GPL and various others
| website =
}}
Puppy Linux is an operating system and family of lightweight Linux distributions that focus on ease of use and minimal memory footprint. The entire system can be run from random-access memory (RAM), with current versions generally taking up about 600 MB (64-bit) or 300 MB (32-bit), allowing the boot medium to be removed after the operating system has started. Applications such as AbiWord, Gnumeric and MPlayer are included, along with a choice of lightweight web browsers and a utility for downloading other packages. The distribution was originally developed by Barry Kauler and other members of the community, until Kauler retired in 2013. The tool Woof can build a Puppy Linux distribution from the binary packages of other Linux distributions.
History
Barry Kauler started Puppy Linux in response to a trend of other distributions becoming stricter on system requirements over time. His own distribution, with an emphasis on speed and efficiency and being lightweight, started from "Boot disk HOWTO" and gradually included components file-by-file until Puppy Linux was completed. Puppy Linux was initially based on Vector Linux but then became a fully independent distribution.
Release versions
Puppy 0.1 is the initial release of Puppy Linux. It has no Uni |
https://en.wikipedia.org/wiki/Breeding%20bird%20survey | A breeding bird survey monitors the status and trends of bird populations. Data from the survey are an important source for the range maps found in field guides. The North American Breeding Bird Survey is a joint project of the United States Geological Survey (USGS) and the Canadian Wildlife Service. The UK Breeding Bird Survey is administered by the British Trust for Ornithology, the Joint Nature Conservation Committee, and the Royal Society for the Protection of Birds.
The results of the BBS are valuable in evaluating increases and decreases in the ranges of bird populations, which can be key to bird conservation. The BBS was designed to provide a continent-wide perspective of population change.
History
The North American Breeding Bird Survey was launched in 1966 after the concept of a continental monitoring program for all breeding birds had been developed by Chandler Robbins and his associates from the Migratory Bird Population Station. The program was developed in Laurel, Maryland. In the first year of its existence there were nearly 600 surveys conducted east of the Mississippi River. One year later, in 1967, the survey spread to the Great Plains states and by 1968 almost 2000 routes had been established across southern Canada and 48 American states. As more birders were introduced to this program, the number of active BBS routes continued to increase. In the 1980s, the Breeding Bird Survey included Yukon, Northwest Territories of Canada and Alaska. Additionally, the number of routes in established states has increased. Currently, there are approximately 3700 active BBS routes in the United States and Canada, of which approximately 2900 are surveyed on an annual basis. The density of the routes varies greatly across the continent and the largest number of routes can be found in New England and Mid-Atlantic states. Many bird watchers participate in these surveys as they find the experience rewarding. Future plans for the BBS include expanding coverage |
https://en.wikipedia.org/wiki/Morava%20K-theory | In stable homotopy theory, a branch of mathematics, Morava K-theory is one of a collection of cohomology theories introduced in algebraic topology by Jack Morava in unpublished preprints in the early 1970s. For every prime number p (which is suppressed in the notation), it consists of theories K(n) for each nonnegative integer n, each a ring spectrum in the sense of homotopy theory. The first published account of the theories appeared later.
Details
The theory K(0) agrees with singular homology with rational coefficients, whereas K(1) is a summand of mod-p complex K-theory. The theory K(n) has coefficient ring
F_p[v_n, v_n^(−1)]
where v_n has degree 2(p^n − 1). In particular, Morava K-theory is periodic with this period, in much the same way that complex K-theory has period 2.
These theories have several remarkable properties.
They have Künneth isomorphisms for arbitrary pairs of spaces: that is, for X and Y CW complexes, we have
K(n)∗(X × Y) ≅ K(n)∗(X) ⊗_{K(n)∗} K(n)∗(Y).
They are "fields" in the category of ring spectra. In other words every module spectrum over K(n) is free, i.e. a wedge of suspensions of K(n).
They are complex oriented (at least after being periodified by taking the wedge sum of (p^n − 1) shifted copies), and the formal group they define has height n.
Every finite p-local spectrum X has the property that K(n)∗(X) = 0 if and only if n is less than a certain number N, called the type of the spectrum X. By a theorem of Devinatz–Hopkins–Smith, every thick subcategory of the category of finite p-local spectra is the subcategory of type-n spectra for some n.
See also
Chromatic homotopy theory
Morava E-theory
References
Hovey-Strickland, "Morava K-theory and localisation"
Algebraic topology
Cohomology theories |
https://en.wikipedia.org/wiki/Complex%20cobordism | In mathematics, complex cobordism is a generalized cohomology theory related to cobordism of manifolds. Its spectrum is denoted by MU. It is an exceptionally powerful cohomology theory, but can be quite hard to compute, so often instead of using it directly one uses some slightly weaker theories derived from it, such as Brown–Peterson cohomology or Morava K-theory, that are easier to compute.
The generalized homology and cohomology complex cobordism theories were introduced by using the Thom spectrum.
Spectrum of complex cobordism
The complex bordism of a space is roughly the group of bordism classes of manifolds over with a complex linear structure on the stable normal bundle. Complex bordism is a generalized homology theory, corresponding to a spectrum MU that can be described explicitly in terms of Thom spaces as follows.
The space MU(n) is the Thom space of the universal n-plane bundle over the classifying space BU(n) of the unitary group U(n). The natural inclusion from U(n) into U(n+1) induces a map from the double suspension Σ^2 MU(n) to MU(n+1). Together these maps give the spectrum MU; namely, it is the homotopy colimit of the MU(n).
Examples: MU(0) is the sphere spectrum. MU(1) is, up to desuspension, the suspension spectrum of CP^∞, since the Thom space of the universal line bundle over BU(1) = CP^∞ is CP^∞ itself.
The nilpotence theorem states that, for any ring spectrum R, the kernel of the map π∗(R) → MU∗(R) consists of nilpotent elements. The theorem implies in particular that, if S is the sphere spectrum, then for any n > 0, every element of πn(S) is nilpotent (a theorem of Goro Nishida). (Proof: if x is in πn(S), then x is a torsion element, but its image in MU∗(S), the Lazard ring, cannot be torsion since that ring is a polynomial ring over Z. Thus, x must be in the kernel.)
Formal group laws
The coefficient ring π∗(MU) (equal to the complex cobordism of a point, or equivalently the ring of cobordism classes of stably complex manifolds) was shown to be a polynomial ring on infinitely many generators of positive even degrees.
Write CP^∞ for infinite-dimensional complex projective space, which is the classifying space for complex line bundles, so that tensor product of line bundles indu |
https://en.wikipedia.org/wiki/Reduction%20%28mathematics%29 | In mathematics, reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator a whole number) is called "reducing a fraction". Rewriting a radical (or "root") expression with the smallest possible whole number under the radical symbol is called "reducing a radical". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals.
Algebra
In linear algebra, reduction refers to applying simple rules to a series of equations or matrices to change them into a simpler form. In the case of matrices, the process involves manipulating either the rows or the columns of the matrix and so is usually referred to as row-reduction or column-reduction, respectively. Often the aim of reduction is to transform a matrix into its "row-reduced echelon form" or "row-echelon form"; this is the goal of Gaussian elimination.
Calculus
In calculus, reduction refers to using the technique of integration by parts to evaluate integrals by reducing them to simpler forms.
Static (Guyan) reduction
In dynamic analysis, static reduction refers to reducing the number of degrees of freedom. Static reduction can also be used in finite element analysis to refer to simplification of a linear algebraic problem. Since a static reduction requires several inversion steps it is an expensive matrix operation and is prone to some error in the solution. Consider the following system of linear equations in an FEA problem:
[K11 K12; K21 K22] [x1; x2] = [F1; F2]
where K and F are known and K, x and F are divided into submatrices as shown above. If F2 contains only zeros, and only x1 is desired, K can be reduced to yield the following system of equations
K11,reduced x1 = F1
K11,reduced is obtained by writing out the set of equations as follows:
K11 x1 + K12 x2 = F1    (1)
K21 x1 + K22 x2 = F2 = 0    (2)
Equation (2) can be solved for x2 (assuming invertibility of K22):
x2 = −K22^(−1) K21 x1
And substituting into (1) gives
K11 x1 − K12 K22^(−1) K21 x1 = F1
Thus
K11,reduced = K11 − K12 K22^(−1) K21
In a similar fashion, any row or c |
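A small numpy check of the static condensation derived above, using a made-up symmetric stiffness matrix; it verifies that the reduced system reproduces the solution of the full system for the retained degrees of freedom.

```python
import numpy as np

# Made-up partitioned FEA-style system: K x = F with F2 = 0.
K11 = np.array([[4.0, -1.0], [-1.0, 3.0]])
K12 = np.array([[-1.0, 0.0], [0.0, -1.0]])
K21 = K12.T
K22 = np.array([[5.0, -2.0], [-2.0, 4.0]])
F1 = np.array([1.0, 2.0])

# Guyan reduction: eliminate x2 using x2 = -K22^{-1} K21 x1.
K_red = K11 - K12 @ np.linalg.solve(K22, K21)
x1 = np.linalg.solve(K_red, F1)
x2 = -np.linalg.solve(K22, K21 @ x1)

# Check against solving the full assembled system directly.
K = np.block([[K11, K12], [K21, K22]])
x_full = np.linalg.solve(K, np.concatenate([F1, np.zeros(2)]))
print(np.allclose(np.concatenate([x1, x2]), x_full))   # True
```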
https://en.wikipedia.org/wiki/Strain%20%28biology%29 | In biology, a strain is a genetic variant, a subtype or a culture within a biological species. Strains are often seen as inherently artificial concepts, characterized by a specific intent for genetic isolation. This is most easily observed in microbiology where strains are derived from a single cell colony and are typically quarantined by the physical constraints of a Petri dish. Strains are also commonly referred to within virology, botany, and with rodents used in experimental studies.
Microbiology and virology
It has been said that "there is no universally accepted definition for the terms 'strain', 'variant', and 'isolate' in the virology community, and most virologists simply copy the usage of terms from others".
A strain is a genetic variant or subtype of a microorganism (e.g., a virus, bacterium or fungus). For example, a "flu strain" is a certain biological form of the influenza or "flu" virus. These flu strains are characterized by their differing isoforms of surface proteins. New viral strains can be created due to mutation or swapping of genetic components when two or more viruses infect the same cell in nature. These phenomena are known respectively as antigenic drift and antigenic shift. Microbial strains can also be differentiated by their genetic makeup using metagenomic methods to maximize resolution within species. This has become a valuable tool to analyze the microbiome.
Artificial constructs
Scientists have modified strains of viruses in order to study their behavior, as in the case of the H5N1 influenza virus. While funding for such research has aroused controversy at times due to safety concerns, leading to a temporary pause, it has subsequently proceeded.
In biotechnology, microbial strains have been constructed to establish metabolic pathways suitable for treating a variety of applications. Historically, a major effort of metabolic research has been devoted to the field of biofuel production. Escherichia coli is the most common species for |
https://en.wikipedia.org/wiki/Most%20recent%20common%20ancestor | In biology and genetic genealogy, the most recent common ancestor (MRCA), also known as the last common ancestor (LCA), of a set of organisms is the most recent individual from which all the organisms of the set are descended. The term is also used in reference to the ancestry of groups of genes (haplotypes) rather than organisms.
The MRCA of a set of individuals can sometimes be determined by referring to an established pedigree. However, in general, it is impossible to identify the exact MRCA of a large set of individuals, but an estimate of the time at which the MRCA lived can often be given. Such time to most recent common ancestor (TMRCA) estimates can be given based on DNA test results and established mutation rates as practiced in genetic genealogy, or by reference to a non-genetic, mathematical model or computer simulation.
In organisms using sexual reproduction, the matrilineal MRCA and patrilineal MRCA are the MRCAs of a given population considering only matrilineal and patrilineal descent, respectively. The MRCA of a population by definition cannot be older than either its matrilineal or its patrilineal MRCA. In the case of Homo sapiens, the matrilineal and patrilineal MRCA are also known as "Mitochondrial Eve" (mt-MRCA) and "Y-chromosomal Adam" (Y-MRCA) respectively.
The age of the human MRCA is unknown. It is no greater than the age of either the Y-MRCA or the mt-MRCA, estimated at around 200,000 years.
Unlike in pedigrees of individual humans or domesticated lineages where historical parentage is known, in the inference of relationships among species or higher groups of taxa (systematics or phylogenetics), ancestors are not directly observable or recognizable. They are inferences based on patterns of relationship among taxa inferred in a phylogenetic analysis of extant organisms and/or fossils.
The last universal common ancestor (LUCA) is the most recent common ancestor of all current life on Earth, estimated to have lived some 3.5 to 3.8 billion |
https://en.wikipedia.org/wiki/Chirplet%20transform | In signal processing, the chirplet transform is an inner product of an input signal with a family of analysis primitives called chirplets.
Similar to the wavelet transform, chirplets are usually generated from (or can be expressed as being from) a single mother chirplet (analogous to the so-called mother wavelet of wavelet theory).
Definitions
The term chirplet transform was coined by Steve Mann, as the title of the first published paper on chirplets. The term chirplet itself (apart from chirplet transform) was also used by Steve Mann, Domingo Mihovilovic, and Ronald Bracewell to describe a windowed portion of a chirp function. In Mann's words:
The chirplet transform thus represents a rotated, sheared, or otherwise transformed tiling of the time–frequency plane. Although chirp signals have been known for many years in radar, pulse compression, and the like, the first published reference to the chirplet transform described specific signal representations based on families of functions related to one another by time–varying frequency modulation or frequency varying time modulation, in addition to time and frequency shifting, and scale changes. In that paper, the Gaussian chirplet transform was presented as one such example, together with a successful application to ice fragment detection in radar (improving target detection results over previous approaches). The term chirplet (but not the term chirplet transform) was also proposed for a similar transform, apparently independently, by Mihovilovic and Bracewell later that same year.
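As a minimal numerical sketch of the idea, the code below builds one Gaussian chirplet and takes its inner product with a signal, which is what a single coefficient of a chirplet transform amounts to. The parameter names and values are illustrative only and do not follow the notation of the original papers.

```python
import numpy as np

def gaussian_chirplet(t, t0=0.0, f0=10.0, c=5.0, sigma=0.1):
    """Gaussian-windowed chirp: centre time t0, centre frequency f0,
    chirp rate c, and duration sigma (all illustrative)."""
    window = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    phase = 2 * np.pi * (f0 * (t - t0) + 0.5 * c * (t - t0) ** 2)
    return window * np.exp(1j * phase)

# A test signal containing the same chirp, plus a little noise.
fs = 1000.0
t = np.arange(-0.5, 0.5, 1 / fs)
signal = np.real(gaussian_chirplet(t)) + 0.1 * np.random.default_rng(1).normal(size=t.size)

# One chirplet-transform coefficient = inner product of the signal with one chirplet.
coeff = np.vdot(gaussian_chirplet(t), signal) / fs
print(abs(coeff))
```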
Applications
The first practical application of the chirplet transform was in water-human-computer interaction (WaterHCI) for marine safety, to assist vessels in navigating through ice-infested waters, using marine radar to detect growlers (small iceberg fragments too small to be visible on conventional radar, yet large enough to damage a vessel).
Other applications of the chirplet transform in WaterHCI include the SWIM (Sequential Wav |
https://en.wikipedia.org/wiki/Sound%20server | A sound server is software that manages the use of and access to audio devices (usually a sound card). It commonly runs as a background process.
Sound server in an operating system
In a Unix-like operating system, a sound server mixes different data streams (usually raw PCM audio) and sends a single unified audio stream to an output device. The mixing is usually done by software, or by hardware if there is a supported sound card.
Layers
The "sound stack" can be visualized as follows, with programs in the upper layers calling elements in the lower layers:
Applications (e.g. mp3 player, web video)
Sound server (e.g. aRts, ESD, JACK, PulseAudio)
Sound subsystem (described as kernel modules or drivers; e.g. OSS, ALSA)
Operating system kernel (e.g. Linux, Unix)
Motivation
Sound servers appeared in Unix-like operating systems after limitations in Open Sound System were recognized. OSS is a basic sound interface that was incapable of playing multiple streams simultaneously, dealing with multiple sound cards, or streaming sound over the network.
A sound server can provide these features by running as a daemon. It receives calls from different programs and sound flows, mixes the streams, and sends raw audio out to the audio device.
With a sound server, users can also configure global and per-application sound preferences.
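At its core, the mixing step is simple: sum the streams sample by sample and clip to the sample format's range. Below is a toy sketch with 16-bit signed PCM; a real sound server also handles resampling, channel mapping, per-stream volume, and latency.

```python
def mix_pcm16(streams):
    """Mix several lists of signed 16-bit PCM samples into one stream.

    Streams are summed sample-by-sample and clipped to the int16 range,
    which is what a sound server's software mixer does at its core.
    """
    length = max(len(s) for s in streams)
    mixed = []
    for i in range(length):
        total = sum(s[i] for s in streams if i < len(s))
        mixed.append(max(-32768, min(32767, total)))
    return mixed

# Two made-up streams: a short beep and a longer ramp.
beep = [20000, -20000] * 4
ramp = list(range(0, 24000, 2000))
print(mix_pcm16([beep, ramp]))
```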
Diversification and problems
There are multiple sound servers; some focus on providing very low latency, while others concentrate on features suitable for general desktop systems. While diversification allows a user to choose just the features that are important to a particular application, it also forces developers to accommodate these options by necessitating code that is compatible with the various sound servers available. Consequently, this variety has resulted in a desire for a standard API to unify efforts.
List of sound servers
aRts
Enlightened Sound Daemon
JACK
Network Audio System
PipeWire
PulseAudio
sndio - OpenBSD audio an |
https://en.wikipedia.org/wiki/Podcast | A podcast is a program made available in digital format for download over the Internet, typically an episodic series of digital audio files that a user can download to a personal device and listen to at a time of their choosing. Podcasts are primarily an audio medium, with some programs offering a supplemental video component. Streaming applications and podcasting services provide a convenient and integrated way to manage a personal consumption queue across many podcast sources and playback devices. There are also podcast search engines, which help users find and share podcast episodes.
A podcast series usually features one or more recurring hosts engaged in a discussion about a particular topic or current event. Discussion and content within a podcast can range from carefully scripted to completely improvised. Podcasts combine elaborate and artistic sound production with thematic concerns ranging from scientific research to slice-of-life journalism. Many podcast series provide an associated website with links and show notes, guest biographies, transcripts, additional resources, commentary, and occasionally a community forum dedicated to discussing the show's content.
The cost to the consumer is low, and many podcasts are free to download. Some podcasts are underwritten by corporations or sponsored, with the inclusion of commercial advertisements. In other cases, a podcast could be a business venture supported by some combination of a paid subscription model, advertising or product delivered after sale. Because podcast content is often free, podcasting is often classified as a disruptive medium, adverse to the maintenance of traditional revenue models.
Podcasting is the preparation and distribution of audio files using RSS feeds to the devices of subscribed users. A podcaster normally buys this service from a podcast hosting company like SoundCloud or Libsyn. Hosting companies then distribute these audio files to streaming services, such as Apple and Spotify, |
https://en.wikipedia.org/wiki/The%20Planiverse | The Planiverse is a novel by A. K. Dewdney, written in 1984.
Plot
In the spirit of Edwin Abbott Abbott's Flatland, Dewdney and his computer science students simulate a two-dimensional world with a complex ecosystem. To their surprise, they find their artificial 2D universe has somehow accidentally become a means of communication with an actual 2D world: Arde. They make a sort of "telepathic" contact with "YNDRD", referred to by the students as Yendred, a highly philosophical Ardean, as he begins a journey across the western half, Punizla, of the single continent Ajem Kollosh to learn more about the spiritual beliefs of the people of the East, Vanizla. Yendred mistakes Dewdney's class for "spirits" and takes great interest in communicating with them. The students and narrator communicate with Yendred by typing on the keyboard; Yendred's answers appear on the computer's printout. The name Yendred (or "Yendwed", as pronounced by one of the students, who has a speech impediment) is simply "Dewdney" reversed.
Written as a travelogue, Yendred's journey through the West takes him through several cities. He visits the Punizlan Institute for Technology and Science, where Arde's technology is explored in great detail. For example, all houses are underground, so as not to be demolished by the periodic 2D rivers; nails are useless for attaching two objects, so tape and glue are used instead; most Ardean creatures cannot have deuterostomic digestive tracts since they would split into two; even games such as Go have one-dimensional Alak analogues. An appendix explains various other aspects of two-dimensional science and technology which could not fit into the main story.
The underlying allegory culminates in Yendred's arrival at the watershed of the continent and the planet's only building above ground, where he at last finds Drabk, an Ardean who professes "knowledge of the Beyond", and teaches Yendred to fly. Yendred finds that to keep contact with Earth is no longer of benef |
https://en.wikipedia.org/wiki/Netatalk | Netatalk (pronounced "ned-uh-talk") is a free, open-source implementation of the Apple Filing Protocol (AFP). It allows Unix-like operating systems to serve as file servers for Macintosh computers running macOS or Classic Mac OS.
Netatalk was originally developed by the Research Systems Unix Group at the University of Michigan for BSD-derived Unix systems and released in 1990. Apple had introduced AppleTalk soon after the release of the original Macintosh in 1985, followed by the file sharing application AppleShare (which was built on top of AFP) in 1987. This was an early example of zero-configuration networking, gaining significant adoption in educational and small to mid size office environments in the late 80s. Netatalk emerged as a part of the software ecosystem around AppleTalk.
In 1986 Columbia University published the Columbia AppleTalk Package (CAP), which was an open source implementation of AppleTalk originally written for BSD 4.2, allowing Unix servers to be part of AppleTalk networks. CAP also had its own implementation of AFP/AppleShare, but Netatalk appearing in 1990 claimed better performance due to software design advantages. CAP and Netatalk were also interoperable, the latter being able to be run on an AppleTalk backend provided by CAP.
As part of transitioning the software into an open source community project, the codebase was moved to SourceForge for revision control in July 2000, then re-licensed under the terms of the GNU General Public License with version 1.5pre7 in August 2001.
Since Classic Mac OS used a forked file system, unlike the host operating systems where Netatalk would be running, Netatalk originally implemented the AppleDouble format for storing the resource fork separately from the data fork when a Mac OS file was transferred to the Unix-like computer's file system. This was required in order not to ruin most files by discarding the resource fork when copied to the Netatalk served AppleShare volume. With the release of Neta |
https://en.wikipedia.org/wiki/Oneirology | In the field of psychology, the subfield of oneirology (from Greek ὄνειρον, oneiron, "dream"; and -λογία, -logia, "the study of") is the scientific study of dreams. Current research seeks correlations between dreaming and current knowledge about the functions of the brain, as well as understanding of how the brain works during dreaming as pertains to memory formation and mental disorders. The study of oneirology can be distinguished from dream interpretation in that the aim is to quantitatively study the process of dreams instead of analyzing the meaning behind them.
History
In the 19th century, two advocates of this discipline were the French sinologists Marquis d'Hervey de Saint Denys and Alfred Maury. The field gained momentum in 1952, when Nathaniel Kleitman and his student Eugene Aserinsky discovered regular cycles. A further experiment by Kleitman and William C. Dement, then another medical student, demonstrated the particular period of sleep during which electrical brain activity, as measured by an electroencephalograph (EEG), closely resembled that of waking, in which the eyes dart about actively. This kind of sleep became known as rapid eye movement (REM) sleep, and Kleitman and Dement's experiment found a correlation of 0.80 between REM sleep and dreaming.
Field of work
Research into dreams includes exploration of the mechanisms of dreaming, the influences on dreaming, and disorders linked to dreaming. Work in oneirology overlaps with neurology and can vary from quantifying dreams, to analyzing brain waves during dreaming, to studying the effects of drugs and neurotransmitters on sleeping or dreaming. Though debate continues about the purpose and origins of dreams, there could be great gains from studying dreams as a function of brain activity. For example, knowledge gained in this area could have implications in the treatment of certain mental illnesses.
Mechanisms of dreaming
Dreaming occurs mainly during REM sleep, and brain scans recording brain |
https://en.wikipedia.org/wiki/Levelling | Levelling or leveling (American English; see spelling differences) is a branch of surveying, the object of which is to establish or verify or measure the height of specified points relative to a datum. It is widely used in geodesy and cartography to measure vertical position with respect to a vertical datum, and in construction to measure height differences of construction artifacts.
Optical levelling
Optical levelling, also known as spirit levelling and differential levelling, employs an optical level, which consists of a precision telescope with crosshairs and stadia marks. The crosshairs are used to establish the level point on the target, and the stadia allow range-finding; stadia are usually at ratios of 100:1, in which case one metre between the stadia marks on the level staff (or rod) represents 100 metres from the target.
The complete unit is normally mounted on a tripod, and the telescope can freely rotate 360° in a horizontal plane. The surveyor adjusts the instrument's level by coarse adjustment of the tripod legs and fine adjustment using three precision levelling screws on the instrument to make the rotational plane horizontal. The surveyor does this with the use of a bull's eye level built into the instrument mount.
Procedure
The surveyor looks through the eyepiece of telescope while an assistant holds a vertical level staff which is graduated in inches or centimeters. The level staff is placed vertically using a level, with its foot on the point for which the level measurement is required. The telescope is rotated and focused until the level staff is plainly visible in the crosshairs. In the case of a high accuracy manual level, the fine level adjustment is made by an altitude screw, using a high accuracy bubble level fixed to the telescope. This can be viewed by a mirror whilst adjusting or the ends of the bubble can be displayed within the telescope, which also allows assurance of the accurate level of the telescope whilst the sight is being ta |
https://en.wikipedia.org/wiki/Smooth%20number | In number theory, an n-smooth (or n-friable) number is an integer whose prime factors are all less than or equal to n. For example, a 7-smooth number is a number whose every prime factor is at most 7, so 49 = 7^2 and 15750 = 2 × 3^2 × 5^3 × 7 are both 7-smooth, while 11 and 702 = 2 × 3^3 × 13 are not 7-smooth. The term seems to have been coined by Leonard Adleman. Smooth numbers are especially important in cryptography, which relies on factorization of integers. The 2-smooth numbers are just the powers of 2, while 5-smooth numbers are known as regular numbers.
Definition
A positive integer is called B-smooth if none of its prime factors are greater than B. For example, 1,620 has prime factorization 2^2 × 3^4 × 5; therefore 1,620 is 5-smooth because none of its prime factors are greater than 5. This definition includes numbers that lack some of the smaller prime factors; for example, both 10 and 12 are 5-smooth, even though they miss out the prime factors 3 and 5, respectively. All 5-smooth numbers are of the form 2^a × 3^b × 5^c, where a, b and c are non-negative integers.
The 3-smooth numbers have also been called "harmonic numbers", although that name has other more widely used meanings.
5-smooth numbers are also called regular numbers or Hamming numbers; 7-smooth numbers are also called humble numbers, and sometimes called highly composite, although this conflicts with another meaning of highly composite numbers.
Here, note that B itself is not required to appear among the factors of a B-smooth number. If the largest prime factor of a number is p then the number is B-smooth for any B ≥ p. In many scenarios B is prime, but composite numbers are permitted as well. A number is B-smooth if and only if it is p-smooth, where p is the largest prime less than or equal to B.
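B-smoothness is easy to test by trial division; a small Python sketch reproducing the examples above:

```python
def is_smooth(n, b):
    """True if every prime factor of n is at most b (checked by trial division)."""
    d = 2
    while d * d <= n:
        while n % d == 0:
            if d > b:
                return False
            n //= d
        d += 1
    return n <= b           # remaining cofactor (1 or a prime) must be <= b

print(is_smooth(49, 7), is_smooth(15750, 7))   # True True
print(is_smooth(11, 7), is_smooth(702, 7))     # False False
print(is_smooth(1620, 5))                      # True: 1620 = 2^2 * 3^4 * 5
```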
Applications
An important practical application of smooth numbers is the fast Fourier transform (FFT) algorithms (such as the Cooley–Tukey FFT algorithm), which operates by recursively breaking down a prob |
https://en.wikipedia.org/wiki/In%20silico | In biology and other experimental sciences, an in silico experiment is one performed on computer or via computer simulation. The phrase is pseudo-Latin for 'in silicon' (correct Latin: in silicio), referring to silicon in computer chips. It was coined in 1987 as an allusion to the Latin phrases in vivo, in vitro, and in situ, which are commonly used in biology (especially systems biology). The latter phrases refer, respectively, to experiments done in living organisms, outside living organisms, and where they are found in nature.
History
The earliest known use of the phrase was by Christopher Langton to describe artificial life, in the announcement of a workshop on that subject at the Center for Nonlinear Studies at the Los Alamos National Laboratory in 1987. The expression in silico was first used to characterize biological experiments carried out entirely in a computer in 1989, in the workshop "Cellular Automata: Theory and Applications" in Los Alamos, New Mexico, by Pedro Miramontes, a mathematician from National Autonomous University of Mexico (UNAM), presenting the report "DNA and RNA Physicochemical Constraints, Cellular Automata and Molecular Evolution". The work was later presented by Miramontes as his dissertation.
In silico has been used in white papers written to support the creation of bacterial genome programs by the Commission of the European Community. The first referenced paper where in silico appears was written by a French team in 1991. The first referenced book chapter where in silico appears was written by Hans B. Sieburg in 1990 and presented during a Summer School on Complex Systems at the Santa Fe Institute.
The phrase in silico originally applied only to computer simulations that modeled natural or laboratory processes (in all the natural sciences), and did not refer to calculations done by computer generically.
Drug discovery with virtual screening
In silico study in medicine is thought to have the potential to speed the rate of discovery while reducing the need for expen |
https://en.wikipedia.org/wiki/Radiopharmacology | Radiopharmacology is radiochemistry applied to medicine and thus the pharmacology of radiopharmaceuticals (medicinal radiocompounds, that is, pharmaceutical drugs that are radioactive). Radiopharmaceuticals are used in the field of nuclear medicine as radioactive tracers in medical imaging and in therapy for many diseases (for example, brachytherapy). Many radiopharmaceuticals use technetium-99m (Tc-99m) which has many useful properties as a gamma-emitting tracer nuclide. In the book Technetium a total of 31 different radiopharmaceuticals based on Tc-99m are listed for imaging and functional studies of the brain, myocardium, thyroid, lungs, liver, gallbladder, kidneys, skeleton, blood and tumors.
The term radioisotope, which in its general sense refers to any radioactive isotope (radionuclide), has historically been used to refer to all radiopharmaceuticals, and this usage remains common. Technically, however, many radiopharmaceuticals incorporate a radioactive tracer atom into a larger pharmaceutically-active molecule, which is localized in the body, after which the radionuclide tracer atom allows it to be easily detected with a gamma camera or similar gamma imaging device. An example is fludeoxyglucose in which fluorine-18 is incorporated into deoxyglucose. Some radioisotopes (for example gallium-67, gallium-68, and radioiodine) are used directly as soluble ionic salts, without further modification. This use relies on the chemical and biological properties of the radioisotope itself, to localize it within the body.
History
See nuclear medicine.
Production
Production of a radiopharmaceutical involves two processes:
The production of the radionuclide on which the pharmaceutical is based.
The preparation and packaging of the complete radiopharmaceutical.
Radionuclides used in radiopharmaceuticals are mostly radioactive isotopes of elements with atomic numbers less than that of bismuth, that is, they are radioactive isotopes of elements that also have one or m |
https://en.wikipedia.org/wiki/Testbed | A testbed (also spelled test bed) is a platform for conducting rigorous, transparent, and replicable testing of scientific theories, computing tools, and new technologies.
The term is used across many disciplines to describe experimental research and new product development platforms and environments. They may range from hands-on prototype development in manufacturing industries such as automobiles (where prototypes are known as "mules"), aircraft engines, or other systems, to intellectual property refinement in fields such as computer software development, shielded from the hazards of testing live.
Software development
In software development, testbedding is a method of testing a particular module (function, class, or library) in an isolated fashion. It may be used as a proof of concept, or when a new module is tested apart from the program or system that it will later be added to. A skeleton framework is implemented around the module so that the module behaves as if already part of the larger program.
A typical testbed could include software, hardware, and networking components. In software development, the specified hardware and software environment can be set up as a testbed for the application under test. In this context, a testbed is also known as the test environment made of:
Testing hardware equipment (test bench, optical table, custom testing rig, dummy equipment that simulates an actual product or its counterpart, and external environment apparatus such as showers, heaters, fans, vacuum chambers, and anechoic chambers).
Computing equipment (processing units, data centers, in-line FPGA, environment simulation equipment).
Testing software (DAQ / oscilloscopes, visualisation and testing software, environment software to feed dummy equipment with data).
Testbeds are also pages on the Internet where the public are given the opportunity to test CSS or HTML they have created and want to preview the results, for example:
The Arena web browser was created by the World Wide Web Consortium (W3C) and CE |
https://en.wikipedia.org/wiki/Sieve%20theory | Sieve theory is a set of general techniques in number theory, designed to count, or more realistically to estimate the size of, sifted sets of integers. The prototypical example of a sifted set is the set of prime numbers up to some prescribed limit X. Correspondingly, the prototypical example of a sieve is the sieve of Eratosthenes, or the more general Legendre sieve. The direct attack on prime numbers using these methods soon reaches apparently insuperable obstacles, in the way of the accumulation of error terms. In one of the major strands of number theory in the twentieth century, ways were found of avoiding some of the difficulties of a frontal attack with a naive idea of what sieving should be.
One successful approach is to approximate a specific sifted set of numbers (e.g. the set of prime numbers) by another, simpler set (e.g. the set of almost prime numbers), which is typically somewhat larger than the original set, and easier to analyze. More sophisticated sieves also do not work directly with sets per se, but instead count them according to carefully chosen weight functions on these sets (options for giving some elements of these sets more "weight" than others). Furthermore, in some modern applications, sieves are used not to estimate the size of a sifted set, but to produce a function that is large on the set and mostly small outside it, while being easier to analyze than the characteristic function of the set.
Basic sieve theory
For information on notation, see the end of the article.
We start with some countable sequence of non-negative numbers . In the most basic case this sequence is just the indicator function of some set we want to sieve. However this abstraction allows for more general situations. Next we introduce a general set of prime numbers called the sifting range and their product up to as a function .
The goal of sieve theory is to estimate the sifting function
In the case of this just counts the cardinality of a subset of numbers, that ar |
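The symbols in the passage above did not survive extraction. In the notation that is standard for this material (supplied here as an assumption about the article's conventions rather than a quotation), the sequence is A = (a_n), the sifting range is a set of primes P with product P(z) taken up to z, and the sifting function is

\[
P(z) = \prod_{p \in \mathcal{P},\; p < z} p, \qquad
S(\mathcal{A}, \mathcal{P}, z) = \sum_{\substack{n \ge 1 \\ \gcd(n, P(z)) = 1}} a_n .
\]

When a_n is the indicator function of a set, S(A, P, z) counts the elements of that set with no prime factor from the sifting range below z, as the truncated sentence above begins to explain.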
https://en.wikipedia.org/wiki/Flash%20mob%20computing | Flash mob computing or flash mob computer is a temporary ad hoc computer cluster running specific software to coordinate the individual computers into one single supercomputer. A flash mob computer is distinct from other types of computer clusters in that it is set up and broken down on the same day or during a similar brief amount of time and involves many independent owners of computers coming together at a central physical location to work on a specific problem and/or social event.
Flash mob computer derives its name from the more general term flash mob which can mean any activity involving many people co-ordinated through virtual communities coming together for brief periods of time for a specific task or event. Flash mob computing is a more specific type of flash mob for the purpose of bringing people and their computers together to work on a single task or event.
History
The first flash mob computer was created on April 3, 2004 at the University of San Francisco using software written at USF called FlashMob (not to be confused with the more general term flash mob).
The event, called FlashMob I, was a success. There was a call for computers on the computer news website Slashdot. An article in The New York Times "Hey, Gang, Let’s Make Our Own Supercomputer" brought a lot of attention to the effort. More than 700 computers were brought to the gym at the University of San Francisco, and were wired to a network donated by Foundry Networks.
At FlashMob I, the participants were able to run a benchmark on 256 of the computers and achieved a peak rate of 180 Gflops (billions of calculations per second), though the computation stopped three quarters of the way through due to a node failure.
The best, complete run used 150 computers and resulted in 77 Gflops. FlashMob I was run off a bootable CD-ROM that ran a copy of Morphix Linux, which was only available for the x86 platform.
Despite these efforts, the project was unable to achieve its original goal of running a clus |
https://en.wikipedia.org/wiki/Refrigerator | A refrigerator, colloquially fridge, is a commercial and home appliance consisting of a thermally insulated compartment and a heat pump (mechanical, electronic or chemical) that transfers heat from its inside to its external environment so that its inside is cooled to a temperature below the room temperature. Refrigeration is an essential food storage technique around the world. The lower temperature lowers the reproduction rate of bacteria, so the refrigerator reduces the rate of spoilage. A refrigerator maintains a temperature a few degrees above the freezing point of water. The optimal temperature range for perishable food storage is 3 to 5 °C (37 to 41 °F). A similar device that maintains a temperature below the freezing point of water is called a freezer. The refrigerator replaced the icebox, which had been a common household appliance for almost a century and a half. The United States Food and Drug Administration recommends that the refrigerator be kept at or below 4 °C (40 °F) and that the freezer be regulated at −18 °C (0 °F).
The first cooling systems for food involved ice. Artificial refrigeration began in the mid-1750s, and developed in the early 1800s. In 1834, the first working vapor-compression refrigeration system was built. The first commercial ice-making machine was invented in 1854. In 1913, refrigerators for home use were invented. In 1923 Frigidaire introduced the first self-contained unit. The introduction of Freon in the 1920s expanded the refrigerator market during the 1930s. Home freezers as separate compartments (larger than necessary just for ice cubes) were introduced in 1940. Frozen foods, previously a luxury item, became commonplace.
Freezer units are used in households as well as in industry and commerce. Commercial refrigerator and freezer units were in use for almost 40 years prior to the common home models. The freezer-over-refrigerator style had been the basic style since the 1940s, until modern, side-by-side refrigerators broke the trend. A vapor compression cycle is used in m |
https://en.wikipedia.org/wiki/Internet%20archive%20%28disambiguation%29 | Internet Archive is a nonprofit organization based in San Francisco, California, United States.
Internet archive may also refer to:
Wayback Machine, digital archive of the World Wide Web maintained by Internet Archive
arXiv, a repository of scientific preprints ("e-prints") available online
Web archiving, archiving of the World Wide Web itself
Marxists Internet Archive
See also
Web archive (disambiguation)
Online archives |
https://en.wikipedia.org/wiki/Institute%20for%20Scientific%20Information | The Institute for Scientific Information (ISI) was an academic publishing service, founded by Eugene Garfield in Philadelphia in 1956. ISI offered scientometric and bibliographic database services. Its specialty was citation indexing and analysis, a field pioneered by Garfield.
Services
ISI maintained citation databases covering thousands of academic journals, including a continuation of its longtime print-based indexing service the Science Citation Index (SCI), as well as the Social Sciences Citation Index (SSCI) and the Arts and Humanities Citation Index (AHCI). All of these were available via ISI's Web of Knowledge database service. This database allows a researcher to identify which articles have been cited most frequently, and who has cited them. The database provides some measure of the academic impact of the papers indexed in it, and may increase their impact by making them more visible and providing them with a quality label. Some anecdotal evidence suggests that appearing in this database can double the number of citations received by a given paper. The company's main product was Current Contents, which gathers the tables of contents for recent academic journals.
The ISI also published the annual Journal Citation Reports which list an impact factor for each of the journals that it tracked. Within the scientific community, journal impact factors continue to play a large but controversial role in determining the kudos attached to a scientist's published research record.
A list of over 14,000 journals was maintained by the ISI. The list included some 1,100 arts and humanities journals as well as scientific journals. Listings were based on published selection criteria and are an indicator of journal quality and impact.
ISI published Science Watch, a newsletter which every two months identified one paper published in the previous two years as a "fast-breaking paper" in each of 22 broad fields of science, such as Mathematics (including Statistics), Engineerin |
https://en.wikipedia.org/wiki/F1000%20%28publisher%29 | F1000 (formerly "Faculty of 1000") is an open research publisher for scientists, scholars, and clinical researchers. F1000 offers a different research evaluation service from standard academic journals by offering peer-review after, rather than before, publishing a research article. Initially, F1000 was named after the 1,000 faculty members that performed peer-reviews, but over time F1000 expanded to more than 8,000 members. When F1000 was acquired by Taylor & Francis Group in January 2020, it kept the publishing services. F1000Prime (AKA Faculty Opinions) and F1000 Workspace (AKA Sciwheel) were acquired by different brands.
History
Faculty of 1000 was founded in 2000 by publishing entrepreneur Vitek Tracz in London. Initially, it was named after the 1,000 experts it had reviewing academic works, but over time F1000 expanded to more than 8,000 members. In 2002, it introduced F1000Prime (later known as Faculty Opinions), which recommended scientific articles selected by its experts. At first, F1000 was focused on biology, but later expanded to additional scientific fields over time, including a focus on medicine beginning in 2006.
The company was part of the Science Navigation Group until its acquisition by Taylor & Francis in January 2020. As part of the deal, founder Vitek Tracz remained the owner of Prime and Workspace, leaving the new F1000 (and F1000Research) owned by Taylor & Francis. Faculty Opinions (F1000Prime) was later acquired by a tech company called H1 in February 2022. F1000 now only provides publishing and related services.
Services
F1000 is an open research publisher for academic works. Its model focuses on publishing findings quickly using a post-publication peer-review system. Authors submit an article and all of its underlying data. F1000 does a prepublication check and publishes the article, usually within a couple weeks. After the article is published, an expert is assigned to conduct a peer-review of the work. The peer-review is done publ |
https://en.wikipedia.org/wiki/Internet%20Broadway%20Database | The Internet Broadway Database (IBDB) is an online database of Broadway theatre productions and their personnel. It was conceived and created by Karen Hauser in 1996 and is operated by the Research Department of The Broadway League, a trade association for the North American commercial theatre community.
History
Karen Hauser, research director for the Broadway League, developed the Internet Broadway Database which launched in 1996 or 2001. Prior to that she served as the League's media director. She has written on the economic health of Broadway and how it contributes to New York City's economy as well as that of the cities that touring productions visit. Hauser co-produced the 2000 production of Keith Reddin's The Perpetual Patient.
Overview
This comprehensive history of Broadway provides records of productions from the beginnings of New York theatre in the 18th century up to today. Details include cast and creative lists for opening night and current day, song lists, awards and other interesting facts about every Broadway production. Other features of IBDB include an extensive archive of photos from past and present Broadway productions, headshots, links to cast recordings on iTunes or Amazon, gross and attendance information.
Its mission was to be an interactive, user-friendly, searchable database for League members, journalists, researchers, and Broadway fans.
The League recently added Broadway Touring shows to the database for ease of tracking shows that play in theatres across the country.
It is managed by Michael Abourizk of the Broadway League.
See also
Internet Theatre Database – ITDb
Internet Movie Database – IMDb
Internet Book Database – IBookDb
Lortel Archives – IOBDb
The Broadway League
References
External links
Broadway League website
Theatre in the United States
Culture of New York City
Online databases
Broadway theatre
Internet properties established in 2000
Theatre databases |
https://en.wikipedia.org/wiki/124%20%28number%29 | 124 (one hundred [and] twenty-four) is the natural number following 123 and preceding 125.
In mathematics
124 is an untouchable number, meaning that it is not the sum of proper divisors of any positive number.
It is a stella octangula number, the number of spheres packed in the shape of a stellated octahedron. It is also an icosahedral number.
There are 124 different polygons of length 12 formed by edges of the integer lattice, counting two polygons as the same only when one is a translated copy of the other.
124 is a perfectly partitioned number, meaning that it divides the number of partitions of 124. It is the first number to do so after 1, 2, and 3.
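The figurate-number claims above are easy to check against the standard formulas n(2n² − 1) for stella octangula numbers and n(5n² − 5n + 2)/2 for icosahedral numbers (the formulas themselves are not stated in the article and are supplied here for illustration):

def stella_octangula(n):
    return n * (2 * n * n - 1)                 # 0, 1, 14, 51, 124, ...

def icosahedral(n):
    return n * (5 * n * n - 5 * n + 2) // 2    # 1, 12, 48, 124, ...

# 124 appears in both sequences at n = 4
assert stella_octangula(4) == 124
assert icosahedral(4) == 124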
See also
The year AD 124 or 124 BC
124th (disambiguation)
List of highways numbered 124
References
Integers |
https://en.wikipedia.org/wiki/Iterated%20logarithm | In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recurrence relation: log* n = 0 if n ≤ 1, and log* n = 1 + log*(log n) if n > 1.
On the positive real numbers, the continuous super-logarithm (inverse tetration) is essentially equivalent:
i.e. the base b iterated logarithm is if n lies within the interval , where denotes tetration. However, on the negative real numbers, log-star is , whereas for positive , so the two functions differ for negative arguments.
The iterated logarithm accepts any positive real number and yields an integer. Graphically, it can be understood as the number of "zig-zags" needed in Figure 1 to reach the interval [0, 1] on the x-axis.
In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2) instead of the natural logarithm (with base e).
Mathematically, the iterated logarithm is well-defined for any base greater than e^(1/e) ≈ 1.444, not only for base 2 and base e.
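A direct transcription of the recurrence into code is short. The sketch below is illustrative only (binary base, not taken from the article) and counts how many times the logarithm must be applied before the value drops to 1 or below.

import math

def iterated_log2(n):
    """Binary iterated logarithm: how many times log2 must be applied before the value is <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# 65536 -> 16 -> 4 -> 2 -> 1, so log*(65536) = 4 in base 2
assert iterated_log2(65536) == 4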
Analysis of algorithms
The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as:
Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree: randomized O(n log* n) time.
Fürer's algorithm for integer multiplication: O(n log n 2^O(log* n)).
Finding an approximate maximum (element at least as large as the median): log* n − 4 to log* n + 2 parallel operations.
Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle: O(log* n) synchronous communication rounds.
The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself. For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known |
https://en.wikipedia.org/wiki/Planetary%20engineering | Planetary engineering is the development and application of technology for the purpose of influencing the environment of a planet. Planetary engineering encompasses a variety of methods such as terraforming, seeding, and geoengineering.
Widely discussed in the scientific community, terraforming refers to the alteration of other planets to create a habitable environment for terrestrial life. Seeding refers to the introduction of life from Earth to habitable planets. Geoengineering refers to the engineering of a planet's climate, and has already been applied on Earth. Each of these methods are composed of varying approaches and possess differing levels of feasibility and ethical concern.
Terraforming
Terraforming is the process of modifying the atmosphere, temperature, surface topography or ecology of a planet, moon, or other body in order to replicate the environment of Earth.
Technologies
A common object of discussion on potential terraforming is the planet Mars. To terraform Mars, humans would need to create a new atmosphere, due to the planet's high carbon dioxide concentration and low atmospheric pressure. This might be achieved by introducing more greenhouse gases, produced from indigenous materials, to warm the planet above the freezing point of water. To terraform Venus, carbon dioxide would need to be converted to graphite since Venus receives twice as much sunlight as Earth. This process is only possible if the greenhouse effect is removed with the use of "high-altitude absorbing fine particles" or a sun shield, creating a more habitable Venus.
NASA has defined categories of habitability systems and technologies for terraforming to be feasible. These topics include creating power-efficient systems for preserving and packaging food for crews, preparing and cooking foods, dispensing water, and developing facilities for rest, trash and recycling, and areas for crew hygiene and rest.
Feasibility
A variety of planetary engineering challenges stand in the way of terraforming efforts. The atmo |
https://en.wikipedia.org/wiki/Kawasaki%20disease | Kawasaki disease (also known as mucocutaneous lymph node syndrome) is a syndrome of unknown cause that results in a fever and mainly affects children under 5 years of age. It is a form of vasculitis, where medium-sized blood vessels become inflamed throughout the body. The fever typically lasts for more than five days and is not affected by usual medications. Other common symptoms include large lymph nodes in the neck, a rash in the genital area, lips, palms, or soles of the feet, and red eyes. Within three weeks of the onset, the skin from the hands and feet may peel, after which recovery typically occurs. The disease is the leading cause of acquired heart disease in children in developed countries; its complications include the formation of coronary artery aneurysms and myocarditis.
While the specific cause is unknown, it is thought to result from an excessive immune system response to an infection in children who are genetically predisposed. It does not spread between people. Diagnosis is usually based on a person's signs and symptoms. Other tests such as an ultrasound of the heart and blood tests may support the diagnosis. Diagnosis must take into account many other conditions that may present similar features, including scarlet fever and juvenile rheumatoid arthritis. An emerging 'Kawasaki-like' disease temporally associated with COVID-19 appears to be a distinct syndrome.
Typically, initial treatment of Kawasaki disease consists of high doses of aspirin and immunoglobulin. Usually, with treatment, fever resolves within 24 hours and full recovery occurs. If the coronary arteries are involved, ongoing treatment or surgery may occasionally be required. Without treatment, coronary artery aneurysms occur in up to 25% and about 1% die. With treatment, the risk of death is reduced to 0.17%. People who have had coronary artery aneurysms after Kawasaki disease require lifelong cardiological monitoring by specialized teams.
Kawasaki disease is rare. It affects between 8 and 67 |
https://en.wikipedia.org/wiki/Tension-leg%20platform |
A tension-leg platform (TLP) or extended tension leg platform (ETLP) is a vertically moored floating structure normally used for the offshore production of oil or gas, and is particularly suited for water depths greater than 300 metres (about 1000 ft) and less than 1500 metres (about 4900 ft). Use of tension-leg platforms has also been proposed for offshore wind turbines.
The platform is permanently moored by means of tethers or tendons grouped at each of the structure's corners. A group of tethers is called a tension leg. A feature of the design of the tethers is that they have relatively high axial stiffness (low elasticity), such that virtually all vertical motion of the platform is eliminated. This allows the platform to have the production wellheads on deck (connected directly to the subsea wells by rigid risers), instead of on the seafloor. This allows a simpler well completion and gives better control over the production from the oil or gas reservoir, and easier access for downhole intervention operations.
TLPs have been in use since the early 1980s. The first tension leg platform was built for Conoco's Hutton field in the North Sea in the early 1980s. The hull was built in the dry-dock at Highland Fabricator's Nigg yard in the north of Scotland, with the deck section built nearby at McDermott's yard at Ardersier. The two parts were mated in the Moray Firth in 1984.
The Hutton TLP was originally designed for a service life of 25 years in North Sea depths of 100 to 1,000 metres. It had 16 tension legs. Its weight varied between 46,500 and 55,000 tons when moored to the seabed, but was up to 61,580 tons when floating freely. The total area of its living quarters was about 3,500 square metres and accommodated over 100 cabins, though only 40 people were necessary to maintain the structure in place.
The hull of the Hutton TLP has been separated from the topsides. Topsides have been redeployed to the Prirazlomnoye field in the Barents Sea, while the hull was reporte |
https://en.wikipedia.org/wiki/Cut%20rule | In mathematical logic, the cut rule is an inference rule of sequent calculus. It is a generalisation of the classical modus ponens inference rule. Its meaning is that, if a formula A appears as a conclusion in one proof and a hypothesis in another, then another proof in which the formula A does not appear can be deduced. In the particular case of modus ponens, for example, occurrences of "man" are eliminated from "Every man is mortal" and "Socrates is a man" to deduce "Socrates is mortal".
Formal notation
Formal notation in sequent calculus (the rule is conventionally labelled "cut"):
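The inference figure itself was lost in extraction. As commonly stated (a standard reconstruction rather than a quotation of this article's exact rendering), the cut rule reads:

\[
\frac{\Gamma \vdash \Delta, A \qquad A, \Sigma \vdash \Pi}{\Gamma, \Sigma \vdash \Delta, \Pi}\;(\text{cut})
\]

Here the formula A occurs as a conclusion in the left premise and as a hypothesis in the right premise, and it does not appear in the resulting sequent, matching the informal description above.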
Elimination
The cut rule is the subject of an important theorem, the cut elimination theorem. It states that any judgement that possesses a proof in the sequent calculus that makes use of the cut rule also possesses a cut-free proof, that is, a proof that does not make use of the cut rule.
Rules of inference
Logical calculi |
https://en.wikipedia.org/wiki/Spring%20bloom | The spring bloom is a strong increase in phytoplankton abundance (i.e. stock) that typically occurs in the early spring and lasts until late spring or early summer. This seasonal event is characteristic of temperate North Atlantic, sub-polar, and coastal waters. Phytoplankton blooms occur when growth exceeds losses; however, there is no universally accepted definition of the magnitude of change or the threshold of abundance that constitutes a bloom. The magnitude, spatial extent, and duration of a bloom depend on a variety of abiotic and biotic factors. Abiotic factors include light availability, nutrients, temperature, and physical processes that influence light availability, and biotic factors include grazing, viral lysis, and phytoplankton physiology. The factors that lead to bloom initiation are still actively debated (see Critical depth).
Classical mechanism
In the spring, more light becomes available and stratification of the water column occurs as increasing temperatures warm the surface waters (referred to as thermal stratification). As a result, vertical mixing is inhibited and phytoplankton and nutrients are entrained in the euphotic zone. This creates a comparatively high nutrient and high light environment that allows rapid phytoplankton growth.
Along with thermal stratification, spring blooms can be triggered by salinity stratification due to freshwater input, from sources such as high river runoff. This type of stratification is normally limited to coastal areas and estuaries, including Chesapeake Bay. Freshwater influences primary productivity in two ways. First, because freshwater is less dense, it rests on top of seawater and creates a stratified water column. Second, freshwater often carries nutrients that phytoplankton need to carry out processes, including photosynthesis.
Rapid increases in phytoplankton growth, that typically occur during the spring bloom, arise because phytoplankton can reproduce rapidly under optimal growth conditions (i.e |
https://en.wikipedia.org/wiki/SystemC | SystemC is a set of C++ classes and macros which provide an event-driven simulation interface (see also discrete event simulation). These facilities enable a designer to simulate concurrent processes, each described using plain C++ syntax. SystemC processes can communicate in a simulated real-time environment, using signals of all the datatypes offered by C++, some additional ones offered by the SystemC library, as well as user-defined types. In certain respects, SystemC deliberately mimics the hardware description languages VHDL and Verilog, but is more aptly described as a system-level modeling language.
SystemC is applied to system-level modeling, architectural exploration, performance modeling, software development, functional verification, and high-level synthesis. SystemC is often associated with electronic system-level (ESL) design, and with transaction-level modeling (TLM).
Language specification
SystemC is defined and promoted by the Open SystemC Initiative (OSCI — now Accellera), and has been approved by the IEEE Standards Association as IEEE 1666-2011 - the SystemC Language Reference Manual (LRM). The LRM provides the definitive statement of the semantics of SystemC. OSCI also provide an open-source proof-of-concept simulator (sometimes incorrectly referred to as the reference simulator), which can be downloaded from the OSCI website. Although it was the intent of OSCI that commercial vendors and academia could create original software compliant to IEEE 1666, in practice most SystemC implementations have been at least partly based on the OSCI proof-of-concept simulator.
Compared to HDLs
SystemC has semantic similarities to VHDL and Verilog, but may be said to have a syntactical overhead compared to these when used as a hardware description language. On the other hand, it offers a greater range of expression, similar to object-oriented design partitioning and template classes. Although strictly a C++ class library, SystemC is sometimes viewed as being a la |
https://en.wikipedia.org/wiki/Privilege%20separation | In computer programming and computer security, privilege separation is one software-based technique for implementing the principle of least privilege. With privilege separation, a program is divided into parts which are limited to the specific privileges they require in order to perform a specific task. This is used to mitigate the potential damage of a computer security vulnerability.
A common method to implement privilege separation is to have a computer program fork into two processes. The main program drops privileges, and the smaller program keeps privileges in order to perform a certain task. The two halves then communicate via a socket pair. Thus, any successful attack against the larger program will gain minimal access, even though the pair of programs will be capable of performing privileged operations.
Privilege separation is traditionally accomplished by distinguishing a real user ID/group ID from the effective user ID/group ID, using the setuid(2)/setgid(2) and related system calls, which were specified by POSIX. If these are incorrectly positioned, gaps can allow widespread network penetration.
Many network service daemons have to do a specific privileged operation such as opening a raw socket or an Internet socket in the well-known ports range. Administrative utilities can require particular privileges at run-time as well. Such software tends to separate privileges by revoking them completely after the critical section is done, and changing the user it runs under to some unprivileged account. This action is known as dropping root under Unix-like operating systems. The unprivileged part is usually run under the "nobody" user or an equivalent separate user account.
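As a rough illustration of the fork-and-drop-privileges pattern described above (a minimal sketch using standard POSIX calls, not code from any particular daemon; it must be started as root or setuid/setgid will fail, and the uid/gid values are assumptions for a typical "nobody" account):

import os
import socket

def drop_root(uid=65534, gid=65534):            # 65534 is commonly "nobody"; adjust for the target system
    """Permanently give up root privileges (group first, then user)."""
    os.setgid(gid)
    os.setuid(uid)

parent_sock, child_sock = socket.socketpair()   # channel between the two halves
pid = os.fork()
if pid == 0:
    # Unprivileged half: drops root, then handles untrusted input as "nobody".
    parent_sock.close()
    drop_root()
    data = child_sock.recv(4096)
    child_sock.sendall(data.upper())
    os._exit(0)
else:
    # Privileged half: keeps root only for its narrow task (e.g. binding a low port)
    # and talks to the worker over the socket pair.
    child_sock.close()
    parent_sock.sendall(b"hello")
    print(parent_sock.recv(4096))
    os.waitpid(pid, 0)

A compromise of the code that parses untrusted data then yields only the "nobody" account, which is the point of the separation.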
Privilege separation can also be done by splitting functionality of a single program into multiple smaller programs, and then assigning the extended privileges to particular parts using file system permissions. That way the different programs have to communicate with each othe |
https://en.wikipedia.org/wiki/Cartoon%20All-Stars%20to%20the%20Rescue | Cartoon All-Stars to the Rescue is a 1990 American animated television propaganda film starring many characters from several animated television series at the time of its release. Financed by McDonald's, Ronald McDonald Children's Charities, it was originally simulcast for a limited time on April 21, 1990, on all four major American television networks (by supporting their Saturday morning characters): ABC, CBS, NBC, and Fox, and most independent stations, as well as various cable networks. McDonald's released a VHS home video edition of the special distributed by Buena Vista Home Video, which opened with an introduction from President George H. W. Bush, First Lady Barbara Bush and their dog, Millie. It was produced by the Academy of Television Arts & Sciences Foundation and Southern Star Productions, and was animated overseas by Wang Film Productions. The musical number "Wonderful Ways to Say No" was written by Academy Award-winning composer, Alan Menken and lyricist Howard Ashman, who also wrote the songs for Walt Disney Animation Studios' The Little Mermaid, Beauty and the Beast, and Aladdin.
The plot chronicles the exploits of Michael, a young teenage boy who is using marijuana as well as stealing and drinking alcohol. His younger sister, Corey, is worried about him because he started acting differently which becomes a concern for their parents (who are also starting to notice his changes). When Corey's piggy bank goes missing one morning, her cartoon toys come to life to help her find it. After discovering it in Michael's room along with his stash of drugs, they proceed to work together to do an intervention and take him on a fantasy journey to teach him the risks and consequences a life of drug abuse can bring.
Plot
In Corey's room, an unseen person steals Corey's piggy bank off her dresser. The theft is witnessed by Papa Smurf, who emerges from a Smurfs comic book with the other Smurfs and he alerts the other cartoon characters in the room (Alf from a pict |
https://en.wikipedia.org/wiki/Cope%27s%20rule | Cope's rule, named after American paleontologist Edward Drinker Cope, postulates that population lineages tend to increase in body size over evolutionary time. It was never actually stated by Cope, although he favoured the occurrence of linear evolutionary trends. It is sometimes also known as the Cope–Depéret rule, because Charles Depéret explicitly advocated the idea. Theodor Eimer had also done so earlier. The term "Cope's rule" was apparently coined by Bernhard Rensch, based on the fact that Depéret had "lionized Cope" in his book. While the rule has been demonstrated in many instances, it does not hold true at all taxonomic levels, or in all clades. Larger body size is associated with increased fitness for a number of reasons, although there are also some disadvantages both on an individual and on a clade level: clades comprising larger individuals are more prone to extinction, which may act to limit the maximum size of organisms.
Function
Effects of growth
Directional selection appears to act on organisms' size, whereas it exhibits a far smaller effect on other morphological traits, though it is possible that this perception may be a result of sample bias. This selectional pressure can be explained by a number of advantages, both in terms of mating success and survival rate.
For example, larger organisms find it easier to avoid or fight off predators and capture prey, to reproduce, to kill competitors, to survive temporary lean times, and to resist rapid climatic changes. They may also potentially benefit from better thermal efficiency, increased intelligence, and a longer lifespan.
Offsetting these advantages, larger organisms require more food and water, and shift from r to K-selection. Their longer generation time means a longer period of reliance on the mother, and on a macroevolutionary scale restricts the clade's ability to evolve rapidly in response to changing environments.
Capping growth
Left unfettered, the trend of ever-larger size would produc |
https://en.wikipedia.org/wiki/Aroma%20compound | An aroma compound, also known as an odorant, aroma, fragrance or flavoring, is a chemical compound that has a smell or odor. For an individual chemical or class of chemical compounds to impart a smell or fragrance, it must be sufficiently volatile for transmission via the air to the olfactory system in the upper part of the nose. As examples, various fragrant fruits have diverse aroma compounds, particularly strawberries which are commercially cultivated to have appealing aromas, and contain several hundred aroma compounds.
Generally, molecules meeting this specification have molecular weights of less than 310. Flavors affect both the sense of taste and smell, whereas fragrances affect only smell. Flavors tend to be naturally occurring, and the term fragrances may also apply to synthetic compounds, such as those used in cosmetics.
Aroma compounds can naturally be found in various foods, such as fruits and their peels, wine, spices, floral scent, perfumes, fragrance oils, and essential oils. For example, many form biochemically during the ripening of fruits and other crops. Wines have more than 100 aromas that form as byproducts of fermentation. Also, many of the aroma compounds play a significant role in the production of compounds used in the food service industry to flavor, improve, and generally increase the appeal of their products.
An odorizer may add a detectable odor to a dangerous odorless substance, like propane, natural gas, or hydrogen, as a safety measure.
Aroma compounds classified by structure
Esters
Linear terpenes
Cyclic terpenes
Note: Carvone, depending on its chirality, offers two different smells.
Aromatic
Amines
Other aroma compounds
Alcohols
Furaneol (strawberry)
1-Hexanol (herbaceous, woody)
cis-3-Hexen-1-ol (fresh cut grass)
Menthol (peppermint)
Aldehydes
High concentrations of aldehydes tend to be very pungent and overwhelming, but low concentrations can evoke a wide range of aromas.
Acetaldehyde (ethereal)
Hexanal (gr |
https://en.wikipedia.org/wiki/Directional%20drilling | Directional drilling (or slant drilling) is the practice of drilling non-vertical bores. It can be broken down into four main groups: oilfield directional drilling, utility installation directional drilling, directional boring (horizontal directional drilling - HDD), and surface in seam (SIS), which horizontally intersects a vertical bore target to extract coal bed methane.
History
Many prerequisites enabled this suite of technologies to become productive. Probably, the first requirement was the realization that oil wells, or water wells, do not necessarily need to be vertical. This realization was quite slow, and did not really grasp the attention of the oil industry until the late 1920s when there were several lawsuits alleging that wells drilled from a rig on one property had crossed the boundary and were penetrating a reservoir on an adjacent property. Initially, proxy evidence such as production changes in other wells was accepted, but such cases fueled the development of small diameter tools capable of surveying wells during drilling. Horizontal directional drill rigs are developing towards larger scale, miniaturization, mechanical automation, operation in hard strata, and monitored drilling over greater lengths and depths.
Measuring the inclination of a wellbore (its deviation from the vertical) is comparatively simple, requiring only a pendulum. Measuring the azimuth (direction with respect to the geographic grid in which the wellbore was running from the vertical), however, was more difficult. In certain circumstances, magnetic fields could be used, but would be influenced by metalwork used inside wellbores, as well as the metalwork used in drilling equipment. The next advance was in the modification of small gyroscopic compasses by the Sperry Corporation, which was making similar compasses for aeronautical navigation. Sperry did this under contract to Sun Oil (which was involved in a lawsuit as described above), and a spin-off company "Sperry Sun" was |
https://en.wikipedia.org/wiki/HOMFLY%20polynomial | In the mathematical field of knot theory, the HOMFLY polynomial or HOMFLYPT polynomial, sometimes called the generalized Jones polynomial, is a 2-variable knot polynomial, i.e. a knot invariant in the form of a polynomial of variables m and l.
A central question in the mathematical theory of knots is whether two knot diagrams represent the same knot. One tool used to answer such questions is a knot polynomial, which is computed from a diagram of the knot and can be shown to be an invariant of the knot, i.e. diagrams representing the same knot have the same polynomial. The converse may not be true. The HOMFLY polynomial is one such invariant and it generalizes two polynomials previously discovered, the Alexander polynomial and the Jones polynomial, both of which can be obtained by appropriate substitutions from HOMFLY. The HOMFLY polynomial is also a quantum invariant.
The name HOMFLY combines the initials of its co-discoverers: Jim Hoste, Adrian Ocneanu, Kenneth Millett, Peter J. Freyd, W. B. R. Lickorish, and David N. Yetter. The addition of PT recognizes independent work carried out by Józef H. Przytycki and Paweł Traczyk
Definition
The polynomial is defined using skein relations:
where L₊, L₋, and L₀ are links formed by crossing and smoothing changes on a local region of a link diagram, as indicated in the figure.
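The relation itself did not survive extraction. In the (ℓ, m) convention used in the introduction, one common normalization (supplied here as a reconstruction, not a quotation of the article's exact formula) is

\[
\ell\,P(L_+) + \ell^{-1}\,P(L_-) + m\,P(L_0) = 0, \qquad P(\text{unknot}) = 1 .
\]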
The HOMFLY polynomial of a link L that is a split union of two links and is given by
See the page on skein relation for an example of a computation using such relations.
Other HOMFLY skein relations
This polynomial can be obtained also using other skein relations:
Main properties
, where # denotes the knot sum; thus the HOMFLY polynomial of a composite knot is the product of the HOMFLY polynomials of its components.
, so the HOMFLY polynomial can often be used to distinguish between two knots of different chirality. However there exist chiral pairs of knots that have the same HOMFLY polynomial, e.g. knots 9₄₂ and 10₇₁ together with their |
https://en.wikipedia.org/wiki/Noise%20gate | A noise gate or simply gate is an electronic device or software that is used to control the volume of an audio signal. Comparable to a compressor, which attenuates signals above a threshold, such as loud attacks from the start of musical notes, noise gates attenuate signals that register below the threshold. However, noise gates attenuate signals by a fixed amount, known as the range. In its simplest form, a noise gate allows a main signal to pass through only when it is above a set threshold: the gate is "open". If the signal falls below the threshold, no signal is allowed to pass (or the signal is substantially attenuated): the gate is "closed". A noise gate is used when the level of the "signal" is above the level of the unwanted "noise". The threshold is set above the level of the "noise", and so when there is no main "signal", the gate is closed.
A common application is with electric guitar to remove hum and hiss noise caused by distortion effects units. A noise gate does not remove noise from the signal itself; when the gate is open, both the signal and the noise will pass through. Even though the signal and the unwanted noise are both present in open gate status, the noise is not as noticeable. The noise becomes most noticeable during periods where the main signal is not present, such as a bar of rest in a guitar solo. Gates typically feature "attack", "release", and "hold" settings and may feature a "look-ahead" function.
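As a rough sketch of this behaviour, the following simplified software gate implements only threshold, range, and release (attack, hold, and look-ahead are omitted, and all parameter values are assumed for illustration):

def noise_gate(samples, threshold=0.05, range_db=-60.0, release_samples=441):
    """Attenuate samples below `threshold` by `range_db`, fading out over `release_samples`."""
    floor_gain = 10.0 ** (range_db / 20.0)       # fixed attenuation applied when the gate is closed
    gain, out = 1.0, []
    for x in samples:
        target = 1.0 if abs(x) >= threshold else floor_gain
        if target < gain:                         # gate closing: linear release ramp (the fade-out)
            gain = max(target, gain - (1.0 - floor_gain) / release_samples)
        else:                                     # gate opening: instantaneous in this sketch
            gain = target
        out.append(x * gain)
    return out

At a 44.1 kHz sample rate, release_samples=441 corresponds to a 10 ms fade-out; a real gate would also smooth the opening with an attack time.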
Controls and parameters
Noise gates have a threshold control to set the level at which the gate will open. More advanced noise gates have more features.
The release control is used to define the length of time the gate takes to change from open to fully closed. It is the fade-out duration. A fast release abruptly cuts off the sound, whereas a slower release smoothly attenuates the signal from open to closed, resulting in a slow fade-out. If the release time is too short, a click can be heard when the gate re-opens. Release is the secon |
https://en.wikipedia.org/wiki/Tangent%20half-angle%20formula | In trigonometry, tangent half-angle formulas relate the tangent of half of an angle to trigonometric functions of the entire angle. The tangent of half an angle is the stereographic projection of the circle through the point at angle onto the line through the angles . Among these formulas are the following:
From these one can derive identities expressing the sine, cosine, and tangent as functions of tangents of half-angles:
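The displayed identities were lost in extraction. The standard forms (well-known trigonometric facts, reproduced here rather than quoted from the article's exact rendering) are

\[
\tan\frac{\theta}{2} = \frac{\sin\theta}{1+\cos\theta} = \frac{1-\cos\theta}{\sin\theta},
\]

and, writing t = tan(θ/2),

\[
\sin\theta = \frac{2t}{1+t^{2}}, \qquad
\cos\theta = \frac{1-t^{2}}{1+t^{2}}, \qquad
\tan\theta = \frac{2t}{1-t^{2}} .
\]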
Proofs
Algebraic proofs
Using double-angle formulae and the Pythagorean identity gives
Taking the quotient of the formulae for sine and cosine yields
Combining the Pythagorean identity with the double-angle formula for the cosine,
rearranging, and taking the square roots yields
and
which, upon division gives
Alternatively,
It turns out that the absolute value signs in these last two formulas may be dropped, regardless of which quadrant is in. With or without the absolute value bars these formulas do not apply when both the numerator and denominator on the right-hand side are zero.
Also, using the angle addition and subtraction formulae for both the sine and cosine one obtains:
Pairwise addition of the above four formulae yields:
Setting and and substituting yields:
Dividing the sum of sines by the sum of cosines one arrives at:
Geometric proofs
Applying the formulae derived above to the rhombus figure on the right, it is readily shown that
In the unit circle, application of the above shows that . By similarity of triangles,
It follows that
The tangent half-angle substitution in integral calculus
In various applications of trigonometry, it is useful to rewrite the trigonometric functions (such as sine and cosine) in terms of rational functions of a new variable . These identities are known collectively as the tangent half-angle formulae because of the definition of . These identities can be useful in calculus for converting rational functions in sine and cosine to functions of in order to find their antideriv |
https://en.wikipedia.org/wiki/List%20of%20satellites%20in%20geosynchronous%20orbit | This is a list of satellites in geosynchronous orbit (GSO). These satellites are commonly used for communication purposes, such as radio and television networks, back-haul, and direct broadcast. Traditional global navigation systems do not use geosynchronous satellites, but some SBAS navigation satellites do. A number of weather satellites are also present in geosynchronous orbits. Not included in the list below are several more classified military geosynchronous satellites, such as PAN.
A special case of geosynchronous orbit is the geostationary orbit, which is a circular geosynchronous orbit at zero inclination (that is, directly above the equator). A satellite in a geostationary orbit appears stationary, always at the same point in the sky, to ground observers. Popularly or loosely, the term "geosynchronous" may be used to mean geostationary. Specifically, geosynchronous Earth orbit (GEO) may be a synonym for geosynchronous equatorial orbit, or geostationary Earth orbit. To avoid confusion, geosynchronous satellites that are not in geostationary orbit are sometimes referred to as being in an inclined geostationary orbit (IGSO).
Some of these satellites are separated from each other by as little as 0.1° longitude. This corresponds to an inter-satellite spacing of approximately 73 km. The major consideration for spacing of geostationary satellites is the beamwidth at-orbit of uplink transmitters, which is primarily a factor of the size and stability of the uplink dish, as well as what frequencies the satellite's transponders receive; satellites with discontiguous frequency allocations can be much closer together.
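The 73 km figure follows directly from the geostationary orbital radius of roughly 42,164 km from the Earth's centre (the radius is an assumption here; it is not stated in the text):

import math

geo_radius_km = 42_164                       # assumed Earth-centre-to-satellite distance for GEO
separation_deg = 0.1
arc_km = 2 * math.pi * geo_radius_km * (separation_deg / 360)
print(round(arc_km, 1))                      # about 73.6 km, matching the spacing quoted above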
As of July 2023, the website UCS Satellite Database lists 6,718 known satellites. This includes all orbits and everything down to the little CubeSats, not just satellites in GEO. Of these, 580 are listed in the database as being at GEO. The website provides a spreadsheet containing details of all the satellites, which can be downloaded.
Listings are fr |
https://en.wikipedia.org/wiki/Cray-3 | The Cray-3 was a vector supercomputer, Seymour Cray's designated successor to the Cray-2. The system was one of the first major applications of gallium arsenide (GaAs) semiconductors in computing, using hundreds of custom built ICs packed into a CPU. The design goal was performance around 16 GFLOPS, about 12 times that of the Cray-2.
Work started on the Cray-3 in 1988 at Cray Research's (CRI) development labs in Chippewa Falls, Wisconsin. Other teams at the lab were working on designs with similar performance. To focus the teams, the Cray-3 effort was moved to a new lab in Colorado Springs, Colorado later that year. Shortly thereafter, the corporate headquarters in Minneapolis decided to end work on the Cray-3 in favor of another design, the Cray C90. In 1989 the Cray-3 effort was spun off to a newly formed company, Cray Computer Corporation (CCC).
The launch customer, Lawrence Livermore National Laboratory, cancelled their order in 1991 and a number of company executives left shortly thereafter. The first machine was finally ready in 1993, but with no launch customer, it was instead loaned as a demonstration unit to the nearby National Center for Atmospheric Research in Boulder. The company went bankrupt in May 1995, and the machine was officially decommissioned.
With the delivery of the first Cray-3, Seymour Cray immediately moved on to the similar-but-improved Cray-4 design, but the company went bankrupt before it was completely tested. The Cray-3 was Cray's last completed design; with CCC's bankruptcy, he formed SRC Computers to concentrate on parallel designs, but died in a car accident in 1996 before this work was delivered.
History
Background
Seymour Cray began the design of the Cray-3 in 1985, as soon as the Cray-2 reached production. Cray generally set himself the goal of producing new machines with ten times the performance of the previous models. Although the machines did not always meet this goal, this was a useful technique in defining the project |
https://en.wikipedia.org/wiki/Phil%20Kaufman%20Award | The Phil Kaufman Award for Distinguished Contributions to EDA honors individuals for their impact on electronic design by their contributions to electronic design automation (EDA). It was established in 1994 by the EDA Consortium (now the Electronic System Design Alliance, a SEMI Technology Community). The IEEE Council on Electronic Design Automation (CEDA) became a co-sponsor of the award. The first Phil Kaufman Award was presented in 1994.
The IEEE has a policy not to issue awards to deceased persons. To honor individuals who made a significant impact on EDA but died before the award was established, the Phil Kaufman Hall of Fame was created by the ESDA in 2020. The first Hall of Fame honor was presented in June 2021. Phil Kaufman awardees are included in the Phil Kaufman Hall of Fame.
Contributions to qualify for the Phil Kaufman Award are evaluated in any of the following categories:
Business
Industry Direction and Promotion
Technology and Engineering
Educational and Mentoring
The award was established to honor Phil Kaufman, the deceased former president of Quickturn Systems.
The award is described as the "Nobel Prize of EDA".
Recipients
All recipients are listed at the ESDA Phil Kaufman Award webpage.
1994 – Hermann Gummel
1995 – Donald Pederson
1996 – Carver Mead
1997 – James Solomon
1998 – Ernest S. Kuh
1999 – Hugo De Man, known for his contributions in creating and driving the development of design automation tools that have had measurable impact on the productivity of electronic design engineers.
2000 – Paul (Yen-Son) Huang
2001 – Alberto Sangiovanni-Vincentelli
2002 – Ronald A. Rohrer, electronic industry pioneer, entrepreneur, researcher and educator, who headed a student circuit simulator project, which eventually led to the development of SPICE.
2003 – A. Richard Newton
2004 – Joseph Costello
2005 – Phil Moorby, inventor of Verilog
2006 – Robert Dutton, creator of SUPREM (Stanford University Process Engineering Models |
https://en.wikipedia.org/wiki/Ehrenfeucht%E2%80%93Fra%C3%AFss%C3%A9%20game | In the mathematical discipline of model theory, the Ehrenfeucht–Fraïssé game (also called back-and-forth games) is a technique based on game semantics for determining whether two structures are elementarily equivalent. The main application of Ehrenfeucht–Fraïssé games is in proving the inexpressibility of certain properties in first-order logic. Indeed, Ehrenfeucht–Fraïssé games provide a complete methodology for proving inexpressibility results for first-order logic. In this role, these games are of particular importance in finite model theory and its applications in computer science (specifically computer aided verification and database theory), since Ehrenfeucht–Fraïssé games are one of the few techniques from model theory that remain valid in the context of finite models. Other widely used techniques for proving inexpressibility results, such as the compactness theorem, do not work in finite models.
Ehrenfeucht–Fraïssé-like games can also be defined for other logics, such as fixpoint logics and pebble games for finite variable logics; extensions are powerful enough to characterise definability in existential second-order logic.
Main idea
The main idea behind the game is that we have two structures, and two players – Spoiler and Duplicator. Duplicator wants to show that the two structures are elementarily equivalent (satisfy the same first-order sentences), whereas Spoiler wants to show that they are different. The game is played in rounds. A round proceeds as follows: Spoiler chooses any element from one of the structures, and Duplicator chooses an element from the other structure. In simplified terms, the Duplicator's task is to always pick an element "similar" to the one that the Spoiler has chosen, whereas the Spoiler's task is to choose an element for which no "similar" element exists in the other structure. Duplicator wins if there exists an isomorphism between the eventual substructures chosen from the two different structures; otherwise, Spoiler wins. |
https://en.wikipedia.org/wiki/129%20%28number%29 | 129 (one hundred [and] twenty-nine) is the natural number following 128 and preceding 130.
In mathematics
129 is the sum of the first ten prime numbers. It is the smallest number that can be expressed as a sum of three squares in four different ways: 129 = 11² + 2² + 2² = 10² + 5² + 2² = 8² + 8² + 1² = 8² + 7² + 4².
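A brute-force check of this claim, counting only representations with three nonzero squares, can be written in a few lines (an illustrative script, not part of the article):

```python
# Enumerate representations n = a^2 + b^2 + c^2 with a >= b >= c >= 1.
from math import isqrt

def three_square_reps(n):
    reps = []
    for a in range(isqrt(n), 0, -1):
        for b in range(min(a, isqrt(n - a * a)), 0, -1):
            c2 = n - a * a - b * b
            c = isqrt(c2)
            if c >= 1 and c * c == c2 and c <= b:
                reps.append((a, b, c))
    return reps

print(three_square_reps(129))
# [(11, 2, 2), (10, 5, 2), (8, 8, 1), (8, 7, 4)]
print(min(n for n in range(1, 130) if len(three_square_reps(n)) >= 4))
# 129: no smaller number has four such representations
```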
129 is the product of only two primes, 3 and 43, making 129 a semiprime. Since 3 and 43 are both Gaussian primes, this means that 129 is a Blum integer.
129 is a repdigit in base 6 (333).
129 is a happy number.
129 is a centered octahedral number.
In the military
Raytheon AGM-129 ACM (Advanced Cruise Missile) was a low observable, sub-sonic, jet-powered, air-launched cruise missile used by the United States Air Force
Soviet submarine K-129 (1960) was a Soviet Pacific Fleet ballistic missile submarine that sank in 1968
was a United States Navy Mission Buenaventura-class fleet oiler during World War II
was a Crosley-class high speed transport of the United States Navy
was the lead ship of her class of destroyer escort in the United States Navy
was a United States Navy Haskell-class attack transport during World War II
was a United States Navy Crater-class cargo ship during World War II
was a United States Navy Auk-class minesweeper for removing naval mines laid in the water
Agusta A129 Mangusta is an attack helicopter originally designed and produced by Italian company Agusta
The 129th Rescue Wing (129 RQW) is a unit of the California Air National Guard
In transportation
LZ 129 Hindenburg was a German zeppelin which went up in flames while landing on May 6, 1937
London Buses route 129 is a Transport for London contracted bus route in London
STS-129 was a Space Shuttle mission to the International Space Station, flown in November 2009 by the shuttle Atlantis.
In other fields
129 is also:
The year AD 129 or 129 BC
129 AH is a year in the Islamic calendar that corresponds to 746–747 CE
129 Antigone is a main belt asteroid
The atomic number of unbiennium, an element yet to be discovered
A |
https://en.wikipedia.org/wiki/Recuperator | A recuperator is a special purpose counter-flow energy recovery heat exchanger positioned within the supply and exhaust air streams of an air handling system, or in the exhaust gases of an industrial process, in order to recover the waste heat. Generally, they are used to extract heat from the exhaust and use it to preheat air entering the combustion system. In this way they use waste energy to heat the air, offsetting some of the fuel, and thereby improve the energy efficiency of the system as a whole.
Description
In many types of processes, combustion is used to generate heat, and the recuperator serves to recuperate, or reclaim this heat, in order to reuse or recycle it. The term recuperator refers as well to liquid-liquid counterflow heat exchangers used for heat recovery in the chemical and refinery industries and in closed processes such as ammonia-water or LiBr-water absorption refrigeration cycle.
Recuperators are often used in association with the burner portion of a heat engine, to increase the overall efficiency. For example, in a gas turbine engine, air is compressed, mixed with fuel, which is then burned and used to drive a turbine. The recuperator transfers some of the waste heat in the exhaust to the compressed air, thus preheating it before entering the fuel burner stage. Since the gases have been pre-heated, less fuel is needed to heat the gases up to the turbine inlet temperature. By recovering some of the energy usually lost as waste heat, the recuperator can make a heat engine or gas turbine significantly more efficient.
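As a rough illustration of that fuel saving, the heat recovered can be estimated from an assumed recuperator effectiveness; all numbers in the sketch below are invented example values, not figures from the article.

```python
# Back-of-the-envelope estimate of fuel saved by preheating compressed air
# with turbine exhaust heat. Every numeric input is an assumed example value.
cp = 1005.0            # J/(kg*K), specific heat of air (approx.)
m_dot = 10.0           # kg/s, air mass flow
T_comp_out = 500.0     # K, compressor discharge temperature
T_exhaust = 800.0      # K, turbine exhaust temperature
T_turbine_in = 1400.0  # K, required turbine inlet temperature
effectiveness = 0.8    # fraction of the maximum possible heat transfer
LHV = 43e6             # J/kg, lower heating value of a typical fuel (approx.)

# Heat recovered by the recuperator and resulting preheated air temperature.
q_recovered = effectiveness * m_dot * cp * (T_exhaust - T_comp_out)   # W
T_preheated = T_comp_out + q_recovered / (m_dot * cp)                 # K

# Fuel flow needed to reach the turbine inlet temperature, with and without
# preheating (ignoring the fuel's own mass and combustion losses).
fuel_without = m_dot * cp * (T_turbine_in - T_comp_out) / LHV
fuel_with = m_dot * cp * (T_turbine_in - T_preheated) / LHV
print(f"fuel saved: {100 * (1 - fuel_with / fuel_without):.0f}%")     # ~27%
```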
Energy transfer process
Normally the heat transfer between airstreams provided by the device is termed as "sensible heat", which is the exchange of energy, or enthalpy, resulting in a change in temperature of the medium (air in this case), but with no change in moisture content. However, if moisture or relative humidity levels in the return air stream are high enough to allow condensation to take place in the device, then this |
https://en.wikipedia.org/wiki/Raymond%20Damadian | Raymond Vahan Damadian (March 16, 1936 – August 3, 2022) was an American physician, medical practitioner, and inventor of the first NMR (nuclear magnetic resonance) scanning machine.
Damadian's research into sodium and potassium in living cells led him to his first experiments with nuclear magnetic resonance (NMR) which caused him to first propose the MR body scanner in 1969. Damadian discovered that tumors and normal tissue can be distinguished in vivo by nuclear magnetic resonance (NMR) because of their prolonged relaxation times, both T1 (spin-lattice relaxation) and T2 (spin-spin relaxation). Damadian was the first to perform a full-body scan of a human being in 1977 to diagnose cancer. Damadian invented an apparatus and method to use NMR safely and accurately to scan the human body, a method now well known as magnetic resonance imaging (MRI).
Damadian received several prizes. In 2001, the Lemelson-MIT Prize Program bestowed its $100,000 Lifetime Achievement Award on Damadian as "the man who invented the MRI scanner." He went on to collaborate with Wilson Greatbatch, an early developer of the implantable pacemaker, to develop an MRI-compatible pacemaker. The Franklin Institute in Philadelphia gave its recognition of Damadian's work on MRI with the Bower Award in Business Leadership. He was also named Knights of Vartan 2003 "Man of the Year". He received a National Medal of Technology in 1988 and was inducted into the National Inventors Hall of Fame in 1989.
Biography
Early life
Raymond Vahan Damadian () was born in New York City, to an Armenian family. His father Vahan was a photoengraver who had immigrated from what is now Turkey, while his mother Odette (née Yazedjian) was an accountant. He earned his bachelor's degree in mathematics from the University of Wisconsin–Madison in 1956, and an M.D. degree from the Albert Einstein College of Medicine in New York City in 1960. He studied the violin at Juilliard for 8 years, and played in Junior Davis Cup tennis c |
https://en.wikipedia.org/wiki/K%C3%A1rm%C3%A1n%20vortex%20street | In fluid dynamics, a Kármán vortex street (or a von Kármán vortex street) is a repeating pattern of swirling vortices, caused by a process known as vortex shedding, which is responsible for the unsteady separation of flow of a fluid around blunt bodies.
It is named after the engineer and fluid dynamicist Theodore von Kármán, and is responsible for such phenomena as the "singing" of suspended telephone or power lines and the vibration of a car antenna at certain speeds. Mathematical modeling of von Kármán vortex street can be performed using different techniques including but not limited to solving the full Navier-Stokes equations with k-epsilon, SST, k-omega and Reynolds stress, and large eddy simulation (LES) turbulence models, by numerically solving some dynamic equations such as the Ginzburg–Landau equation, or by use of a bicomplex variable.
Analysis
A vortex street forms only at a certain range of flow velocities, specified by a range of Reynolds numbers (Re), typically above a limiting Re value of about 90. The (global) Reynolds number for a flow is a measure of the ratio of inertial to viscous forces in the flow of a fluid around a body or in a channel, and may be defined as a nondimensional parameter of the global speed of the whole fluid flow:
Re = U L / ν0
where:
U = the free stream flow speed (i.e. the flow speed far from the fluid boundaries like the body speed relative to the fluid at rest, or an inviscid flow speed, computed through the Bernoulli equation), which is the original global flow parameter, i.e. the target to be non-dimensionalised.
L = a characteristic length parameter of the body or channel
ν0 = the free stream kinematic viscosity parameter of the fluid, which in turn is the ratio:
ν0 = μ0 / ρ0
between:
ρ0 = the reference fluid density.
μ0 = the free stream fluid dynamic viscosity
For common flows (the ones which can usually be considered as incompressible or isothermal), the kinematic viscosity is everywhere uniform over all the flow field and constant in time, s |
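As a minimal illustration (not from the article; the numbers are assumed example values), the Reynolds number for flow past a circular cylinder, and the approximate vortex-shedding frequency via the Strouhal number St ≈ 0.2, can be estimated as follows:

```python
# Illustrative estimate for air flowing past a circular cylinder.
U = 10.0       # m/s, free stream speed (assumed)
D = 0.02       # m, cylinder diameter, the characteristic length (assumed)
nu = 1.5e-5    # m^2/s, kinematic viscosity of air near 20 degC (approx.)

Re = U * D / nu            # Reynolds number = U L / nu
St = 0.2                   # Strouhal number, roughly constant over a wide Re range
f_shed = St * U / D        # vortex shedding frequency in Hz

print(f"Re = {Re:.0f}")                          # ~13333, well above the ~90 threshold
print(f"shedding frequency = {f_shed:.0f} Hz")   # ~100 Hz
```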
https://en.wikipedia.org/wiki/Lab-on-a-chip | A lab-on-a-chip (LOC) is a device that integrates one or several laboratory functions on a single integrated circuit (commonly called a "chip") of only millimeters to a few square centimeters to achieve automation and high-throughput screening. LOCs can handle extremely small fluid volumes down to less than pico-liters. Lab-on-a-chip devices are a subset of microelectromechanical systems (MEMS) devices and sometimes called "micro total analysis systems" (µTAS). LOCs may use microfluidics, the physics, manipulation and study of minute amounts of fluids. However, strictly regarded "lab-on-a-chip" indicates generally the scaling of single or multiple lab processes down to chip-format, whereas "µTAS" is dedicated to the integration of the total sequence of lab processes to perform chemical analysis.
History
After the invention of microtechnology (~1954) for realizing integrated semiconductor structures for microelectronic chips, these lithography-based technologies were soon applied in pressure sensor manufacturing (1966) as well. Due to further development of these usually CMOS-compatibility limited processes, a tool box became available to create micrometre or sub-micrometre sized mechanical structures in silicon wafers as well: the microelectromechanical systems (MEMS) era had started.
Next to pressure sensors, airbag sensors and other mechanically movable structures, fluid handling devices were developed. Examples are: channels (capillary connections), mixers, valves, pumps and dosing devices. The first LOC analysis system was a gas chromatograph, developed in 1979 by S.C. Terry at Stanford University. However, only at the end of the 1980s and beginning of the 1990s did the LOC research start to seriously grow as a few research groups in Europe developed micropumps, flowsensors and the concepts for integrated fluid treatments for analysis systems. These µTAS concepts demonstrated that integration of pre-treatment steps, usually done at lab-scale, could extend t |
https://en.wikipedia.org/wiki/Row%20%28database%29 | In the context of a relational database, a row—also called a tuple—represents a single, implicitly structured data item in a table. In simple terms, a database table can be thought of as consisting of rows and columns. Each row in a table represents a set of related data, and every row in the table has the same structure.
For example, in a table that represents companies, each row would represent a single company. Columns might represent things like company name, company street address, whether the company is publicly held, its VAT number, etc. In a table that represents the association of employees with departments, each row would associate one employee with one department.
The implicit structure of a row, and the meaning of the data values in a row, requires that the row be understood as providing a succession of data values, one in each column of the table. The row is then interpreted as a relvar composed of a set of tuples, with each tuple consisting of the two items: the name of the relevant column and the value this row provides for that column.
Each column expects a data value of a particular type. For example, one column might require a unique identifier, another might require text representing a person's name, another might require an integer representing hourly pay in dollars.
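A minimal sketch using Python's built-in sqlite3 module (the table and values are invented for illustration) shows how each row supplies one typed value per column:

```python
import sqlite3

# In-memory database with a simple companies table (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE companies (
        id            INTEGER PRIMARY KEY,   -- unique identifier
        name          TEXT NOT NULL,         -- company name
        street        TEXT,                  -- street address
        publicly_held INTEGER,               -- 1 = yes, 0 = no
        hourly_pay    INTEGER                -- integer dollars, as in the text
    )
""")

# Each INSERT adds one row: a set of related data with one value per column.
conn.execute("INSERT INTO companies VALUES (1, 'Acme Corp', '1 Main St', 1, 35)")
conn.execute("INSERT INTO companies VALUES (2, 'Widget LLC', '9 Oak Ave', 0, 28)")

for row in conn.execute("SELECT * FROM companies"):
    print(row)    # e.g. (1, 'Acme Corp', '1 Main St', 1, 35)
```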
See also
Column (database)
References
Data modeling
Relational model |
https://en.wikipedia.org/wiki/Column%20%28database%29 | In a relational database, a column is a set of data values of a particular type, one value for each row of the database. A column may contain text values, numbers, or even pointers to files in the operating system. Columns typically contain simple types, though some relational database systems allow columns to contain more complex data types, such as whole documents, images, or even video clips. A column can also be called an attribute.
Each row would provide a data value for each column and would then be understood as a single structured data value. For example, a database that represents company contact information might have the following columns: ID, Company Name, Address Line 1, Address Line 2, City, and Postal Code. More formally, a row is a tuple containing a specific value for each column, for example: (1234, 'Big Company Inc.', '123 East Example Street', '456 West Example Drive', 'Big City', 98765).
Field
The word 'field' is normally used interchangeably with 'column'. However, database perfectionists tend to favor using 'field' to signify a specific cell of a given row, to enable accuracy in communicating with other developers. In that usage, columns (really column names) are referred to as field names (common to every row/record in the table), while a field refers to a single storage location in a specific record (like a cell) holding one value (the field value). The terms record and field come from the more practical field of database usage and traditional DBMS systems (this was linked to business terms used in manual databases, e.g. filing-cabinet storage with a record for each customer). The terms row and column come from the more theoretical study of relational theory.
Another distinction between the terms 'column' and 'field' is that the term 'column' does not apply to certain databases, for instance key-value stores, that do not conform to the traditional relational database structure.
See also
Column-oriented DBMS, optimization for co |
https://en.wikipedia.org/wiki/Granular%20computing | Granular computing is an emerging computing paradigm of information processing that concerns the processing of complex information entities called "information granules", which arise in the process of data abstraction and derivation of knowledge from information or data. Generally speaking, information granules are collections of entities that usually originate at the numeric level and are arranged together due to their similarity, functional or physical adjacency, indistinguishability, coherency, or the like.
At present, granular computing is more a theoretical perspective than a coherent set of methods or principles. As a theoretical perspective, it encourages an approach to data that recognizes and exploits the knowledge present in data at various levels of resolution or scales. In this sense, it encompasses all methods which provide flexibility and adaptability in the resolution at which knowledge or information is extracted and represented.
Types of granulation
As mentioned above, granular computing is not an algorithm or process; there is no particular method that is called "granular computing". It is rather an approach to looking at data that recognizes how different and interesting regularities in the data can appear at different levels of granularity, much as different features become salient in satellite images of greater or lesser resolution. On a low-resolution satellite image, for example, one might notice interesting cloud patterns representing cyclones or other large-scale weather phenomena, while in a higher-resolution image, one misses these large-scale atmospheric phenomena but instead notices smaller-scale phenomena, such as the interesting pattern that is the streets of Manhattan. The same is generally true of all data: At different resolutions or granularities, different features and relationships emerge. The aim of granular computing is to try to take advantage of this fact in designing more effective machine-learning and reasoni |
https://en.wikipedia.org/wiki/Beeline%20%28brand%29 | Beeline (), formerly Bee Line GSM () is a telecommunications brand by company PJSC VimpelCom, founded in Russia.
PJSC VimpelCom is Russia's third-largest wireless and second-largest telecommunications operator. Its headquarters is located in Moscow. Since 2009, PJSC VimpelCom has been a subsidiary of VimpelCom Ltd., which became Veon in 2017 and is based in Amsterdam. VimpelCom's main competitors in Russia are Mobile TeleSystems, MegaFon and Tele2.
The commercial service was launched under the Beeline brand, a brand developed by Fabela in late 1993 to differentiate the company as a youthful and fun company, rather than a technical company. The name comes from the English term "beeline", meaning the most direct way between two points.
VimpelCom relaunched Beeline with the current characteristic black-and-yellow striped circle in 2005 with a campaign to associate the brand with the principles of brightness, friendliness, effectiveness, simplicity, and positive emotions; with a new slogan "Живи на яркой стороне" (Live on the bright side). The rebranding campaign was hugely successful and the principles associated with the brand "captured hearts and minds", in the words of the company.
History in Russia
OJSC VimpelCom was founded in 1992 and initially operated AMPS/D-AMPS network in Moscow area. In 1996 it became the first Russian company listed on the New York Stock Exchange ().
In November 2005 OJSC VimpelCom stepped further with foreign acquisitions by acquiring 100% of Ukrainian RadioSystems, a marginal Ukrainian GSM operator operating under the Wellcom and Mobi brands. The deal has been surrounded by a controversy involving two major shareholders of VimpelCom: the Russian Alfa Group and Telenor, the incumbent Norwegian telecommunications company.
The company's current (as of July 2008) license portfolio covers a territory where 97% of Russia's population resides, as well as 100% of the territory of Kazakhstan, Ukraine, Uzbekistan, Tajikistan, Georgia, an |
https://en.wikipedia.org/wiki/Peter%20Chen | Peter Pin-Shan Chen (; born 3 January 1947) is a Taiwanese American computer scientist. He is a (retired) distinguished career scientist and faculty member at Carnegie Mellon University and Distinguished Chair Professor Emeritus at LSU. He is known for the development of the entity–relationship model in 1976.
Biography
Born 1947 in Taichung, Taiwan, Peter Chen received a B.S. in electrical engineering in 1968 at the National Taiwan University, and a Ph.D. in computer science/applied mathematics at Harvard University in 1973. In 1970, he worked one summer at IBM. After graduating from Harvard, he spent one year at Honeywell and a summer at Digital Equipment Corporation.
From 1974 to 1978 Chen was an assistant professor at the MIT Sloan School of Management. From 1978 to 1983 he was an associate professor at the University of California, Los Angeles (UCLA Management School). From 1983 to 2011 Chen held the position of M. J. Foster Distinguished Chair Professor of Computer Science at Louisiana State University and, for several years, adjunct professor in its Business School and Medical School (Shreveport). During this time period, he was a visiting professor once at Harvard in '89-'90 and three times at Massachusetts Institute of Technology (EECS Dept. in '86-'87, Sloan School in '90-'91, and Division of Engineering Systems in 06-'07). From 2010 to 2020, Chen was a Distinguished Career Scientist and faculty member at Carnegie Mellon University, U.S.A.
Besides lecturing around the world, he has also served as an (honorary) professor outside of the U.S. In 1984, under the sponsorship of the United Nations, he taught a one-month short course on databases at Huazhong University of Science and Technology in Wuhan, China, and was awarded as Honorary Professor there. Then, he went to Beijing as a member of the IEEE delegation of the First International Conference on Computers and Applications (the first major IEEE computer conference held in China). From 2008 to 2014, h |
https://en.wikipedia.org/wiki/Radiation%20hardening | Radiation hardening is the process of making electronic components and circuits resistant to damage or malfunction caused by high levels of ionizing radiation (particle radiation and high-energy electromagnetic radiation), especially for environments in outer space (especially beyond the low Earth orbit), around nuclear reactors and particle accelerators, or during nuclear accidents or nuclear warfare.
Most semiconductor electronic components are susceptible to radiation damage, and radiation-hardened (rad-hard) components are based on their non-hardened equivalents, with some design and manufacturing variations that reduce the susceptibility to radiation damage. Due to the extensive development and testing required to produce a radiation-tolerant design of a microelectronic chip, the technology of radiation-hardened chips tends to lag behind the most recent developments.
Radiation-hardened products are typically tested to one or more resultant-effects tests, including total ionizing dose (TID), enhanced low dose rate effects (ELDRS), neutron and proton displacement damage, and single event effects (SEEs).
Problems caused by radiation
Environments with high levels of ionizing radiation create special design challenges. A single charged particle can knock thousands of electrons loose, causing electronic noise and signal spikes. In the case of digital circuits, this can cause results which are inaccurate or unintelligible. This is a particularly serious problem in the design of satellites, spacecraft, future quantum computers, military aircraft, nuclear power stations, and nuclear weapons. In order to ensure the proper operation of such systems, manufacturers of integrated circuits and sensors intended for the military or aerospace markets employ various methods of radiation hardening. The resulting systems are said to be rad(iation)-hardened, rad-hard, or (within context) hardened.
Major radiation damage sources
Typical sources of exposure of electronics to ioni |
https://en.wikipedia.org/wiki/Sergey%20Lebedev%20%28scientist%29 | Sergey Alekseyevich Lebedev (; 2 November, 1902 – 3 July, 1974) was a Soviet scientist in the fields of electrical engineering and computer science, and designer of the first Soviet computers.
Biography
Lebedev was born in Nizhny Novgorod, Russian Empire. He graduated from Moscow Highest Technical School in 1928. From then until 1946 he worked at All-Union Electrotechnical Institute (formerly a division of MSTU) in Moscow and Kyiv. In 1939 he was awarded the degree of Doctor of Sciences for the development of the theory of "artificial stability" of electrical systems.
During World War II, Lebedev worked in the field of control automation of complex systems. His group designed a weapon-aiming stabilization system for tanks and an automatic guidance system for airborne missiles. To perform these tasks Lebedev developed an analog computer system to solve ordinary differential equations.
From 1946 to 1951 he headed the Kiev Electrotechnical Institute of the Ukrainian Academy of Sciences, working on improving the stability of electrical systems. For this work he received the Stalin (State) prize in 1950.
In 1948 Lebedev learned from foreign magazines that scientists in western countries were working on the design of electronic computers, although the details were secret. In the autumn of the same year he decided to focus the work of his laboratory on computer design. Lebedev's first computer, MESM, was fully completed by the end of 1951. In April 1953 the State commission accepted the BESM-1 as operational, but it did not go into series production because of opposition from the Ministry of Machine and Instrument Building, which had developed its own weaker and less reliable machine.
Lebedev then began development of a new, more powerful computer, the M-20, the number denoting its expected processing speed of twenty thousand operations per second. In 1958 the machine was accepted as operational and put into series production. Simultaneously the BESM-2, a development |
https://en.wikipedia.org/wiki/Telehouse%20Europe | Telehouse is a major carrier-neutral colocation, information and communications technology services provider based in Docklands, London. Established in 1988, it operates eight facilities in London, Paris and Frankfurt. Part of the global Telehouse network of data centres, the brand has 45 colocation facilities in 26 major cities around the world including Moscow, Istanbul, Johannesburg, Cape Town, Beijing, Shanghai, Hong Kong, Singapore, Vietnam, Seoul, Tokyo, New York and Los Angeles. KDDI, Telehouse's Japanese telecommunications and systems integration parent company, operates data centre facilities in America and Asia.
Operations
London
Operational since 1990, Telehouse North became Europe's first purpose-built neutral colocation facility. LINX traffic has been moving through the carrier-neutral Telehouse campus since its opening. Telehouse hosts the vast majority of internet peering traffic from LINX.
It is the main hub of the Internet in the United Kingdom. In response to growing demand for a Central London location, Telehouse opened an additional colocation facility in 1997, Telehouse Metro, in the London Borough of Islington near Silicon Roundabout; Telehouse Metro later closed in 2020.
A second building at the Docklands site, Telehouse East, was opened in 1999 and the construction of a third building, Telehouse West, at its Docklands site was completed in March 2010. In July 2014 KDDI announced that a fourth building North Two would be built on the site, adjacent to the existing Telehouse North building. In August 2016, Telehouse Europe opened $177 million North Two data center of 24,000 square meters, increasing its capacity at the Docklands site where it already had 73,000 square meters of space. According to Telehouse, North Two is the only UK data center to own a 132 kV on-campus grid substation that is directly connected to the National Grid, reducing transmission losses and improving power density and service continuity. North Two also utilizes th |
https://en.wikipedia.org/wiki/You%20aren%27t%20gonna%20need%20it | "You aren't gonna need it" (YAGNI) is a principle which arose from extreme programming (XP) that states a programmer should not add functionality until deemed necessary. Other forms of the phrase include "You aren't going to need it" (YAGTNI) and "You ain't gonna need it".
Ron Jeffries, a co-founder of XP, explained the philosophy: "Always implement things when you actually need them, never when you just foresee that you [will] need them." John Carmack wrote "It is hard for less experienced developers to appreciate how rarely architecting for future requirements / applications turns out net-positive."
Context
YAGNI is a principle behind the XP practice of "do the simplest thing that could possibly work" (DTSTTCPW). It is meant to be used in combination with several other practices, such as continuous refactoring, continuous automated unit testing, and continuous integration. Used without continuous refactoring, it could lead to disorganized code and massive rework, known as technical debt. YAGNI's dependency on supporting practices is part of the original definition of XP.
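A small, hypothetical illustration of the principle (not taken from any cited source): the first function anticipates options nobody has asked for, while the second implements only today's requirement and can be extended later under test.

```python
# Speculative design: formats and switches added "just in case".
def export_report_speculative(data, fmt="csv", compress=False,
                              encoding="utf-8", upload_to_cloud=False):
    raise NotImplementedError("most of these options are never used")

# YAGNI: implement only what the current requirement needs, a CSV string.
def export_report(data):
    """Render a list of (name, value) pairs as CSV lines."""
    return "\n".join(f"{name},{value}" for name, value in data)

print(export_report([("alpha", 1), ("beta", 2)]))
```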
See also
Don't repeat yourself
Feature creep
If it ain't broke, don't fix it
KISS principle
Minimum viable product
MoSCoW method
Muntzing
Overengineering
Single-responsibility principle
SOLID
Unix philosophy
Worse is better
References
Software development philosophies
Programming principles |
https://en.wikipedia.org/wiki/Glossary%20of%20mathematical%20jargon | The language of mathematics has a vast vocabulary of specialist and technical terms. It also has a certain amount of jargon: commonly used phrases which are part of the culture of mathematics, rather than of the subject. Jargon often appears in lectures, and sometimes in print, as informal shorthand for rigorous arguments or precise ideas. Much of this is common English, but with a specific non-obvious meaning when used in a mathematical sense.
Some phrases, like "in general", appear below in more than one section.
Philosophy of mathematics
abstract nonsenseA tongue-in-cheek reference to category theory, using which one can employ arguments that establish a (possibly concrete) result without reference to any specifics of the present problem. For that reason, it's also known as general abstract nonsense or generalized abstract nonsense.
canonicalA reference to a standard or choice-free presentation of some mathematical object (e.g., canonical map, canonical form, or canonical ordering). The same term can also be used more informally to refer to something "standard" or "classic". For example, one might say that Euclid's proof is the "canonical proof" of the infinitude of primes.
deepA result is called "deep" if its proof requires concepts and methods that are advanced beyond the concepts needed to formulate the result. For example, the prime number theorem — originally proved using techniques of complex analysis — was once thought to be a deep result until elementary proofs were found. On the other hand, the fact that π is irrational is usually known to be a deep result, because it requires a considerable development of real analysis before the proof can be established — even though the claim itself can be stated in terms of simple number theory and geometry.
elegantAn aesthetic term referring to the ability of an idea to provide insight into mathematics, whether by unifying disparate fields, introducing a new perspective on a single field, or by providing a |
https://en.wikipedia.org/wiki/Landline | A landline (land line, land-line, main line, fixed-line, or wireline) is a telephone connection that uses metal wires from the owner's premises; it is also referred to as POTS, twisted pair, telephone line or public switched telephone network (PSTN) service.
Landline services are traditionally provided via an analogue copper wire to a telephone exchange. Landline service is usually distinguished from other more modern forms of telephone services which use Internet Protocol based services over optical fiber (Fiber-to-the-x) or other broadband services (VDSL/Cable) using Voice over IP, although modern fixed phone services delivered over a fixed internet connection are sometimes referred to as a landline (non-cellular service).
Characteristics
Landline service is typically provided through the outside plant of a telephone company's central office, or wire center. The outside plant comprises tiers of cabling between distribution points in the exchange area, so that a single pair of copper wire, or an optical fiber, reaches each subscriber location, such as a home or office, at the network interface. Customer premises wiring extends from the network interface ("NID") to the location of one or more telephones inside the premises.
A subscriber's telephone connected to a landline can be hard-wired or cordless and typically refers to the operation of wireless devices or systems in fixed locations such as homes. Fixed wireless devices usually derive their electrical power from the utility mains electricity, unlike mobile wireless or portable wireless, which tend to be battery-powered. Although mobile and portable systems can be used in fixed locations, efficiency and bandwidth are compromised compared with fixed systems. Mobile or portable, battery-powered wireless systems can be used as emergency backups for fixed systems in case of a power blackout or natural disaster.
Another aspect of landlines is the ability to carry high-speed internet, popularly known as Digital subsc |
https://en.wikipedia.org/wiki/Poynting%27s%20theorem | In electrodynamics, Poynting's theorem is a statement of conservation of energy for electromagnetic fields developed by British physicist John Henry Poynting. It states that in a given volume, the stored energy changes at a rate given by the work done on the charges within the volume, minus the rate at which energy leaves the volume. It is only strictly true in media which are not dispersive, but can be extended to the dispersive case.
The theorem is analogous to the work-energy theorem in classical mechanics, and mathematically similar to the continuity equation.
Definition
Poynting's theorem states that the rate of energy transfer per unit volume from a region of space equals the rate of work done on the charge distribution in the region, plus the energy flux leaving that region.
Mathematically:
-∂u/∂t = ∇•S + J•E
where:
∂u/∂t is the rate of change of the energy density in the volume.
∇•S is the energy flow out of the volume, given by the divergence of the Poynting vector S.
J•E is the rate at which the fields do work on charges in the volume (J is the current density corresponding to the motion of charge, E is the electric field, and • is the dot product).
Integral Form
Using the divergence theorem, Poynting's theorem can also be written in integral form:
-∂/∂t ∫_V u dV = ∮_∂V S•dA + ∫_V J•E dV
where
S is the energy flow, given by the Poynting vector.
u is the energy density in the volume.
∂V is the boundary of the volume. The shape of the volume is arbitrary but fixed for the calculation.
Continuity Equation Analog
In an electrical engineering context the theorem is sometimes written with the energy density term u expanded as shown. This form resembles the continuity equation:
∇•S + ε0 E•∂E/∂t + (B/μ0)•∂B/∂t + J•E = 0,
where
ε0 is the vacuum permittivity and μ0 is the vacuum permeability.
ε0 E•∂E/∂t is the density of reactive power driving the build-up of electric field,
(B/μ0)•∂B/∂t is the density of reactive power driving the build-up of magnetic field, and
J•E is the density of electric power dissipated by the Lorentz force acting on charge carriers.
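A standard worked check of the theorem (an illustrative calculation with assumed values, not part of the article) is a cylindrical resistor carrying a steady current: integrating the Poynting vector over its surface recovers the power dissipated inside.

```python
import math

# Cylindrical resistor of length L and radius a carrying steady current I
# with voltage drop V across it (assumed example values).
I, V = 2.0, 5.0      # A and V, so R = V/I = 2.5 ohm and I*V = 10 W
L, a = 0.1, 0.005    # m

E = V / L                       # axial electric field at the surface
H = I / (2 * math.pi * a)       # circumferential magnetic field at the surface
S = E * H                       # Poynting vector magnitude, directed inward

power_in = S * (2 * math.pi * a * L)   # inward flux through the curved surface
print(power_in, I * V)                 # both 10.0 W: the field energy flowing
                                       # in is dissipated as J.E inside
```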
Derivation
For an in |
https://en.wikipedia.org/wiki/Ground%20station | A ground station, Earth station, or Earth terminal is a terrestrial radio station designed for extraplanetary telecommunication with spacecraft (constituting part of the ground segment of the spacecraft system), or reception of radio waves from astronomical radio sources. Ground stations may be located either on the surface of the Earth, or in its atmosphere. Earth stations communicate with spacecraft by transmitting and receiving radio waves in the super high frequency (SHF) or extremely high frequency (EHF) bands (e.g. microwaves). When a ground station successfully transmits radio waves to a spacecraft (or vice versa), it establishes a telecommunications link. A principal telecommunications device of the ground station is the parabolic antenna.
Ground stations may have either a fixed or itinerant position. Article 1 § III of the International Telecommunication Union (ITU) Radio Regulations describes various types of stationary and mobile ground stations, and their interrelationships.
Specialized satellite Earth stations are used to telecommunicate with satellites — chiefly communications satellites. Other ground stations communicate with crewed space stations or uncrewed space probes. A ground station that primarily receives telemetry data, or that follows space missions, or satellites not in geostationary orbit, is called a ground tracking station, or space tracking station, or simply a tracking station.
When a spacecraft or satellite is within a ground station's line of sight, the station is said to have a view of the spacecraft (see pass). A spacecraft can communicate with more than one ground station at a time. A pair of ground stations are said to have a spacecraft in mutual view when the stations share simultaneous, unobstructed, line-of-sight contact with the spacecraft.
Telecommunications port
A telecommunications port — or, more commonly, teleport — is a satellite ground station that functions as a hub connecting a satellite or geocentric orbital ne |
https://en.wikipedia.org/wiki/Mongolian%20gerbil | The Mongolian gerbil or Mongolian jird (Meriones unguiculatus) is a small rodent belonging to the subfamily Gerbillinae. Their body size is typically , with a tail, and body weight , with adult males larger than females. The animal is used in science and research or kept as a small house pet. Their use in science dates back to the latter half of the 19th century, but they only started to be kept as pets in the English-speaking world after 1954, when they were brought to the United States. However, their use in scientific research has fallen out of favor.
Habitat
Mongolian gerbils inhabit grassland, shrubland and desert, including semidesert and steppes in China, Mongolia, and the Russian Federation.
Soil on the steppes is sandy and is covered with grasses, herbs, and shrubs. The steppes have cool, dry winters and hot summers. The temperature can get up to , but the average temperature for most of the year is around .
In the wild, these gerbils live in patriarchal groups generally consisting of one parental pair, the most recent litter, and a few older pups; sometimes the dominant female's sister(s) also live with them. Only the dominant female will produce pups, and she will mostly mate with the dominant male while in estrus (heat); female gerbils are generally more loyal than male gerbils. One group of gerbils generally ranges over .
A group lives in a central burrow with 10–20 exits. Some deeper burrows with only one to three exits in their territory may exist. These deeper burrows are used to escape from predators when they are too far from the central burrow. A group's burrows often interconnect with other groups.
History
The first known mention of gerbils came in 1866, by Father Armand David, who sent "yellow rats" to the French National Museum of Natural History in Paris, from northern China. They were named Gerbillus unguiculatus by the scientist Alphonse Milne-Edwards in 1867.
There is a popular misconception about the meaning of this scientific name, ap |
https://en.wikipedia.org/wiki/IBM%203790 | The IBM 3790 Communications System was one of the first distributed computing platforms. The 3790 was developed by IBM's Data Processing Division (DPD) and announced in 1974. It preceded the IBM 8100, announced in 1979.
It was designed to be installed in branch offices, stores, subsidiaries, etc., and to be connected to the central host mainframe, using IBM Systems Network Architecture (SNA).
Although its successor's role in distributed data processing was said to be "a turning point in the general direction of worldwide computer development," the 3790 was described by Datamation in March 1979 as "less than successful."
System description
IBM described it as "a programmable, operator oriented terminal system."
Components
The 3790 supported
up to 16 IBM 3277 display stations
an integrated floppy disk unit
an integrated 120 lines per minute (lpm) line printer
up to three 3292 auxiliary control units
up to four 3793 keyboard-printers
a Synchronous Data Link Control (SDLC) communications interface
A 1200 baud internal or external modem
The base unit of the 3790 was the IBM 3791 programmable control unit, which was offered as a choice of:
the model 1, supporting 8.3MB of disk storage
the model 2, with up to 26.9MB.
Attached to the 3791 were:
The 3792 auxiliary control unit, which had options for attachment of
up to two dial-in IBM 2741 communications terminals,
up to four 3793 display stations, and a line printer.
The 3793 printer-keyboard (up to four).
The 3411 model 1, Magnetic tape unit and controller (added in 1977) and
up to three 3410 tape units attached to the 3411 unit.
Host software
Function Support Program.
Subsystem Support Services.
VTAM (with the host running DOS/VS, OS/VS1, or OS/VS2)
User Application Support Program.
Reception
The 3790 failed to achieve the success IBM intended, due to several issues. It had a complex programming language, the 3790 Macro Assembler, and customers found it difficult to deploy applications |
https://en.wikipedia.org/wiki/IBM%20System%209000 | The System 9000 (S9000) is a family of microcomputers from IBM consisting of the System 9001, 9002, and 9003. The first member of the family, the System 9001 laboratory computer, was introduced in May 1982 as the IBM Instruments Computer System Model 9000. It was renamed the System 9001 in 1984, when the System 9000 family name and the System 9002 multi-user general-purpose business computer were introduced. The last member of the family, the System 9003 industrial computer, was introduced in 1985. No member of the System 9000 family found much commercial success, and the entire family was discontinued on 2 December 1986. The System 9000 was based around the Motorola 68000 microprocessor and the Motorola VERSAbus system bus. All members had the IBM CSOS real-time operating system (OS) stored in read-only memory, and the System 9002 could also run the multi-user Microsoft Xenix OS, which was suitable for business use and supported up to four users.
Features
There were three versions of the System 9000. The 9001 was the benchtop (lab) model, the 9002 was the desktop model without laboratory-specific features, and the 9003 was a manufacturing and process control version modified to be suitable for factory environments. The System 9002 and 9003 were based on the System 9001, which was based around an 8 MHz Motorola 68000 and the Motorola VERSAbus system bus (the System 9000 was one of the few systems that used the VERSAbus). Input/output ports included three RS-232C serial ports, an IEEE-488 instrument port, and a bidirectional 8-bit parallel port. For laboratory data acquisition, analog-to-digital converters that could be attached to its I/O ports were available. User input could be via a user-definable 10-key touch panel on the integrated CRT display, a 57-key user-definable keypad, or an 83-key Model F keyboard. The touch panel and keypad were designed for controlling experiments.
All System 9000 members had an IBM real-time operating system called CSOS (Compute |
https://en.wikipedia.org/wiki/Resource%20Access%20Control%20Facility |
Introduction
RACF (pronounced "rack-eff"), short for Resource Access Control Facility, is an IBM software product. It is a security system that provides access control and auditing functionality for the z/OS and z/VM operating systems. RACF was introduced in 1976. It was later renamed z/OS Security Server (RACF), although most mainframe practitioners still refer to it as RACF.
Its main features are:
Identification and verification of a user via user id and password check (authentication)
Identification, classification and protection of system resources
Maintenance of access rights to the protected resources (authorization)
Controlling the means of access to protected resources
Logging of accesses to a protected system and protected resources (auditing)
RACF establishes security policies rather than just permission records. It can set permissions for file patterns — that is, set the permissions even for files that do not yet exist. Those permissions are then used for the file (or other object) created at a later time.
Community
There is a long established technical support community for RACF based around a LISTSERV operated out of the University of Georgia. The list is called RACF-L and is described as the RACF Discussion List. The email address of the listserv is RACF-L@LISTSERV.UGA.EDU, and the list can also be viewed via a web portal at https://listserv.uga.edu/scripts/wa-UGA.exe .
Books
The first text book published (first printing December 2007) aimed at giving security professionals an introduction to the concepts and conventions of how RACF is designed and administered was Mainframe Basics for Security Professionals: Getting Started with RACF by Ori Pomerantz (Author), Barbara Vander Weele (Author), Mark Nelson (Author), Tim Hahn (Author).
Evolution
RACF has continuously evolved to support such modern security features as digital certificates/public key infrastructure services, LDAP interfaces, and case sensitive IDs/passwords. The latter is a re |
https://en.wikipedia.org/wiki/IEFBR14 | IEFBR14 is an IBM mainframe utility program. It runs in all IBM mainframe environments derived from OS/360, including z/OS. It is a placeholder that returns the exit status zero, similar to the true command on UNIX-like systems.
Purpose
Allocation (also called Initiation)
On OS/360 and derived mainframe systems, most programs never specify files (usually called datasets) directly, but instead reference them indirectly through the Job Control Language (JCL) statements that invoke the programs. These data definition (or "DD") statements can include a "disposition" (DISP=...) parameter that indicates how the file is to be managed — whether a new file is to be created or an old one re-used; and whether the file should be deleted upon completion or retained; etc.
IEFBR14 was created because while DD statements can create or delete files easily, they cannot do so without a program to be run due to a certain peculiarity of the Job Management system, which always requires that the Initiator actually execute a program, even if that program is effectively a null statement. The program used in the JCL does not actually need to use the files to cause their creation or deletion — the DD DISP=... specification does all the work. Thus a very simple do-nothing program was needed to fill that role.
IEFBR14 can thus be used to create or delete a data set using JCL.
Deallocation (also called Termination)
A secondary reason to run IEFBR14 was to unmount devices (usually tapes or disks) that had been left mounted from a previous job, perhaps because of an error in that job's JCL or because the job ended in error. In either event, the system operators would often need to demount the devices, and a started task – DEALLOC – was often provided for this purpose.
Simply entering the command
S DEALLOC
at the system console would run the started task, which consisted of just one step. However, due to the design of Job Management, DEALLOC must actually exist in the system's procedure l |
https://en.wikipedia.org/wiki/Voice%20chat%20in%20online%20gaming | Voice chat is telecommunication via voice over IP (VoIP) technologies—especially when those technologies are used as intercoms among players in multiplayer online games. The VoIP functionality can be built into some games, be a system-wide communication system, or a third-party chat software.
History
Voice chat in video games began in the sixth generation with the Sega Dreamcast (circa 1999). Some games, including Seaman and Alien Front Online, included built-in voice chat functionality, though it required an active subscription to the Dreamcast's online service, SegaNet.
In 2001, Sony released the Network adapter for their PlayStation 2 video game console, which allowed voice chatting with a headset. In 2002, Microsoft launched the Xbox Live service, including support for voice chat. Later, Microsoft required all Xbox Live console game developers to integrate voice chat capability into their games and bundled a microphone and headset with the Xbox Live retail unit. In 2005, Nintendo launched the Nintendo Wi-Fi Connection, an online multiplayer service for both the Nintendo DS and for the Wii. Metroid Prime Hunters, which was released in March 2006, was the first game that allowed voice chatting through the Nintendo DS's microphone. Nintendo also released a Nintendo DS headset for voice chat alongside the release of Pokémon Diamond and Pearl (2006).
2010s
Starting in the 2010s, third-party software has become very popular among gamers, even when in-game VoIP services are available. Notable software includes Discord, Ventrilo, TeamSpeak, SONIX and Mumble. Support for Discord was added to the Xbox Series X|S and Xbox One consoles in 2022, with support coming to PlayStation 5 in 2023.
Impact
While voice chat has become a big hit in console games, it also leads to problems such as griefing, cyberbullying, harassment, and scams.
See also
Audio headset
Comparison of VoIP software
Gamergate (harassment campaign)
Glossary of video game terms
Griefer
Massively m |
https://en.wikipedia.org/wiki/131%20%28number%29 | 131 (one hundred [and] thirty-one) is the natural number following 130 and preceding 132.
In mathematics
131 is a Sophie Germain prime, an irregular prime, the second 3-digit palindromic prime, and also a permutable prime with 113 and 311. It can be expressed as the sum of three consecutive primes, 131 = 41 + 43 + 47. 131 is an Eisenstein prime with no imaginary part and real part of the form . Because the next odd number, 133, is a semiprime, 131 is a Chen prime. 131 is an Ulam number.
131 is a full reptend prime in base 10 (and also in base 2). The decimal expansion of 1/131 repeats the digits 0076335877862595419847328244274809160305343511450381679389312977099236641221374045801526717557251908396946564885496183206106870229 indefinitely.
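Both claims can be verified directly (an illustrative script, not part of the article): the multiplicative order of 10 modulo 131 equals 130, so the repetend of 1/131 has the maximal length 131 - 1.

```python
# Check that 131 is a full reptend prime in base 10 and print the repetend.
p = 131

# Multiplicative order of 10 modulo p = length of the repeating block.
order, r = 1, 10 % p
while r != 1:
    r = (r * 10) % p
    order += 1
print(order)              # 130 = p - 1, so 131 is full reptend in base 10

# Generate the repeating digits by long division of 1 by p.
digits, remainder = [], 1
for _ in range(order):
    remainder *= 10
    digits.append(str(remainder // p))
    remainder %= p
print("".join(digits))    # the 130-digit repetend quoted above
```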
In the military
Convair C-131 Samaritan was an American military transport produced from 1954 to 1956
Strike Fighter Squadron (VFA-131) is a United States Navy F/A-18C Hornet fighter squadron stationed at Naval Air Station Oceana
Tiger 131 is a German Tiger I heavy tank captured in Tunisia by the British 48th Royal Tank Regiment during World War II
was a Mission Buenaventura-class fleet oiler during World War II
was a United States Navy ship during World War II
was a United States Navy
was a United States Navy General G. O. Squier-class transport ship during World War II
was a United States Navy during World War II
was a United States Navy during World War II
was a ship of the United States Navy during World War II
ZIL-131 is a 3.5-ton 6x6 army truck
In transportation
London Buses route 131 is a Transport for London contracted bus route in London
The Fiat 131 Mirafiori small/medium family car produced from 1974 to 1984
STS-131 was a NASA Contingency Logistic Flight (CLF) of the Space Shuttle Discovery, which launched in April 2010
In other fields
131 is also:
The year AD 131 or 131 BC
131 AH is a year in the Islamic calendar that corresponds to 748 – 749 CE.
131 Vala is an inner main belt asteroi |
https://en.wikipedia.org/wiki/GNU%20MPFR | The GNU Multiple Precision Floating-Point Reliable Library (GNU MPFR) is a GNU portable C library for arbitrary-precision binary floating-point computation with correct rounding, based on GNU Multi-Precision Library.
Library
MPFR's computation is efficient and has well-defined semantics: the functions are completely specified on all possible operands and the results do not depend on the platform. This is done by copying the ideas from the ANSI/IEEE-754 standard for fixed-precision floating-point arithmetic (correct rounding and exceptions, in particular). More precisely, its main features are:
Support for special numbers: signed zeros (+0 and −0), infinities and not-a-number (a single NaN is supported: MPFR does not differentiate between quiet NaNs and signaling NaNs).
Each number has its own precision (in bits since MPFR uses radix 2). The floating-point results are correctly rounded to the precision of the target variable, in one of the five supported rounding modes (including the four from IEEE 754-1985).
Supported functions: MPFR implements all mathematical functions from C99 and other usual mathematical functions: the logarithm and exponential in natural base, base 2 and base 10, the log(1+x) and exp(x)−1 functions (log1p and expm1), the six trigonometric and hyperbolic functions and their inverses, the gamma, zeta and error functions, the arithmetic–geometric mean, the power (xy) function. All those functions are correctly rounded over their complete range.
Subnormal numbers are not supported, but can be emulated with the mpfr_subnormalize function.
MPFR is not able to track the accuracy of numbers in a whole program or expression; this is not its goal. Interval arithmetic packages like Arb, MPFI, or Real RAM implementations like iRRAM, which may be based on MPFR, can do that for the user.
MPFR is dependent upon the GNU Multiple Precision Arithmetic Library (GMP).
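A small sketch of these features through the gmpy2 Python binding, which wraps MPFR (this assumes gmpy2 is installed; the printed digits depend on the chosen precision):

```python
import gmpy2
from gmpy2 import mpfr

# Every operation is correctly rounded at the precision (in bits) of the
# current context, in the selected rounding mode.
gmpy2.get_context().precision = 200        # roughly 60 decimal digits
print(gmpy2.const_pi())                    # pi, correctly rounded to 200 bits
print(gmpy2.exp(mpfr(1)))                  # e, correctly rounded to 200 bits

# Special values follow IEEE 754 conventions: signed zero, infinities, NaN.
print(mpfr(-0.0), mpfr(float("inf")), mpfr(float("nan")))
```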
MPFR is needed to build the GNU Compiler Collection (GCC). Other software uses |
https://en.wikipedia.org/wiki/Bertrand%27s%20ballot%20theorem | In combinatorics, Bertrand's ballot problem is the question: "In an election where candidate A receives p votes and candidate B receives q votes with p > q, what is the probability that A will be strictly ahead of B throughout the count?" The answer is (p - q)/(p + q).
The result was first published by W. A. Whitworth in 1878, but is named after Joseph Louis François Bertrand who rediscovered it in 1887.
In Bertrand's original paper, he sketches a proof based on a general formula for the number of favourable sequences using a recursion relation. He remarks that it seems probable that such a simple result could be proved by a more direct method. Such a proof was given by Désiré André, based on the observation that the unfavourable sequences can be divided into two equally probable cases, one of which (the case where B receives the first vote) is easily computed; he proves the equality by an explicit bijection. A variation of his method is popularly known as André's reflection method, although André did not use any reflections.
Bertrand's ballot theorem is related to the cycle lemma. They give similar formulas, but the cycle lemma considers circular shifts of a given ballot counting order rather than all permutations.
Example
Suppose there are 5 voters, of whom 3 vote for candidate A and 2 vote for candidate B (so p = 3 and q = 2). There are ten equally likely orders in which the votes could be counted:
AAABB
AABAB
ABAAB
BAAAB
AABBA
ABABA
BAABA
ABBAA
BABAA
BBAAA
For the order AABAB, the tally of the votes as the election progresses is (A–B after each vote): 1–0, 2–0, 2–1, 3–1, 3–2.
After every vote the tally for A is larger than the tally for B, so A is always strictly ahead of B. For the order AABBA the tally of the votes as the election progresses is: 1–0, 2–0, 2–1, 2–2, 3–2.
For this order, B is tied with A after the fourth vote, so A is not always strictly ahead of B.
Of the 10 possible orders, A is always ahead of B only for AAABB and AABAB. So the probability that A will always be strictly ahead is 2/10 = 1/5,
and this is indeed equal to (3 - 2)/(3 + 2), as th |
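The example, and the general formula for small p and q, can be checked exhaustively (an illustrative script, not part of the article):

```python
from itertools import permutations
from fractions import Fraction

def ballot_probability(p, q):
    """Probability that A stays strictly ahead throughout the count."""
    orders = set(permutations("A" * p + "B" * q))   # distinct counting orders
    def always_ahead(order):
        a = b = 0
        for v in order:
            a += v == "A"
            b += v == "B"
            if a <= b:
                return False
        return True
    return Fraction(sum(always_ahead(o) for o in orders), len(orders))

print(ballot_probability(3, 2))     # 1/5, matching the worked example
print(all(ballot_probability(p, q) == Fraction(p - q, p + q)
          for p in range(1, 6) for q in range(0, p)))   # True
```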
https://en.wikipedia.org/wiki/Excitotoxicity | In excitotoxicity, nerve cells suffer damage or death when the levels of otherwise necessary and safe neurotransmitters such as glutamate become pathologically high, resulting in excessive stimulation of receptors. For example, when glutamate receptors such as the NMDA receptor or AMPA receptor encounter excessive levels of the excitatory neurotransmitter, glutamate, significant neuronal damage might ensue. Excess glutamate allows high levels of calcium ions (Ca2+) to enter the cell. Ca2+ influx into cells activates a number of enzymes, including phospholipases, endonucleases, and proteases such as calpain. These enzymes go on to damage cell structures such as components of the cytoskeleton, membrane, and DNA. In evolved, complex adaptive systems such as biological life it must be understood that mechanisms are rarely, if ever, simplistically direct. For example, NMDA in subtoxic amounts induces neuronal survival of otherwise toxic levels of glutamate.
Excitotoxicity may be involved in cancers, spinal cord injury, stroke, traumatic brain injury, hearing loss (through noise overexposure or ototoxicity), and in neurodegenerative diseases of the central nervous system such as multiple sclerosis, Alzheimer's disease, amyotrophic lateral sclerosis (ALS), Parkinson's disease, alcoholism, alcohol withdrawal or hyperammonemia and especially over-rapid benzodiazepine withdrawal, and also Huntington's disease. Another common condition that causes excessive glutamate concentrations around neurons is hypoglycemia. Blood sugars are the primary glutamate removal method from inter-synaptic spaces at the NMDA and AMPA receptor site. Persons in excitotoxic shock must never fall into hypoglycemia. Patients should be given a 5% glucose (dextrose) IV drip during excitotoxic shock to avoid a dangerous build-up of glutamate around NMDA and AMPA neurons. When a 5% glucose (dextrose) IV drip is not available, high levels of fructose are given orally. Treatment is administered during the acute |
https://en.wikipedia.org/wiki/Biomedicine | Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners.
Biomedicine also relates to many other categories in health- and biology-related fields. It has been the dominant system of medicine in the Western world for more than a century.
It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine.
Overview
Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of the HIV virus, from the understanding of molecular interactions to the study of carcinogenesis, from a single-nucleotide polymorphism (SNP) to gene therapy.
Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.
Biomedicine involves the study of (patho-) physiological processes with methods from biology and |
https://en.wikipedia.org/wiki/MAC%20times | MAC times are pieces of file system metadata which record when certain events pertaining to a computer file occurred most recently. The events are usually described as "modification" (the data in the file was modified), "access" (some part of the file was read), and "metadata change" (the file's permissions or ownership were modified), although the acronym is derived from the "mtime", "atime", and "ctime" structures maintained by Unix file systems. Windows file systems do not update ctime when a file's metadata is changed, instead using the field to record the time when a file was first created, known as "creation time" or "birth time". Some other systems also record birth times for files, but there is no standard name for this metadata; ZFS, for example, stores birth time in a field called "crtime". MAC times are commonly used in computer forensics. The name Mactime was originally coined by Dan Farmer, who wrote a tool with the same name.
Modification time (mtime)
A file's modification time describes when the content of the file most recently changed. Because most file systems do not compare data written to a file with what is already there, if a program overwrites part of a file with the same data as previously existed in that location, the modification time will be updated even though the contents did not technically change.
Access time (atime)
A file's access time identifies when the file was most recently opened for reading. Access times are usually updated even if only a small portion of a large file is examined. A running program can maintain a file as "open" for some time, so the time at which a file was opened may differ from the time data was most recently read from the file.
Because some computer configurations are much faster at reading data than at writing it, updating access times after every read operation can be very expensive. Some systems mitigate this cost by storing access times at a coarser granularity than other times; by rounding access |
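On Unix-like systems these timestamps can be read from the stat structure. The sketch below (an illustrative example added here, not code from the article) creates a throwaway file and prints its MAC times with Python's standard library; as noted above, st_ctime is the metadata-change time on Unix but holds the creation time on Windows:

```python
import os
import tempfile
import time

# Create a throwaway file so the example is self-contained,
# then read its timestamps from the stat structure.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello\n")
os.close(fd)

st = os.stat(path)
fmt = "%Y-%m-%d %H:%M:%S"
print("mtime:", time.strftime(fmt, time.localtime(st.st_mtime)))  # content last modified
print("atime:", time.strftime(fmt, time.localtime(st.st_atime)))  # last read (may be coarse or disabled)
print("ctime:", time.strftime(fmt, time.localtime(st.st_ctime)))  # metadata change (Unix) / creation (Windows)

os.remove(path)
```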
https://en.wikipedia.org/wiki/Snap%20freezing | Snap freezing (or cook-chill or blast freezing) is the process of rapid cooling of a substance for the purpose of preservation. It is widely used in the culinary and scientific industries.
Culinary uses
Cooked meals can be preserved by rapid freezing after cooking is complete. The main target group for these products is institutions with little time for cooking, such as schools, prisons, and hospitals.
The process involves the cooking of meals at a central factory then rapidly chilling them for storage until they are needed. Snap frozen foods need to be packed in shallow trays to make the process more efficient. The food is cooled to a temperature under 3 degrees Celsius within 90 minutes of cooking and stored at a temperature of 0 to 3 degrees Celsius. The meals can then be transported in refrigerated transport to where the food is to be reheated and consumed when needed.
The length of storage depends on the method used but is usually five days. For longer storage the food may be subjected to pasteurization after cooking.
These processes have the advantage that preparation and cooking of meals is not tied to the times when the food is to be served, enabling staff and equipment to be used more efficiently. A properly managed operation is capable of supplying high-quality meals economically despite high initial equipment costs. There are potential problems; careful attention has to be paid to hygiene as there are a number of points in the process where food pathogens can gain access. This requires careful attention to both the control of the process and to staff training.
Scientific use
Snap-freeze is a term often used in scientific papers to describe a process by which a sample is very quickly lowered to temperatures below -70 °C. This is often accomplished by submerging a sample in liquid nitrogen. This prevents water from crystallising when it forms ice, and so better preserves the structure of the sample (e.g. RNA, protein, or live cells).
See also
Flash freezi |
https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability%20theory | Energy–maneuverability theory is a model of aircraft performance. It was developed by Col. John Boyd, a fighter pilot, and Thomas P. Christie, a mathematician with the United States Air Force, and is useful in describing an aircraft's performance as the total of kinetic and potential energies or aircraft specific energy. It relates the thrust, weight, aerodynamic drag, wing area, and other flight characteristics of an aircraft into a quantitative model. This allows combat capabilities of various aircraft or prospective design trade-offs to be predicted and compared.
Formula
All of these aspects of airplane performance are compressed into a single value by the following formula: P_s = V(T - D)/W, where P_s is the specific excess power, V is the true airspeed, T is the thrust, D is the drag, and W is the aircraft weight.
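As a rough numerical illustration (a sketch added here with made-up example values, not figures from the article), the formula can be evaluated directly:

```python
def specific_excess_power(thrust_n, drag_n, weight_n, airspeed_ms):
    """Specific excess power P_s = V * (T - D) / W, in metres per second.

    thrust_n, drag_n and weight_n are forces in newtons; airspeed_ms is
    true airspeed in m/s.  A positive P_s means the aircraft can climb
    and/or accelerate; a negative value means it must descend or slow down.
    """
    return airspeed_ms * (thrust_n - drag_n) / weight_n

# Hypothetical example values, purely for illustration.
print(specific_excess_power(thrust_n=79_000, drag_n=30_000,
                            weight_n=180_000, airspeed_ms=250))  # ~68 m/s
```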
History
John Boyd, a U.S. jet fighter pilot in the Korean War, began developing the theory in the early 1960s. He teamed with mathematician Thomas Christie at Eglin Air Force Base to use the base's high-speed computer to compare the performance envelopes of U.S. and Soviet aircraft from the Korean and Vietnam Wars. They completed a two-volume report on their studies in 1964. Energy Maneuverability came to be accepted within the U.S. Air Force and brought about improvements in the requirements for the F-15 Eagle and later the F-16 Fighting Falcon fighters.
See also
Lagrangian mechanics
Notes
References
Hammond, Grant T. The Mind of War: John Boyd and American Security. Washington, D.C.: Smithsonian Institution Press, 2001.
Coram, Robert. Boyd: The Fighter Pilot Who Changed the Art of War. New York: Back Bay Books, 2002.
Wendl, M.J., G.G. Grose, J.L. Porter, and V.R. Pruitt. Flight/Propulsion Control Integration Aspects of Energy Management. Society of Automotive Engineers, 1974, p. 740480.
Aerospace engineering |
https://en.wikipedia.org/wiki/Integral%20Equations%20and%20Operator%20Theory | Integral Equations and Operator Theory is a journal dedicated to operator theory and its applications to engineering and other mathematical sciences. As some approaches to the study of integral equations (theoretically and numerically) constitute a subfield of operator theory, the journal also deals with the theory of integral equations and hence of differential equations. The journal consists of two sections: a main section consisting of refereed papers and a second consisting of short announcements of important results, open problems, information, etc. It has been published monthly by Springer-Verlag since 1978. The journal is also available online by subscription.
The founding editor-in-chief of the journal, in 1978, was Israel Gohberg. Its current editor-in-chief is Christiane Tretter.
References
External links
Journal homepage
Mathematics journals
Academic journals established in 1978 |
https://en.wikipedia.org/wiki/Krakout | Krakout is a Breakout clone that was released for the ZX Spectrum, Amstrad CPC, BBC Micro, Commodore 64, Thomson computers and MSX platforms in 1987. One of the wave of enhanced Breakout variants to emerge in the wake of Arkanoid, its key distinctions are that gameplay is horizontal in layout, and that it allows the player to select the acceleration characteristics of the bat before playing. It was written by Andy Green and Rob Toone and published by Gremlin Graphics. The music was composed by Ben Daglish.
Reception
In 1990, Dragon gave the game 4 out of 5 stars, calling it "one of our favorites, this is Breakout with a different flavor".
Reviews
Computer Gamer (Jun, 1987)
Tilt (May, 1987)
Happy Computer (1987)
ASM (Aktueller Software Markt) (Mar, 1987)
Tilt (Jul, 1987)
Computer Gamer (Apr, 1987)
Commodore User (Apr, 1987)
Your Sinclair (Feb, 1989)
Zzap! (Apr, 1987)
Crash! (Feb, 1989)
Jeux & Stratégie #45
References
External links
Krakout at Complete BBC Games Archive
1987 video games
Breakout clones
BBC Micro and Acorn Electron games
Amstrad CPC games
Commodore 64 games
MSX games
ZX Spectrum games
Video games scored by Ben Daglish
Video games developed in the United Kingdom |
https://en.wikipedia.org/wiki/Workplace%20OS | Workplace OS is IBM's ultimate operating system prototype of the 1990s. It is the product of an exploratory research program in 1991 which yielded a design called the Grand Unifying Theory of Systems (GUTS), proposing to unify the world's systems as generalized personalities cohabitating concurrently upon a universally sophisticated platform of object-oriented frameworks upon one microkernel. Developed in collaboration with Taligent and its Pink operating system imported from Apple via the AIM alliance, the ambitious Workplace OS was intended to improve software portability and maintenance costs by aggressively recruiting all operating system vendors to convert their products into Workplace OS personalities. In 1995, IBM reported that "Nearly 20 corporations, universities, and research institutes worldwide have licensed the microkernel, laying the foundation for a completely open microkernel standard." At the core of IBM's new unified strategic direction for the entire company, the project was intended also as a bellwether toward PowerPC hardware platforms, to compete with the Wintel duopoly.
With protracted development spanning four years and $2 billion (or 0.6% of IBM's revenue for that period), the project suffered development hell characterized by workplace politics, feature creep, and the second-system effect. Many idealistic key assumptions made by IBM architects about software complexity and system performance were never tested until far too late in development, and found to be infeasible. In January 1996, the first and only commercial preview was billed under the OS/2 family with the name "OS/2 Warp Connect (PowerPC Edition)" for limited special order by select IBM customers, as a crippled product. The entire Workplace OS platform was discontinued in March due to very low market demand, including that for enterprise PowerPC hardware.
A University of California case study described the Workplace OS project as "one of the most significant operating systems s |
https://en.wikipedia.org/wiki/Comb%20filter | In signal processing, a comb filter is a filter implemented by adding a delayed version of a signal to itself, causing constructive and destructive interference. The frequency response of a comb filter consists of a series of regularly spaced notches in between regularly spaced peaks (sometimes called teeth) giving the appearance of a comb.
Comb filters exist in two forms, feedforward and feedback, which refer to the direction in which signals are delayed before they are added to the input.
Comb filters may be implemented in discrete time or continuous time forms which are very similar.
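As a concrete illustration (a minimal sketch added here, not code from the article), the discrete-time feedforward form simply adds a scaled, delayed copy of the input to itself, y[n] = x[n] + alpha * x[n - K]:

```python
def feedforward_comb(x, delay, alpha=1.0):
    """Discrete-time feedforward comb filter: y[n] = x[n] + alpha * x[n - delay].

    x is a sequence of samples, delay is the delay in samples, and alpha
    scales the delayed copy.  Frequencies whose period divides the delay
    interfere constructively (peaks); others are attenuated (notches).
    """
    y = []
    for n, sample in enumerate(x):
        delayed = x[n - delay] if n >= delay else 0.0
        y.append(sample + alpha * delayed)
    return y

# Example: a unit impulse exposes the filter's two-tap impulse response.
impulse = [1.0] + [0.0] * 7
print(feedforward_comb(impulse, delay=4, alpha=0.5))
# [1.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0]
```

Feeding it a unit impulse makes the two-tap impulse response, and hence the evenly spaced peaks and notches in the frequency response, easy to see.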
Applications
Comb filters are employed in a variety of signal processing applications, including:
Cascaded integrator–comb (CIC) filters, commonly used for anti-aliasing during interpolation and decimation operations that change the sample rate of a discrete-time system.
2D and 3D comb filters, implemented in hardware (and occasionally software) in PAL and NTSC analog television decoders, reduce artifacts such as dot crawl.
Audio signal processing, including delay, flanging, physical modelling synthesis and digital waveguide synthesis. If the delay is set to a few milliseconds, a comb filter can model the effect of acoustic standing waves in a cylindrical cavity or in a vibrating string.
In astronomy the astro-comb promises to increase the precision of existing spectrographs by nearly a hundredfold.
In acoustics, comb filtering can arise as an unwanted artifact. For instance, two loudspeakers playing the same signal at different distances from the listener create a comb filtering effect on the audio. In any enclosed space, listeners hear a mixture of direct sound and reflected sound. The reflected sound takes a longer, delayed path compared to the direct sound, and a comb filter is created where the two mix at the listener. Similarly, comb filtering may result from mono mixing of multiple mics, hence the 3:1 rule of thumb that neighboring mics should be separated at least t |
https://en.wikipedia.org/wiki/Brettanomyces | Brettanomyces is a non-spore forming genus of yeast in the family Saccharomycetaceae, and is often colloquially referred to as "Brett". The genus name Dekkera is used interchangeably with Brettanomyces, as it describes the teleomorph or spore forming form of the yeast, but is considered deprecated under the one fungus, one name change. The cellular morphology of the yeast can vary from ovoid to long "sausage" shaped cells. The yeast is acidogenic, and when grown on glucose rich media under aerobic conditions, produces large amounts of acetic acid. Brettanomyces is important to both the brewing and wine industries due to the sensory compounds it produces.
In the wild, Brettanomyces lives on the skins of fruit.
History
In 1889, Seyffert of the Kalinkin Brewery in St. Petersburg was the first to isolate a "Torula" from English beer which produced the typical "English" taste in lager beer, and in 1899 JW Tullo at Guinness described two types of "secondary yeast" in Irish stout. However, N. Hjelte Claussen at the Carlsberg brewery was the first to publish a description in 1904, following a 1903 patent (UK patent GB190328184) that was the first patented microorganism in history. The term Brettanomyces comes from the Greek for "British fungus".
Wine
When Brettanomyces grows in wine it produces several compounds that can alter the palate and bouquet. At low levels some winemakers agree that the presence of these compounds has a positive effect on wine, contributing to complexity, and giving an aged character to some young red wines. Many wines even rely on Brettanomyces to give their distinctive character, such as Château Musar. However, when the levels of the sensory compounds greatly exceed the sensory threshold, their perception is almost always negative. The sensory threshold can differ between individuals, and some find the compounds more unattractive than others. While it can be desirable at lower levels, there is no guarantee that high levels will not be produce |
https://en.wikipedia.org/wiki/Chung%20Kwei%20%28algorithm%29 | Chung Kwei is a spam filtering algorithm based on the TEIRESIAS Algorithm for finding coding genes within bulk DNA. It is named after Zhong Kui, a figure in Chinese folklore.
See also
Spam (electronic)
CAN-SPAM Act of 2003
DNSBL
SpamAssassin
External links
Official Report
TEIRESIAS: Sequence Pattern Discovery, from IBM Bioinformatics Group
DNA technique protects against "evil" emails, from NewScientist.com
"DNA analysis" spots e-mail spam, from BBC News
Networking algorithms
Anti-spam |
https://en.wikipedia.org/wiki/Storage%20tube | Storage tubes are a class of cathode-ray tubes (CRTs) that are designed to hold an image for a long period of time, typically as long as power is supplied to the tube.
A specialized type of storage tube, the Williams tube, was used as a main memory system on a number of early computers, from the late 1940s into the early 1950s. They were replaced with other technologies, notably core memory, starting in the 1950s.
In a new form, the bistable tube, storage tubes made a comeback in the 1960s and 1970s for use in computer graphics, most notably the Tektronix 4010 series. Today they are obsolete, their functions provided by low-cost memory devices and liquid crystal displays.
Operation
Background
A conventional CRT consists of an electron gun at the back of the tube that is aimed at a thin layer of phosphor at the front of the tube. Depending on the role, the beam of electrons emitted by the gun is steered around the display using magnetic (television) or electrostatic (oscilloscope) means. When the electrons strike the phosphor, the phosphor "lights up" at that location for a time, and then fades away. The length of time the spot remains is a function of the phosphor chemistry.
At very low energies, electrons from the gun will strike the phosphor and nothing will happen. As the energy is increased, it will reach a critical point that will activate the phosphor and cause it to give off light. As the voltage increases beyond this critical point, the brightness of the spot will increase. This allows the CRT to display images with varying intensity, like a television image.
At higher energies another effect also starts, secondary emission. When any insulating material is struck by electrons over a certain critical energy, electrons within the material are forced out of it through collisions, increasing the number of free electrons. This effect is used in electron multipliers as found in night vision systems and similar devices. In the case of a CRT this effect is generally undesirable; the new e |
https://en.wikipedia.org/wiki/Landrace | A landrace is a domesticated, locally adapted, often traditional variety of a species of animal or plant that has developed over time, through adaptation to its natural and cultural environment of agriculture and pastoralism, and due to isolation from other populations of the species. Landraces are distinct from cultivars and from standard breeds.
A significant proportion of farmers around the world grow landrace crops, and most plant landraces are associated with traditional agricultural systems. Landraces of many crops have probably been grown for millennia. Increasing reliance upon modern plant cultivars that are bred to be uniform has led to a reduction in biodiversity, because most of the genetic diversity of domesticated plant species lies in landraces and other traditionally used varieties. Some farmers using scientifically improved varieties also continue to raise landraces for agronomic reasons that include better adaptation to the local environment, lower fertilizer requirements, lower cost, and better disease resistance. Cultural and market preferences for landraces include culinary uses and product attributes such as texture, color, or ease of use.
Plant landraces have been the subject of more academic research, and the majority of academic literature about landraces is focused on botany in agriculture, not animal husbandry. Animal landraces are distinct from ancestral wild species of modern animal stock, and are also distinct from separate species or subspecies derived from the same ancestor as modern domestic stock. Not all landraces derive from wild or ancient animal stock; in some cases, notably dogs and horses, domestic animals have escaped in sufficient numbers in an area to breed feral populations that form new landraces through evolutionary pressure.
Characteristics
There are differences between authoritative sources on the specific criteria which describe landraces, although there is broad consensus about the existence and utility of the cla |
https://en.wikipedia.org/wiki/Multinomial%20distribution | In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided die rolled n times. For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.
When k is 2 and n is 1, the multinomial distribution is the Bernoulli distribution. When k is 2 and n is bigger than 1, it is the binomial distribution. When k is bigger than 2 and n is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so n determines the suffix, and k the prefix).
The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing n independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of n experiments, where the outcome of each trial has a categorical distribution, such as rolling a k-sided die n times.
Let k be a fixed finite number. Mathematically, we have k possible mutually exclusive outcomes, with corresponding probabilities p1, ..., pk, and n independent trials. Since the k outcomes are mutually exclusive and one must occur, we have pi ≥ 0 for i = 1, ..., k and p1 + p2 + ... + pk = 1. Then if the random variables Xi indicate the number of times outcome number i is observed over the n trials, the vector X = (X1, ..., Xk) follows a multinomial distribution with parameters n and p, where p = (p1, ..., pk). While the trials are independent, their outcomes Xi are dependent because they must be summ |
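As a small worked example (a sketch added here, not part of the article, using hypothetical counts), the multinomial probability mass function can be evaluated directly for a fair six-sided die rolled n times:

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """P(X1=x1, ..., Xk=xk) = n! / (x1! ... xk!) * p1**x1 * ... * pk**xk."""
    assert len(counts) == len(probs)
    assert abs(sum(probs) - 1.0) < 1e-9          # the pi must sum to 1
    n = sum(counts)
    coeff = factorial(n) // prod(factorial(x) for x in counts)
    return coeff * prod(p ** x for p, x in zip(probs, counts))

# Probability of seeing each face exactly twice in 12 rolls of a fair die.
counts = [2, 2, 2, 2, 2, 2]
probs = [1 / 6] * 6
print(multinomial_pmf(counts, probs))   # ~0.00344
```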