source | text |
|---|---|
https://en.wikipedia.org/wiki/Cyclotomic%20identity | In mathematics, the cyclotomic identity states that
$\frac{1}{1-\alpha z} = \prod_{j=1}^{\infty}\left(\frac{1}{1-z^j}\right)^{M(\alpha,j)},$
where M is Moreau's necklace-counting function,
$M(\alpha,n) = \frac{1}{n}\sum_{d\,\mid\,n}\mu(d)\,\alpha^{n/d},$
and μ is the classic Möbius function of number theory.
The name comes from the denominator, 1 − z^j, which is the product of cyclotomic polynomials.
The left hand side of the cyclotomic identity is the generating function for the free associative algebra on α generators, and the right hand side is the generating function for the universal enveloping algebra of the free Lie algebra on α generators. The cyclotomic identity witnesses the fact that these two algebras are isomorphic.
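The identity can be checked numerically by expanding both sides as truncated power series. The sketch below (plain Python with hypothetical helper names; the article itself gives no code) compares the coefficients of the two sides up to a fixed order:

```python
from math import comb

def mobius(n):
    """Moebius function mu(n) by trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def necklace(alpha, n):
    """Moreau's necklace-counting function M(alpha, n)."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return sum(mobius(d) * alpha ** (n // d) for d in divisors) // n

def series_mul(a, b, order):
    """Product of two power series given as coefficient lists, truncated."""
    out = [0] * order
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < order:
                out[i + j] += ai * bj
    return out

def check_cyclotomic(alpha, order=8):
    # Left side: 1/(1 - alpha z) = sum_n alpha^n z^n.
    lhs = [alpha ** n for n in range(order)]
    # Right side: product over j >= 1 of (1 - z^j)^(-M(alpha, j));
    # factors with j >= order do not affect coefficients below z^order.
    rhs = [1] + [0] * (order - 1)
    for j in range(1, order):
        m = necklace(alpha, j)
        if m == 0:
            continue  # (1 - z^j)^0 = 1
        factor = [0] * order
        for k in range((order - 1) // j + 1):
            # (1 - x)^(-m) = sum_k C(m + k - 1, k) x^k, with x = z^j
            factor[j * k] = comb(m + k - 1, k)
        rhs = series_mul(rhs, factor, order)
    return lhs == rhs

print(check_cyclotomic(2), check_cyclotomic(3))  # True True
```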
There is also a symmetric generalization of the cyclotomic identity found by Strehl:
$\prod_{j=1}^{\infty}\left(\frac{1}{1-\alpha z^j}\right)^{M(\beta,j)} = \prod_{j=1}^{\infty}\left(\frac{1}{1-\beta z^j}\right)^{M(\alpha,j)}.$
References |
https://en.wikipedia.org/wiki/Fused%20quartz | Fused quartz, fused silica or quartz glass is a glass consisting of almost pure silica (silicon dioxide, SiO2) in amorphous (non-crystalline) form. This differs from all other commercial glasses, to which other ingredients are added that change the glass's optical and physical properties, such as lowering the melt temperature. Fused quartz, therefore, has high working and melting temperatures, making it less desirable for most common applications.
The terms fused quartz and fused silica are used interchangeably but can refer to different manufacturing techniques, as noted below, resulting in different trace impurities. However, fused quartz, being in the glassy state, has quite different physical properties compared to crystalline quartz. Due to its physical properties it finds specialty uses in semiconductor fabrication and laboratory equipment, for instance.
Compared to other common glasses, the optical transmission of pure silica extends well into the ultraviolet and infrared wavelengths, so it is used to make lenses and other optics for these wavelengths. Depending on manufacturing processes, impurities will restrict the optical transmission, resulting in commercial grades of fused quartz optimized for use in the infrared, or (then more often referred to as fused silica) in the ultraviolet. The low coefficient of thermal expansion of fused quartz makes it a useful material for precision mirror substrates.
Manufacture
Fused quartz is produced by fusing (melting) high-purity silica sand, which consists of quartz crystals. There are four basic types of commercial silica glass:
Type I is produced by induction melting natural quartz in a vacuum or an inert atmosphere.
Type II is produced by fusing quartz crystal powder in a high-temperature flame.
Type III is produced by burning SiCl4 in a hydrogen-oxygen flame.
Type IV is produced by burning SiCl4 in a water vapor-free plasma flame.
Quartz contains only silicon and oxygen, although commercial quartz glass oft |
https://en.wikipedia.org/wiki/Still | A still is an apparatus used to distill liquid mixtures by heating to selectively boil and then cooling to condense the vapor. A still uses the same concepts as a basic distillation apparatus, but on a much larger scale. Stills have been used to produce perfume and medicine, water for injection (WFI) for pharmaceutical use, generally to separate and purify different chemicals, and to produce distilled beverages containing ethanol.
Application
Since ethanol boils at a much lower temperature than water, simple distillation can separate ethanol from water by applying heat to the mixture. Historically, a copper vessel was used for this purpose, since copper removes undesirable sulfur-based compounds from the alcohol. However, many modern stills are made of stainless steel pipes with copper linings to prevent erosion of the entire vessel and lower copper levels in the waste product (which in large distilleries is processed to become animal feed). Copper is the preferred material for stills because it yields an overall better-tasting spirit. The taste is improved by the chemical reaction between the copper in the still and the sulfur compounds created by the yeast during fermentation. These unwanted and flavor-changing sulfur compounds are chemically removed from the final product resulting in a smoother, better-tasting drink. All copper stills will require repairs about every eight years due to the precipitation of copper-sulfur compounds. The beverage industry was the first to implement a modern distillation apparatus and led the way in developing equipment standards which are now widely accepted in the chemical industry.
There is also an increasing usage of the distillation of gin under glass and PTFE, and even at reduced pressures, to facilitate a fresher product. This is irrelevant to alcohol quality because the process starts with triple distilled grain alcohol, and the distillation is used solely to harvest botanical flavors such as limonene and other terpe |
https://en.wikipedia.org/wiki/Woodbury%20matrix%20identity | In mathematics (specifically linear algebra), the Woodbury matrix identity, named after Max A. Woodbury, says that the inverse of a rank-k correction of some matrix can be computed by doing a rank-k correction to the inverse of the original matrix. Alternative names for this formula are the matrix inversion lemma, Sherman–Morrison–Woodbury formula or just Woodbury formula. However, the identity appeared in several papers before the Woodbury report.
The Woodbury matrix identity is
$(A + UCV)^{-1} = A^{-1} - A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1},$
where A, U, C and V are conformable matrices: A is n×n, C is k×k, U is n×k, and V is k×n. This can be derived using blockwise matrix inversion.
While the identity is primarily used on matrices, it holds in a general ring or in an Ab-category.
The Woodbury matrix identity allows cheap computation of inverses and solutions to linear equations. However, little is known about the numerical stability of the formula. There are no published results concerning its error bounds. Anecdotal evidence suggests that it may diverge even for seemingly benign examples (when both the original and modified matrices are well-conditioned).
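As a concrete illustration of the cheap computation, the following sketch (assuming NumPy is available; illustrative, not from the article) updates a known n×n inverse after a rank-k correction by inverting only a k×k matrix, and checks the result against direct inversion:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.normal(size=(n, n)) + n * np.eye(n)  # a well-conditioned n x n matrix
U = rng.normal(size=(n, k))
C = np.eye(k)
V = rng.normal(size=(k, n))

A_inv = np.linalg.inv(A)  # assume the inverse of A is already known

# Woodbury: (A + UCV)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1.
# Only a k x k matrix has to be inverted to account for the correction.
small = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)  # k x k
woodbury_inv = A_inv - A_inv @ U @ small @ V @ A_inv

direct_inv = np.linalg.inv(A + U @ C @ V)  # full n x n inversion, for comparison
print(np.allclose(woodbury_inv, direct_inv))  # True
```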
Discussion
To prove this result, we will start by proving a simpler one. Replacing A and C with the identity matrix I, we obtain another identity which is a bit simpler:
$(I + UV)^{-1} = I - U(I + VU)^{-1}V.$
To recover the original equation from this reduced identity, write $A + UCV = A(I + A^{-1}UCV)$ and apply the reduced identity with $A^{-1}U$ in place of $U$ and $CV$ in place of $V$.
This identity itself can be viewed as the combination of two simpler identities. We obtain the first identity from
$I = (I + P)^{-1}(I + P) = (I + P)^{-1} + (I + P)^{-1}P,$
thus,
$(I + P)^{-1} = I - (I + P)^{-1}P,$
and similarly
$(I + P)^{-1} = I - P(I + P)^{-1}.$
The second identity is the so-called push-through identity
$(I + UV)^{-1}U = U(I + VU)^{-1}$
that we obtain from
$U(I + VU) = (I + UV)U$
after multiplying by $(I + VU)^{-1}$ on the right and by $(I + UV)^{-1}$ on the left.
Putting all together,
$(I + UV)^{-1} = I - UV(I + UV)^{-1} = I - U(I + VU)^{-1}V,$
where the first and second equality come from the first and second identity, respectively.
Special cases
When $U = u$ and $V = v^{\mathsf T}$ are vectors, the identity reduces to the Sherman–Morrison formula:
$(A + uv^{\mathsf T})^{-1} = A^{-1} - \frac{A^{-1}uv^{\mathsf T}A^{-1}}{1 + v^{\mathsf T}A^{-1}u}.$
In the scalar case, the reduced version is simply
$\frac{1}{1 + uv} = 1 - \frac{uv}{1 + uv}.$
Inverse of a sum
If n = k and U = V = I_n is the identity matrix, then
$(A + C)^{-1} = A^{-1} - A^{-1}\left(C^{-1} + A^{-1}\right)^{-1}A^{-1}.$
Continuing with the merging o |
https://en.wikipedia.org/wiki/Project%20Habakkuk | Project Habakkuk or Habbakuk (spelling varies) was a plan by the British during the Second World War to construct an aircraft carrier out of pykrete (a mixture of wood pulp and ice) for use against German U-boats in the mid-Atlantic, which were beyond the flight range of land-based planes at that time. The plan was to create what would have been the largest ship ever at 2,000 feet (610 m) long, which would have been much bigger than even USS Enterprise, the largest naval vessel ever, at 1,123 feet (342 m) long. The idea came from Geoffrey Pyke, who worked for Combined Operations Headquarters. After promising scale tests and the creation of a prototype on Patricia Lake, Jasper National Park, in Alberta, Canada, the project was shelved due to rising costs, added requirements, and the availability of longer-range aircraft and escort carriers which closed the Mid-Atlantic gap that the project was intended to address.
History
Initial concept
Geoffrey Pyke was an old friend of J. D. Bernal and had been recommended to Lord Mountbatten, Chief of Combined Operations, by the cabinet minister Leopold Amery. Pyke worked at Combined Operations Headquarters (COHQ) alongside Bernal and was regarded as a genius by Mountbatten.
Pyke conceived the idea of Habakkuk while he was in the United States organising the production of M29 Weasels for Project Plough, a scheme to assemble an elite unit for winter operations in Norway, Romania and the Italian Alps. He had been considering the problem of how to protect seaborne landings and Atlantic convoys out of reach of aircraft cover. The problem was that steel and aluminium were in short supply, and were required for other purposes. Pyke decided that the answer was ice, which could be manufactured for just 1% of the energy needed to make an equivalent mass of steel. He proposed that an iceberg, natural or artificial, be levelled to provide a runway and hollowed out to shelter aircraft.
From New York Pyke sent the proposal via diplomatic bag to COHQ, with a label forbiddi |
https://en.wikipedia.org/wiki/Truncated%20mean | A truncated mean or trimmed mean is a statistical measure of central tendency, much like the mean and median. It involves the calculation of the mean after discarding given parts of a probability distribution or sample at the high and low end, and typically discarding an equal amount of both. This number of points to be discarded is usually given as a percentage of the total number of points, but may also be given as a fixed number of points.
For most statistical applications, 5 to 25 percent of the ends are discarded. For example, given a set of 8 points, trimming by 12.5% would discard the smallest and largest values and compute the mean of the remaining 6 points. The 25% trimmed mean (when the lowest 25% and the highest 25% are discarded) is known as the interquartile mean.
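A minimal sketch of that 8-point example in Python (the sample data is hypothetical; SciPy's trim_mean offers the same computation if it is available):

```python
def trimmed_mean(data, proportion):
    """Mean after discarding `proportion` of the points at each end."""
    xs = sorted(data)
    cut = int(len(xs) * proportion)  # points dropped per end
    trimmed = xs[cut:len(xs) - cut]
    return sum(trimmed) / len(trimmed)

sample = [1, 3, 4, 4, 6, 7, 9, 100]   # 100 is an outlier
print(trimmed_mean(sample, 0.125))     # 5.5, mean of the middle 6 points
print(sum(sample) / len(sample))       # 16.75, ordinary mean pulled up by 100

# SciPy offers the same computation, if it is installed:
# from scipy.stats import trim_mean; trim_mean(sample, 0.125)
```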
The median can be regarded as a fully truncated mean and is most robust. As with other trimmed estimators, the main advantage of the trimmed mean is robustness and higher efficiency for mixed distributions and heavy-tailed distributions (like the Cauchy distribution), at the cost of lower efficiency for some other less heavily tailed distributions (such as the normal distribution). For intermediate distributions the differences between the efficiency of the mean and the median are not very big; e.g. for the Student's t-distribution with 2 degrees of freedom the variances for mean and median are nearly equal.
Terminology
In some regions of Central Europe it is also known as a Windsor mean, but this name should not be confused with the Winsorized mean: in the latter, the observations that the trimmed mean would discard are instead replaced by the largest/smallest of the remaining values.
Discarding only the maximum and minimum is known as the modified mean, particularly in management statistics. This is also known as the olympic average (for example in US agriculture, like the Average Crop Revenue Election), due to its use in Olympic events, such as the ISU Judging Syst |
https://en.wikipedia.org/wiki/Hexspeak | Hexspeak, like leetspeak, is a novelty form of variant English spelling using the hexadecimal digits. Created by programmers as memorable magic numbers, hexspeak words can serve as a clear and unique identifier with which to mark memory or data.
Hexadecimal notation represents numbers using the 16 digits 0123456789ABCDEF. Using only the letters ABCDEF it is possible to spell several words. Further words can be made by treating some of the decimal numbers as letters: the digit "0" can represent the letter "O", and "1" can represent the letters "I" or "L". Less commonly, "5" can represent "S", "7" can represent "T", "12" can represent "R", and "6" or "9" can represent "G" or "g", respectively. Numbers such as 2, 4 or 8 can be used in a manner similar to leet or rebuses; e.g. the word "defecate" can be expressed either as DEFECA7E or DEFEC8.
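A short Python sketch of these substitutions (covering only the single-character ones listed above; the word list is illustrative):

```python
SUBS = {"O": "0", "I": "1", "L": "1", "S": "5", "T": "7", "G": "6"}

def to_hexspeak(word):
    """Hexadecimal spelling of `word`, or None if it cannot be spelled."""
    digits = []
    for ch in word.upper():
        if ch in "ABCDEF":
            digits.append(ch)
        elif ch in SUBS:
            digits.append(SUBS[ch])
        else:
            return None
    return "0x" + "".join(digits)

for word in ("dead", "beef", "cafe", "babe", "deadbeef", "quartz"):
    spelled = to_hexspeak(word)
    if spelled is None:
        print(f"{word:9} cannot be spelled in hexspeak")
    else:
        print(f"{word:9} -> {spelled} = {int(spelled, 16)}")
```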
Notable magic numbers
Many computer processors, operating systems, and debuggers make use of magic numbers, especially as a magic debug value.
Alternative letters
Many computer languages require that a hexadecimal number be marked with a prefix or suffix (or both) to identify it as a number. Sometimes the prefix or suffix is used as part of the word.
The C programming language uses the "0x" prefix to indicate a hexadecimal number, but the "0x" is usually ignored when people read such values as words. C also allows the suffix L to declare an integer as long, or LL to declare it as long long, making it possible to write "0xDEADCELL" (dead cell). In either case a U may also appear in the suffix to declare the integer as unsigned, making it possible to write "0xFEEDBULL" (feed bull).
In the (non-Unix) Intel assembly language, hexadecimal numbers are denoted by a "h" suffix, making it possible to write "0beach" (beach). Note that numbers in this notation that begin with a letter must be prefixed with a zero to distinguish them from variable names. A Unix-style assembler uses C language convention instead (but non-Unix-style assemblers |
https://en.wikipedia.org/wiki/Motive%20%28algebraic%20geometry%29 | In algebraic geometry, motives (or sometimes motifs, following French usage) are part of a theory proposed by Alexander Grothendieck in the 1960s to unify the vast array of similarly behaved cohomology theories such as singular cohomology, de Rham cohomology, étale cohomology, and crystalline cohomology. Philosophically, a "motif" is the "cohomology essence" of a variety.
In the formulation of Grothendieck for smooth projective varieties, a motive is a triple (X, p, m), where X is a smooth projective variety, p is an idempotent correspondence, and m an integer. However, such a triple contains almost no information outside the context of Grothendieck's category of pure motives, where a morphism from (X, p, m) to (Y, q, n) is given by a correspondence of degree n − m. A more object-focused approach is taken by Pierre Deligne in Le Groupe Fondamental de la Droite Projective Moins Trois Points. In that article, a motive is a "system of realisations" – that is, a tuple
consisting of modules
over the rings
respectively, various comparison isomorphisms
between the obvious base changes of these modules, filtrations , a -action on and a "Frobenius" automorphism of . This data is modeled on the cohomologies of a smooth projective -variety and the structures and compatibilities they admit, and gives an idea about what kind of information is contained in a motive.
Introduction
The theory of motives was originally conjectured as an attempt to unify a rapidly multiplying array of cohomology theories, including Betti cohomology, de Rham cohomology, l-adic cohomology, and crystalline cohomology. The general hope is that equations like
[projective line] = [line] + [point]
[projective plane] = [plane] + [line] + [point]
can be put on increasingly solid mathematical footing with a deep meaning. Of course, the above equations are already known to be true in many senses, such as in the sense of CW-complex where "+" corresponds to attaching cells, and in the sense of various cohomology theories, where "+" corresponds |
https://en.wikipedia.org/wiki/Necklace%20polynomial | In combinatorial mathematics, the necklace polynomial, or Moreau's necklace-counting function, introduced by , counts the number of distinct necklaces of n colored beads chosen out of α available colors. The necklaces are assumed to be aperiodic (not consisting of repeated subsequences), and counted up to rotation (rotating the beads around the necklace counts as the same necklace), but without flipping over (reversing the order of the beads counts as a different necklace). This counting function also describes, among other things, the dimensions in a free Lie algebra and the number of irreducible polynomials over a finite field.
Definition
The necklace polynomials are a family of polynomials $M(\alpha, n)$ in the variable $\alpha$ such that
$\alpha^n = \sum_{d\,\mid\,n} d\, M(\alpha, d).$
By Möbius inversion they are given by
$M(\alpha, n) = \frac{1}{n}\sum_{d\,\mid\,n}\mu(d)\,\alpha^{n/d},$
where $\mu$ is the classic Möbius function.
A closely related family, called the general necklace polynomial or general necklace-counting function, is:
$N(\alpha, n) = \sum_{d\,\mid\,n} M(\alpha, d) = \frac{1}{n}\sum_{d\,\mid\,n}\varphi(d)\,\alpha^{n/d},$
where $\varphi$ is Euler's totient function.
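A brute-force cross-check of the Möbius formula for small cases (an illustrative sketch, not from the article): enumerate all length-n words over α colors, group them by rotation, and count the classes whose n rotations are all distinct:

```python
from itertools import product

def mobius(n):
    """Moebius function mu(n) by trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def necklace_poly(alpha, n):
    """M(alpha, n) via the Moebius-inversion formula above."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return sum(mobius(n // d) * alpha ** d for d in divisors) // n

def count_aperiodic_necklaces(alpha, n):
    """Brute force: one representative per rotation class, aperiodic only."""
    seen, count = set(), 0
    for word in product(range(alpha), repeat=n):
        rotations = {word[i:] + word[:i] for i in range(n)}
        canonical = min(rotations)
        if canonical in seen:
            continue
        seen.add(canonical)
        if len(rotations) == n:  # aperiodic: all n rotations are distinct
            count += 1
    return count

for alpha, n in [(2, 6), (3, 4)]:
    print(alpha, n, necklace_poly(alpha, n), count_aperiodic_necklaces(alpha, n))
# 2 6 9 9
# 3 4 18 18
```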
Applications
The necklace polynomials appear as:
The number of aperiodic necklaces (or equivalently Lyndon words) which can be made by arranging n colored beads having α available colors. Two such necklaces are considered equal if they are related by a rotation (but not a reflection). Aperiodic refers to necklaces without rotational symmetry, having n distinct rotations. The polynomials give the number of necklaces including the periodic ones: this is easily computed using Pólya theory.
The dimension of the degree n piece of the free Lie algebra on α generators ("Witt's formula"). Here should be the dimension of the degree n piece of the corresponding free Jordan algebra.
The number of distinct words of length n in a Hall set. Note that the Hall set provides an explicit basis for a free Lie algebra; thus, this is the generalized setting for the above.
The number of monic irreducible polynomials of degree n over a finite field with α elements (when α is a prime power). Here is the number of polynomials which are prim |
https://en.wikipedia.org/wiki/Dot%20pitch | Dot pitch (sometimes called line pitch, stripe pitch, or phosphor pitch) is a specification for a computer display, computer printer, image scanner, or other pixel-based device, and describes the distance between, for example, dots (sub-pixels) on a display screen. In the case of an RGB color display, the derived unit of pixel pitch is a measure of the size of a triad plus the distance between triads.
Dot pitch may be measured in linear units (with smaller numbers meaning higher resolution), usually millimeters (mm), or as a rate, for example, dots per inch (with a larger number meaning higher resolution). Closer spacing produces a sharper image (as there are more dots in a given area). However, other factors may affect image quality, including:
Undocumented or inadequately documented measurement method, complicated by ignorance of the existence of different methods
Confusion of pixels and subpixels
Element spacing varying across screen area (e.g., widening in corners compared to center)
Differing pixel geometries
Differing image and pixel aspect ratios
Miscellanea such as Kell factor or interlaced video
The exact difference between horizontal and diagonal dot pitch varies with the design of the monitor (see pixel geometry and widescreen), but a typical entry-level 0.28 mm (diagonal) monitor has a horizontal pitch of 0.24 or 0.25 mm, and a good quality 0.26 mm (diagonal) unit has a horizontal pitch of 0.22 mm.
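A small sketch of the unit conversion described above, between a linear pitch in millimeters and a rate in dots per inch (25.4 mm per inch); the pitch values are the typical figures quoted in the previous paragraph:

```python
MM_PER_INCH = 25.4

def pitch_to_dpi(pitch_mm):
    """Dots per inch corresponding to a linear dot pitch in millimeters."""
    return MM_PER_INCH / pitch_mm

for pitch in (0.28, 0.26, 0.25, 0.24, 0.22):
    print(f"{pitch:.2f} mm pitch ~ {pitch_to_dpi(pitch):5.1f} dots per inch")
```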
The above dot pitch measurement does not apply to aperture grille displays. Such monitors use continuous vertical phosphor bands on the screen, so the vertical distance between scan lines is limited only by the video input signal's vertical resolution and the thickness of the electron beam, so there is no vertical 'dot pitch' on such devices. Aperture grille displays have only a horizontal 'dot pitch', otherwise known as 'stripe pitch'.
Common dot pitch sizes
References
External links
PPI calculator – Shows dot pitch
Pixels Per Inch PPI Calculator – Dete |
https://en.wikipedia.org/wiki/Mean%20time%20to%20recovery | Mean time to recovery (MTTR) is the average time that a device will take to recover from any failure. Examples of such devices range from self-resetting fuses (where the MTTR would be very short, probably seconds), to whole systems which have to be repaired or replaced.
The MTTR would usually be part of a maintenance contract, where the user would pay more for a system whose MTTR was 24 hours than for one of, say, 7 days. This does not mean the supplier is guaranteeing to have the system up and running again within 24 hours (or 7 days) of being notified of the failure. It does mean the average repair time will tend towards 24 hours (or 7 days). A more useful maintenance contract measure is the maximum time to recovery, which can be easily measured and the supplier held accountable.
Note that some suppliers will interpret MTTR to mean 'mean time to respond' and others will take it to mean 'mean time to replace/repair/recover/resolve'. The former indicates that the supplier will acknowledge a problem and initiate mitigation within a certain timeframe. Some systems may have an MTTR of zero, which means that they have redundant components which can take over the instant the primary one fails, see RAID for example. However, the failed device involved in this redundant configuration still needs to be returned to service and hence the device itself has a non-zero MTTR even if the system as a whole (through redundancy) has an MTTR of zero. But, as long as service is maintained, this is a minor issue.
See also
Mean time to repair
Mean time between failures
Mean down time
Service-level agreement
References |
https://en.wikipedia.org/wiki/Trac | Trac is an open-source, web-based project management and bug tracking system. It has been adopted by a variety of organizations for use as a bug tracking system for both free and open-source software and proprietary projects and products. Trac integrates with major version control systems including ("out of the box") Subversion and Git. Trac is used, among others, by the Internet Research Task Force, Django, FFmpeg, jQuery UI, WebKit, 0 A.D., and WordPress.
Trac is available on all major operating systems including Windows via Installer or Bitnami, OS X via MacPorts or pkgsrc, Debian, Ubuntu, Arch Linux or FreeBSD, as well as on various cloud hosting services.
History
Inspired by CVSTrac, Jonas Borgström and Daniel Lundin from Edgewall Software started writing svntrac in August 2003 using SQLite and Subversion. In December 2003 they renamed it to Trac. In February 2004 the Trac version was changed first from 0.0.1 to 0.1 and then directly from 0.1 to 0.5. That release was followed in March 2004 by 0.6 and 0.7, and 0.8 in November 2004.
Edgewall Software is an umbrella organization for hosting edgewall.org for the community to collaborate on developing open source Python software. It used to offer software development, consulting and support services. Some of the earliest community members to collaborate in the open source development of Trac were Rocky Burt in March 2004, Christopher Lenz and Francois Harvey in May 2004, Christian Boos and Otavio Salvador in December 2004, and Mark Rowe in March 2005.
In August 2005 the license was changed from GPL-2.0-or-later to BSD-3-Clause. The first release under this final license was Trac 0.9 in October 2005, which among other features introduced PostgreSQL database support.
Trac 0.10, released in September 2006, was an important release that first introduced the component system that to this day allows plugins to extend and add features to Trac's core. Trac itself since this point consists mainly of optional plugin compon |
https://en.wikipedia.org/wiki/Digital%20audio%20workstation | A digital audio workstation (DAW) is an electronic device or application software used for recording, editing and producing audio files. DAWs come in a wide variety of configurations from a single software program on a laptop, to an integrated stand-alone unit, all the way to a highly complex configuration of numerous components controlled by a central computer. Regardless of configuration, modern DAWs have a central interface that allows the user to alter and mix multiple recordings and tracks into a final produced piece.
DAWs are used for producing and recording music, songs, speech, radio, television, soundtracks, podcasts, sound effects and nearly any other situation where complex recorded audio is needed.
Hardware
Early attempts at digital audio workstations in the 1970s and 1980s faced limitations such as the high price of storage, and the vastly slower processing and disk speeds of the time.
In 1978, Soundstream, who had made one of the first commercially available digital audio tape recorders in 1977, built what could be considered the first digital audio workstation using some of the most current computer hardware of the time. The Digital Editing System, as Soundstream called it, consisted of a DEC PDP-11/60 minicomputer running a custom software package called DAP (Digital Audio Processor), a Braegen 14"-platter hard disk drive, a storage oscilloscope to display audio waveforms for editing, and a video display terminal for controlling the system. Interface cards that plugged into the PDP-11's Unibus slots (the Digital Audio Interface, or DAI) provided analog and digital audio input and output for interfacing to Soundstream's digital recorders and conventional analog tape recorders. The DAP software could perform edits to the audio recorded on the system's hard disks and produce simple effects such as crossfades.
By the late 1980s, a number of personal computers such as the Yamaha CX5M, Macintosh, Atari ST, and Amiga began to have enough power to hand |
https://en.wikipedia.org/wiki/Chemical%20engineer | In the field of engineering, a chemical engineer is a professional, equipped with the knowledge of chemical engineering, who works principally in the chemical industry to convert basic raw materials into a variety of products and deals with the design and operation of plants and equipment. In general, a chemical engineer is one who applies and uses principles of chemical engineering in any of its various practical applications; these often include
Design, manufacture, and operation of plants and machinery in industrial chemical and related processes ("chemical process engineers")
Development of new or adapted substances for products ranging from foods and beverages to cosmetics to cleaners to pharmaceutical ingredients, among many other products ("chemical product engineers")
Development of new technologies such as fuel cells, hydrogen power and nanotechnology, as well as working in fields wholly or partially derived from chemical engineering such as materials science, polymer engineering, and biomedical engineering. This can include working on geophysical projects such as rivers, stones, and signs
History
The president of the Institution of Chemical Engineers said in his presidential address "I believe most of us would be willing to regard Edward Charles Howard (1774–1816) as the first chemical engineer of any eminence". Others have suggested Johann Rudolf Glauber (1604–1670) for his development of processes for the manufacture of the major industrial acids.
The term appeared in print in 1839, though the context suggests it referred to a person with mechanical engineering knowledge working in the chemical industry.
In 1880, George E. Davis wrote in a letter to Chemical News "A Chemical Engineer is a person who possesses chemical and mechanical knowledge, and who applies that knowledge to the utilisation, on a manufacturing scale, of chemical action." He proposed the name Society of Chemical En |
https://en.wikipedia.org/wiki/CrossOver%20%28software%29 | CrossOver is a Microsoft Windows compatibility layer available for Linux, macOS, and ChromeOS. This compatibility layer enables many Windows-based applications to run on Linux operating systems, macOS, or ChromeOS.
CrossOver is developed by CodeWeavers and based on Wine, an open-source Windows compatibility layer. CodeWeavers modifies the Wine source code, applies compatibility patches, adds configuration tools that are more user-friendly, automated installation scripts, and provides technical support. All changes made to the Wine source code are covered by the LGPL and publicly available. CodeWeavers maintains an online database listing how well various Windows applications perform under CrossOver.
Versions
CrossOver Linux
CrossOver Linux is the original version of CrossOver. It aims to properly integrate with the GNOME and KDE desktop environments so that Windows applications will run seamlessly on Linux distributions. Before version 6, it was called CrossOver Office. CrossOver Linux was originally offered in Standard and Professional editions. CrossOver Linux Standard was designed for a single user account on a machine. CrossOver Linux Professional provided enhanced deployment and management features for corporate users and multiple user accounts per machine. With the release of CrossOver Linux 11 in 2012, these different editions merged into a single CrossOver Linux product.
CrossOver Mac
In 2005 Apple announced a transition from PowerPC to Intel processors in their computers, which allowed CodeWeavers to develop a Mac OS X version of CrossOver Office called 'CrossOver Mac'.
CrossOver Mac was released on January 10, 2007. With the release of CrossOver Mac 7 on June 17, 2008, CrossOver Mac was divided into Standard and Pro editions like CrossOver Linux. The Standard version included six months of support and upgrades, while the Pro version included one year of support and upgrades, along with a free copy of CrossOver Games. With the release of CrossOver M |
https://en.wikipedia.org/wiki/220%20%28number%29 | 220 (two hundred [and] twenty) is the natural number following 219 and preceding 221.
In mathematics
It is a composite number, with its proper divisors being 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110. These sum to 284, and the proper divisors of 284 in turn sum to 220, making 220 and 284 an amicable pair. Every number up to 220 may be expressed as a sum of distinct divisors of 220, making 220 a practical number.
It is the sum of four consecutive primes (47 + 53 + 59 + 61). It is the smallest even number with the property that when represented as a sum of two prime numbers (per Goldbach's conjecture) both of the primes must be greater than or equal to 23.
There are exactly 220 different ways of partitioning 64 = 8² into a sum of square numbers.
It is a tetrahedral number, the sum of the first ten triangular numbers, and a dodecahedral number. If all of the diagonals of a regular decagon are drawn, the resulting figure will have exactly 220 regions.
It is the sum of the sums of the divisors of the first 16 positive integers.
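The divisor-based claims above are easy to verify directly; a short Python sketch (illustrative, not part of the article):

```python
def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

divs = proper_divisors(220)
print(divs)                        # [1, 2, 4, 5, 10, 11, 20, 22, 44, 55, 110]
print(sum(divs))                   # 284
print(sum(proper_divisors(284)))   # 220, so 220 and 284 are an amicable pair

# Tetrahedral: 220 is the sum of the first ten triangular numbers.
print(sum(k * (k + 1) // 2 for k in range(1, 11)))  # 220

# Sum of four consecutive primes.
print(47 + 53 + 59 + 61)           # 220
```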
Integers between 221 and 229
221
222
223
224
225
226
227
228
229
Notes
References
Wells, D. (1987). The Penguin Dictionary of Curious and Interesting Numbers (pp. 145–147). London: Penguin Group. |
https://en.wikipedia.org/wiki/Institute%20of%20Radio%20Engineers | The Institute of Radio Engineers (IRE) was a professional organization which existed from 1912 until December 31, 1962. On January 1, 1963, it merged with the American Institute of Electrical Engineers (AIEE) to form the Institute of Electrical and Electronics Engineers (IEEE).
Founding
Following several attempts to form a technical organization of wireless practitioners in 1908–1912, the Institute of Radio Engineers (IRE) was finally established in 1912 in New York City. Among its founding organizations were the Society of Wireless Telegraph Engineers (SWTE) and the Wireless Institute (TWI). At the time, the dominant organization of electrical engineers was the American Institute of Electrical Engineers (AIEE). Many of the founding members of IRE considered AIEE too conservative and too focused on electric power. Moreover, the founders of the IRE sought to establish an international organization (unlike the “American” AIEE), and adopted a tradition of electing some of the IRE's officers from outside the United States.
In the first half of the 20th century, radio communications had experienced great expansion, and the growing professional community of developers and operators of radio systems required standardization, research, and authoritative dissemination of new results among practitioners and researchers. To meet these needs, the IRE established professional journals (most notably the Proceedings of the IRE, established 1913 and edited for 41 years by Alfred N. Goldsmith); participated actively in all aspects of standardization and regulations of the frequency spectrum, modulation techniques, testing methods, and radio equipment; and organized regional and professional groups (starting in 1914 and 1948, respectively) for cooperation and exchange between members. The IRE was a major participant in planning of the Federal Radio Commission (established 1927; later the Federal Communications Commission), and worked in close cooperation with the National E |
https://en.wikipedia.org/wiki/200%20%28number%29 | 200 (two hundred) is the natural number following 199 and preceding 201.
200 is an abundant number, as 265, the sum of its proper divisors, is greater than itself.
The number appears in the Padovan sequence, preceded by 86, 114, 151 (it is the sum of the first two of these).
The sum of Euler's totient function φ(x) over the first twenty-five integers is 200.
200 is the smallest base 10 unprimeable number – it cannot be turned into a prime number by changing just one of its digits to any other digit. It is also a Harshad number.
200 is an Achilles number.
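Two of the claims above, abundance and unprimeability, can be verified with a short brute-force sketch (illustrative; the digit-change rule here also allows replacing a digit with itself or with a leading zero, which does not change the outcome for 200):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Abundant: the proper divisors of 200 sum to more than 200.
print(sum(d for d in range(1, 200) if 200 % d == 0))  # 265

def unprimeable(n):
    """No single-digit change of n produces a prime."""
    digits = str(n)
    return not any(is_prime(int(digits[:i] + r + digits[i + 1:]))
                   for i in range(len(digits))
                   for r in "0123456789")

print(unprimeable(200))                                   # True
print(min(n for n in range(1, 1000) if unprimeable(n)))   # 200
```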
Two hundred is also:
A common ISO-standard film speed for photographic films. However, 200 speed film is being phased out in consumer films in favor of faster films.
A denomination of the euro note. The 200 euro note was designed by Robert Kalina.
200 MeV is the temperature of quark–gluon plasma phase transition.
An HTTP status code indicating a successful connection; the code is "200 OK".
The sum of pounds (or dollars) given in the classical Monopoly game to a player passing Go.
A cholesterol level of 200 and below is considered "desirable level corresponding to lower risk for heart disease".
The exact number of NASCAR Cup Series races won by Richard Petty.
An early AD year.
"200" is the title of an episode of South Park, which is infamous for its portrayal of Muhammed leading to the creators of the show receiving death threats from terrorists.
The North West 200, a motorcycle race held in Northern Ireland.
References |
https://en.wikipedia.org/wiki/Fresno%20scraper | The Fresno Scraper is a machine pulled by horses used for constructing canals and ditches in sandy soil. The design of the Fresno Scraper forms the basis of most modern earthmoving scrapers, having the ability to scrape and move a quantity of soil, and also to discharge it at a controlled depth, thus quadrupling the volume which could be handled manually.
History
The Fresno scraper was invented in 1883 by James Porteous. Working with farmers in Fresno, California, he had recognised the dependence of the Central San Joaquin Valley on irrigation, and the need for a more efficient means of constructing canals and ditches in the sandy soil. In perfecting the design of his machine, Porteous made several revisions on his own and also traded ideas with William Deidrick, Frank Dusy, and Abijah McCall, who invented and held patents on similar scrapers. Porteous bought the patents held by Deidrick, Dusy, and McCall, gaining sole rights to the Fresno Scraper.
Prior scrapers pushed the soil ahead of them, while the Fresno scraper lifted it into a C-shaped bowl where it could be dragged along with much less friction. By lifting the handle, the operator could cause the scraper to bite deeper. Once soil was gathered, the handle could be lowered to raise the blade off the ground so it could be dragged to a low spot, and dumped by raising the handle very high.
Impact
This design was so revolutionary and economical that it has influenced the design of modern bulldozer blades and earth movers to this day.
Between 1884 and 1910 thousands of Fresno scrapers were produced at the Fresno Agricultural Works which had been formed by Porteous, and used in agriculture and land levelling, as well as road and railroad grading and the construction industry. They played a vital role in the construction of the Panama Canal and later served the US Army in World War I.
It was one of the most important agricultural and civil engineering machines ever made. In 1991 the Fresno Scraper was de |
https://en.wikipedia.org/wiki/ECos | The Embedded Configurable Operating System (eCos) is a free and open-source real-time operating system intended for embedded systems and applications which need only one process with multiple threads. It is designed to be customizable to precise application requirements of run-time performance and hardware needs. It is implemented in the programming languages C and C++ and has compatibility layers and application programming interfaces for Portable Operating System Interface (POSIX) and The Real-time Operating system Nucleus (TRON) variant µITRON. eCos is supported by popular SSL/TLS libraries such as wolfSSL, thus meeting all standards for embedded security.
Design
eCos was designed for devices with memory sizes in the range of a few tens or several hundred kilobytes, or for applications with real-time requirements.
eCos runs on a wide variety of hardware platforms, including ARM, CalmRISC, FR-V, Hitachi H8, IA-32, Motorola 68000, Matsushita AM3x, MIPS, NEC V850, Nios II, PowerPC, SPARC, and SuperH.
The eCos distribution includes RedBoot, an open source application that uses the eCos hardware abstraction layer to provide bootstrap firmware for embedded systems.
History
eCos was initially developed in 1997 by Cygnus Solutions which was later bought by Red Hat. In early 2002, Red Hat ceased development of eCos and laid off the staff of the project. Many of the laid-off staff continued to work on eCos and some formed their own companies providing services for the software. In January 2004, at the request of the eCos developers, Red Hat agreed to transfer the eCos copyrights to the Free Software Foundation in October 2005, a process finally completed in May 2008.
Non-free versions
The eCosPro real-time operating system is a commercial fork of eCos created by eCosCentric which incorporates proprietary software components. It is claimed as a "stable, fully tested, certified and supported version", with additional features that are not released as free software. On |
https://en.wikipedia.org/wiki/Preboot%20Execution%20Environment | In computing, the Preboot eXecution Environment (PXE, most often pronounced as "pixie" and often called PXE boot) specification describes a standardized client–server environment that boots a software assembly, retrieved from a network, on PXE-enabled clients. On the client side it requires only a PXE-capable network interface controller (NIC), and uses a small set of industry-standard network protocols such as DHCP and TFTP.
The concept behind the PXE originated in the early days of protocols like BOOTP/DHCP/TFTP, and it forms part of the Unified Extensible Firmware Interface (UEFI) standard. In modern data centers, PXE is the most frequent choice for operating system booting, installation and deployment.
Overview
Since the beginning of computer networks, there has been a persistent need for client systems which can boot appropriate software images, with appropriate configuration parameters, both retrieved at boot time from one or more network servers. This goal requires a client to use a set of pre-boot services, based on industry standard network protocols. Additionally, the Network Bootstrap Program (NBP) which is initially downloaded and run must be built using a client firmware layer (at the device to be bootstrapped via PXE) providing a hardware independent standardized way to interact with the surrounding network booting environment. In this case, availability of and adherence to standards are key factors required to guarantee the interoperability of the network boot process.
One of the first attempts in this regard was bootstrap loading using TFTP, standard RFC 906, published in 1984, which established that the Trivial File Transfer Protocol (TFTP) standard RFC 783, published in 1981, be used as the standard file transfer protocol for bootstrap loading. It was followed shortly after by the Bootstrap Protocol standard RFC 951 (BOOTP), published in 1985, which allowed a disk-less client machine to discover its own IP address, the address of a TFTP se |
https://en.wikipedia.org/wiki/Superspace | Superspace is the coordinate space of a theory exhibiting supersymmetry. In such a formulation, along with ordinary space dimensions x, y, z, ..., there are also "anticommuting" dimensions whose coordinates are labeled in Grassmann numbers rather than real numbers. The ordinary space dimensions correspond to bosonic degrees of freedom, the anticommuting dimensions to fermionic degrees of freedom.
The word "superspace" was first used by John Wheeler in an unrelated sense to describe the configuration space of general relativity; for example, this usage may be seen in his 1973 textbook Gravitation.
Informal discussion
There are several similar, but not equivalent, definitions of superspace that have been used, and continue to be used in the mathematical and physics literature. One such usage is as a synonym for super Minkowski space. In this case, one takes ordinary Minkowski space, and extends it with anti-commuting fermionic degrees of freedom, taken to be anti-commuting Weyl spinors from the Clifford algebra associated to the Lorentz group. Equivalently, the super Minkowski space can be understood as the quotient of the super Poincaré algebra modulo the algebra of the Lorentz group. A typical notation for the coordinates on such a space is $(x, \theta, \bar{\theta})$, with the overline being the give-away that super Minkowski space is the intended space.
Superspace is also commonly used as a synonym for the super vector space. This is taken to be an ordinary vector space, together with additional coordinates taken from the Grassmann algebra, i.e. coordinate directions that are Grassmann numbers. There are several conventions for constructing a super vector space in use; two of these are described by Rogers and DeWitt.
A third usage of the term "superspace" is as a synonym for a supermanifold: a supersymmetric generalization of a manifold. Note that both super Minkowski spaces and super vector spaces can be taken as special cases of supermanifolds.
A fourth, and completely unrelated mean |
https://en.wikipedia.org/wiki/SM-4 | The SM-4 (CM-4) is a PDP-11/40 compatible system, manufactured in the Eastern Bloc in the 1980s. It was very popular in science and technology. They were manufactured in the Soviet Union, Bulgaria and Hungary, beginning in 1975.
The standard configuration includes 128 or 256 KB core memory, tape puncher, two RK-05 removable 2.5 MB disks and two RK-05F fixed disks, two TU-10 drives and Videoton VDT-340 terminals (VT52 non-compatible). The SM-4 processor operates at 900,000 operations per second.
The SM-series also includes the SM-3. The SM-3 lacks floating point processing, similar to DEC's PDP 11/40 and 11/34 models. In early production, ferrite core memory is used. It operates at 200,000 operations per second in register-to-register operation.
Operating systems commonly used include:
RT-11 (Rafos after partial translation)
RSTS/E
RSX-11
DSM-11 (DIAMS after partial translations)
DEMOS and MNOS
The SM-4 was manufactured in seven configurations, numbers SM-1401 through SM-1407.
Similar models include the SM-1420, with semiconductor memory, and the SM-1600, a hybrid of the SM-1420 and the M-6000, a system produced in Minsk.
The main producer of the SM-4 was Minpribor, at a facility in Kyiv, Ukraine, which began production in 1980.
See also
SM EVM
List of Soviet computer systems
References |
https://en.wikipedia.org/wiki/SM-1420 | The SM-1420 (CM-1420) is a 16-bit DEC PDP-11/45 minicomputer clone, and the successor to the SM-4 in Soviet Bloc countries. Under the direction of Minpribor it was produced in the Soviet Union and Bulgaria from 1983 onwards, and is more than twice as fast as its predecessor. Its closest western counterpart is the DEC PDP-11/45, which means that the Soviet technology trailed the equivalent Digital Equipment Corporation machine by 11 years.
The standard package includes 256 KiB MOS memory, two RK-06 disks, two TU-10 decks, CM-6315 barrel or DZM-180 dot-matrix printer from Mera Blonie (Poland), VT52 compatible or VTA-2000-15 (BTA 2000-15) VT100 compatible terminals from Mera Elzab.
See also
History of computing in the Soviet Union
List of Soviet computer systems
SM EVM
References
External links
CIA reference aid on Soviet mainframes and minicomputers |
https://en.wikipedia.org/wiki/ENEA%20AB | Enea AB is a global information technology company with its headquarters in Kista, Sweden that provides real-time operating systems and consulting services. Enea, which is an abbreviation of Engmans Elektronik Aktiebolag, also produces the OSE operating system.
History
Enea was founded in 1968 by Rune Engman as Engmans Elektronik AB. Their first product was an operating system for a defence computer used by the Swedish Air Force. During the 1970s the firm developed compiler technology for the Simula programming language.
During the early days of the European Internet-like connections, Enea employee Björn Eriksen connected Sweden to EUnet using UUCP, and registered enea as the first Swedish domain in April 1983. The domain was later converted to the internet domain enea.se when the network was switched over to TCP and the Swedish top domain .se was created in 1986.
Products
OSE
The ENEA OSE real-time operating system was first released in 1985.
The Enea multi core family of real-time operating systems was first released in 2009.
The Enea Operating System Embedded (OSE) is a family of real-time, microkernel, embedded operating systems created by Bengt Eliasson for ENEA AB, which at the time was collaborating with Ericsson to develop a multi-core system using Assembly, C, and C++. Enea OSE Multicore Edition is based on the same microkernel architecture, with a kernel design that combines the advantages of both traditional asymmetric multiprocessing (AMP) and symmetric multiprocessing (SMP); it offers both AMP and SMP processing in a hybrid architecture. OSE supports many processors, mainly 32-bit. These include the ColdFire, ARM, PowerPC, and MIPS based system on a chip (SoC) devices.
The Enea OSE family features three OSs: OSE (also named OSE Delta) for processors by ARM, PowerPC, and MIPS, OSEck for various DSP's, and OSE Epsilon for minimal devices, written in pure assembly (ARM, ColdFire, C166, M16C, 8051). OSE is a closed-source proprietari |
https://en.wikipedia.org/wiki/Mechanical%20calculator | A mechanical calculator, or calculating machine, is a mechanical device used to perform the basic operations of arithmetic automatically, or (historically) a simulation such as an analog computer or a slide rule. Most mechanical calculators were comparable in size to small desktop computers and have been rendered obsolete by the advent of the electronic calculator and the digital computer.
Surviving notes from Wilhelm Schickard in 1623 reveal that he designed and had built the earliest of the modern attempts at mechanizing calculation. His machine was composed of two sets of technologies: first an abacus made of Napier's bones, to simplify multiplications and divisions first described six years earlier in 1617, and for the mechanical part, it had a dialed pedometer to perform additions and subtractions. A study of the surviving notes shows a machine that would have jammed after a few entries on the same dial, and that it could be damaged if a carry had to be propagated over a few digits (like adding 1 to 999). Schickard abandoned his project in 1624 and never mentioned it again until his death 11 years later in 1635.
Two decades after Schickard's supposedly failed attempt, in 1642, Blaise Pascal decisively solved these particular problems with his invention of the mechanical calculator. Co-opted into his father's labour as tax collector in Rouen, Pascal designed the calculator to help in the large amount of tedious arithmetic required; it was called Pascal's Calculator or Pascaline.
In 1672, Gottfried Leibniz started designing an entirely new machine called the Stepped Reckoner. It used a stepped drum, built by and named after him, the Leibniz wheel; it was the first two-motion calculator, the first to use cursors (creating a memory of the first operand) and the first to have a movable carriage. Leibniz built two Stepped Reckoners, one in 1694 and one in 1706. The Leibniz wheel was used in many calculating machines for 200 years, and into the 1970s with the Curta h |
https://en.wikipedia.org/wiki/Needham%E2%80%93Schroeder%20protocol | The Needham–Schroeder protocol is one of the two key transport protocols intended for use over an insecure network, both proposed by Roger Needham and Michael Schroeder. These are:
The Needham–Schroeder Symmetric Key Protocol, based on a symmetric encryption algorithm. It forms the basis for the Kerberos protocol. This protocol aims to establish a session key between two parties on a network, typically to protect further communication.
The Needham–Schroeder Public-Key Protocol, based on public-key cryptography. This protocol is intended to provide mutual authentication between two parties communicating on a network, but in its proposed form is insecure.
The symmetric protocol
Here, Alice (A) initiates the communication to Bob (B). S is a server trusted by both parties. In the communication:
A and B are identities of Alice and Bob respectively
K_AS is a symmetric key known only to A and S
K_BS is a symmetric key known only to B and S
N_A and N_B are nonces generated by A and B respectively
K_AB is a symmetric, generated key, which will be the session key of the session between A and B
The protocol can be specified as follows in security protocol notation:
A → S : A, B, N_A
S → A : {N_A, K_AB, B, {K_AB, A}K_BS}K_AS
A → B : {K_AB, A}K_BS
B → A : {N_B}K_AB
A → B : {N_B - 1}K_AB
Alice sends a message to the server identifying herself and Bob, telling the server she wants to communicate with Bob.
The server generates K_AB and sends back to Alice a copy encrypted under K_BS for Alice to forward to Bob and also a copy for Alice encrypted under K_AS. Since Alice may be requesting keys for several different people, the nonce N_A assures Alice that the message is fresh and that the server is replying to that particular message, and the inclusion of Bob's name tells Alice who she is to share this key with.
Alice forwards the key to Bob who can decrypt it with the key he shares with the server, thus authenticating the data.
Bob sends Alice a nonce N_B encrypted under K_AB to show that he has the key.
Alice performs a simple operation on the nonce, re-encrypts it and sends it back verifying that she is still alive and that she holds the key.
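A runnable sketch of the five messages, using Fernet from the third-party cryptography package as the symmetric cipher (an assumption; the protocol itself does not prescribe a cipher, and the party names and JSON message encoding here are illustrative):

```python
import json
import secrets
from cryptography.fernet import Fernet

K_AS = Fernet.generate_key()  # long-term key shared by Alice and the server
K_BS = Fernet.generate_key()  # long-term key shared by Bob and the server

def enc(key, obj):
    return Fernet(key).encrypt(json.dumps(obj).encode()).decode()

def dec(key, token):
    return json.loads(Fernet(key).decrypt(token.encode()))

# 1. A -> S : A, B, N_A
N_A = secrets.token_hex(8)
request = {"from": "alice", "to": "bob", "nonce": N_A}

# 2. S -> A : {N_A, K_AB, B, {K_AB, A}K_BS}K_AS
K_AB = Fernet.generate_key().decode()
ticket = enc(K_BS, {"key": K_AB, "peer": request["from"]})
reply = enc(K_AS, {"nonce": request["nonce"], "key": K_AB,
                   "peer": request["to"], "ticket": ticket})

msg = dec(K_AS, reply)
assert msg["nonce"] == N_A and msg["peer"] == "bob"  # fresh, and for Bob

# 3. A -> B : {K_AB, A}K_BS  (Alice forwards the ticket unopened)
session = dec(K_BS, msg["ticket"])
assert session["peer"] == "alice"
k_ab_bob = session["key"].encode()

# 4. B -> A : {N_B}K_AB
N_B = secrets.randbelow(2**32)
challenge = enc(k_ab_bob, {"nonce": N_B})

# 5. A -> B : {N_B - 1}K_AB  (proves Alice holds K_AB and is live)
k_ab_alice = msg["key"].encode()
response = enc(k_ab_alice, {"nonce": dec(k_ab_alice, challenge)["nonce"] - 1})
assert dec(k_ab_bob, response)["nonce"] == N_B - 1
print("handshake complete")
```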
Attacks on the pro |
https://en.wikipedia.org/wiki/The%20Emperor%27s%20New%20Mind | The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics is a 1989 book by the mathematical physicist Sir Roger Penrose.
Penrose argues that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine, which includes a digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function.
Most of the book is spent reviewing, for the scientifically-minded lay-reader, a plethora of interrelated subjects such as Newtonian physics, special and general relativity, the philosophy and limitations of mathematics, quantum physics, cosmology, and the nature of time. Penrose intermittently describes how each of these bears on his developing theme: that consciousness is not "algorithmic". Only the later portions of the book address the thesis directly.
Overview
Penrose states that his ideas on the nature of consciousness are speculative, and his thesis is considered erroneous by experts in the fields of philosophy, computer science, and robotics.
The Emperor's New Mind attacks the claims of artificial intelligence using the physics of computing: Penrose notes that the present home of computing lies more in the tangible world of classical mechanics than in the imponderable realm of quantum mechanics. The modern computer is a deterministic system that for the most part simply executes algorithms. Penrose shows that, by reconfiguring the boundaries of a billiard table, one might make a computer in which the billiard balls act as message carriers and their interactions act as logical decisions. The billiard-ball computer was first designed some years ago by Edward Fredkin and Tommaso Toffoli of the Massachusetts Institute of Technology.
Reception
Following the publication of the book, Penrose began to collaborate with Stuart Hameroff on a biological a |
https://en.wikipedia.org/wiki/Otway%E2%80%93Rees%20protocol | The Otway–Rees protocol is a computer network authentication protocol designed for use on insecure networks (e.g. the Internet). It allows individuals communicating over such a network to prove their identity to each other while also preventing eavesdropping or replay attacks and allowing for the detection of modification.
The protocol can be specified as follows in security protocol notation, where Alice is authenticating herself to Bob using a server S (M is a session-identifier, N_A and N_B are nonces):
A → B : M, A, B, {N_A, M, A, B}K_AS
B → S : M, A, B, {N_A, M, A, B}K_AS, {N_B, M, A, B}K_BS
S → B : M, {N_A, K_AB}K_AS, {N_B, K_AB}K_BS
B → A : M, {N_A, K_AB}K_AS
Note: The above steps do not authenticate B to A.
This is one of the protocols analysed by Burrows, Abadi and Needham in the paper that introduced an early version of Burrows–Abadi–Needham logic.
Attacks on the protocol
There are a variety of attacks on this protocol currently published.
Interception attacks
These attacks leave the intruder with the session key and may exclude one of the parties from the conversation.
Boyd and Mao observe that the original description does not require that S check the plaintext A and B to be the same as the A and B in the two ciphertexts. This allows an intruder masquerading as B to intercept the first message, then send the second message to S constructing the second ciphertext using its own key and naming itself in the plaintext. The protocol ends with A sharing a session key with the intruder rather than B.
Gürgens and Peralta describe another attack which they name an arity attack. In this attack the intruder intercepts the second message and replies to B using the two ciphertexts from message 2 in message 3. In the absence of any check to prevent it, M (or perhaps M,A,B) becomes the session key between A and B and is known to the intruder.
Cole describes both the Gürgens and Peralta arity attack and another attack in his book Hackers Beware. In this the intruder intercepts the first message, removes the plaintext A,B and uses that as message 4 omitting messages 2 and 3. This leaves A communicating with the in |
https://en.wikipedia.org/wiki/Wide%20Mouth%20Frog%20protocol | The Wide-Mouth Frog protocol is a computer network authentication protocol designed for use on insecure networks (the Internet for example). It allows individuals communicating over a network to prove their identity to each other while also preventing eavesdropping or replay attacks, and provides for detection of modification and the prevention of unauthorized reading. This can be proven using Degano.
The protocol was first described under the name "The Wide-mouthed-frog Protocol" in the paper "A Logic of Authentication" (1990), which introduced Burrows–Abadi–Needham logic, and in which it was an "unpublished protocol ... proposed by" coauthor Michael Burrows. The paper gives no rationale for the protocol's whimsical name.
The protocol can be specified as follows in security protocol notation:
A → S : A, {T_A, B, K_AB}K_AS
S → B : {T_S, A, K_AB}K_BS
where:
A, B, and S are identities of Alice, Bob, and the trusted server respectively
T_A and T_S are timestamps generated by A and S respectively
K_AS is a symmetric key known only to A and S
K_AB is a generated symmetric key, which will be the session key of the session between A and B
K_BS is a symmetric key known only to B and S
Note that to prevent active attacks, some form of authenticated encryption (or message authentication) must be used.
The protocol has several problems:
A global clock is required.
The server S has access to all keys.
The value of the session key is completely determined by A, who must be competent enough to generate good keys.
An attacker can replay messages within the period when the timestamp is valid.
A is not assured that B exists.
The protocol is stateful. This is usually undesired because it requires more functionality and capability from the server. For example, S must be able to deal with situations in which B is unavailable.
See also
Alice and Bob
Kerberos (protocol)
Needham–Schroeder protocol
Neuman–Stubblebine protocol
Otway–Rees protocol
Yahalom (protocol)
References |
https://en.wikipedia.org/wiki/Display%20size | On 2D displays, such as computer monitors and TVs, the display size or viewable image size (VIS) is the physical size of the area where pictures and videos are displayed. The size of a screen is usually described by the length of its diagonal, which is the distance between opposite corners, usually in inches. It is also sometimes called the physical image size to distinguish it from the "logical image size", which describes a screen's display resolution and is measured in pixels.
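The diagonal convention makes width and height a small exercise in the Pythagorean theorem; a Python sketch (the sizes are illustrative):

```python
import math

def dimensions(diagonal, aspect_w, aspect_h):
    """Width and height (same unit as the diagonal) from an aspect ratio."""
    unit = diagonal / math.hypot(aspect_w, aspect_h)
    return aspect_w * unit, aspect_h * unit

for diag, (w, h) in [(27, (16, 9)), (27, (4, 3))]:
    width, height = dimensions(diag, w, h)
    print(f'{diag}" {w}:{h} screen: {width:.1f} x {height:.1f} in, '
          f'{width * height:.0f} sq in')
```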
History
The method of measuring screen size by its diagonal was inherited from the method used for the first generation of CRT television, when picture tubes with circular faces were in common use. Being circular, the external diameter of the bulb was used to describe their size. Since these circular tubes were used to display rectangular images, the diagonal measurement of the visible rectangle was smaller than the diameter of the tube due to the thickness of the glass surrounding the phosphor screen (which was hidden from the viewer by the casing and bezel). This method continued even when cathode ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size, and was not confusing when the aspect ratio was universally 4:3. In the US, when virtually all TV tubes were 4:3, the size of the screen was given as the true screen diagonal with a V following it (this was a requirement in the US market but not elsewhere). In virtually all other markets, the size of the outer diameter of the tube was given. What was a 27V in the US could be a 28" elsewhere. However the V terminology was frequently dropped in US advertising referring |
https://en.wikipedia.org/wiki/Decision%20table | Decision tables are a concise visual representation for specifying which actions to perform depending on given conditions. They are algorithms whose output is a set of actions. The information expressed in decision tables could also be represented as decision trees or in a programming language as a series of if-then-else and switch-case statements.
Overview
Each decision corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to.
To make them more concise, many decision tables include in their condition alternatives a don't care symbol. This can be a hyphen or blank, although using a blank is discouraged as it may merely indicate that the decision table has not been finished. One of the uses of decision tables is to reveal conditions under which certain input factors are irrelevant to the actions to be taken, allowing these input tests to be skipped and thereby streamlining decision-making procedures.
Aside from the basic four quadrant structure, decision tables vary widely in the way the condition alternatives and action entries are represented. Some decision tables use simple true/false values to represent the alternatives to a condition (similar to if-then-else), other tables may use numbered alternatives (similar to switch-case), and some tables even use fuzzy logic or probabilistic representations for condition alternatives. In a similar way, action entries can simply represent whether an action is to be performed (check the actions to perform), or in more advanced decision tables, the sequencing of actions to perform (number the actions to perform).
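As a hedged sketch of these ideas (the rule set, field names, first-match policy, and use of None as the don't-care entry are all illustrative assumptions), a decision table can be encoded directly as data:

```python
RULES = [
    # (customer_type, large_order, action); None = "don't care"
    ("member", True,  "apply 10% discount"),
    ("member", False, "apply 5% discount"),
    ("guest",  True,  "offer membership"),
    ("guest",  None,  "no discount"),
]

def decide(customer_type: str, large_order: bool) -> str:
    for ctype, large, action in RULES:
        if ctype == customer_type and large in (None, large_order):
            return action                  # first matching rule wins
    raise LookupError("no rule matched: the table is not balanced/complete")

print(decide("guest", False))              # no discount
```

The final raise makes the notion of a balanced (complete) table, discussed next, operational: a balanced table can never reach it.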
A decision table is considered balanced or complete if it includes every possible combination of input variables. In other words, balanced decision tables pres |
https://en.wikipedia.org/wiki/McGurk%20effect | The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. The illusion occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound. The visual information a person gets from seeing a person speak changes the way they hear the sound. If a person is getting poor-quality auditory information but good-quality visual information, they may be more likely to experience the McGurk effect. Integration abilities for audio and visual information may also influence whether a person will experience the effect. People who are better at sensory integration have been shown to be more susceptible to the effect. Many people are affected differently by the McGurk effect based on many factors, including brain damage and other disorders.
Background
It was first described in 1976 in a paper by Harry McGurk and John MacDonald, titled "Hearing Lips and Seeing Voices" in Nature (23 December 1976). This effect was discovered by accident when McGurk and his research assistant, MacDonald, asked a technician to dub a video with a different phoneme from the one spoken while conducting a study on how infants perceive language at different developmental stages. When the video was played back, both researchers heard a third phoneme rather than the one spoken or mouthed in the video.
This effect may be experienced when a video of one phoneme's production is dubbed with a sound-recording of a different phoneme being spoken. Often, the perceived phoneme is a third, intermediate phoneme. As an example, the syllables /ba-ba/ are spoken over the lip movements of /ga-ga/, and the perception is of /da-da/. McGurk and MacDonald originally believed that this resulted from the common phonetic and visual properties of /b/ and /g/. Two types of illusion in response to incongruent audiovisual stimuli have been observed: fusions ('ba' auditory and 'ga' visual |
https://en.wikipedia.org/wiki/Derived%20functor | In mathematics, certain functors may be derived to obtain other functors closely related to the original ones. This operation, while fairly abstract, unifies a number of constructions throughout mathematics.
Motivation
It was noted in various quite different settings that a short exact sequence often gives rise to a "long exact sequence". The concept of derived functors explains and clarifies many of these observations.
Suppose we are given a covariant left exact functor F : A → B between two abelian categories A and B. If 0 → A → B → C → 0 is a short exact sequence in A, then applying F yields the exact sequence 0 → F(A) → F(B) → F(C) and one could ask how to continue this sequence to the right to form a long exact sequence. Strictly speaking, this question is ill-posed, since there are always numerous different ways to continue a given exact sequence to the right. But it turns out that (if A is "nice" enough) there is one canonical way of doing so, given by the right derived functors of F. For every i ≥ 1, there is a functor R^iF: A → B, and the above sequence continues like so: 0 → F(A) → F(B) → F(C) → R^1F(A) → R^1F(B) → R^1F(C) → R^2F(A) → R^2F(B) → ⋯. From this we see that F is an exact functor if and only if R^1F = 0; so in a sense the right derived functors of F measure "how far" F is from being exact.
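Typeset, the continued sequence reads (this is just the sequence above in LaTeX, nothing new):

```latex
\[
0 \to F(A) \to F(B) \to F(C) \to R^1F(A) \to R^1F(B) \to R^1F(C)
  \to R^2F(A) \to R^2F(B) \to \cdots
\]
```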
If the object A in the above short exact sequence is injective, then the sequence splits. Applying any additive functor to a split sequence results in a split sequence, so in particular R^1F(A) = 0. Right derived functors (for i > 0) are zero on injectives: this is the motivation for the construction given below.
Construction and first properties
The crucial assumption we need to make about our abelian category A is that it has enough injectives, meaning that for every object A in A there exists a monomorphism A → I where I is an injective object in A.
The right derived functors of the covariant left-exact functor F : A → B are then defined as follows. Start w |
https://en.wikipedia.org/wiki/Bioremediation | Bioremediation broadly refers to any process wherein a biological system (typically bacteria, microalgae, fungi in mycoremediation, and plants in phytoremediation), living or dead, is employed for removing environmental pollutants from air, water, soil, flue gasses, industrial effluents etc., in natural or artificial settings. The natural ability of organisms to adsorb, accumulate, and degrade common and emerging pollutants has attracted the use of biological resources in treatment of contaminated environment. In comparison to conventional physicochemical treatment methods bioremediation may offer considerable advantages as it aims to be sustainable, eco-friendly, cheap, and scalable.
Most bioremediation is inadvertent, involving native organisms. Research on bioremediation is heavily focused on stimulating the process by inoculation of a polluted site with organisms or supplying nutrients to promote the growth. In principle, bioremediation could be used to reduce the impact of byproducts created from anthropogenic activities, such as industrialization and agricultural processes. Bioremediation could prove less expensive and more sustainable than other remediation alternatives.
UNICEF, power producers, bulk water suppliers and local governments are early adopters of low cost bioremediation, such as aerobic bacteria tablets which are simply dropped into water.
While organic pollutants are susceptible to biodegradation, heavy metals are not degraded, but rather oxidized or reduced. Typical bioremediation involves oxidation. Oxidation enhances the water-solubility of organic compounds and their susceptibility to further degradation by further oxidation and hydrolysis. Ultimately, biodegradation converts hydrocarbons to carbon dioxide and water. For heavy metals, bioremediation offers few solutions. Metal-containing pollutants can be removed or reduced with varying bioremediation techniques. The main challenge to bioremediation is rate: the processes are slow.
B |
https://en.wikipedia.org/wiki/Robert%20Fano | Roberto Mario "Robert" Fano (11 November 1917 – 13 July 2016) was an Italian-American computer scientist and professor of electrical engineering and computer science at the Massachusetts Institute of Technology. He became a student and working lab partner of Claude Shannon, whom he greatly admired and assisted in the early years of information theory.
Early life and education
Fano was born in Turin, Italy in 1917 to a Jewish family and grew up in Turin. Fano's father was the mathematician Gino Fano, his older brother was the physicist Ugo Fano, and Giulio Racah was a cousin. Fano studied engineering as an undergraduate at the School of Engineering of Torino (Politecnico di Torino) until 1939, when he emigrated to the United States as a result of anti-Jewish legislation passed under Benito Mussolini. He received his S.B. in electrical engineering from MIT in 1941, and upon graduation joined the staff of the MIT Radiation Laboratory. After World War II, Fano continued on to complete his Sc.D. in electrical engineering from MIT in 1947. His thesis, titled "Theoretical Limitations on the Broadband Matching of Arbitrary Impedances", was supervised by Ernst Guillemin.
Career
Fano's career spanned three areas: microwave systems, information theory, and computer science.
Fano joined the MIT faculty in 1947 to what was then called the Department of Electrical Engineering. Between 1950 and 1953, he led the Radar Techniques Group at Lincoln Laboratory. In 1954, Fano was made an IEEE Fellow for "contributions in the field of information theory and microwave filters". He was elected to the American Academy of Arts and Sciences in 1958, to the National Academy of Engineering in 1973, and to the National Academy of Sciences in 1978.
Fano was known principally for his work on information theory. He developed Shannon–Fano coding in collaboration with Claude Shannon, and derived the Fano inequality. He also invented the Fano algorithm and postulated the Fano metric.
In the |
https://en.wikipedia.org/wiki/Catastrophe%20theory | In mathematics, catastrophe theory is a branch of bifurcation theory in the study of dynamical systems; it is also a particular special case of more general singularity theory in geometry.
Bifurcation theory studies and classifies phenomena characterized by sudden shifts in behavior arising from small changes in circumstances, analysing how the qualitative nature of equation solutions depends on the parameters that appear in the equation. This may lead to sudden and dramatic changes, for example the unpredictable timing and magnitude of a landslide.
Catastrophe theory originated with the work of the French mathematician René Thom in the 1960s, and became very popular due to the efforts of Christopher Zeeman in the 1970s. It considers the special case where the long-run stable equilibrium can be identified as the minimum of a smooth, well-defined potential function (Lyapunov function).
Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, or to change from attracting to repelling and vice versa, leading to large and sudden changes of the behaviour of the system. However, examined in a larger parameter space, catastrophe theory reveals that such bifurcation points tend to occur as part of well-defined qualitative geometrical structures.
In the late 1970s, applications of catastrophe theory to areas outside its scope began to be criticized, especially in biology and social sciences. Zahler and Sussmann, in a 1977 article in Nature, referred to such applications as being "characterised by incorrect reasoning, far-fetched assumptions, erroneous consequences, and exaggerated claims". As a result, catastrophe theory has become less popular in applications.
Elementary catastrophes
Catastrophe theory analyzes degenerate critical points of the potential function — points where not just the first derivative, but one or more higher derivatives of the potential function are also zero. These are called the germs of the catas |
https://en.wikipedia.org/wiki/Compound%20interest | Compound interest is the addition of interest to the principal sum of a loan or deposit, or in other words, interest on principal plus interest. It is the result of reinvesting interest, or adding it to the loaned capital rather than paying it out, or requiring payment from borrower, so that interest in the next period is then earned on the principal sum plus previously accumulated interest. Compound interest is standard in finance and economics.
Compound interest is contrasted with simple interest, where previously accumulated interest is not added to the principal amount of the current period, so there is no compounding. The simple annual interest rate is the interest amount per period, multiplied by the number of periods per year. The simple annual interest rate is also known as the nominal interest rate (not to be confused with the interest rate not adjusted for inflation, which goes by the same name).
Compounding frequency
The compounding frequency is the number of times per year (or rarely, another unit of time) the accumulated interest is paid out, or capitalized (credited to the account), on a regular basis. The frequency could be yearly, half-yearly, quarterly, monthly, weekly, daily, or continuously (or not at all, until maturity).
For example, monthly capitalization with interest expressed as an annual rate means that the compounding frequency is 12, with time periods measured in months.
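A short sketch of this computation (the function and its parameter names are mine, not the article's):

```python
def compound_amount(principal: float, nominal_rate: float,
                    periods_per_year: int, years: float) -> float:
    """Future value with periodic compounding: P * (1 + r/n)**(n*t)."""
    return principal * (1 + nominal_rate / periods_per_year) ** (periods_per_year * years)

# 6% nominal, compounded monthly (frequency 12), for 1 year on 1000:
print(round(compound_amount(1000, 0.06, 12, 1), 2))    # 1061.68
# The annual equivalent rate is (1 + 0.06/12)**12 - 1, about 6.17%.
```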
The effect of compounding depends on:
The nominal interest rate which is applied and
The frequency interest is compounded.
Annual equivalent rate
The nominal rate cannot be directly compared between loans with different compounding frequencies. Both the nominal interest rate and the compounding frequency are required in order to compare interest-bearing financial instruments.
To help consumers compare retail financial products more fairly and easily, many countries require financial institutions to disclose the annual compound interest rate on deposits or advan |
https://en.wikipedia.org/wiki/Cyclic%20order | In mathematics, a cyclic order is a way to arrange a set of objects in a circle. Unlike most structures in order theory, a cyclic order is not modeled as a binary relation, such as "a < b". One does not say that east is "more clockwise" than west. Instead, a cyclic order is defined as a ternary relation [a, b, c], meaning "after a, one reaches b before c". For example, [June, October, February], but not [June, February, October]. A ternary relation is called a cyclic order if it is cyclic, asymmetric, transitive, and connected. Dropping the "connected" requirement results in a partial cyclic order.
A set with a cyclic order is called a cyclically ordered set or simply a cycle. Some familiar cycles are discrete, having only a finite number of elements: there are seven days of the week, four cardinal directions, twelve notes in the chromatic scale, and three plays in rock-paper-scissors. In a finite cycle, each element has a "next element" and a "previous element". There are also cyclic orders with infinitely many elements, such as the oriented unit circle in the plane.
Cyclic orders are closely related to the more familiar linear orders, which arrange objects in a line. Any linear order can be bent into a circle, and any cyclic order can be cut at a point, resulting in a line. These operations, along with the related constructions of intervals and covering maps, mean that questions about cyclic orders can often be transformed into questions about linear orders. Cycles have more symmetries than linear orders, and they often naturally occur as residues of linear structures, as in the finite cyclic groups or the real projective line.
Finite cycles
A cyclic order on a set X with n elements is like an arrangement of X on a clock face, for an n-hour clock. Each element x in X has a "next element" and a "previous element", and taking either successors or predecessors cycles exactly once through all n elements.
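A small sketch of the ternary relation, encoding months as integers 1 to 12 (the encoding is an illustrative assumption):

```python
def cyclic(a: int, b: int, c: int, n: int = 12) -> bool:
    """[a, b, c]: starting from a and moving forward, b appears before c."""
    return 0 < (b - a) % n < (c - a) % n

JUNE, OCTOBER, FEBRUARY = 6, 10, 2
print(cyclic(JUNE, OCTOBER, FEBRUARY))   # True:  [June, October, February]
print(cyclic(JUNE, FEBRUARY, OCTOBER))   # False: [June, February, October]
```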
There are a few equivalent ways to state this definition. A cyclic |
https://en.wikipedia.org/wiki/Magic%20pushbutton | The magic pushbutton is a common anti-pattern in graphical user interfaces.
At its core, the anti-pattern consists of a system partitioned into two parts, user interface and business logic, that are coupled through a single point: clicking the "magic pushbutton" or submitting a form of data. Because it is a single-point interface, it becomes over-complicated to implement. The temporal coupling of these units is a major problem: every interaction in the user interface must happen before the pushbutton is pressed, and business logic can be applied only after the button is pressed. Cohesion of each unit also tends to be poor: features are bundled together whether they warrant this or not, simply because there is no other structured place in which to put them.
Drawbacks
For users
To a user, a magic pushbutton system appears clumsy and frustrating to use. Business logic is unavailable before the button press, so the user interface appears as a bland form-filling exercise. There is no opportunity for assistance in filling out fields, or for offering drop-down lists of acceptable values. In particular, it is impossible to provide assistance with later fields, based on entries already placed in earlier fields. For instance, a choice from a very large list of insurance claim codes might be filtered to a much smaller list, if the user has already selected Home/Car/Pet insurance, or if they have already entered their own identification and so the system can determine the set of risks for which they're actually covered, omitting the obscure policies that are now known to be irrelevant for this transaction.
One of the most off-putting aspects of a magic pushbutton is its tendency for the user interaction to proceed by entering a large volume of data, then having it rejected for some unexpected reason. This is particularly poor design when it is combined with the infamous "Redo from scratch" messages of older systems. Even where a form is returned with the entered data p |
https://en.wikipedia.org/wiki/Urn%20problem | In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another,
or some other properties. A number of important variations are described below.
An urn model is either a set of probabilities that describe events within an urn problem, or it is a probability distribution, or a family of such distributions, of random variables associated with urn problems.
History
In Ars Conjectandi (1713), Jacob Bernoulli considered the problem of determining, given a number of pebbles drawn from an urn, the proportions of different colored pebbles within the urn. This problem was known as the inverse probability problem, and was a topic of research in the eighteenth century, attracting the attention of Abraham de Moivre and Thomas Bayes.
Bernoulli used the Latin word urna, which primarily means a clay vessel, but is also the term used in ancient Rome for a vessel of any kind for collecting ballots or lots; the present-day Italian word for ballot box is still urna. Bernoulli's inspiration may have been lotteries, elections, or games of chance which involved drawing balls from a container, and it has been asserted that elections in medieval and renaissance Venice, including that of the doge, often included the choice of electors by lot, using balls of different colors drawn from an urn.
Basic urn model
In this basic urn model in probability theory, the urn contains x white and y black balls, well-mixed together. One ball is drawn randomly from the urn and its color observed; it is then placed back in the urn (or not), and the selection process is repeated.
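A hedged sketch of the two sampling schemes (function names are assumptions): replacing the ball leads to the binomial distribution, not replacing it to the hypergeometric distribution.

```python
from math import comb

def p_with_replacement(x: int, y: int, n: int, k: int) -> float:
    """Binomial: each draw is independent with success probability x/(x+y)."""
    p = x / (x + y)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def p_without_replacement(x: int, y: int, n: int, k: int) -> float:
    """Hypergeometric: each draw depletes the urn."""
    return comb(x, k) * comb(y, n - k) / comb(x + y, n)

# 5 white, 5 black; probability of exactly 2 white balls in 4 draws:
print(p_with_replacement(5, 5, 4, 2))      # 0.375
print(p_without_replacement(5, 5, 4, 2))   # ~0.476
```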
Possible questions that can be answered in this model are:
Can I infer the proportion of white and black balls from n observations? With what deg |
https://en.wikipedia.org/wiki/Birch%20and%20Swinnerton-Dyer%20conjecture | In mathematics, the Birch and Swinnerton-Dyer conjecture (often called the Birch–Swinnerton-Dyer conjecture) describes the set of rational solutions to equations defining an elliptic curve. It is an open problem in the field of number theory and is widely recognized as one of the most challenging mathematical problems. It is named after mathematicians Bryan John Birch and Peter Swinnerton-Dyer, who developed the conjecture during the first half of the 1960s with the help of machine computation. To date, only special cases of the conjecture have been proven.
The modern formulation of the conjecture relates arithmetic data associated with an elliptic curve E over a number field K to the behaviour of the Hasse–Weil L-function L(E, s) of E at s = 1. More specifically, it is conjectured that the rank of the abelian group E(K) of points of E is the order of the zero of L(E, s) at s = 1, and the first non-zero coefficient in the Taylor expansion of L(E, s) at s = 1 is given by more refined arithmetic data attached to E over K .
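In standard notation (not quoted from this article), the rank assertion can be written as:

```latex
\[
\operatorname{rank} E(K) \;=\; \operatorname{ord}_{s=1} L(E, s)
\]
```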
The conjecture was chosen as one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute, which has offered a $1,000,000 prize for the first correct proof.
Background
Louis Mordell proved Mordell's theorem in 1922: the group of rational points on an elliptic curve has a finite basis. This means that for any elliptic curve there is a finite subset of the rational points on the curve, from which all further rational points may be generated.
If the number of rational points on a curve is infinite then some point in a finite basis must have infinite order. The number of independent basis points with infinite order is called the rank of the curve, and is an important invariant property of an elliptic curve.
If the rank of an elliptic curve is 0, then the curve has only a finite number of rational points. On the other hand, if the rank of the curve is greater than 0, then the curve has an infinite number of rational points.
Although Mordell's theorem s |
https://en.wikipedia.org/wiki/Meissel%E2%80%93Mertens%20constant | The Meissel–Mertens constant (named after Ernst Meissel and Franz Mertens), also referred to as Mertens constant, Kronecker's constant, Hadamard–de la Vallée-Poussin constant or the prime reciprocal constant, is a mathematical constant in number theory, defined as the limiting difference between the harmonic series summed only over the primes and the natural logarithm of the natural logarithm:
M = lim_{n→∞} ( Σ_{p ≤ n} 1/p − ln ln n ) = γ + Σ_p ( ln(1 − 1/p) + 1/p )
Here γ is the Euler–Mascheroni constant, which has an analogous definition involving a sum over all integers (not just the primes).
The value of M is approximately
M ≈ 0.2614972128476427837554268386086958590516... .
Mertens' second theorem establishes that the limit exists.
The fact that there are two logarithms (log of a log) in the limit for the Meissel–Mertens constant may be thought of as a consequence of the combination of the prime number theorem and the limit of the Euler–Mascheroni constant.
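A rough numerical sketch (the sieve helper and the cutoffs are illustrative choices; convergence is slow, so even large N gives only a few correct digits):

```python
from math import log

def primes_up_to(n: int) -> list[int]:
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

for N in (10 ** 3, 10 ** 5, 10 ** 7):
    estimate = sum(1 / p for p in primes_up_to(N)) - log(log(N))
    print(N, round(estimate, 6))   # drifts slowly toward M ≈ 0.261497
```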
In popular culture
The Meissel–Mertens constant was used by Google when bidding in the Nortel patent auction. Google posted three bids based on mathematical numbers: $1,902,160,540 (Brun's constant), $2,614,972,128 (Meissel–Mertens constant), and $3.14159 billion (π).
See also
Divergence of the sum of the reciprocals of the primes
Prime zeta function
References
External links
On the remainder in a series of Mertens (postscript file)
Mathematical constants |
https://en.wikipedia.org/wiki/Enigmail | Enigmail is a data encryption and decryption extension for Mozilla Thunderbird and Postbox that provides OpenPGP public key e-mail encryption and signing. Enigmail works under Microsoft Windows, Unix-like, and Mac OS X operating systems. Enigmail can operate with other mail clients compatible with PGP/MIME and inline PGP, such as Microsoft Outlook with the Gpg4win package installed, Gnome Evolution, KMail, Claws Mail, Gnus, and Mutt. Its cryptographic functionality is handled by GNU Privacy Guard.
In their default configuration, Thunderbird and SeaMonkey provide e-mail encryption and signing using S/MIME, which relies on X.509 keys provided by a centralised certificate authority. Enigmail adds an alternative mechanism where cooperating users can instead use keys provided by a web of trust, which relies on multiple users to endorse the authenticity of the sender's and recipient's credentials. In principle this enhances security, since it does not rely on a centralised entity which might be compromised by security failures or engage in malpractice due to commercial interests or pressure from the jurisdiction in which it resides.
Enigmail was first released in 2001 by Ramalingam Saravanan, and since 2003 maintained by Patrick Brunschwig. Both Enigmail and GNU Privacy Guard are free, open-source software. Enigmail with Thunderbird is now the most popular PGP setup.
Enigmail announced its support for the new "pretty Easy privacy" (p≡p) encryption scheme in a joint Thunderbird extension to be released in December 2015. As of June 2016, the FAQ noted that it would be available in Q3 2016.
Enigmail also supports Autocrypt exchange of cryptographic keys since version 2.0.
In October 2019, the developers of Thunderbird announced built-in support for encryption and signing based on OpenPGP Thunderbird 78 to replace the Enigmail add-on. The background is a change in the code base of Thunderbird, removing support for legacy add-ons. Since this would require a rewrite from scratch |
https://en.wikipedia.org/wiki/Sun%20Enterprise | Sun Enterprise is a range of UNIX server computers produced by Sun Microsystems from 1996 to 2001. The line was launched as the Sun Ultra Enterprise series; the Ultra prefix was dropped around 1998. These systems are based on the 64-bit UltraSPARC microprocessor architecture and related to the contemporary Ultra series of computer workstations. Like the Ultra series, they run Solaris. Various models, from single-processor entry-level servers to large high-end multiprocessor servers were produced. The Enterprise brand was phased out in favor of the Sun Fire model line from 2001 onwards.
Ultra workstation-derived servers
The first UltraSPARC-I-based servers produced by Sun, launched in 1995, are the UltraServer 1 and UltraServer 2. These are server configurations of the Ultra 1 and Ultra 2 workstations respectively. These were later renamed Ultra Enterprise 1 and Ultra Enterprise 2 for consistency with other server models. Later these were joined by the Ultra Enterprise 150, which comprises an Ultra 1 motherboard in a tower-style enclosure with 12 internal disk bays.
In 1998, Sun launched server configurations of the UltraSPARC-IIi-based Ultra 5 and Ultra 10 workstations, called the Enterprise Ultra 5S and Enterprise Ultra 10S respectively.
Entry-level servers
The Sun Enterprise 450 is a rack-mountable entry-level multiprocessor server launched in 1997, capable of up to four UltraSPARC II processors. The Sun Enterprise 250 is a two-processor version launched in 1998. These were later joined by the Enterprise 220R and Enterprise 420R rack-mount servers in 1999. The 220R and 420R models are respectively based on the motherboards of the Ultra 60 and Ultra 80 workstations. The 250 was replaced by the Sun Fire V250, the 450 by the Sun Fire V880. The 220R was superseded by the Sun Fire 280R and the 420R by the Sun Fire V480.
Ultra Enterprise XX00 mid-range servers
In 1996, Sun replaced the SPARCserver 1000E and SPARCcenter 2000E models with the Ultra Enterprise 3000 |
https://en.wikipedia.org/wiki/Legendre%27s%20constant | Legendre's constant is a mathematical constant occurring in a formula constructed by Adrien-Marie Legendre to approximate the behavior of the prime-counting function π(x). The value that corresponds precisely to its asymptotic behavior is now known to be 1.
Examination of available numerical data for known values of π(x) led Legendre to an approximating formula.
Legendre constructed in 1808 the formula
π(x) = x / (ln(x) − B(x)),
where B(x) ≈ 1.08366, as giving an approximation of π(x) with a "very satisfying precision".
Today, one defines the value of Legendre's constant B such that
π(x) = x / (ln(x) − B),
which is solved by putting
B = lim_{n→∞} ( ln(n) − n/π(n) ),
provided that this limit exists.
Not only is it now known that the limit exists, but also that its value is equal to 1, somewhat less than Legendre's 1.08366. Regardless of its exact value, the existence of the limit implies the prime number theorem.
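A hedged numerical sketch of the limit (the sieve helper and cutoffs are my choices). Notably, for computationally accessible n the quantity still hovers near Legendre's 1.08366, which makes his original guess understandable; the approach to 1 is extremely slow:

```python
from math import log

def prime_count(n: int) -> int:
    """pi(n) via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(sieve)

for n in (10 ** 4, 10 ** 6, 10 ** 7):
    print(n, round(log(n) - n / prime_count(n), 5))   # still near 1.08 here
```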
Pafnuty Chebyshev proved in 1849 that if the limit B exists, it must be equal to 1. An easier proof was given by Pintz in 1980.
It is an immediate consequence of the prime number theorem, under the precise form with an explicit estimate of the error term
π(x) = Li(x) + O( x e^{−a √(ln x)} )
(for some positive constant a, where O(…) is the big O notation), as proved in 1899 by Charles de La Vallée Poussin, that B indeed is equal to 1. (The prime number theorem had been proved in 1896, independently by Jacques Hadamard and La Vallée Poussin, but without any estimate of the involved error term).
Being evaluated to such a simple number has made the term Legendre's constant mostly only of historical value, with it often (technically incorrectly) being used to refer to Legendre's first guess 1.08366... instead.
References
External links
Conjectures about prime numbers
Mathematical constants
1 (number)
Integers
Analytic number theory |
https://en.wikipedia.org/wiki/Khinchin%27s%20constant | In number theory, Aleksandr Yakovlevich Khinchin proved that for almost all real numbers x, coefficients ai of the continued fraction expansion of x have a finite geometric mean that is independent of the value of x and is known as Khinchin's constant.
That is, for a real number x with continued fraction expansion
x = a_0 + 1/(a_1 + 1/(a_2 + 1/(a_3 + ⋯)))
it is almost always true that
lim_{n→∞} (a_1 a_2 ⋯ a_n)^{1/n} = K_0,
where K_0 is Khinchin's constant
K_0 = ∏_{r=1}^{∞} (1 + 1/(r(r+2)))^{log_2 r} ≈ 2.6854520010…
(with ∏ denoting the product over all sequence terms).
Although almost all numbers satisfy this property, it has not been proven for any real number not specifically constructed for the purpose. Among the numbers whose continued fraction expansions apparently do have this property (based on numerical evidence) are π, the Euler-Mascheroni constant γ, Apéry's constant ζ(3), and Khinchin's constant itself. However, this is unproven.
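A rough sketch for π using ordinary floating point, which only yields about a dozen reliable continued-fraction terms, so this illustrates the statement rather than testing convergence (helper names are mine):

```python
import math

def cf_terms(x: float, n: int) -> list[int]:
    """First n continued-fraction coefficients a1, a2, ... of x (a0 dropped)."""
    terms = []
    x -= math.floor(x)            # discard a0, which plays no role in the theorem
    for _ in range(n):
        x = 1 / x
        a = math.floor(x)
        terms.append(a)
        x -= a
    return terms

terms = cf_terms(math.pi, 12)     # starts 7, 15, 1, 292, 1, 1, ...
gmean = math.prod(terms) ** (1 / len(terms))
print(gmean)                      # ~3.4 here; the drift toward K0 ≈ 2.6854 is slow
```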
Among the numbers x whose continued fraction expansions are known not to have this property are rational numbers, roots of quadratic equations (including the golden ratio Φ and the square roots of integers), and the base of the natural logarithm e.
Khinchin is sometimes spelled Khintchine (the French transliteration of Russian Хинчин) in older mathematical literature.
Sketch of proof
The proof presented here was arranged by Czesław Ryll-Nardzewski and is much simpler than Khinchin's original proof which did not use ergodic theory.
Since the first coefficient a0 of the continued fraction of x plays no role in Khinchin's theorem and since the rational numbers have Lebesgue measure zero, we are reduced to the study of irrational numbers in the unit interval, i.e., those in I = (0, 1) ∖ ℚ. These numbers are in bijection with infinite continued fractions of the form [0; a1, a2, ...], which we simply write [a1, a2, ...], where a1, a2, ... are positive integers. Define a transformation T: I → I by
T(x) = 1/x − ⌊1/x⌋.
The transformation T is called the Gauss–Kuzmin–Wirsing operator. For every Borel subset E of I, we also define the Gauss–Kuzmin measure of E,
μ(E) = (1/ln 2) ∫_E dx/(1 + x).
Then μ is a probability measure on the σ-algebra of Borel subsets of I. The measure μ is equ |
https://en.wikipedia.org/wiki/Plant%20nutrition | Plant nutrition is the study of the chemical elements and compounds necessary for plant growth and reproduction, plant metabolism and their external supply. An element is essential if, in its absence, the plant is unable to complete a normal life cycle, or if the element is part of some essential plant constituent or metabolite. This is in accordance with Justus von Liebig's law of the minimum. The total essential plant nutrients include seventeen different elements: carbon, oxygen and hydrogen which are absorbed from the air, whereas other nutrients including nitrogen are typically obtained from the soil (exceptions include some parasitic or carnivorous plants).
Plants must obtain the following mineral nutrients from their growing medium:
the macronutrients: nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), sulfur (S), magnesium (Mg)
the micronutrients (or trace minerals): iron (Fe), boron (B), chlorine (Cl), manganese (Mn), zinc (Zn), copper (Cu), molybdenum (Mo), nickel (Ni)
These elements are present in the soil as salts, so plants absorb them as ions. The macronutrients are taken up in larger quantities; hydrogen, oxygen, nitrogen and carbon contribute to over 95% of a plant's entire biomass on a dry matter weight basis. Micronutrients are present in plant tissue in quantities measured in parts per million, ranging from 0.1 to 200 ppm, or less than 0.02% dry weight.
Most soil conditions across the world can provide plants adapted to that climate and soil with sufficient nutrition for a complete life cycle, without the addition of nutrients as fertilizer. However, if the soil is cropped it is necessary to artificially modify soil fertility through the addition of fertilizer to promote vigorous growth and increase or sustain yield. This is done because, even with adequate water and light, nutrient deficiency can limit growth and crop yield.
History
Carbon, hydrogen and oxygen are the basic nutrients plants receive from air and water. Justus von Liebig proved in 1840 tha |
https://en.wikipedia.org/wiki/Automatic%20meter%20reading | Automatic meter reading (AMR) is the technology of automatically collecting consumption, diagnostic, and status data from water meter or energy metering devices (gas, electric) and transferring that data to a central database for billing, troubleshooting, and analyzing.
This technology mainly saves utility providers the expense of periodic trips to each physical location to read a meter. Another advantage is that billing can be based on near real-time consumption rather than on estimates based on past or predicted consumption. This timely information coupled with analysis can help both utility providers and customers better control the use and production of electric energy, gas usage, or water consumption.
AMR technologies include handheld, mobile and network technologies based on telephony platforms (wired and wireless), radio frequency (RF), or powerline transmission.
Technologies
Touch technology
With touch-based AMR, a meter reader carries a handheld computer or data collection device with a wand or probe. The device automatically collects the readings from a meter by touching or placing the read probe in close proximity to a reading coil enclosed in the touchpad. When a button is pressed, the probe sends an interrogate signal to the touch module to collect the meter reading. The software in the device matches the serial number to one in the route database, and saves the meter reading for later download to a billing or data collection computer. Since the meter reader still has to go to the site of the meter, this is sometimes referred to as "on-site" AMR. Another form of contact reader uses a standardized infrared port to transmit data. Protocols are standardized between manufacturers by such documents as ANSI C12.18 or IEC 61107.
AMR hosting
AMR hosting is a back-office solution which allows a user to track their electricity, water, or gas consumption over the Internet. All data is collected in near real-time, and is stored in a database by data acquisitio |
https://en.wikipedia.org/wiki/Common%20cause%20and%20special%20cause%20%28statistics%29 | Common and special causes are the two distinct origins of variation in a process, as defined in the statistical thinking and methods of Walter A. Shewhart and W. Edwards Deming. Briefly, "common causes", also called natural patterns, are the usual, historical, quantifiable variation in a system, while "special causes" are unusual, not previously observed, non-quantifiable variation.
The distinction is fundamental in philosophy of statistics and philosophy of probability, with different treatment of these issues being a classic issue of probability interpretations, being recognised and discussed as early as 1703 by Gottfried Leibniz; various alternative names have been used over the years. The distinction has been particularly important in the thinking of economists Frank Knight, John Maynard Keynes and G. L. S. Shackle.
Origins and concepts
In 1703, Jacob Bernoulli wrote to Gottfried Leibniz to discuss their shared interest in applying mathematics and probability to games of chance. Bernoulli speculated whether it would be possible to gather mortality data from gravestones and thereby calculate, by their existing practice, the probability of a man currently aged 20 years outliving a man aged 60 years. Leibniz replied that he doubted this was possible:
This captures the central idea that some variation is predictable, at least approximately in frequency. This common-cause variation is evident from the experience base. However, new, unanticipated, emergent or previously neglected phenomena (e.g. "new diseases") result in variation outside the historical experience base. Shewhart and Deming argued that such special-cause variation is fundamentally unpredictable in frequency of occurrence or in severity.
John Maynard Keynes emphasised the importance of special-cause variation when he wrote:
Definitions
Common-cause variations
Common-cause variation is characterised by:
Phenomena constantly active within the system;
Variation predictable probabilistically;
Irregula |
https://en.wikipedia.org/wiki/Tashkent%20Tower | The Tashkent Television Tower is a tower located in Tashkent, Uzbekistan, and is the twelfth tallest tower in the world. Construction started in 1978 and it began operation six years later, on 15 January 1985. It was the fourth tallest tower in the world from 1985 to 1991. The decision to construct the Tashkent Tower, or TV-Tower of Uzbekistan, was made on 1 September 1971, in order to spread TV and radio signals all over Uzbekistan. It is of a vertical cantilever structure, and is constructed out of steel. Its architectural design is a product of the Terxiev, Tsarucov & Semashko firm.
The tower has an observation deck located above the ground. It is the second tallest structure in Central Asia, after the Ekibastuz GRES-2 Power Station in Ekibastuz, Kazakhstan. It also belongs to the World Federation of Great Towers.
Use
The main purposes of the tower are radio and TV transmission. The signal reaches the farthest points of Tashkent Province and some of the southern regions of Kazakhstan. The tower is also used for communication between governmental departments and organizations. The tower also serves as a complex hydrometeorological station.
See also
List of tallest structures in Uzbekistan
List of tallest structures in Central Asia
List of the world's tallest structures
List of tallest towers in the world
List of tallest freestanding structures in the world
List of tallest freestanding steel structures
Lattice tower
References
External links
Online Journal About Tower
Towers built in the Soviet Union
Towers in Uzbekistan
Buildings and structures in Tashkent
Tourist attractions in Tashkent
Towers completed in 1985
Towers with revolving restaurants
Radio masts and towers
Observation towers
Lattice towers |
https://en.wikipedia.org/wiki/Daniel%20Quillen | Daniel Gray "Dan" Quillen (June 22, 1940 – April 30, 2011) was an American mathematician. He is known for being the "prime architect" of higher algebraic K-theory, for which he was awarded the Cole Prize in 1975 and the Fields Medal in 1978.
From 1984 to 2006, he was the Waynflete Professor of Pure Mathematics at Magdalen College, Oxford.
Education and career
Quillen was born in Orange, New Jersey, and attended Newark Academy. He entered Harvard University, where he earned both his AB, in 1961, and his PhD in 1964; the latter completed under the supervision of Raoul Bott, with a thesis in partial differential equations. He was a Putnam Fellow in 1959.
Quillen obtained a position at the Massachusetts Institute of Technology after completing his doctorate. He also spent a number of years at several other universities. He visited France twice: first as a Sloan Fellow in Paris, during the academic year 1968–69, where he was greatly influenced by Grothendieck, and then, during 1973–74, as a Guggenheim Fellow. In 1969–70, he was a visiting member of the Institute for Advanced Study in Princeton, where he came under the influence of Michael Atiyah.
In 1978, Quillen received a Fields Medal at the International Congress of Mathematicians held in Helsinki.
From 1984 to 2006, he was the Waynflete Professor of Pure Mathematics at Magdalen College, Oxford.
Quillen retired at the end of 2006. He died from complications of Alzheimer's disease on April 30, 2011, aged 70, in Florida.
Mathematical contributions
Quillen's best known contribution (mentioned specifically in his Fields medal citation) was his formulation of higher algebraic K-theory in 1972. This new tool, formulated in terms of homotopy theory, proved to be successful in formulating and solving problems in algebra, particularly in ring theory and module theory. More generally, Quillen developed tools (especially his theory of model categories) that allowed algebro-topological tools to be applied in other context |
https://en.wikipedia.org/wiki/United%20States%20Board%20on%20Geographic%20Names | The United States Board on Geographic Names (BGN) is a federal body operating under the United States Secretary of the Interior. The purpose of the board is to establish and maintain uniform usage of geographic names throughout the federal government of the United States.
History
On January 8, 1890, Thomas Corwin Mendenhall, superintendent of the US Coast and Geodetic Survey Office, wrote to 10 noted geographers "to suggest the organization of a Board made up of representatives from the different Government services interested, to which may be referred any disputed question of geographical orthography." President Benjamin Harrison signed executive order 28 on September 4, 1890, establishing the Board on Geographical Names. "To this Board shall be referred all unsettled questions concerning geographic names. The decisions of the Board are to be accepted [by federal departments] as the standard authority for such matters." The board was given authority to resolve all unsettled questions concerning geographic names. Decisions of the board were accepted as binding by all departments and agencies of the federal government.
The board has since undergone several name changes. In 1934, it was transferred to the Department of the Interior.
The Advisory Committee on Antarctic Names was established in 1943 as the Special Committee on Antarctic Names (SCAN). In 1963, the Advisory Committee on Undersea Features was started for standardization of names of undersea features.
Its present form derives from a 1947 law, Public Law 80-242.
Operation
The 1969 BGN publication Decisions on Geographic Names in the United States stated the agency's chief purpose as:
The board has developed principles, policies, and procedures governing the use of domestic and foreign geographic names, including underseas. The BGN also deals with names of geographical features in Antarctica via its Advisory Committee on Antarctic Names.
The Geographic Names Information System, developed by the BGN in |
https://en.wikipedia.org/wiki/Daniel%20Glazman | Daniel Glazman is a JavaScript programmer, best known for his work on Mozilla's Editor and Composer components and Nvu, a standalone version of the Mozilla Composer, created for Linspire Corporation. He lives in France.
Glazman studied at École Polytechnique, graduating in 1989, and Télécom ParisTech, graduating in 1991. He started work at Grif SA, a software company specialising in SGML editors. He worked at the Research & Development Centre of Electricité de France from 1994 to 2000, holding several positions from research engineer to team manager. During 2000 he also worked at Amazon.fr and Halogen before joining Netscape Communications Corporation to work on Mozilla's CSS rendering, Editor and Composer.
Glazman was involved in the standardization of HTML 4 and CSS 2 and remains active in W3C's CSS Working Group. He was appointed co-chairman of the CSS Working Group in April 2008.
He runs his own company, Disruptive Innovations, which produces the latest iteration of his work on web editors, BlueGriffon.
From September 2013 to June 2014, he worked at the Samsung Open Source Group.
References
External links
Daniel Glazman's home page
Disruptive Innovations
BlueGriffon
Mozilla developers
Free software programmers
Open source people
Netscape people
Computer programmers
Web developers
World Wide Web Consortium
Living people
École Polytechnique alumni
Year of birth missing (living people) |
https://en.wikipedia.org/wiki/Register%20renaming | In computer architecture, register renaming is a technique that abstracts logical registers from physical registers.
Every logical register has a set of physical registers associated with it.
When a machine language instruction refers to a particular logical register, the processor transposes this name to one specific physical register on the fly.
The physical registers are opaque and cannot be referenced directly but only via the canonical names.
This technique is used to eliminate false data dependencies arising from the reuse of registers by successive instructions that do not have any real data dependencies between them.
The elimination of these false data dependencies reveals more instruction-level parallelism in an instruction stream, which can be exploited by various and complementary techniques such as superscalar and out-of-order execution for better performance.
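A minimal sketch of the idea (the three-operand tuple format and physical-register naming are illustrative assumptions): each destination gets a fresh physical register, and sources are read through the current mapping, which removes write-after-write and write-after-read hazards.

```python
def rename(instructions):
    """instructions: list of (dest, src1, src2) architectural-register tuples."""
    mapping = {}                              # architectural -> physical
    next_phys = 0
    renamed = []
    for dest, src1, src2 in instructions:
        s1 = mapping.get(src1, src1)          # read through the current mapping
        s2 = mapping.get(src2, src2)
        mapping[dest] = f"p{next_phys}"       # fresh register for every write
        next_phys += 1
        renamed.append((mapping[dest], s1, s2))
    return renamed

# r1 is reused below without any real data flow between its two uses:
prog = [("r1", "r2", "r3"), ("r4", "r1", "r5"), ("r1", "r6", "r7")]
print(rename(prog))
# [('p0', 'r2', 'r3'), ('p1', 'p0', 'r5'), ('p2', 'r6', 'r7')]
# The third instruction is now independent of the first two.
```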
Problem approach
In a register machine, programs are composed of instructions which operate on values.
The instructions must name these values in order to distinguish them from one another.
A typical instruction might say: "add X and Y and put the result in Z".
In this instruction, X, Y, and Z are the names of storage locations.
In order to have a compact instruction encoding, most processor instruction sets have a small set of special locations which can be referred to by special names: registers.
For example, the x86 instruction set architecture has 8 integer registers, x86-64 has 16, many RISCs have 32, and IA-64 has 128.
In smaller processors, the names of these locations correspond directly to elements of a register file.
Different instructions may take different amounts of time; for example, a processor may be able to execute hundreds of instructions while a single load from the main memory is in progress.
Shorter instructions executed while the load is outstanding will finish first; thus, the instructions finish out of the original program order.
Out-of-order execution has been used in |
https://en.wikipedia.org/wiki/Hematocrit | The hematocrit (Ht or HCT), also known by several other names, is the volume percentage (vol%) of red blood cells (RBCs) in blood, measured as part of a blood test. The measurement depends on the number and size of red blood cells. It is normally 40.7–50.3% for males and 36.1–44.3% for females. It is a part of a person's complete blood count results, along with hemoglobin concentration, white blood cell count and platelet count.
Because the purpose of red blood cells is to transfer oxygen from the lungs to body tissues, a blood sample's hematocrit (the red blood cell volume percentage) can become a point of reference for its capability of delivering oxygen. Hematocrit levels that are too high or too low can indicate a blood disorder, dehydration, or other medical conditions. An abnormally low hematocrit may suggest anemia, a decrease in the total amount of red blood cells, while an abnormally high hematocrit is called polycythemia. Both are potentially life-threatening disorders.
Names
There are other names for the hematocrit, such as packed cell volume (PCV), volume of packed red cells (VPRC), or erythrocyte volume fraction (EVF). The term hematocrit (or haematocrit in British English) comes from the Ancient Greek words haima ("blood") and kritēs ("judge"), and hematocrit means "to separate blood". It was coined in 1891 by Swedish physiologist Magnus Blix as haematokrit, modeled after lactokrit.
Measurement methods
With modern lab equipment, the hematocrit can be calculated by an automated analyzer or directly measured, depending on the analyzer manufacturer. Calculated hematocrit is determined by multiplying the red cell count by the mean cell volume. The hematocrit is slightly more accurate, as the PCV includes small amounts of blood plasma trapped between the red cells. An estimated hematocrit as a percentage may be derived by tripling the hemoglobin concentration in g/dL and dropping the units.
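The rule of thumb at the end of the paragraph as a trivial sketch (names are mine):

```python
def estimated_hematocrit(hemoglobin_g_dl: float) -> float:
    """Estimate hematocrit (%) as three times hemoglobin in g/dL, units dropped."""
    return 3.0 * hemoglobin_g_dl

print(estimated_hematocrit(14.0))   # 42.0, within the typical adult range
```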
The packed cell volume (PCV) can be determined by centrifuging EDTA-t |
https://en.wikipedia.org/wiki/Variable-frequency%20oscillator | A variable frequency oscillator (VFO) in electronics is an oscillator whose frequency can be tuned (i.e., varied) over some range. It is a necessary component in any tunable radio transmitter and in receivers that work by the superheterodyne principle. The oscillator controls the frequency to which the apparatus is tuned.
Purpose
In a simple superheterodyne receiver, the incoming radio frequency signal (at frequency ) from the antenna is mixed with the VFO output signal tuned to , producing an intermediate frequency (IF) signal that can be processed downstream to extract the modulated information. Depending on the receiver design, the IF signal frequency is chosen to be either the sum of the two frequencies at the mixer inputs (up-conversion), or more commonly, the difference frequency (down-conversion), .
In addition to the desired IF signal and its unwanted image (the mixing product of opposite sign above), the mixer output will also contain the two original frequencies, and and various harmonic combinations of the input signals. These undesired signals are rejected by the IF filter. If a double balanced mixer is employed, the input signals appearing at the mixer outputs are greatly attenuated, reducing the required complexity of the IF filter.
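A tiny numeric sketch of the mixing products (the example frequencies are illustrative; the 10.7 MHz difference happens to be a common broadcast-FM intermediate frequency):

```python
f_rf, f_lo = 100.0e6, 89.3e6             # incoming signal and VFO, in Hz
products = {"sum": f_rf + f_lo, "difference": abs(f_rf - f_lo)}
print(products)                          # sum 189.3 MHz, difference 10.7 MHz
```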
The advantage of using a VFO as a heterodyning oscillator is that only a small portion of the radio receiver (the sections before the mixer such as the preamplifier) need to have a wide bandwidth. The rest of the receiver can be finely tuned to the IF frequency.
In a direct-conversion receiver, the VFO is tuned to the same frequency as the incoming radio frequency, so the intermediate frequency is 0 Hz. Demodulation takes place at baseband using low-pass filters and amplifiers.
In a radio frequency (RF) transmitter, VFOs are often used to tune the frequency of the output signal, often indirectly through a heterodyning process similar to that described above. Other uses include chirp generators for radar systems where the VFO is swept rapidly thr |
https://en.wikipedia.org/wiki/Lectin | Lectins are carbohydrate-binding proteins that are highly specific for sugar groups that are part of other molecules, so cause agglutination of particular cells or precipitation of glycoconjugates and polysaccharides. Lectins have a role in recognition at the cellular and molecular level and play numerous roles in biological recognition phenomena involving cells, carbohydrates, and proteins. Lectins also mediate attachment and binding of bacteria, viruses, and fungi to their intended targets.
Lectins are found in many foods. Some foods, such as beans and grains, need to be cooked, fermented or sprouted to reduce lectin content. Some lectins are beneficial, such as CLEC11A, which promotes bone growth, while others may be powerful toxins such as ricin.
Lectins may be disabled by specific mono- and oligosaccharides, which bind to ingested lectins from grains, legumes, nightshade plants, and dairy; binding can prevent their attachment to the carbohydrates within the cell membrane. The selectivity of lectins means that they are useful for analyzing blood type, and they have been researched for potential use in genetically engineered crops to transfer pest resistance.
Etymology
William C. Boyd, at first alone and then together with Elizabeth Shapleigh, introduced the term "lectin" in 1954, from the Latin word lectus, "chosen" (from the verb legere, to choose or pick out).
Biological functions
Lectins may bind to a soluble carbohydrate or to a carbohydrate moiety that is a part of a glycoprotein or glycolipid. They typically agglutinate certain animal cells and/or precipitate glycoconjugates. Most lectins do not possess enzymatic activity.
Animals
Lectins have these functions in animals:
The regulation of cell adhesion
The regulation of glycoprotein synthesis
The regulation of blood protein levels
The binding of soluble extracellular and intercellular glycoproteins
As a receptor on the surface of mammalian liver cells for the recognition of galactose residues, which resul |
https://en.wikipedia.org/wiki/Dot-decimal%20notation | Dot-decimal notation is a presentation format for numerical data. It consists of a string of decimal numbers, using the full stop (dot) as a separation character.
A common use of dot-decimal notation is in information technology, where it is a method of writing numbers in octet-grouped base-10 (decimal) form. In computer networking, Internet Protocol Version 4 (IPv4) addresses are commonly written using the quad-dotted notation of four decimal integers, ranging from 0 to 255 each.
IPv4 address
In computer networking, the notation is associated with the specific use of quad-dotted notation to represent IPv4 addresses and used as a synonym for dotted-quad notation. Dot-decimal notation is a presentation format for numerical data expressed as a string of decimal numbers each separated by a full stop. For example, the hexadecimal number 0xFF000000 may be expressed in dot-decimal notation as 255.0.0.0.
An IPv4 address has 32 bits. For purposes of representation, the bits may be divided into four octets written in decimal numbers, ranging from 0 to 255, concatenated as a character string with full stop delimiters between each number. This octet-grouped dotted-decimal format may more specifically be called "dotted octet" format, or a "dotted quad address".
For example, the address of the loopback interface, usually assigned the host name localhost, is 127.0.0.1. It consists of the four octets, written in binary notation: 01111111, 00000000, 00000000, and 00000001. The 32-bit number is represented in hexadecimal notation as 0x7F000001.
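A hedged sketch of the conversion in both directions (function names are assumptions); Python's standard ipaddress module provides equivalent conversions:

```python
def to_dotted_quad(value: int) -> str:
    """32-bit integer -> quad-dotted string, e.g. 0x7F000001 -> '127.0.0.1'."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def from_dotted_quad(address: str) -> int:
    octets = [int(part) for part in address.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    result = 0
    for o in octets:
        result = (result << 8) | o
    return result

print(to_dotted_quad(0x7F000001))          # 127.0.0.1
print(hex(from_dotted_quad("255.0.0.0")))  # 0xff000000
```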
No formal specification of this textual IP address representation exists. The first mention of this format in RFC documents was in RFC 780 for the Mail Transfer Protocol published May 1981, in which the IP address was supposed to be enclosed in brackets or represented as a 32-bit decimal integer prefixed by a pound sign. A table in RFC 790 (Assigned Numbers) used the dotted decimal format, zero-padding each number to three digits. RFC |
https://en.wikipedia.org/wiki/Andrew%20File%20System | The Andrew File System (AFS) is a distributed file system which uses a set of trusted servers to present a homogeneous, location-transparent file name space to all the client workstations. It was developed by Carnegie Mellon University as part of the Andrew Project. Originally named "Vice", "Andrew" refers to Andrew Carnegie and Andrew Mellon. Its primary use is in distributed computing.
Features
AFS has several benefits over traditional networked file systems, particularly in the areas of security and scalability. One enterprise AFS deployment at Morgan Stanley exceeds 25,000 clients. AFS uses Kerberos for authentication, and implements access control lists on directories for users and groups. Each client caches files on the local filesystem for increased speed on subsequent requests for the same file. This also allows limited filesystem access in the event of a server crash or a network outage.
AFS uses the Weak Consistency model. Read and write operations on an open file are directed only to the locally cached copy. When a modified file is closed, the changed portions are copied back to the file server. Cache consistency is maintained by a callback mechanism. When a file is cached, the server makes a note of this and promises to inform the client if the file is updated by someone else. Callbacks are discarded and must be re-established after any client, server, or network failure, including a timeout. Re-establishing a callback involves a status check and does not require re-reading the file itself.
A consequence of the file locking strategy is that AFS does not support large shared databases or record updating within files shared between client systems. This was a deliberate design decision based on the perceived needs of the university computing environment. For example, in the original email system for the Andrew Project, the Andrew Message System, a single file per message is used, like maildir, rather than a single file per mailbox, like mbox. See A |
https://en.wikipedia.org/wiki/Boundary%20value%20problem | In the study of differential equations, a boundary-value problem is a differential equation subjected to constraints called boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions.
Boundary value problems arise in several branches of physics as any physical differential equation will have them. Problems involving the wave equation, such as the determination of normal modes, are often stated as boundary value problems. A large class of important boundary value problems are the Sturm–Liouville problems. The analysis of these problems, in the linear case, involves the eigenfunctions of a differential operator.
To be useful in applications, a boundary value problem should be well posed. This means that given the input to the problem there exists a unique solution, which depends continuously on the input. Much theoretical work in the field of partial differential equations is devoted to proving that boundary value problems arising from scientific and engineering applications are in fact well-posed.
Among the earliest boundary value problems to be studied is the Dirichlet problem of finding the harmonic functions (solutions to Laplace's equation); the solution was given by Dirichlet's principle.
Explanation
Boundary value problems are similar to initial value problems. A boundary value problem has conditions specified at the extremes ("boundaries") of the independent variable in the equation whereas an initial value problem has all of the conditions specified at the same value of the independent variable (and that value is at the lower boundary of the domain, thus the term "initial" value). A boundary value is a data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component.
For example, if the independent variable is time over the domain [0,1], a boundary value problem would specify values for y(t) at both t = 0 and t = 1, whereas an initial value problem would specify a value of y(t) and y′(t) at time t = 0.
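To make the two-point structure concrete, below is a small sketch using SciPy's solve_bvp routine; the particular equation (y″ = −y with y(0) = 0, y(1) = 1) is an illustrative choice, not taken from the article:

```python
# Solving a two-point boundary value problem numerically.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(x, y):
    # First-order system: y[0]' = y[1], y[1]' = -y[0]
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    # Conditions at *both* ends of the interval -- the defining feature of a BVP.
    return np.array([ya[0] - 0.0, yb[0] - 1.0])

x = np.linspace(0, 1, 11)
y_guess = np.zeros((2, x.size))
sol = solve_bvp(rhs, bc, x, y_guess)
print(sol.sol(0.5)[0])  # y(0.5); exact solution is sin(x)/sin(1), so ~0.5697
```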
https://en.wikipedia.org/wiki/SPNEGO | Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO), often pronounced "spenay-go", is a GSSAPI "pseudo mechanism" used by client-server software to negotiate the choice of security technology. SPNEGO is used when a client application wants to authenticate to a remote server, but neither end is sure what authentication protocols the other supports. The pseudo-mechanism uses a protocol to determine what common GSSAPI mechanisms are available, selects one and then dispatches all further security operations to it. This can help organizations deploy new security mechanisms in a phased manner.
SPNEGO's most visible use is in Microsoft's "HTTP Negotiate" authentication extension. It was first implemented in Internet Explorer 5.01 and IIS 5.0 and provided single sign-on capability later marketed as Integrated Windows Authentication. The negotiable sub-mechanisms included NTLM and Kerberos, both used in Active Directory. The HTTP Negotiate extension was later implemented with similar support in:
Mozilla 1.7 beta
Mozilla Firefox 0.9
Konqueror 3.3.1
Google Chrome 6.0.472
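As a rough sketch of the HTTP Negotiate flow described above, the Python fragment below shows the 401 challenge and the retried request. The host name, path, and token are hypothetical; a real client would obtain the token from a GSSAPI/SSPI (e.g. Kerberos) library rather than the placeholder used here:

```python
# Sketch of the HTTP "Negotiate" handshake (hypothetical server and token).
import base64
import http.client

conn = http.client.HTTPConnection("intranet.example.com")

# 1. First request carries no credentials.
conn.request("GET", "/protected")
resp = conn.getresponse()
resp.read()  # drain the body so the connection can be reused
print(resp.status, resp.getheader("WWW-Authenticate"))  # e.g. 401 Negotiate

# 2. If the server offered "Negotiate", retry with a GSSAPI token.
if resp.status == 401 and "Negotiate" in (resp.getheader("WWW-Authenticate") or ""):
    gssapi_token = b"..."  # placeholder: produced by a Kerberos/SSPI library
    conn.request("GET", "/protected", headers={
        "Authorization": "Negotiate " + base64.b64encode(gssapi_token).decode()
    })
    print(conn.getresponse().status)
```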
History
19 February 1996 – Eric Baize and Denis Pinkas publish the Internet Draft Simple GSS-API Negotiation Mechanism (draft-ietf-cat-snego-01.txt).
17 October 1996 – The mechanism is assigned the object identifier 1.3.6.1.5.5.2 and is abbreviated snego.
25 March 1997 – Optimistic piggybacking of one mechanism's initial token is added. This saves a round trip.
22 April 1997 – The "preferred" mechanism concept is introduced. The draft standard's name is changed from just "Simple" to "Simple and Protected" (spnego).
16 May 1997 – Context flags are added (delegation, mutual auth, etc.). Defenses are provided against attacks on the new "preferred" mechanism.
22 July 1997 – More context flags are added (integrity and confidentiality).
18 November 1997 – The rules of selecting the common mechanism are relaxed. Mechanism preference is integrated into the mechanism list.
4 March 1998 – An optim |
https://en.wikipedia.org/wiki/Gerontology | Gerontology ( ) is the study of the social, cultural, psychological, cognitive, and biological aspects of aging. The word was coined by Ilya Ilyich Mechnikov in 1903, from the Greek (), meaning "old man", and (), meaning "study of". The field is distinguished from geriatrics, which is the branch of medicine that specializes in the treatment of existing disease in older adults. Gerontologists include researchers and practitioners in the fields of biology, nursing, medicine, criminology, dentistry, social work, physical and occupational therapy, psychology, psychiatry, sociology, economics, political science, architecture, geography, pharmacy, public health, housing, and anthropology.
The multidisciplinary nature of gerontology means that there are a number of sub-fields which overlap with gerontology. There are policy issues, for example, involved in government planning and the operation of nursing homes, investigating the effects of an aging population on society, and the design of residential spaces for older people that facilitate the development of a sense of place or home. Dr. Lawton, a behavioral psychologist at the Philadelphia Geriatric Center, was among the first to recognize the need for living spaces designed to accommodate the elderly, especially those with Alzheimer's disease. As an academic discipline the field is relatively new. The USC Leonard Davis School of Gerontology created the first PhD, master's and bachelor's degree programs in gerontology in 1975.
History
In the medieval Islamic world, several physicians wrote on issues related to Gerontology. Avicenna's The Canon of Medicine (1025) offered instruction for the care of the aged, including diet and remedies for problems including constipation. Arabic physician Ibn Al-Jazzar Al-Qayrawani (Algizar, c. 898–980) wrote on the aches and conditions of the elderly. His scholarly work covers sleep disorders, forgetfulness, how to strengthen memory, and causes of mortality. Ishaq ibn Hunayn (died 910 |
https://en.wikipedia.org/wiki/Short-time%20Fourier%20transform | The short-time Fourier transform (STFT) is a Fourier-related transform used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time. In practice, the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum on each shorter segment. One then usually plots the changing spectra as a function of time, known as a spectrogram or waterfall plot, as commonly used in software-defined radio (SDR) based spectrum displays. Full-bandwidth displays covering the whole range of an SDR commonly use fast Fourier transforms (FFTs) with 2^24 points on desktop computers.
Forward STFT
Continuous-time STFT
Simply, in the continuous-time case, the function to be transformed is multiplied by a window function which is nonzero for only a short period of time. The Fourier transform (a one-dimensional function) of the resulting signal is taken, then the window is slid along the time axis until the end, resulting in a two-dimensional representation of the signal. Mathematically, this is written as:
STFT{x(t)}(τ, ω) ≡ X(τ, ω) = ∫ x(t) w(t − τ) e^(−iωt) dt,  integrated over all t,
where w(τ) is the window function, commonly a Hann window or Gaussian window centered around zero, and x(t) is the signal to be transformed (note the difference between the window function w and the frequency ω). X(τ, ω) is essentially the Fourier transform of x(t) w(t − τ), a complex function representing the phase and magnitude of the signal over time and frequency. Often phase unwrapping is employed along either or both the time axis, τ, and frequency axis, ω, to suppress any jump discontinuity of the phase result of the STFT. The time index τ is normally considered to be "slow" time and usually not expressed in as high resolution as time t. Given that the STFT is essentially a Fourier transform times a window function, the STFT is also called windowed Fourier transform or time-dependent Fourier transform.
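In practice the transform is computed on discrete segments; below is a minimal sketch using SciPy's stft function, where the chirp test signal and window length are arbitrary choices:

```python
# Computing an STFT with SciPy.
import numpy as np
from scipy.signal import stft

fs = 8000                                      # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * (100 + 400 * t) * t)    # chirp sweeping upward from ~100 Hz

# 256-sample Hann-windowed segments -> frequency bins f, segment times tau, values Zxx
f, tau, Zxx = stft(x, fs=fs, window="hann", nperseg=256)
print(Zxx.shape)   # one row per frequency bin, one column per time segment
```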
Disc |
https://en.wikipedia.org/wiki/Usenet%20Death%20Penalty | On Usenet, the Usenet Death Penalty (UDP) is a final penalty that may be issued against Internet service providers or single users who produce too much spam or fail to adhere to Usenet standards. It is named after the death penalty (the state-sanctioned killing of a person as punishment for a crime), as it causes the banned user or provider to be unable to use Usenet, essentially "killing" their service. Messages that fall under the jurisdiction of a Usenet Death Penalty will be cancelled. Cancelled messages are deleted from Usenet servers and not allowed to propagate. This causes users on the affected ISP to be unable to post to Usenet, and it puts pressure on the ISP to change their policies. Notable cases include actions taken against UUNET, CompuServe, and Excite@Home.
Types
There are three types of Usenet Death Penalty:
Active: with an active UDP, messages that fall under the UDP will be automatically cancelled by third parties or their agents, such as by using cancelbots.
Passive: with a passive UDP, messages that fall under the UDP will simply be ignored and will not spread.
Partial: a partial UDP applies only to a certain subset of newsgroups, not the entire Usenet newsgroup hierarchy.
To be effective, the UDP must be supported by a large number of servers, or the majority of the major transit servers. Otherwise, the articles will propagate throughout the smaller, slower peerings.
UDPs are not casual acts. They are announced beforehand, only after the owner of the offending server has been contacted and given several chances to correct the perceived problem. Since the effects on the users of a server under a UDP can be significant if those users want to post, the impact of a UDP can induce the operators of an offending server to address problems quickly.
UDPs have been issued against America Online, BBN Planet, CompuServe, Erols Internet, Netcom, TIAC and UUNET.
History
The first UDP software was written by Karl Kleinpaste in 1990, though the |
https://en.wikipedia.org/wiki/Positional%20notation | Positional notation (or place-value notation, or positional numeral system) usually denotes the extension to any base of the Hindu–Arabic numeral system (or decimal system). More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the value of the digit multiplied by a factor determined by the position of the digit. In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred (however, the value may be negated if placed before another digit). In modern positional systems, such as the decimal system, the position of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different positions in the digit string.
The Babylonian numeral system, base 60, was the first positional system to be developed, and its influence is present today in the way time and angles are counted in tallies related to 60, such as 60 minutes in an hour and 360 degrees in a circle. Today, the Hindu–Arabic numeral system (base ten) is the most commonly used system globally. However, the binary numeral system (base two) is used in almost all computers and electronic devices because it is easier to implement efficiently in electronic circuits.
Systems with negative base, complex base or negative digits have been described. Most of them do not require a minus sign for designating negative numbers.
The use of a radix point (decimal point in base ten) extends the notation to include fractions and allows representing any real number with arbitrary accuracy. With positional notation, arithmetical computations are much simpler than with any older numeral system; this led to the rapid spread of the notation when it was introduced in western Europe.
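The digit-weighting principle translates directly into a short Python sketch; the function names below are invented for illustration:

```python
# Positional notation: digits weighted by powers of the base.
def to_digits(n, base):
    """Return the digits of n in the given base, most significant first."""
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return digits[::-1] or [0]

def from_digits(digits, base):
    """Evaluate digits positionally: each step shifts by one place value."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(to_digits(555, 10))          # [5, 5, 5] -> five hundreds, five tens, five units
print(from_digits([5, 5, 5], 10))  # 555
print(to_digits(555, 2))           # the same value in binary
```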
History
Today, the base-10 (decimal) system, which is presumably motivated by counting with the ten fingers, is ubiquitous. |
https://en.wikipedia.org/wiki/Clonal%20anergy | In immunology, anergy is a lack of reaction by the body's defense mechanisms to foreign substances, and consists of a direct induction of peripheral lymphocyte tolerance. An individual in a state of anergy often indicates that the immune system is unable to mount a normal immune response against a specific antigen, usually a self-antigen. Lymphocytes are said to be anergic when they fail to respond to their specific antigen. Anergy is one of three processes that induce tolerance, modifying the immune system to prevent self-destruction (the others being clonal deletion and immunoregulation).
Mechanism
This phenomenon was first described in B lymphocytes by Gustav Nossal and termed "clonal anergy." The clones of B lymphocytes in this case can still be found alive in the circulation, but are ineffective at mounting immune responses. Later Ronald Schwartz and Marc Jenkins described a similar process operating in the T lymphocyte. Many viruses (HIV being the most extreme example) seem to exploit the immune system's use of tolerance induction to evade the immune system, though the suppression of specific antigens is done by fewer pathogens (notably Mycobacterium leprae).
At the cellular level, "anergy" is the inability of an immune cell to mount a complete response against its target. In the immune system, circulating cells called lymphocytes form a primary army that defends the body against pathogenic viruses, bacteria and parasites. There are two major kinds of lymphocytes - the T lymphocyte and the B lymphocyte. Among the millions of lymphocytes in the human body, only a few actually are specific for any particular infectious agent. At the time of infection, these few cells must be recruited and allowed to multiply rapidly. This process – called "clonal expansion" – allows the body to quickly mobilise an army of clones, as and when required. Such immune response is anticipatory and its specificity is assured by pre-existing clones of lymphocytes, which expand in resp |
https://en.wikipedia.org/wiki/John%20L.%20Hennessy | John Leroy Hennessy (born September 22, 1952) is an American computer scientist, academician and businessman who serves as chairman of Alphabet Inc. Hennessy is one of the founders of MIPS Computer Systems Inc. as well as Atheros and served as the tenth President of Stanford University. Hennessy announced that he would step down in the summer of 2016. He was succeeded as president by Marc Tessier-Lavigne. Marc Andreessen called him "the godfather of Silicon Valley."
Along with David Patterson, Hennessy was a recipient of the 2017 Turing Award for their work in developing the reduced instruction set computer (RISC) architecture, which is now used in 99% of new computer chips.
Early life and education
Hennessy was raised in Huntington, New York, as one of six children. His father was an aerospace engineer, and his mother was a teacher before raising her children. He is of Irish-Catholic descent, with some of his ancestors arriving in America during the potato famine in the 19th century.
He earned his bachelor's degree in electrical engineering from Villanova University, and his master's degree and Doctor of Philosophy in computer science from Stony Brook University.
Career and research
Hennessy became a Stanford faculty member in 1977. In 1981, he began the MIPS project to investigate RISC processors, and in 1984, he used his sabbatical year to found MIPS Computer Systems Inc. to commercialize the technology developed by his research. In 1987, he became the Willard and Inez Kerr Bell Endowed Professor of Electrical Engineering and Computer Science.
Hennessy served as director of Stanford's Computer System Laboratory (1989–93), a research center run by Stanford's Electrical Engineering and Computer Science departments. He was chair of the Department of Computer Science (1994–96) and Dean of the School of Engineering (1996–99).
In 1999, Stanford President Gerhard Casper appointed Hennessy to succeed Condoleezza Rice as Provost of Stanford University. When Casper |
https://en.wikipedia.org/wiki/Dependent%20and%20independent%20variables | Dependent and independent variables are variables in mathematical modeling, statistical modeling and experimental sciences. Dependent variables are studied under the supposition or demand that they depend, by some law or rule (e.g., by a mathematical function), on the values of other variables. Independent variables, in turn, are not seen as depending on any other variable in the scope of the experiment in question. In this sense, some common independent variables are time, space, density, mass, fluid flow rate, and previous values of some observed value of interest (e.g. human population size) to predict future values (the dependent variable).
Of the two, it is always the dependent variable whose variation is being studied by altering the inputs (also known as regressors in a statistical context). In an experiment, any variable that can be attributed a value without attributing a value to any other variable is called an independent variable. Models and experiments test the effects that the independent variables have on the dependent variables. Sometimes, even if their influence is not of direct interest, independent variables may be included for other reasons, such as to account for their potential confounding effect.
In pure mathematics
In mathematics, a function is a rule for taking an input (in the simplest case, a number or set of numbers) and providing an output (which may also be a number). A symbol that stands for an arbitrary input is called an independent variable, while a symbol that stands for an arbitrary output is called a dependent variable. The most common symbol for the input is x, and the most common symbol for the output is y; the function itself is commonly written y = f(x).
It is possible to have multiple independent variables or multiple dependent variables. For instance, in multivariable calculus, one often encounters functions of the form z = f(x, y), where z is a dependent variable and x and y are independent variables. Functions with multiple outputs are often ref
https://en.wikipedia.org/wiki/Memcached | Memcached (pronounced variously mem-cash-dee or mem-cashed) is a general-purpose distributed memory-caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, licensed under the Revised BSD license. Memcached runs on Unix-like operating systems (Linux and macOS) and on Microsoft Windows. It depends on the libevent library.
Memcached's APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.
Memcached has no internal mechanism to track the cache misses that occur. However, some third-party utilities provide this functionality.
Memcached was first developed by Brad Fitzpatrick for his website LiveJournal, on May 22, 2003. It was originally written in Perl, then later rewritten in C by Anatoly Vorobey, then employed by LiveJournal. Memcached is now used by many other systems, including YouTube, Reddit, Facebook, Pinterest, Twitter, Wikipedia, and Method Studios. Google App Engine, Google Cloud Platform, Microsoft Azure, IBM Bluemix and Amazon Web Services also offer a Memcached service through an API.
Software architecture
The system uses a client–server architecture. The servers maintain a key–value associative array; the clients populate this array and query it by key. Keys are up to 250 bytes long and values can be at most 1 megabyte in size.
Clients use client-side libraries to contact the servers which, by default, expose their service at port 11211. Both TCP and UDP are supported. Each client knows all servers; the servers do not communicate with each other. If a client wishes to set or read the value cor |
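A minimal sketch of this client/server interaction, using the third-party pymemcache library (one of several Python clients); the server address and key names are arbitrary:

```python
# Storing and fetching values from a memcached server.
from pymemcache.client.base import Client

client = Client(("localhost", 11211))         # default memcached TCP port

client.set("greeting", b"hello", expire=60)   # store value for 60 seconds
print(client.get("greeting"))                 # b'hello'
print(client.get("missing"))                  # None -> a miss; fall back to the database
```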
https://en.wikipedia.org/wiki/Audio%20editing%20software | Audio editing software is any software or computer program which allows editing and generating audio data. Audio editing software can be implemented completely or partly as a library, as a computer application, as a web application, or as a loadable kernel module. Wave editors are digital audio editors. There are many sources of software available to perform this function. Most can edit music, apply effects and filters, and adjust stereo channels.
A digital audio workstation (DAW) is software-based and typically comprises multiple software suite components, all accessible through a unified graphical user interface. DAWs are used for recording or producing music, sound effects and more.
Music software capabilities
Audio editing software typically offers the following features:
The ability to import and export various audio file formats for editing.
Record audio from one or more inputs and store recordings in the computer's memory as digital audio.
Edit the start time, stop time, and duration of any sound on the audio timeline.
Fade into or out of a clip (e.g. an S-fade out during applause after a performance), or between clips (e.g. crossfading between takes).
Mix multiple sound sources/tracks, combine them at various volume levels and pan from channel to channel to one or more output tracks
Apply simple or advanced effects or filters, including amplification, normalization, limiting, panning, compression, expansion, flanging, reverb, audio noise reduction, and equalization to change the audio.
Playback sound (often after being mixed) that can be sent to one or more outputs, such as speakers, additional processors, or a recording medium
Conversion between different audio file formats, or between different sound quality levels.
Typically these tasks can be performed in a manner that is non-linear. Audio editors may process the audio data non-destructively in real-time, or destructively as an "off-line" process, or a hybrid with some real-time effects and some offli |
https://en.wikipedia.org/wiki/Electric%20ray | The electric rays are a group of rays, flattened cartilaginous fish with enlarged pectoral fins, composing the order Torpediniformes. They are known for being capable of producing an electric discharge, ranging from 8 to 220 volts, depending on species, used to stun prey and for defense. There are 69 species in four families.
Perhaps the best known members are those of the genus Torpedo. The torpedo undersea weapon is named after it. The name comes from the Latin torpere, 'to be stiffened or paralyzed', from the effect on someone who touches the fish.
Description
Electric rays have a rounded pectoral disc with two moderately large rounded-angular (not pointed or hooked) dorsal fins (reduced in some Narcinidae), and a stout muscular tail with a well-developed caudal fin. The body is thick and flabby, with soft loose skin with no dermal denticles or thorns. A pair of kidney-shaped electric organs are at the base of the pectoral fins. The snout is broad, large in the Narcinidae, but reduced in all other families. The mouth, nostrils, and five pairs of gill slits are underneath the disc.
Electric rays are found from shallow coastal waters down to at least 1,000 m (3,300 ft) deep. They are sluggish and slow-moving, propelling themselves with their tails, not by using their pectoral fins as other rays do. They feed on invertebrates and small fish. They lie in wait for prey below the sand or other substrate, using their electricity to stun and capture it.
Relationship to humans
History of research
The electrogenic properties of electric rays have been known since antiquity, although their nature was not understood. The ancient Greeks used electric rays to numb the pain of childbirth and operations. In his dialogue Meno, Plato has the character Meno accuse Socrates of "stunning" people with his puzzling questions, in a manner similar to the way the torpedo fish stuns with electricity. Scribonius Largus, a Roman physician, recorded the use of torpedo fish for treatment of headaches and gout |
https://en.wikipedia.org/wiki/Midpoint | In geometry, the midpoint is the middle point of a line segment. It is equidistant from both endpoints, and it is the centroid both of the segment and of the endpoints. It bisects the segment.
Formula
The midpoint of a segment in n-dimensional space whose endpoints are A = (a1, a2, ..., an) and B = (b1, b2, ..., bn) is given by (A + B)/2.
That is, the ith coordinate of the midpoint (i = 1, 2, ..., n) is (ai + bi)/2.
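A direct transcription of the coordinate-wise formula, as a small Python sketch (the example points are arbitrary):

```python
# Coordinate-wise midpoint in any dimension.
def midpoint(a, b):
    return tuple((ai + bi) / 2 for ai, bi in zip(a, b))

print(midpoint((0, 0), (4, 2)))        # (2.0, 1.0)
print(midpoint((1, 2, 3), (7, 0, 5)))  # (4.0, 1.0, 4.0)
```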
Construction
Given two points of interest, finding the midpoint of the line segment they determine can be accomplished by a compass and straightedge construction. The midpoint of a line segment, embedded in a plane, can be located by first constructing a lens using circular arcs of equal (and large enough) radii centered at the two endpoints, then connecting the cusps of the lens (the two points where the arcs intersect). The point where the line connecting the cusps intersects the segment is then the midpoint of the segment. It is more challenging to locate the midpoint using only a compass, but it is still possible according to the Mohr-Mascheroni theorem.
Geometric properties involving midpoints
Circle
The midpoint of any diameter of a circle is the center of the circle.
Any line perpendicular to any chord of a circle and passing through its midpoint also passes through the circle's center.
The butterfly theorem states that, if M is the midpoint of a chord PQ of a circle, through which two other chords AB and CD are drawn, then AD and BC intersect chord PQ at X and Y respectively, such that M is the midpoint of XY.
Ellipse
The midpoint of any segment which is an area bisector or perimeter bisector of an ellipse is the ellipse's center.
The ellipse's center is also the midpoint of a segment connecting the two foci of the ellipse.
Hyperbola
The midpoint of a segment connecting a hyperbola's vertices is the center of the hyperbola.
Triangle
The perpendicular bisector of a side of a triangle is the line that is perpendicular to that side and passes through its midpoint. The three perpendicular bisectors of a triangle's th |
https://en.wikipedia.org/wiki/Exotic%20option | In finance, an exotic option is an option which has features making it more complex than commonly traded vanilla options. Like the more general exotic derivatives they may have several triggers relating to determination of payoff. An exotic option may also include a non-standard underlying instrument, developed for a particular client or for a particular market. Exotic options are more complex than options that trade on an exchange, and are generally traded over the counter.
Etymology
The term "exotic option" was popularized by Mark Rubinstein's 1990 working paper (published 1992, with Eric Reiner) "Exotic Options", with the term based either on exotic wagers in horse racing, or due to the use of international terms such as "Asian option", suggesting the "exotic Orient".
In an article about why we use the term exotic for certain types of financial instrument, journalist Brian Palmer used the "successful $1 bet on the superfecta" in the 2010 Kentucky Derby that "paid a whopping $101,284.60" as an example of the controversial high-risk, high-payout exotic bets that track-watchers had observed since the 1970s. Palmer compared these horse racing bets to the controversial emerging exotic financial instruments that concerned then-chairman of the Federal Reserve Paul Volcker in 1980. He argued that just as the exotic wagers survived the media controversy, so will the exotic options.
In 1987, Bankers Trust's Mark Standish and David Spaughton were in Tokyo on business when "they developed the first commercially used pricing formula for options linked to the average price of crude oil." They called this exotic option the Asian option, because they were in Asia.
Development
Exotic options are often created by financial engineers and rely on complex models to attempt to price them.
Features
A straight call or put option, either American or European, would be considered a non-exotic or vanilla option. There are two general types of exotic options: path-independent a |
https://en.wikipedia.org/wiki/Animatronics | The term animatronics refers to mechatronic puppets. They are a modern variant of the automaton and are often used for the portrayal of characters in films and in theme park attractions.
It is a multidisciplinary field integrating puppetry, anatomy and mechatronics. Animatronic figures can be implemented with both computer and human control, including teleoperation. Motion actuators are often used to imitate muscle movements and create realistic motions. Figures are usually encased in body shells and flexible skins made of hard or soft plastic materials and finished with colors, hair, feathers and other components to make them more lifelike. Animatronics stem from a long tradition of mechanical automata powered by hydraulics, pneumatics and clockwork. Greek mythology and ancient Chinese writings mention early examples of automata. The oldest extant automaton is dated to the 16th century.
Before the term "animatronics" became common, they were usually referred to as "robots". Since then, robots have become known as more practical programmable machines that do not necessarily resemble living creatures. Robots (or other artificial beings) designed to convincingly resemble humans are known as "androids". The term animatronics is a portmanteau of animate and electronics. The term Audio-Animatronics was coined by Walt Disney in 1961 when he started developing animatronics for entertainment and film. Audio-Animatronics does not differentiate between animatronics and androids.
Autonomatronics was also defined by Disney Imagineers to describe more advanced Audio-Animatronic technology featuring cameras and complex sensors to process and respond to information in the character's environment.
History
Timeline
2005: Engineered Arts produces the first version of their animatronic actor, RoboThespian
Modern attractions
The first animatronics characters shown to the public were a dog and a horse, as separate attractions at the 1939 New York World's Fai |
https://en.wikipedia.org/wiki/Logical%20volume%20management | In computer storage, logical volume management or LVM provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes to store volumes. In particular, a volume manager can concatenate, stripe together or otherwise combine partitions (or block devices in general) into larger virtual partitions that administrators can re-size or move, potentially without interrupting system use.
Volume management represents just one of many forms of storage virtualization; its implementation takes place in a layer in the device-driver stack of an operating system (OS) (as opposed to within storage devices or in a network).
Design
Most volume-manager implementations share the same basic design. They start with physical volumes (PVs), which can be either hard disks, hard disk partitions, or Logical Unit Numbers (LUNs) of an external storage device. Volume management treats each PV as being composed of a sequence of chunks called physical extents (PEs). Some volume managers (such as that in HP-UX and Linux) have PEs of a uniform size; others (such as that in Veritas) have variably-sized PEs that can be split and merged at will.
Normally, PEs simply map one-to-one to logical extents (LEs). With mirroring, multiple PEs map to each LE. These PEs are drawn from a physical volume group (PVG), a set of same-sized PVs which act similarly to hard disks in a RAID1 array. PVGs are usually laid out so that they reside on different disks or data buses for maximum redundancy.
The system pools LEs into a volume group (VG). The pooled LEs can then be concatenated together into virtual disk partitions called logical volumes or LVs. Systems can use LVs as raw block devices just like disk partitions: creating mountable file systems on them, or using them as swap storage.
Striped LVs allocate each successive LE from a different PV; depending on the size of the LE, this can improve performance on large sequential reads by bringing to bear t |
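The round-robin layout can be sketched in a few lines of Python; the PV names and extent counts below are invented for illustration:

```python
# Toy model of striping: successive logical extents (LEs) are allocated
# round-robin across physical volumes (PVs).
pvs = ["pv0", "pv1", "pv2"]   # three physical volumes
num_les = 8                   # logical extents to place

# le_map[le] = (PV name, physical extent index on that PV)
le_map = [(pvs[le % len(pvs)], le // len(pvs)) for le in range(num_les)]
for le, (pv, pe) in enumerate(le_map):
    print(f"LE {le} -> {pv}, PE {pe}")
# A large sequential read touching LE 0, 1, 2, ... now alternates between
# pv0, pv1 and pv2, so the underlying disks can service it in parallel.
```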
https://en.wikipedia.org/wiki/Branching%20factor | In computing, tree data structures, and game theory, the branching factor is the number of children at each node, the outdegree. If this value is not uniform, an average branching factor can be calculated.
For example, in chess, if a "node" is considered to be a legal position, the average branching factor has been said to be about 35, and a statistical analysis of over 2.5 million games revealed an average of 31. This means that, on average, a player has about 31 to 35 legal moves at their disposal at each turn. By comparison, the average branching factor for the game Go is 250.
Higher branching factors make algorithms that follow every branch at every node, such as exhaustive brute force searches, computationally more expensive due to the exponentially increasing number of nodes, leading to combinatorial explosion.
For example, if the branching factor is 10, then there will be 10 nodes one level down from the current position, 10^2 (or 100) nodes two levels down, 10^3 (or 1,000) nodes three levels down, and so on. The higher the branching factor, the faster this "explosion" occurs. The branching factor can be cut down by a pruning algorithm.
The average branching factor can be quickly calculated as the number of non-root nodes (the size of the tree, minus one; or the number of edges) divided by the number of non-leaf nodes (the number of nodes with children).
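As a sketch of this calculation, the following Python fragment computes the average branching factor of a small example tree (the tree itself is arbitrary):

```python
# Average branching factor: non-root nodes (edges) divided by internal nodes.
tree = {
    "root": ["a", "b", "c"],
    "a": ["d", "e"],
    "b": [],
    "c": ["f"],
    "d": [], "e": [], "f": [],
}

non_root_nodes = len(tree) - 1                              # equivalently, the edge count
internal_nodes = sum(1 for kids in tree.values() if kids)   # nodes with children
print(non_root_nodes / internal_nodes)                      # 6 / 3 = 2.0
```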
See also
k-ary tree
Outdegree
Hierarchy
Hierarchical organization
References
Trees (data structures)
Analysis of algorithms
Combinatorial game theory
Computer chess |
https://en.wikipedia.org/wiki/Path%20integral%20formulation | The path integral formulation is a description in quantum mechanics that generalizes the action principle of classical mechanics. It replaces the classical notion of a single, unique classical trajectory for a system with a sum, or functional integral, over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude.
This formulation has proven crucial to the subsequent development of theoretical physics, because manifest Lorentz covariance (time and space components of quantities enter equations in the same way) is easier to achieve than in the operator formalism of canonical quantization. Unlike previous methods, the path integral allows one to easily change coordinates between very different canonical descriptions of the same quantum system. Another advantage is that it is in practice easier to guess the correct form of the Lagrangian of a theory, which naturally enters the path integrals (for interactions of a certain type, these are coordinate space or Feynman path integrals), than the Hamiltonian. Possible downsides of the approach include that unitarity (this is related to conservation of probability; the probabilities of all physically possible outcomes must add up to one) of the S-matrix is obscure in the formulation. The path-integral approach has proven to be equivalent to the other formalisms of quantum mechanics and quantum field theory. Thus, by deriving either approach from the other, problems associated with one or the other approach (as exemplified by Lorentz covariance or unitarity) go away.
The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s, which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all poss |
https://en.wikipedia.org/wiki/Double-byte%20character%20set | A double-byte character set (DBCS) is a character encoding in which either all characters (including control characters) are encoded in two bytes, or merely every graphic character not representable by an accompanying single-byte character set (SBCS) is encoded in two bytes (Han characters would generally comprise most of these two-byte characters). A DBCS supports national languages that contain many unique characters or symbols (the maximum number of characters that can be represented with one byte is 256 characters, while two bytes can represent up to 65,536 characters). Examples of such languages include Japanese and Chinese. Korean Hangul does not contain as many characters, but KS X 1001 supports both Hangul and Hanja, and uses two bytes per character.
In CJK (Chinese/Japanese/Korean) computing
The term DBCS traditionally refers to a character encoding where each graphic character is encoded in two bytes.
In an 8-bit code, such as Big-5 or Shift JIS, a character from the DBCS is represented with a lead (first) byte with the most significant bit set (i.e., a value that does not fit in seven bits), paired with a single-byte character set (SBCS). For the practical reason of maintaining compatibility with unmodified, off-the-shelf software, the SBCS is associated with half-width characters and the DBCS with full-width characters. In a 7-bit code such as ISO-2022-JP, escape sequences or shift codes are used to switch between the SBCS and DBCS.
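A short Python sketch of this mixing of single- and double-byte characters in Shift JIS (the sample string is arbitrary):

```python
# How ASCII and double-byte characters coexist in Shift JIS.
text = "abc日本"
encoded = text.encode("shift_jis")
print(encoded.hex(" "))   # 61 62 63 93 fa 96 7b -- 'abc' stays one byte each
# The lead bytes of the double-byte characters (0x93, 0x96) have the high
# bit set, which tells a decoder that a second (trail) byte follows.
print([b >= 0x80 for b in encoded])  # [False, False, False, True, True, True, False]
```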
Sometimes, the use of the term "DBCS" can imply an underlying structure that does not comply with ISO 2022. For example, "DBCS" can sometimes mean a double-byte encoding that is specifically not Extended Unix Code (EUC).
This original meaning of DBCS is different from what some consider correct usage today. Some insist that these character encodings be properly called multi-byte character sets (MBCS) or variable-width encodings, because character encodings such as EUC-JP, EUC-KR, EUC-TW, GB 18030, and UTF-8 use more than two |
https://en.wikipedia.org/wiki/List%20of%20plants%20used%20in%20herbalism | This is an alphabetical list of plants used in herbalism.
Phytochemicals possibly involved in biological functions are the basis of herbalism, and may be grouped as:
primary metabolites, such as carbohydrates and fats found in all plants
secondary metabolites serving a more specific function.
For example, some secondary metabolites are toxins used to deter predation, and others are pheromones used to attract insects for pollination. Secondary metabolites and pigments may have therapeutic actions in humans, and can be refined to produce drugs; examples are quinine from the cinchona, morphine and codeine from the poppy, and digoxin from the foxglove.
In Europe, apothecaries stocked herbal ingredients as traditional medicines. In the Latin names for plants created by Linnaeus, the word officinalis indicates that a plant was used in this way. For example, the marsh mallow has the classification Althaea officinalis, as it was traditionally used as an emollient to soothe ulcers. Pharmacognosy is the study of plant sources of phytochemicals.
Some modern prescription drugs are based on plant extracts rather than whole plants. The phytochemicals may be synthesized, compounded or otherwise transformed to make pharmaceuticals. Examples of such derivatives include aspirin, which is chemically related to the salicylic acid found in white willow. The opium poppy is a major industrial source of opiates, including morphine. Few traditional remedies, however, have translated into modern drugs, although there is continuing research into the efficacy and possible adaptation of traditional herbal treatments.
See also
Chinese classic herbal formula
History of birth control
List of branches of alternative medicine
List of culinary herbs and spices
List of herbs with known adverse effects
Materia Medica
Medicinal mushrooms
Medicinal plants of the American West
Medicinal plants traditi |
https://en.wikipedia.org/wiki/Mammalogy | In zoology, mammalogy is the study of mammals – a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems. Mammalogy has also been known as "mastology," "theriology," and "therology." The catalogue of mammal species on earth is constantly growing, but currently stands at 6,495 different species, including those that are recently extinct. There are 5,416 living mammals identified on earth, and roughly 1,251 have been newly discovered since 2006. The major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. The approximate salary of a mammalogist varies from $20,000 to $60,000 a year, depending on their experience. Mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals.
Mammalogy branches off into other taxonomically-oriented disciplines such as primatology (study of primates), and cetology (study of cetaceans). Like other studies, mammalogy is also a part of zoology which is also a part of biology, the study of all living things.
Research purposes
Mammalogists have stated that there are multiple reasons for the study and observation of mammals. Knowing how mammals contribute to or thrive in their ecosystems offers insight into the ecology behind it. Mammals are often used in business industries and agriculture, and kept as pets. Studying mammals' habitats and sources of energy has aided in their survival. The domestication of some small mammals has also helped discover several different diseases, viruses, and cures.
Mammalogist
A mammalogist studies and observes mammals. In studying mammals, they can observe their habitats, contributions to the ecosystem, their interactions, and the anatomy and physiology. A mammalogist can do a broad variety of things within the realm of mammals. A mammalogist on average can make roughly $58,000 a year. This dep |
https://en.wikipedia.org/wiki/OpenSSL | OpenSSL is a software library for applications that provide secure communications over computer networks against eavesdropping and that identify the party at the other end. It is widely used by Internet servers, including the majority of HTTPS websites.
OpenSSL contains an open-source implementation of the SSL and TLS protocols. The core library, written in the C programming language, implements basic cryptographic functions and provides various utility functions. Wrappers allowing the use of the OpenSSL library in a variety of computer languages are available.
The OpenSSL Software Foundation (OSF) represents the OpenSSL project in most legal capacities including contributor license agreements, managing donations, and so on. OpenSSL Software Services (OSS) also represents the OpenSSL project for support contracts.
OpenSSL is available for most Unix-like operating systems (including Linux, macOS, and BSD), Microsoft Windows and OpenVMS.
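Python's standard ssl module is one such wrapper: on most builds it links against OpenSSL, which can be checked directly (the version string varies by system):

```python
# Inspecting the OpenSSL build behind Python's ssl module.
import ssl

print(ssl.OPENSSL_VERSION)          # e.g. 'OpenSSL 3.0.x ...' on a typical system
ctx = ssl.create_default_context()  # certificate verification + sane TLS defaults
print(ctx.minimum_version)          # lowest TLS version the context will negotiate
```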
Project history
The OpenSSL project was founded in 1998 to provide a free set of encryption tools for the code used on the Internet. It is based on a fork of SSLeay by Eric Andrew Young and Tim Hudson, which unofficially ended development on December 17, 1998, when Young and Hudson both went to work for RSA Security. The initial founding members were Mark Cox, Ralf Engelschall, Stephen Henson, Ben Laurie, and Paul Sutton.
The OpenSSL management committee consists of seven people, and there are seventeen developers with commit access (many of whom are also part of the OpenSSL management committee). There are only two full-time employees (fellows); the remainder are volunteers.
The project has a budget of less than $1 million USD per year and relies primarily on donations. Development of TLS 1.3 was sponsored by Akamai.
Major version releases
Algorithms
OpenSSL supports a number of different cryptographic algorithms:
Ciphers
AES, Blowfish, Camellia, Chacha20, Poly1305, SEED, CAST-128, DES, IDEA, RC2, RC4, RC5, Triple D |
https://en.wikipedia.org/wiki/Equation%20of%20time | The equation of time describes the discrepancy between two kinds of solar time. The word equation is used in the medieval sense of "reconciliation of a difference". The two times that differ are the apparent solar time, which directly tracks the diurnal motion of the Sun, and mean solar time, which tracks a theoretical mean Sun with uniform motion along the celestial equator. Apparent solar time can be obtained by measurement of the current position (hour angle) of the Sun, as indicated (with limited accuracy) by a sundial. Mean solar time, for the same place, would be the time indicated by a steady clock set so that over the year its differences from apparent solar time would have a mean of zero.
The equation of time is the east or west component of the analemma, a curve representing the angular offset of the Sun from its mean position on the celestial sphere as viewed from Earth. The equation of time values for each day of the year, compiled by astronomical observatories, were widely listed in almanacs and ephemerides.
The equation of time can be approximated by a sum of two sine waves (see explanation below):
EoT = −7.659 sin(D) + 9.863 sin(2D + 3.5932)  [minutes]
where D = 6.24004077 + 0.01720197 (365 (y − 2000) + d) is an angle in radians, y is the year, and d is defined below.
A less precise but more compact and simpler form is:
EoT = 9.87 sin(2B°) - 7.67 sin(B° + 78.7°)
where B = 360° (d - 81) / 365.
With arguments expressed in radians:
EoT = 9.87 sin(2B · π/180) − 7.67 sin((B + 78.7) · π/180)
d represents the number of days since January 1 of the current year.
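The compact form above translates directly into code; here is a small Python sketch, with day numbers chosen to fall near the extremes mentioned below:

```python
# Evaluating the compact approximation of the equation of time.
import math

def equation_of_time(d):
    """Approximate EoT in minutes for d = days since January 1 (0-based)."""
    b = math.radians(360 * (d - 81) / 365)   # B converted to radians
    return 9.87 * math.sin(2 * b) - 7.67 * math.sin(b + math.radians(78.7))

print(round(equation_of_time(306), 1))  # early November: sundial about 16 min fast
print(round(equation_of_time(41), 1))   # mid-February: about 14 min slow (negative)
```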
The concept
During a year the equation of time varies as shown on the graph; its change from one year to the next is slight. Apparent time, and the sundial, can be ahead (fast) by as much as 16 min 33 s (around 3 November), or behind (slow) by as much as 14 min 6 s (around 11 February). The equation of time has zeros near 15 April, 13 June, 1 September, and 25 December. Ignoring very s |
https://en.wikipedia.org/wiki/Moving%20frame | In mathematics, a moving frame is a flexible generalization of the notion of an ordered basis of a vector space often used to study the extrinsic differential geometry of smooth manifolds embedded in a homogeneous space.
Introduction
In lay terms, a frame of reference is a system of measuring rods used by an observer to measure the surrounding space by providing coordinates. A moving frame is then a frame of reference which moves with the observer along a trajectory (a curve). The method of the moving frame, in this simple example, seeks to produce a "preferred" moving frame out of the kinematic properties of the observer. In a geometrical setting, this problem was solved in the mid 19th century by Jean Frédéric Frenet and Joseph Alfred Serret. The Frenet–Serret frame is a moving frame defined on a curve which can be constructed purely from the velocity and acceleration of the curve.
The Frenet–Serret frame plays a key role in the differential geometry of curves, ultimately leading to a more or less complete classification of smooth curves in Euclidean space up to congruence. The Frenet–Serret formulas show that there is a pair of functions defined on the curve, the torsion and curvature, which are obtained by differentiating the frame, and which describe completely how the frame evolves in time along the curve. A key feature of the general method is that a preferred moving frame, provided it can be found, gives a complete kinematic description of the curve.
In the late 19th century, Gaston Darboux studied the problem of constructing a preferred moving frame on a surface in Euclidean space instead of a curve, the Darboux frame (or the trièdre mobile as it was then called). It turned out to be impossible in general to construct such a frame, and that there were integrability conditions which needed to be satisfied first.
Later, moving frames were developed extensively by Élie Cartan and others in the study of submanifolds of more general homogeneous spaces |
https://en.wikipedia.org/wiki/Windows%20Management%20Instrumentation | Windows Management Instrumentation (WMI) consists of a set of extensions to the Windows Driver Model that provides an operating system interface through which instrumented components provide information and notification. WMI is Microsoft's implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM) standards from the Distributed Management Task Force (DMTF).
WMI allows scripting languages (such as VBScript or Windows PowerShell) to manage Microsoft Windows personal computers and servers, both locally and remotely. WMI comes preinstalled in Windows 2000 through Windows 11. It is available as a download for Windows NT and for Windows 95 through Windows 98.
Microsoft also provides a command-line interface to WMI called Windows Management Instrumentation Command-line (WMIC). However, WMIC is deprecated starting with Windows 10, version 21H1, Windows 11 and Windows Server 2022.
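As a sketch of scripted WMI access, the following fragment uses the third-party Python wmi package (a thin wrapper over COM support; Windows-only, and the process name queried is an arbitrary example):

```python
# Querying CIM classes through WMI from Python.
import wmi

c = wmi.WMI()  # connect to the local machine's CIM repository

for os_info in c.Win32_OperatingSystem():          # standard Win32 CIM class
    print(os_info.Caption, os_info.Version)

for proc in c.Win32_Process(Name="notepad.exe"):   # filter by property value
    print(proc.ProcessId, proc.Name)
```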
Purpose of WMI
The purpose of WMI is to define a proprietary set of environment-independent specifications which allow management information to be shared between management applications. WMI prescribes enterprise management standards and related technologies for Windows that work with existing management standards, such as Desktop Management Interface (DMI) and SNMP. WMI complements these other standards by providing a uniform model. This model represents the managed environment through which management data from any source can be accessed in a common way.
Development process
Because WMI abstracts the manageable entities with CIM and a collection of providers, the development of a provider implies several steps. The major steps can be summarized as follows:
Create the manageable entity model
Define a model
Implement the model
Create the WMI provider
Determine the provider type to implement
Determine the hosting model of the provider
Create the provider template with the ATL wizard
Implement the code logic in the provider
Register the provider with WMI |
https://en.wikipedia.org/wiki/Commitment%20scheme | A commitment scheme is a cryptographic primitive that allows one to commit to a chosen value (or chosen statement) while keeping it hidden to others, with the ability to reveal the committed value later. Commitment schemes are designed so that a party cannot change the value or statement after they have committed to it: that is, commitment schemes are binding. Commitment schemes have important applications in a number of cryptographic protocols including secure coin flipping, zero-knowledge proofs, and secure computation.
A way to visualize a commitment scheme is to think of a sender as putting a message in a locked box, and giving the box to a receiver. The message in the box is hidden from the receiver, who cannot open the lock themselves. Since the receiver has the box, the message inside cannot be changed—merely revealed if the sender chooses to give them the key at some later time.
Interactions in a commitment scheme take place in two phases:
the commit phase during which a value is chosen and committed to
the reveal phase during which the value is revealed by the sender, then the receiver verifies its authenticity
In the above metaphor, the commit phase is the sender putting the message in the box, and locking it. The reveal phase is the sender giving the key to the receiver, who uses it to open the box and verify its contents. The locked box is the commitment, and the key is the proof.
In simple protocols, the commit phase consists of a single message from the sender to the receiver. This message is called the commitment. It is essential that the specific value chosen cannot be known by the receiver at that time (this is called the hiding property). A simple reveal phase would consist of a single message, the opening, from the sender to the receiver, followed by a check performed by the receiver. The value chosen during the commit phase must be the only one that the sender can compute and that validates during the reveal phase (this is called the bindi |
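A toy hash-based construction can make the two phases concrete. This is a simplified sketch for illustration, not a production scheme; real protocols often use commitments (such as Pedersen commitments) with different hiding/binding trade-offs:

```python
# Toy commitment: commit = H(nonce || message); reveal = (nonce, message).
import hashlib
import secrets

def commit(message: bytes):
    nonce = secrets.token_bytes(32)                  # hides low-entropy messages
    commitment = hashlib.sha256(nonce + message).digest()
    return commitment, nonce                         # send commitment; keep nonce secret

def verify(commitment: bytes, nonce: bytes, message: bytes) -> bool:
    return hashlib.sha256(nonce + message).digest() == commitment

c, n = commit(b"heads")          # commit phase: receiver sees only c
print(verify(c, n, b"heads"))    # reveal phase: True
print(verify(c, n, b"tails"))    # a changed message fails -> binding in practice
```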
https://en.wikipedia.org/wiki/Cryptographic%20hash%20function | A cryptographic hash function (CHF) is a hash algorithm (a map of an arbitrary binary string to a binary string with a fixed size of n bits) that has special properties desirable for a cryptographic application:
the probability of a particular n-bit output result (hash value) for a random input string ("message") is 2^−n (as for any good hash), so the hash value can be used as a representative of the message;
finding an input string that matches a given hash value (a pre-image) is unfeasible, assuming all input strings are equally likely. The resistance to such search is quantified as security strength: a cryptographic hash with n bits of hash value is expected to have a preimage resistance strength of n bits. However, if the space of possible inputs is significantly smaller than 2^n, or if it can be ordered by likelihood, then the hash value can serve as an oracle, allowing efficient search of the limited or ordered input space. A common example is the use of a standard fast hash function to obscure user passwords in storage. If an attacker can obtain the hashes of a set of passwords, they can test each hash value against lists of common passwords and all possible combinations of short passwords and typically recover a large fraction of the passwords themselves.
A second preimage resistance strength, with the same expectations, refers to a similar problem of finding a second message that matches the given hash value when one message is already known;
finding any pair of different messages that yield the same hash value (a collision) is also unfeasible: a cryptographic hash is expected to have a collision resistance strength of n/2 bits (lower than n due to the birthday paradox).
Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication. They can also be used as ordinary hash functions, to index data in hash tables, for fingerpr |
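The birthday-paradox bound on collision resistance can be demonstrated empirically by truncating a hash; below is a small Python sketch, with n = 32 bits chosen so the search finishes quickly:

```python
# Collisions in an n-bit hash appear after roughly 2**(n/2) random trials.
import hashlib
import os

seen = {}
trials = 0
while True:
    msg = os.urandom(8)
    h = hashlib.sha256(msg).digest()[:4]   # keep only the first 32 bits
    trials += 1
    if h in seen and seen[h] != msg:
        break                               # two different messages, same truncated hash
    seen[h] = msg
print(f"collision after ~{trials} trials (expected on the order of {2**16})")
```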
https://en.wikipedia.org/wiki/Prefetch%20input%20queue | Fetching the instruction opcodes from program memory well in advance is known as prefetching and it is served by using a prefetch input queue (PIQ). The pre-fetched instructions are stored in a queue. The fetching of opcodes well in advance, prior to their need for execution, increases the overall efficiency of the processor boosting its speed. The processor no longer has to wait for the memory access operations for the subsequent instruction opcode to complete. This architecture was prominently used in the Intel 8086 microprocessor.
Introduction
Pipelining was brought to the forefront of computing architecture design during the 1960s due to the need for faster and more efficient computing. Pipelining is the broader concept and most modern processors load their instructions some clock cycles before they execute them. This is achieved by pre-loading machine code from memory into a prefetch input queue.
This behavior only applies to von Neumann computers (that is, not Harvard architecture computers) that can run self-modifying code and have some sort of instruction pipelining. Nearly all modern high-performance computers fulfill these three requirements.
Usually, the prefetching behavior of the PIQ is invisible to the programming model of the CPU. However, there are some circumstances where the behavior of PIQ is visible, and needs to be taken into account by the programmer.
When an x86 processor changes from real mode to protected mode or vice versa, the PIQ has to be flushed, or else the CPU will continue to decode machine code as if it were written for its previous mode. If the PIQ is not flushed, the processor may decode instructions incorrectly and raise an invalid instruction exception.
When executing self-modifying code, a change in the processor code immediately in front of the current location of execution might not change how the processor interprets the code, as it is already loaded into its PIQ. It simply executes its old copy already loaded in the PIQ.
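A toy model of this staleness in Python (purely illustrative; a real PIQ holds raw opcode bytes, not strings, and is flushed on jumps and mode changes):

from collections import deque

memory = ["NOP", "INC", "OLD_INSTR", "HLT"]
piq = deque(memory)              # opcodes prefetched well in advance of execution

memory[2] = "NEW_INSTR"          # self-modifying code rewrites an upcoming instruction

# The processor executes the stale copy already sitting in the queue:
while piq:
    print("executing", piq.popleft())   # still prints OLD_INSTR, not NEW_INSTR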
https://en.wikipedia.org/wiki/Herbal%20medicine | Herbal medicine (also called herbalism, phytomedicine or phytotherapy) is the study of pharmacognosy and the use of medicinal plants, which are a basis of traditional medicine. With worldwide research into pharmacology, some herbal medicines have been translated into modern remedies, such as artemisinin, an antimalarial isolated from Artemisia annua, a herb known in Chinese medicine to treat fever. There is limited scientific evidence for the safety and efficacy of many plants used in 21st-century herbalism, which generally does not provide standards for purity or dosage. The scope of herbal medicine sometimes includes fungal and bee products, as well as minerals, shells and certain animal parts.
Paraherbalism describes alternative and pseudoscientific practices of using unrefined plant or animal extracts as unproven medicines or health-promoting agents. Paraherbalism relies on the belief that preserving various substances from a given source with less processing is safer or more effective than manufactured products, a concept for which there is no evidence.
History
Archaeological evidence indicates that the use of medicinal plants dates back to the Paleolithic age, approximately 60,000 years ago. Written evidence of herbal remedies dates back over 5,000 years to the Sumerians, who compiled lists of plants. Some ancient cultures wrote about plants and their medical uses in books called herbals. In ancient Egypt, herbs are mentioned in medical papyri, depicted in tomb illustrations, or, on rare occasions, found in medical jars containing trace amounts of herbs. The Ebers papyrus, from about 1550 BC, covers more than 700 compounds, mainly of plant origin. The earliest known Greek herbals came from Theophrastus of Eresos, who in the 4th century BC wrote Historia Plantarum in Greek; from Diocles of Carystus, who wrote during the 3rd century BC; and from Krateuas, who wrote in the 1st century BC. Only a few fragments of these works have survived intact.
https://en.wikipedia.org/wiki/Duck%20typing | Duck typing in computer programming is an application of the duck test—"If it walks like a duck and it quacks like a duck, then it must be a duck"—to determine whether an object can be used for a particular purpose. With nominative typing, an object is of a given type if it is declared as such (or if a type's association with the object is inferred through mechanisms such as object inheritance). With duck typing, an object is of a given type if it has all methods and properties required by that type. Duck typing may be viewed as a usage-based structural equivalence between a given object and the requirements of a type.
Example
This simple example in Python 3 demonstrates how any object may be used in any context until it is used in a way that it does not support.
class Duck:
def swim(self):
print("Duck swimming")
def fly(self):
print("Duck flying")
class Whale:
def swim(self):
print("Whale swimming")
for animal in [Duck(), Whale()]:
animal.swim()
animal.fly()
Output:
Duck swimming
Duck flying
Whale swimming
AttributeError: 'Whale' object has no attribute 'fly'
If it can be assumed that anything that can swim is a duck because ducks can swim, a whale could be considered a duck; however, if it is also assumed that a duck must be capable of flying, the whale will not be considered a duck.
In statically typed languages
In some statically typed languages, such as Boo and D, class type checking can be specified to occur at runtime rather than at compile time.
Comparison with other type systems
Structural type systems
Duck typing is similar to, but distinct from, structural typing. Structural typing is a static typing system that determines type compatibility and equivalence by a type's structure, whereas duck typing is dynamic and determines type compatibility by only that part of a type's structure that is accessed during runtime.
The TypeScript, Elm and Python languages support structural typing to varying degrees.
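For example, Python's typing.Protocol (PEP 544) provides static structural typing; a sketch reusing the Whale class from the example above (the Swimmer protocol is an invented name):

from typing import Protocol

class Swimmer(Protocol):
    def swim(self) -> None: ...

class Whale:
    def swim(self) -> None:
        print("Whale swimming")

def race(swimmer: Swimmer) -> None:
    swimmer.swim()

# Whale never declares any relationship to Swimmer, yet a static checker
# such as mypy accepts this call because the structures match.
race(Whale())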
|
https://en.wikipedia.org/wiki/SGI%20Indigo%C2%B2%20and%20Challenge%20M | The SGI Indigo2 (stylized as "Indigo²") and the SGI Challenge M are Unix workstations which were designed and sold by SGI from 1992 to 1997.
The Indigo2, codenamed "Fullhouse", is a desktop workstation. The Challenge M is a server which differs from the Indigo2 only by a slightly differently colored and badged case and by the absence of graphics and sound hardware. Both systems are based on MIPS processors, with an EISA bus and SGI's proprietary GIO64 expansion bus via a riser card.
The Indigo2 succeeded the Indigo and was itself succeeded by the Octane.
Overview
Indigo2 desktop workstations come in two models: the teal Indigo2 and the purple IMPACT. Both have identical-looking cases except for color and sub-model case badging. The CPU type, the amount of RAM, and the graphics capabilities depend on the model or sub-model variation. The Power Indigo2 is a version of the teal Indigo2 with the R8000 chipset and strong floating-point arithmetic performance. The later IMPACT Indigo2 workstation model offers more computational and visualization power, especially due to the introduction of the R10000 series RISC CPU and IMPACT graphics.
CPU
All Indigo2 models use one of several MIPS CPU variants: the 100 to 250 MHz MIPS R4000 and R4400 and the Quantum Effect Devices R4600 (IP22 mainboard); the 75 MHz MIPS R8000 (IP26 mainboard); and the 175 to 195 MHz R10000 (IP28 mainboard), which is featured in the last-produced Indigo2 model, the IMPACT10000. Each microprocessor family differs in clock frequency and in primary and secondary cache capacity.
RAM
All Indigo2 motherboards have 12 SIMM slots for standard 36-bit parity 72-pin fast page mode SIMM memory modules, seated in groups of four. The Indigo2 can be expanded to a thermal-specification maximum of either 384 MB or 512 MB RAM. The design of the memory control logic in R10000 machines supports up to 1 GB RAM, but the thermal output of the older generation of DRAM chips necessitated the 512 MB limit. With newer, higher-density and smaller s
https://en.wikipedia.org/wiki/MOS%20Technology%20VIC-II | The VIC-II (Video Interface Chip II), specifically known as the MOS Technology 6567/6566/8562/8564 (NTSC versions), 6569/8565/8566 (PAL), is the microchip tasked with generating Y/C video signals (combined to composite video in the RF modulator) and DRAM refresh signals in the Commodore 64 and Commodore 128 home computers.
Succeeding the original MOS Technology VIC used in the VIC-20, the VIC-II was one of the key custom chips in the Commodore 64 (the other being the MOS Technology 6581 sound chip).
Development history
The VIC-II chip was designed primarily by Al Charpentier and Charles Winterble at MOS Technology, Inc. as a successor to the MOS Technology 6560 "VIC". The team at MOS Technology had previously failed to produce two graphics chips, the MOS Technology 6562 (intended for the Commodore TOI computer) and the MOS Technology 6564 (for the Color PET), due to memory speed constraints.
To design the VIC-II, Charpentier and Winterble surveyed the current market of home computers and video games, cataloguing existing features and deciding which features they wanted the VIC-II to have. The idea of adding sprites came from the TI-99/4A computer and its TMS9918 graphics coprocessor. The idea to support collision detection came from the Mattel Intellivision. The Atari 800 was also mined for desired features, particularly bitmap mode, which was a key goal for the MOS team since all of Commodore's principal home-computer rivals had bitmap graphics while the VIC-20 had only redefinable characters. About three quarters of the chip surface is used for the sprite functionality.
The chip was laid out partly with electronic design automation tools from Applicon (now part of UGS Corp.) and partly by hand on vellum paper. The design was partly debugged by fabricating chips containing small subsets of the design, which could then be tested separately. This was easy because MOS Technology had both its research and development lab and its semiconductor plant at the same location.
https://en.wikipedia.org/wiki/Amoeba%20%28operating%20system%29 | Amoeba is a distributed operating system developed by Andrew S. Tanenbaum and others at the Vrije Universiteit Amsterdam. The aim of the Amoeba project was to build a timesharing system that makes an entire network of computers appear to the user as a single machine. Development at the Vrije Universiteit was stopped: the source code of the latest version (5.3) was last modified on 30 July 1996.
The Python programming language was originally developed for this platform.
Overview
The goal of the Amoeba project was to construct an operating system for networks of computers that would present the network to the user as if it were a single machine. An Amoeba network consists of a number of workstations connected to a "pool" of processors, and executing a program from a terminal causes it to run on any of the available processors, with the operating system providing load balancing. Unlike the contemporary Sprite, Amoeba does not support process migration.
The workstations would typically function as networked terminals only. Aside from workstations and processors, additional machines operate as servers for files, directory services, TCP/IP communications, etc.
Amoeba is a microkernel-based operating system. It offers multithreaded programs and a remote procedure call (RPC) mechanism for communication between threads, potentially across the network; even kernel threads use this RPC mechanism for communication. Each thread is assigned a 48-bit number called its "port", which serves as its unique, network-wide "address" for communication.
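A rough Python sketch of this addressing idea (all names here are invented; real Amoeba routes requests across the network, not through a local dictionary):

import secrets

def new_port() -> int:
    # a thread's "address": a random 48-bit number, unique with high probability
    return secrets.randbits(48)

registry = {}   # stands in for the network: port -> serving thread's handler

def serve(port: int, handler) -> None:
    registry[port] = handler

def rpc(port: int, request: str) -> str:
    # an RPC transaction: deliver the request to whichever thread owns `port`
    return registry[port](request)

echo_port = new_port()
serve(echo_port, lambda msg: msg.upper())
print(rpc(echo_port, "hello"))   # HELLO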
The user interface and APIs of Amoeba were modeled after Unix, and compliance with the POSIX standard was partially implemented; some of the Unix emulation code consists of utilities ported over from Tanenbaum's other operating system, MINIX. Early versions used a "homebrew" window system, which the Amoeba authors considered "faster ... in our view, cleaner ... smaller and much easier to understand", but version 4.0 uses the X Window System.
https://en.wikipedia.org/wiki/Gyrator | A gyrator is a passive, linear, lossless, two-port electrical network element proposed in 1948 by Bernard D. H. Tellegen as a hypothetical fifth linear element after the resistor, capacitor, inductor and ideal transformer. Unlike the four conventional elements, the gyrator is non-reciprocal. Gyrators permit network realizations of two-(or-more)-port devices which cannot be realized with just the conventional four elements. In particular, gyrators make possible network realizations of isolators and circulators. Gyrators do not however change the range of one-port devices that can be realized. Although the gyrator was conceived as a fifth linear element, its adoption makes both the ideal transformer and either the capacitor or inductor redundant. Thus the number of necessary linear elements is in fact reduced to three. Circuits that function as gyrators can be built with transistors and op-amps using feedback.
Tellegen invented a circuit symbol for the gyrator and suggested a number of ways in which a practical gyrator might be built.
An important property of a gyrator is that it inverts the current–voltage characteristic of an electrical component or network. In the case of linear elements, the impedance is also inverted. In other words, a gyrator can make a capacitive circuit behave inductively, a series LC circuit behave like a parallel LC circuit, and so on. It is primarily used in active filter design and miniaturization.
Behaviour
An ideal gyrator is a linear two-port device which couples the current on one port to the voltage on the other, and vice versa. The instantaneous currents and instantaneous voltages are related by
v2 = R i1,  v1 = −R i2,
where R is the gyration resistance of the gyrator.
The gyration resistance (or equivalently its reciprocal, the gyration conductance) has an associated direction indicated by an arrow on the schematic diagram. By convention, the given gyration resistance or conductance relates the voltage on the port at the head of the arrow to the current at its tail.
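A numerical sketch in Python (with assumed example component values) of the impedance inversion described above: a gyrator terminated with a capacitor presents Z_in = R^2 / Z_load at the other port, i.e. an equivalent inductance L = R^2 C:

import math

R = 1000.0      # gyration resistance in ohms (example value)
C = 100e-9      # load capacitance in farads (example value)
f = 1000.0      # test frequency in hertz
w = 2 * math.pi * f

z_load = 1 / (1j * w * C)   # impedance of the capacitive load
z_in = R**2 / z_load        # impedance seen looking into the other port
L = R**2 * C                # equivalent inductance of the gyrated capacitor

print(z_in)          # approx. 628.3j ohms: a purely inductive reactance
print(1j * w * L)    # identical value, confirming Z_in = jwL with L = R^2 C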
https://en.wikipedia.org/wiki/Marine%20One | Marine One is the call sign of any United States Marine Corps aircraft carrying the president of the United States. It usually denotes a helicopter operated by Marine Helicopter Squadron One (HMX-1) "Nighthawks", consisting of either the large Sikorsky VH-3D Sea King or the newer, smaller VH-60N "White Hawk". Both helicopters are called "White Tops" because of their livery. Any Marine Corps aircraft carrying the vice president of the United States without the president has the call sign Marine Two.
History
The first use of a helicopter to transport the president was in 1957, when President Dwight D. Eisenhower traveled on a Bell UH-13J Sioux. The president wanted a quick way to reach his summer home in Pennsylvania. Using Air Force One would have been impractical over such a short distance, and there was no airfield near his home with a paved runway to support fixed-wing aircraft, so Eisenhower instructed his staff to investigate other modes of transport; a Sikorsky UH-34 Seahorse helicopter was subsequently commissioned. The early aircraft lacked the amenities of their modern successors, such as air conditioning and an aircraft lavatory for use in flight.
In 1958, the H-13 was replaced by the Sikorsky H-34, which was succeeded in 1961 by the VH-3A.
Not long after helicopters for presidential transport were introduced, presidential aides asked the Marine Corps to investigate using the White House South Lawn for landing. There was ample room, and the protocol was established. Until 1976, the Marine Corps shared the responsibility of helicopter transportation for the president with the United States Army. Army helicopters used the call sign Army One while the president was on board.
The VH-3D entered service in 1978. The VH-60N entered service in 1987 and has served alongside the VH-3D. Improvements were made to both models of helicopter after their introduction, to take advantage of technological developments and to meet new mission requirements. By about 2001, it was clear that the aging fleet would need to be replaced.
https://en.wikipedia.org/wiki/Fractint | Fractint (originally FRACT386) is a freeware computer program to render and display many kinds of fractals. The program originated on MS-DOS and was later ported to the Atari ST, Linux, and Macintosh. During the early 1990s, Fractint was the definitive fractal-generating program for personal computers.
The name is a portmanteau of fractal and integer, since the first versions of Fractint used only integer arithmetic (also known as fixed-point arithmetic), for faster rendering on computers without math coprocessors. Since then, floating-point arithmetic and arbitrary-precision arithmetic modes have been added.
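The fixed-point idea can be sketched in a few lines of Python (illustrative only, not Fractint's actual code): reals are scaled by 2^16 so the Mandelbrot iteration z -> z^2 + c needs only integer multiplies and shifts:

SCALE = 16                              # binary point position: values scaled by 2**16

def to_fix(x: float) -> int:
    return int(x * (1 << SCALE))

def mandelbrot_iters(cr: float, ci: float, max_iter: int = 100) -> int:
    c_re, c_im = to_fix(cr), to_fix(ci)
    zr = zi = 0
    for n in range(max_iter):
        zr2 = (zr * zr) >> SCALE        # fixed-point multiply: shift restores the scale
        zi2 = (zi * zi) >> SCALE
        if zr2 + zi2 > (4 << SCALE):    # escape test: |z|^2 > 4
            return n
        zr, zi = zr2 - zi2 + c_re, ((2 * zr * zi) >> SCALE) + c_im
    return max_iter

print(mandelbrot_iters(-0.5, 0.5))      # interior point: returns max_iter (100)
print(mandelbrot_iters(1.0, 1.0))       # escapes after a few iterations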
Fractint can draw most kinds of fractals that have appeared in the literature. It also has a few "fractal types" that are not, strictly speaking, fractals, but may be more accurately described as display hacks. These include cellular automata.
Development
Fractint originally appeared in 1988 as FRACT386, a computer program for rendering fractals very quickly on the Intel 80386 processor using integer arithmetic. The 80386 required the separate Intel 80387 coprocessor to natively support floating-point math, and such coprocessors were rare in IBM PC compatibles at the time.
Early versions of FRACT386 were written by Bert Tyler, who based it on a Mandelbrot generator for a TI-based processor that used integer math and decided to try programming something similar for his 386 machine.
In February 1989, the program was renamed Fractint. In July 1990, it was ported to the Atari ST, with the math routines rewritten in Motorola 68000 assembly language by Howard Chu.
See also
Fractal art
Fractal-generating software
References
Further reading
Michael Frame, Benoit Mandelbrot, Fractals, Graphics, and Mathematics Education, Volume 58 of Mathematical Association of America Notes, Cambridge University Press, 2002, , pp. 57–59 (and used throughout the book)
External links
Mirror of now-gone spanky.triumf.ca Fractint site
Fractint Development Team WWW pages via the Wayback Machine
1988 software
Fractal software
https://en.wikipedia.org/wiki/Mpemba%20effect | The Mpemba effect is the name given to the observation that a liquid (typically water) which is initially hot can freeze faster than the same liquid which begins cold, under otherwise similar conditions. There is disagreement about its theoretical basis and the parameters required to produce the effect.
The Mpemba effect is named after Tanzanian scientist Erasto Bartholomeo Mpemba, who described it in 1963 as a secondary school student. Observations of the effect date back to ancient times; Aristotle said that it was common knowledge.
Definition
The phenomenon, when taken to mean "hot water freezes faster than cold", is difficult to reproduce or confirm because it is ill-defined. Monwhea Jeng proposed a more precise wording: "There exists a set of initial parameters, and a pair of temperatures, such that given two bodies of water identical in these parameters, and differing only in initial uniform temperatures, the hot one will freeze sooner." Even with Jeng's definition, it is not clear whether "freezing" refers to the point at which water forms a visible surface layer of ice, the point at which the entire volume of water becomes a solid block of ice, or when the water reaches 0 °C. Jeng's definition suggests simple ways in which the effect might be observed, such as if a warmer temperature melts the frost on a cooling surface, thereby increasing thermal conductivity between the cooling surface and the water container. Alternatively, the Mpemba effect may not be evident in situations and under circumstances that at first seem to qualify.
Observations
Historical context
Various effects of heat on the freezing of water were described by ancient scientists, including Aristotle: "The fact that the water has previously been warmed contributes to its freezing quickly: for so it cools sooner. Hence many people, when they want to cool water quickly, begin by putting it in the sun." Aristotle's explanation involved antiperistasis: the supposed increase in the intensity of a quality as a result of being surrounded by its contrary quality.
https://en.wikipedia.org/wiki/Blood%20smear | A blood smear, peripheral blood smear or blood film is a thin layer of blood smeared on a glass microscope slide and then stained in such a way as to allow the various blood cells to be examined microscopically. Blood smears are examined in the investigation of hematological (blood) disorders and are routinely employed to look for blood parasites, such as those of malaria and filariasis.
Preparation
A blood smear is made by placing a drop of blood on one end of a slide, and using a spreader slide to disperse the blood over the slide's length. The aim is to get a region, called a monolayer, where the cells are spaced far enough apart to be counted and differentiated. The monolayer is found in the "feathered edge" created by the spreader slide as it draws the blood forward.
The slide is left to air dry, after which the blood is fixed to the slide by immersing it briefly in methanol. The fixative is essential for good staining and presentation of cellular detail. After fixation, the slide is stained to distinguish the cells from each other.
Routine analysis of blood in medical laboratories is usually performed on blood films stained with Romanowsky stains such as Wright's stain, Giemsa stain, or Diff-Quik. Wright-Giemsa combination stain is also a popular choice. These stains allow for the detection of white blood cell, red blood cell, and platelet abnormalities. Hematopathologists often use other specialized stains to aid in the differential diagnosis of blood disorders.
After staining, the monolayer is viewed under a microscope using magnification up to 1000x. Individual cells are examined and their morphology is characterized and recorded.
Clinical significance
Blood smear examination is usually performed in conjunction with a complete blood count in order to investigate abnormal results or confirm results that the automated analyzer has flagged as unreliable.
Microscopic examination of the shape, size, and coloration of red blood cells is useful for determining the cause of anemia.
https://en.wikipedia.org/wiki/Cold%20Spring%20Harbor%20Laboratory | Cold Spring Harbor Laboratory (CSHL) is a private, non-profit institution with research programs focusing on cancer, neuroscience, plant biology, genomics, and quantitative biology.
It is one of 68 institutions supported by the Cancer Centers Program of the U.S. National Cancer Institute (NCI) and has been an NCI-designated Cancer Center since 1987. The Laboratory is one of a handful of institutions that played a central role in the development of molecular genetics and molecular biology.
It has been home to eight scientists who have been awarded the Nobel Prize in Physiology or Medicine. CSHL is ranked among the leading basic research institutions in molecular biology and genetics, with Thomson Reuters ranking it #1 in the world. CSHL was also ranked #1 in research output worldwide by Nature. The Laboratory is led by Bruce Stillman, a biochemist and cancer researcher.
Since its inception in 1890, the institution's campus on the North Shore of Long Island has also been a center of biology education. Current CSHL educational programs serve professional scientists, doctoral students in biology, teachers of biology in the K–12 system, and students from the elementary grades through high school. In the past 10 years, CSHL conferences & courses have drawn over 81,000 scientists and students to the main campus. For this reason, many scientists consider CSHL a "crossroads of biological science." Since 2009 CSHL has partnered with the Suzhou Industrial Park in Suzhou, China to create Cold Spring Harbor Asia which annually draws some 3,000 scientists to its meetings and courses. The Cold Spring Harbor Laboratory School of Biological Sciences, formerly the Watson School of Biological Sciences, was founded in 1999.
In 2015, CSHL announced a strategic affiliation with the nearby Northwell Health to advance cancer therapeutics research and to develop a new clinical cancer research unit at Northwell Health in Lake Success, NY, to support early-phase clinical studies of new cancer therapies.
https://en.wikipedia.org/wiki/Teleprompter | A teleprompter, also known as an autocue, is a display device that prompts the person speaking with an electronic visual text of a speech or script.
Using a teleprompter is similar to using cue cards. The screen is in front of, and usually below, the lens of a professional video camera, and the words on the screen are reflected to the eyes of the presenter using a sheet of clear glass or other beam splitter, so that they are read by looking directly at the lens position, but are not imaged by the lens. Light from the performer passes through the front side of the glass into the lens, while a shroud surrounding the lens and the back side of the glass prevents unwanted light from entering the lens. Optically this works in a very similar way to the Pepper's ghost illusion from classic theatre: an image viewable from one angle but not another.
Because the speaker can look straight at the lens while reading the script, the teleprompter creates the illusion that the speaker has memorized the speech or is speaking spontaneously, looking directly into the camera lens. Notes or cue cards, by contrast, require the presenter to look at them instead of at the lens, which can make the speaker appear distracted. How distracting this is depends on the degree of deflection from the natural line of sight to the camera lens and on how long the speaker must glance away to glean the next speaking point; speakers who can internalize a full sentence or paragraph in a single short glance, timed to natural breaks in the spoken cadence, create only a small or negligible impression of distraction.
The technology has continued to develop, including the following iterations:
first mechanical paper roll teleprompters — used by television presenters and speakers at U.S. political conventions in 1952
dual glass teleprompters — used by TV presenters and for U.S. conventions in 1964
computer-based rolls of 1982 and the four-prompter system for U.S. conventions — added a large off-stage confidence monitor