source | text |
|---|---|
https://en.wikipedia.org/wiki/Micromonosporaceae | Micromonosporaceae is a family of bacteria of the class Actinomycetia. They are gram-positive, spore-forming soil organisms that form a true mycelium.
Genera
Micromonosporaceae comprises the following genera:
Actinocatenispora Thawai et al. 2006
Actinoplanes Couch 1950 (Approved Lists 1980)
Actinorhabdospora Mingma et al. 2016
Allocatelliglobosispora Lee and Lee 2011
Allorhizocola Sun et al. 2019
Asanoa Lee and Hah 2002
Catellatospora Asano and Kawamoto 1986
Catelliglobosispora Ara et al. 2008
Catenuloplanes Yokota et al. 1993
Couchioplanes Tamura et al. 1994
Dactylosporangium Thiemann et al. 1967 (Approved Lists 1980)
Hamadaea Ara et al. 2008
Krasilnikovia Ara and Kudo 2007
Longispora Matsumoto et al. 2003
Luedemannella Ara and Kudo 2007
Mangrovihabitans Liu et al. 2017
Micromonospora Ørskov 1923 (Approved Lists 1980)
"Natronosporangium" Sorokin et al. 2022
Phytohabitans Inahashi et al. 2010
Phytomonospora Li et al. 2011
Pilimelia Kane 1966 (Approved Lists 1980)
Planosporangium Wiese et al. 2008
Plantactinospora Qin et al. 2009
Polymorphospora Tamura et al. 2006
Pseudosporangium Ara et al. 2008
Rhizocola Matsumoto et al. 2014
Rugosimonospora Monciardini et al. 2009
Salinispora Maldonado et al. 2005
"Solwaraspora" Magarvey et al. 2004
Spirilliplanes Tamura et al. 1997
Virgisporangium corrig. Tamura et al. 2001
"Wangella" Jia et al. 2013
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN). The phylogeny is based on whole-genome analysis.
Notes
References
Micromonosporaceae
Soil biology |
https://en.wikipedia.org/wiki/Erik%20Demaine | Erik D. Demaine (born February 28, 1981) is a Canadian-American professor of computer science at the Massachusetts Institute of Technology and a former child prodigy.
Early life and education
Demaine was born in Halifax, Nova Scotia, to mathematician and sculptor Martin L. Demaine and Judy Anderson. From the age of 7, he was identified as a child prodigy and spent time traveling across North America with his father. He was home-schooled during that time span until entering university at the age of 12.
Demaine completed his bachelor's degree at 14 years of age at Dalhousie University in Canada, and completed his PhD at the University of Waterloo by the time he was 20 years old.
Demaine's PhD dissertation, a work in the field of computational origami, was completed at the University of Waterloo under the supervision of Anna Lubiw and Ian Munro. This work was awarded the Canadian Governor General's Gold Medal from the University of Waterloo and the NSERC Doctoral Prize (2003) for the best PhD thesis and research in Canada. Some of the work from this thesis was later incorporated into his book Geometric Folding Algorithms on the mathematics of paper folding published with Joseph O'Rourke in 2007.
Professional accomplishments
Demaine joined the faculty of the Massachusetts Institute of Technology (MIT) in 2001 at age 20, reportedly the youngest professor in the history of MIT, and was promoted to full professorship in 2011. Demaine is a member of the Theory of Computation group at MIT Computer Science and Artificial Intelligence Laboratory.
Mathematical origami artwork by Erik and Martin Demaine was part of the Design and the Elastic Mind exhibit at the Museum of Modern Art in 2008, and has been included in the MoMA permanent collection. That same year, he was one of the featured artists in Between the Folds, an international documentary film about origami practitioners which was later broadcast on PBS television. In connection with a 2012 exhibit, three of his cu |
https://en.wikipedia.org/wiki/Automated%20attendant | In telephony, an automated attendant (also auto attendant, auto-attendant, autoattendant, automatic phone menus, AA, or virtual receptionist) allows callers to be automatically transferred to an extension without the intervention of an operator/receptionist. Many AAs will also offer a simple menu system ("for sales, press 1, for service, press 2," etc.). An auto attendant may also allow a caller to reach a live operator by dialing a number, usually "0". Typically the auto attendant is included in a business's phone system such as a PBX, but some services allow businesses to use an AA without such a system. Modern AA services (which now overlap with more complicated interactive voice response or IVR systems) can route calls to mobile phones, VoIP virtual phones, other AAs/IVRs, or other locations using traditional land-line phones or voice message machines.
Feature description
Telephone callers will recognize an automated attendant system as one that greets calls incoming to an organization with a recorded greeting of the form, "Thank you for calling .... If you know your party's extension, you may dial it any time during this message." Callers who have a touch tone (DTMF) phone can dial an extension number or, in most cases, wait for operator ("attendant") assistance. Since the telephone network does not transmit the DC signals from rotary dial telephones (except for audible clicks), callers who have rotary dial phones have to wait for assistance.
On a purely technical level it could be argued that an automated attendant is a very simple kind of IVR; however, in the telecom industry the terms IVR and auto attendant are generally considered distinct. An automated attendant serves a very specific purpose (replace live operator and route calls), whereas an IVR can perform all sorts of functions (telephone banking, account inquiries, etc.).
An AA will often include a directory which will allow a caller to dial by name in order to find a user on a system. There i |
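The menu logic described above amounts to a small lookup from dialed digits to a destination. Below is a minimal, hypothetical sketch in Python; the extension numbers and menu labels are invented for illustration and are not any particular PBX's interface.

```python
# Minimal sketch of an automated-attendant menu (hypothetical names, not a
# specific PBX's API). Routes a caller's DTMF digits to an extension,
# a department queue, or the operator, as described above.

EXTENSIONS = {"1001", "1002", "2001"}          # hypothetical directory
MENU = {"1": "sales queue", "2": "service queue", "0": "operator"}

def route(digits: str) -> str:
    """Return a human-readable routing decision for the dialed digits."""
    if digits in EXTENSIONS:
        return f"transfer to extension {digits}"
    if digits in MENU:
        return f"transfer to {MENU[digits]}"
    return "transfer to operator"               # fallback for rotary/no input

if __name__ == "__main__":
    for dialed in ("1", "2", "1002", "", "9"):
        print(f"{dialed or '<timeout>'} -> {route(dialed)}")
```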
https://en.wikipedia.org/wiki/Geodetic%20datum | A geodetic datum or geodetic system (also: geodetic reference datum, geodetic reference system, or geodetic reference frame) is a global datum reference or reference frame for precisely representing the position of locations on Earth or other planetary bodies by means of geodetic coordinates. Datums are crucial to any technology or technique based on spatial location, including geodesy, navigation, surveying, geographic information systems, remote sensing, and cartography. A horizontal datum is used to measure a location across the Earth's surface, in latitude and longitude or another coordinate system; a vertical datum is used to measure the elevation or depth relative to a standard origin, such as mean sea level (MSL). Since the rise of the global positioning system (GPS), the ellipsoid and datum WGS 84 it uses have supplanted most others in many applications. The WGS 84 is intended for global use, unlike most earlier datums.
Before GPS, there was no precise way to measure the position of a location that was far from universal reference points, such as from the Prime Meridian at the Greenwich Observatory for longitude, from the Equator for latitude, or from the nearest coast for sea level. Astronomical and chronological methods have limited precision and accuracy, especially over long distances. Even GPS requires a predefined framework on which to base its measurements, so WGS 84 essentially functions as a datum, even though it is different in some particulars from a traditional standard horizontal or vertical datum.
A standard datum specification (whether horizontal or vertical) consists of several parts: a model for Earth's shape and dimensions, such as a reference ellipsoid or a geoid; an origin at which the ellipsoid/geoid is tied to a known (often monumented) location on or inside Earth (not necessarily at 0 latitude 0 longitude); and multiple control points that have been precisely measured from the origin and monumented. Then the coordinates of other pl |
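One concrete piece of a datum specification mentioned above is the reference ellipsoid. The sketch below converts geodetic latitude, longitude, and ellipsoidal height to Earth-centred Cartesian (ECEF) coordinates using the standard WGS 84 ellipsoid constants; the sample coordinates are illustrative only.

```python
import math

# WGS 84 reference ellipsoid (standard defining parameters)
A = 6378137.0                  # semi-major axis, metres
F = 1 / 298.257223563          # flattening
E2 = F * (2 - F)               # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float) -> tuple:
    """Convert latitude/longitude/ellipsoidal height to ECEF X, Y, Z (metres)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

if __name__ == "__main__":
    # Roughly the Greenwich Observatory (illustrative coordinates)
    print(geodetic_to_ecef(51.4769, 0.0, 45.0))
```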
https://en.wikipedia.org/wiki/Tyranny%20of%20numbers | The tyranny of numbers was a problem faced in the 1960s by computer engineers. Engineers were unable to increase the performance of their designs due to the huge number of components involved. In theory, every component needed to be wired to every other component (or at least many other components), and these connections were typically strung and soldered by hand. In order to improve performance, more components would be needed, and it seemed that future designs would consist almost entirely of wiring.
History
The first known recorded use of the term in this context was made by the Vice President of Bell Labs in an article celebrating the 10th anniversary of the invention of the transistor, for the "Proceedings of the IRE" (Institute of Radio Engineers), June 1958. Referring to the problems many designers were having, he wrote:
At the time, computers were typically built up from a series of "modules", each module containing the electronics needed to perform a single function. A complex circuit like an adder would generally require several modules working in concert. The modules were typically built on printed circuit boards of a standardized size, with a connector on one edge that allowed them to be plugged into the power and signaling lines of the machine, and were then wired to other modules using twisted pair or coaxial cable.
Since each module was relatively custom, modules were assembled and soldered by hand or with limited automation. As a result, they suffered major reliability problems. Even a single bad component or solder joint could render the entire module inoperative. Even with properly working modules, the mass of wiring connecting them together was another source of construction and reliability problems. As computers grew in complexity, and the number of modules increased, the complexity of making a machine actually work grew more and more difficult. This was the "tyranny of numbers".
It was precisely this problem that Jack Kilby was thinking about while working |
https://en.wikipedia.org/wiki/Tomographic%20reconstruction | Tomographic reconstruction is a type of multidimensional inverse problem where the challenge is to yield an estimate of a specific system from a finite number of projections. The mathematical basis for tomographic imaging was laid down by Johann Radon. A notable example of applications is the reconstruction of computed tomography (CT) where cross-sectional images of patients are obtained in non-invasive manner. Recent developments have seen the Radon transform and its inverse used for tasks related to realistic object insertion required for testing and evaluating computed tomography use in airport security.
This article applies in general to reconstruction methods for all kinds of tomography, but some of the terms and physical descriptions refer directly to the reconstruction of X-ray computed tomography.
Introducing formula
The projection of an object, resulting from the tomographic measurement process at a given angle θ, is made up of a set of line integrals (see Fig. 1). A set of many such projections under different angles organized in 2D is called sinogram (see Fig. 3). In X-ray CT, the line integral represents the total attenuation of the beam of x-rays as it travels in a straight line through the object. As mentioned above, the resulting image is a 2D (or 3D) model of the attenuation coefficient. That is, we wish to find the image μ(x, y). The simplest and easiest way to visualise the method of scanning is the system of parallel projection, as used in the first scanners. For this discussion we consider the data to be collected as a series of parallel rays, at position r, across a projection at angle θ. This is repeated for various angles. Attenuation occurs exponentially in tissue:
I = I₀ exp(−∫ μ(x, y) ds)
where μ(x, y) is the attenuation coefficient as a function of position. Therefore, generally the total attenuation p of a ray at position r, on the projection at angle θ, is given by the line integral:
p(r, θ) = ∫ μ(x, y) ds = ln(I₀ / I)
Using the coordinate system of Figure 1, the value of r onto which the point (x, y) will be projected |
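A rough illustration of the parallel-projection measurement model described above: rotate the attenuation map and sum along one axis to approximate the line integrals, producing one row of the sinogram per angle. This is a simplified sketch (it assumes NumPy and SciPy are available and ignores beam geometry, detector sampling, and noise).

```python
import numpy as np
from scipy.ndimage import rotate   # assumes SciPy is available

def sinogram(image: np.ndarray, angles_deg) -> np.ndarray:
    """Parallel-beam forward projection: one row of line integrals per angle.

    A rough approximation of the measurement model described above: for each
    angle the object is rotated and the attenuation map is summed along one
    axis, approximating the line integrals p(r, theta).
    """
    rows = []
    for theta in angles_deg:
        rotated = rotate(image, theta, reshape=False, order=1)
        rows.append(rotated.sum(axis=0))       # integrate along the beam
    return np.array(rows)

if __name__ == "__main__":
    # Toy phantom: a square of attenuation 1.0 inside an empty field.
    phantom = np.zeros((64, 64))
    phantom[24:40, 24:40] = 1.0
    sino = sinogram(phantom, np.linspace(0.0, 180.0, 30, endpoint=False))
    print(sino.shape)   # (30, 64): 30 projection angles, 64 detector bins
```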
https://en.wikipedia.org/wiki/Protest%20art | Protest art is the creative works produced by activists and social movements. It is a traditional means of communication, utilized by a cross section of collectives and the state to inform and persuade citizens. Protest art helps arouse base emotions in its audiences, and in return may increase the climate of tension and create new opportunities to dissent. Since art, unlike other forms of dissent, takes few financial resources, less financially able groups and parties can rely more on performance art and street art as an affordable tactic.
Protest art acts as an important tool to form social consciousness, create networks, operate accessibly, and be cost-effective. Social movements produce such works as the signs, banners, posters, and other printed materials used to convey a particular cause or message. Often, such art is used as part of demonstrations or acts of civil disobedience. These works tend to be ephemeral, characterized by their portability and disposability, and are frequently not authored or owned by any one person. The various peace symbols and the raised fist are two examples that highlight the democratic ownership of these signs.
Protest art also includes (but is not limited to) performance, site-specific installations, graffiti and street art, and crosses the boundaries of Visual arts genres, media, and disciplines.
While some protest art is associated with trained and professional artists, an extensive knowledge of art is not required to take part in protest art. Protest artists frequently bypass the art-world institutions and commercial gallery system in an attempt to reach a wider audience. Furthermore, protest art is not limited to one region or country, but is rather a method that is used around the world.
There are many politically charged pieces of fine art — such as Picasso's Guernica, some of Norman Carlberg's Vietnam war-era work, or Susan Crile's images of torture at Abu Ghraib.
History
It is difficult to establish a history f |
https://en.wikipedia.org/wiki/Electro-galvanic%20oxygen%20sensor | An electro-galvanic fuel cell is an electrochemical device which consumes a fuel to produce an electrical output by a chemical reaction. One form of electro-galvanic fuel cell based on the oxidation of lead is commonly used to measure the concentration of oxygen gas in underwater diving and medical breathing gases.
Electronically monitored or controlled diving rebreather systems, saturation diving systems, and many medical life-support systems use galvanic oxygen sensors in their control circuits to directly monitor oxygen partial pressure during operation. They are also used in oxygen analysers in recreational, technical diving and surface supplied mixed gas diving to analyse the proportion of oxygen in a nitrox, heliox or trimix breathing gas before a dive.
These cells are lead/oxygen galvanic cells where oxygen molecules are dissociated and reduced to hydroxyl ions at the cathode. The ions diffuse through the electrolyte and oxidize the lead anode. A current proportional to the rate of oxygen consumption is generated when the cathode and anode are electrically connected through a resistor.
Function
The cell reaction for a lead/oxygen cell is: 2Pb + O2 → 2PbO, made up of the cathode reaction: O2 + 2H2O + 4e− → 4OH−, and anode reaction: 2Pb + 4OH− → 2PbO + 2H2O + 4e−.
The cell current is proportional to the rate of oxygen reduction at the cathode, but this is not linearly dependent on the partial pressure of oxygen in the gas to which the cell is exposed: Linearity is achieved by placing a diffusion barrier between the gas and the cathode, which limits the amount of gas reaching the cathode to an amount that can be fully reduced without significant delay, making the partial pressure in the immediate vicinity of the electrode close to zero. As a result of this the amount of oxygen reaching the electrode follows Fick's laws of diffusion and is proportional to the partial pressure in the gas beyond the membrane. This makes the current proportional to PO2.
The loa |
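Because the cell current is proportional to the oxygen partial pressure, as described above, oxygen analysers are typically calibrated against air before use. A minimal sketch of that arithmetic follows; the millivolt values and function names are illustrative and are not any instrument's interface.

```python
# Single-point calibration of a galvanic oxygen cell against air, relying on
# the proportionality between cell output and oxygen partial pressure
# described above. Values and names are illustrative, not a device's API.

FRACTION_O2_IN_AIR = 0.209   # approximate oxygen fraction of dry air

def calibrate(reading_in_air_mv: float) -> float:
    """Return a scale factor (oxygen fraction per millivolt)."""
    return FRACTION_O2_IN_AIR / reading_in_air_mv

def oxygen_fraction(reading_mv: float, scale: float) -> float:
    """Convert a cell reading to an oxygen fraction at the same pressure."""
    return reading_mv * scale

if __name__ == "__main__":
    scale = calibrate(10.2)                        # e.g. cell shows 10.2 mV in air
    print(round(oxygen_fraction(15.7, scale), 3))  # a nitrox mix, about 0.32
```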
https://en.wikipedia.org/wiki/Philips%20SAA1099 | The Philips SAA1099 sound generator is a 6-voice sound chip used by some 1980s devices.
It can produce several different waveforms by locking the volume envelope generator to the frequency generator, and also has a noise generator with 3 preset frequencies which can be locked to the frequency generator for greater range. It can output audio in fully independent stereo.
Uses
The following sound cards and computers used the SAA1099:
Silicon Graphics IRIS Professional 4D and IRIS Power 4D machines, released in 1987 and 1988, used the SAA1099 on the IO2 and IO3 board for sound generation. Although this feature was almost never documented or used, the chip is present and usable if addressed directly.
The Creative Music System (C/MS) by Creative Labs, released in 1987, and also marketed at RadioShack as the Game Blaster, released in 1988. These devices contain two SAA1099 chips, for twelve voices.
The Creative Sound Blaster 1.0 card released in 1989 (and 1.5 and 2.0 as an optional addon), included the SAA1099 chips, in addition to the OPL2 chip (aka YM3812), which became much more popular.
The British-made SAM Coupé computer released in 1989, with a single SAA1099 on the motherboard.
Various arcade games and the System 5 family used the SAA1099.
References
External links
Documentation
SAA1099 emulator for Windows and a few demo tunes
SAA1099 emulation library
The Old SGI audio
Sound chips |
https://en.wikipedia.org/wiki/Universal%20coefficient%20theorem | In algebraic topology, universal coefficient theorems establish relationships between homology groups (or cohomology groups) with different coefficients. For instance, for every topological space X, its integral homology groups:
H_i(X; Z)
completely determine its homology groups with coefficients in A, for any abelian group A:
H_i(X; A)
Here H_i might be the simplicial homology, or more generally the singular homology. The usual proof of this result is a pure piece of homological algebra about chain complexes of free abelian groups. The form of the result is that other coefficients A may be used, at the cost of using a Tor functor.
For example it is common to take A to be Z/2, so that coefficients are modulo 2. This becomes straightforward in the absence of 2-torsion in the homology. Quite generally, the result indicates the relationship that holds between the Betti numbers b_i of X and the Betti numbers b_{i,F} with coefficients in a field F. These can differ, but only when the characteristic of F is a prime number p for which there is some p-torsion in the homology.
Statement of the homology case
Consider the tensor product of modules H_i(X; Z) ⊗ A. The theorem states there is a short exact sequence involving the Tor functor
0 → H_i(X; Z) ⊗ A → H_i(X; A) → Tor_1(H_{i−1}(X; Z), A) → 0
Furthermore, this sequence splits, though not naturally. Here the first map is the one induced by the bilinear map H_i(X; Z) × A → H_i(X; A).
If the coefficient ring is Z/p, this is a special case of the Bockstein spectral sequence.
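A standard worked example of the homology statement above: for the real projective plane, the Tor term contributes mod-2 homology that the integral Betti numbers alone would not predict. This is a well-known computation, included here purely as an illustration.

```latex
% Worked example: X = \mathbb{RP}^2, with H_0(X;\mathbb{Z}) = \mathbb{Z},
% H_1(X;\mathbb{Z}) = \mathbb{Z}/2, H_2(X;\mathbb{Z}) = 0.
% Apply 0 \to H_i(X;\mathbb{Z}) \otimes A \to H_i(X;A)
%         \to \operatorname{Tor}_1(H_{i-1}(X;\mathbb{Z}), A) \to 0 with A = \mathbb{Z}/2:
\begin{align*}
  H_1(X;\mathbb{Z}/2) &\cong (\mathbb{Z}/2 \otimes \mathbb{Z}/2) \oplus
      \operatorname{Tor}_1(\mathbb{Z}, \mathbb{Z}/2) \cong \mathbb{Z}/2, \\
  H_2(X;\mathbb{Z}/2) &\cong (0 \otimes \mathbb{Z}/2) \oplus
      \operatorname{Tor}_1(\mathbb{Z}/2, \mathbb{Z}/2) \cong \mathbb{Z}/2.
\end{align*}
% The nonzero H_2 with Z/2 coefficients comes entirely from the Tor term: the
% mod-2 Betti numbers (1, 1, 1) differ from the integral Betti numbers (1, 0, 0)
% precisely because there is 2-torsion in H_1.
```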
Universal coefficient theorem for cohomology
Let G be a module over a principal ideal domain R (e.g., Z or a field).
There is also a universal coefficient theorem for cohomology involving the Ext functor, which asserts that there is a natural short exact sequence
0 → Ext^1_R(H_{i−1}(X; R), G) → H^i(X; G) → Hom_R(H_i(X; R), G) → 0
As in the homology case, the sequence splits, though not naturally.
In fact, suppose
and define:
Then above is the canonical map:
An alternative point-of-view can be based on representing cohomology via Eilenberg–MacLane spaces, where the map takes a homotopy class of maps from X to K(G, i) to the corresponding homomorphism induced in homology. Th |
https://en.wikipedia.org/wiki/X86%20virtualization | x86 virtualization is the use of hardware-assisted virtualization capabilities on an x86/x86-64 CPU.
In the late 1990s x86 virtualization was achieved by complex software techniques, necessary to compensate for the processor's lack of hardware-assisted virtualization capabilities while attaining reasonable performance. In 2005 and 2006, both Intel (VT-x) and AMD (AMD-V) introduced limited hardware virtualization support that allowed simpler virtualization software but offered very few speed benefits. Greater hardware support, which allowed substantial speed improvements, came with later processor models.
Software-based virtualization
The following discussion focuses only on virtualization of the x86 architecture protected mode.
In protected mode the operating system kernel runs at a higher privilege such as ring 0, and applications at a lower privilege such as ring 3. In software-based virtualization, a host OS has direct access to hardware while the guest OSs have limited access to hardware, just like any other application of the host OS. One approach used in x86 software-based virtualization to overcome this limitation is called ring deprivileging, which involves running the guest OS at a ring higher (lesser privileged) than 0.
Three techniques made virtualization of protected mode possible:
Binary translation is used to rewrite certain ring 0 instructions, such as POPF, in terms of ring 3 instructions; these instructions would otherwise fail silently or behave differently when executed above ring 0, making the classic trap-and-emulate approach to virtualization impossible. To improve performance, the translated basic blocks need to be cached in a coherent way that detects code patching (used in VxDs for instance), the reuse of pages by the guest OS, or even self-modifying code.
A number of key data structures used by a processor need to be shadowed. Because most operating systems use paged virtual memory, and granting the guest OS direct access to the MMU would mean loss of control |
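A toy sketch of the binary-translation idea described above: translated blocks are cached by guest address, and sensitive instructions are rewritten as calls into the monitor. The "instructions" here are plain strings rather than real x86 decoding, and the cache invalidation needed for code patching or self-modifying code is deliberately ignored.

```python
# Toy sketch of binary translation with a block cache, in the spirit of the
# technique described above. "Instructions" are strings, not real x86 opcodes;
# cache invalidation, code patching and self-modifying code are ignored.

SENSITIVE = {"POPF", "PUSHF", "SGDT"}   # examples of instructions that do not
                                        # trap when run outside ring 0

def translate_block(block: list[str]) -> list[str]:
    """Rewrite sensitive instructions as calls into the virtual-machine monitor."""
    out = []
    for insn in block:
        if insn.split()[0] in SENSITIVE:
            out.append(f"CALL vmm_emulate('{insn}')")
        else:
            out.append(insn)
    return out

class Translator:
    def __init__(self):
        self.cache: dict[int, list[str]] = {}   # guest address -> translated code

    def run(self, guest_pc: int, block: list[str]) -> list[str]:
        if guest_pc not in self.cache:          # translate on first execution only
            self.cache[guest_pc] = translate_block(block)
        return self.cache[guest_pc]

if __name__ == "__main__":
    t = Translator()
    print(t.run(0x1000, ["MOV EAX, 1", "POPF", "RET"]))
```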
https://en.wikipedia.org/wiki/Fetal%20position | Fetal position (British English: also foetal) is the positioning of the body of a prenatal fetus as it develops. In this position, the back is curved, the head is bowed, and the limbs are bent and drawn up to the torso. A compact position is typical for fetuses. Many newborn mammals, especially rodents, remain in a fetal position well after birth.
This type of compact position is used in the medical profession to minimize injury to the neck and chest.
Some people assume a fetal position when sleeping, especially when the body becomes cold. In some cultures bodies have been buried in fetal position.
Sometimes, when a person has suffered extreme physical or psychological trauma (including massive stress), they will assume a similar compact position in which the back is curved forward, the legs are brought up as tightly against the abdomen as possible, the head is bowed as close to the abdomen as possible, and the arms are wrapped around the head to prevent further trauma.
This type of position has been observed in drug addicts, who enter the position when experiencing withdrawal. Sufferers of anxiety are also known to assume the same type of position during panic attacks.
Assuming this type of position and playing dead is often recommended as a strategy to end a bear attack.
See also
Neutral body posture
Position (obstetrics)
References
Anatomy
Infancy
Human positions |
https://en.wikipedia.org/wiki/Forwarder | A forwarder is a forestry vehicle that carries big felled logs from the stump to a roadside landing. Unlike a skidder, a forwarder carries logs clear of the ground, which can reduce soil impacts but tends to limit the size of the logs it can move. Forwarders are typically employed together with harvesters in cut-to-length logging operations.
Load capacity
Forwarders are commonly categorised on their load carrying capabilities. The smallest are trailers designed for towing behind all-terrain vehicles which can carry a load between 1 and 3 tonnes. Agricultural self-loading trailers designed to be towed by farm tractors can handle load weights up to around 12 to 15 tonnes. Light weight purpose-built machines utilised in commercial logging and early thinning operations can handle payloads of up to 8 tonnes. Medium-sized forwarders used in clearfells and later thinnings carry between 12 and 16 tonnes. The largest class specialized for clearfells handles up to 25 tonnes. Forwarders also carry their load at least 2 feet above the ground.
Manufacturers
Barko Hydraulics, LLC
Caterpillar Inc.
John Deere (Timberjack)
EcoLog
Fabtek
HSM (Hohenloher Spezial Maschinenbau GmbH, Germany)
Komatsu Forest (Valmet)
Kronos
Logset
Malwa
Neuson Forest
PM Pfanzelt Maschinenbau
Ponsse
Rottne
Strojirna Novotny
Tigercat
Timber Pro
Zanello
External links
Engineering vehicles
Log transport
Forestry equipment |
https://en.wikipedia.org/wiki/Mincemeat | Mincemeat is a mixture of chopped dried fruit, distilled spirits and spices, and often beef suet, usually used as a pie or pastry filling. Mincemeat formerly contained meat, notably beef or venison. Many modern recipes replace the suet with vegetable shortening. Mincemeat is found in the Anglosphere.
Etymology
The "mince" in mincemeat comes from the Middle English mincen, and the Old French mincier both traceable to the Vulgar Latin minutiare, meaning chop finely. The word mincemeat is an adaptation of an earlier term minced meat, meaning finely chopped meat. Meat was also a term for food in general, not only animal flesh.
Variants and history
English recipes from the 15th, 16th, and 17th centuries describe a fermented mixture of meat and fruit used as a pie filling. These early recipes included vinegars and wines, but by the 18th century, distilled spirits, frequently brandy, were being used instead. The use of spices like clove, nutmeg, mace and cinnamon was common in late medieval and renaissance meat dishes. The increase of sweetness from added sugar made mincemeat less a savoury dinner course and helped to direct its use toward desserts.
16th-century recipe
Pyes of mutton or beif must be fyne mynced & seasoned with pepper and salte and a lytel saffron to colour it / suet or marrow a good quantitie / a lytell vynegre / pruynes / great reasons / and dates / take the fattest of the broath of powdred beefe. And if you will have paest royall / take butter and yolkes of egges & so to temper the floure to make the paest.
Pies of mutton or beef must be finely minced and seasoned with pepper and salt, and a little saffron to colour it. [Add] a good quantity of suet or marrow, a little vinegar, prunes, raisins and dates. [Put in] the fattest of the broth of salted beef. And, if you want Royal pastry, take butter and egg yolks and [combine them with] flour to make the paste.
In the mid- to late eighteenth century, mincemeat in Europe had become associated with old |
https://en.wikipedia.org/wiki/Style%20sheet%20%28web%20development%29 | A web style sheet is a form of separation of content and presentation for web design in which the markup (i.e., HTML or XHTML) of a webpage contains the page's semantic content and structure, but does not define its visual layout (style). Instead, the style is defined in an external style sheet file using a style sheet language such as CSS or XSLT. This design approach is identified as a "separation" because it largely supersedes the antecedent methodology in which a page's markup defined both style and structure.
The philosophy underlying this methodology is a specific case of separation of concerns.
Benefits
Separation of style and content has advantages, but has only become practical after improvements in popular web browsers' CSS implementations.
Speed
Overall, a user's experience of a site that uses style sheets will generally be quicker than of a site that does not. "Overall" because the first page will probably load more slowly, since both the style sheet and the content need to be transferred. Subsequent pages will load faster because no style information needs to be downloaded; the CSS file will already be in the browser's cache.
Maintainability
Holding all the presentation styles in one file can reduce the maintenance time and reduces the chance of error, thereby improving presentation consistency. For example, the font color associated with a type of text element may be specified — and therefore easily modified — throughout an entire website simply by changing one short string of characters in a single file. The alternative approach, using styles embedded in each individual page, would require a cumbersome, time consuming, and error-prone edit of every file.
Accessibility
Sites that use CSS with either XHTML or HTML are easier to tweak so that they appear similar in different browsers (Chrome, Internet Explorer, Mozilla Firefox, Opera, Safari, etc.).
Sites using CSS "degrade gracefully" in browsers unable to display graphical content, |
https://en.wikipedia.org/wiki/Asymptotic%20gain%20model | The asymptotic gain model (also known as the Rosenstark method) is a representation of the gain of negative feedback amplifiers given by the asymptotic gain relation:
G = G∞ · T/(1 + T) + G0 · 1/(1 + T)
where T is the return ratio with the input source disabled (equal to the negative of the loop gain in the case of a single-loop system composed of unilateral blocks), G∞ is the asymptotic gain and G0 is the direct transmission term. This form for the gain can provide intuitive insight into the circuit and often is easier to derive than a direct attack on the gain.
Figure 1 shows a block diagram that leads to the asymptotic gain expression. The asymptotic gain relation also can be expressed as a signal flow graph. See Figure 2. The asymptotic gain model is a special case of the extra element theorem.
As follows directly from limiting cases of the gain expression, the asymptotic gain G∞ is simply the gain of the system when the return ratio approaches infinity:
G∞ = lim_{T→∞} G
while the direct transmission term G0 is the gain of the system when the return ratio is zero:
G0 = G|_{T=0}
Advantages
This model is useful because it completely characterizes feedback amplifiers, including loading effects and the bilateral properties of amplifiers and feedback networks.
Often feedback amplifiers are designed such that the return ratio T is much greater than unity. In this case, and assuming the direct transmission term G0 is small (as it often is), the gain G of the system is approximately equal to the asymptotic gain G∞.
The asymptotic gain is (usually) only a function of passive elements in a circuit, and can often be found by inspection.
The feedback topology (series-series, series-shunt, etc.) need not be identified beforehand as the analysis is the same in all cases.
Implementation
Direct application of the model involves these steps:
Select a dependent source in the circuit.
Find the return ratio for that source.
Find the gain G∞ directly from the circuit by replacing the circuit with one corresponding to T = ∞.
Find the ga |
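A small numeric sketch of the asymptotic gain relation quoted above, showing how the gain approaches G∞ as the return ratio grows. The component values are invented for illustration and are not taken from any particular amplifier.

```python
# Numeric sketch of the asymptotic gain relation quoted above,
#   G = G_inf * T/(1+T) + G_0 * 1/(1+T),
# showing that for large return ratio T the gain approaches G_inf.

def gain(T: float, G_inf: float, G_0: float) -> float:
    return G_inf * T / (1 + T) + G_0 / (1 + T)

if __name__ == "__main__":
    G_inf, G_0 = -10.0, 0.05        # e.g. an ideal inverting gain plus a small
                                    # direct feed-through term (illustrative)
    for T in (1.0, 10.0, 100.0, 1000.0):
        print(f"T = {T:7.1f}  ->  G = {gain(T, G_inf, G_0):8.4f}")
```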
https://en.wikipedia.org/wiki/Foundry%20Networks | Foundry Networks, Inc. was a networking hardware vendor selling high-end Ethernet switches and routers. The company was acquired by Brocade Communications Systems on December 18, 2008.
History
The company was founded in 1996 by Bobby R. Johnson, Jr. and was headquartered in Santa Clara, California, United States. In its first year the company operated under the names Perennium Networks and StarRidge Networks, but by January 1997 the name Foundry Networks was adopted. Foundry Networks had their initial public offering in 1999, during the Internet bubble, with the company reaching a valuation of $9 billion on its first day of trading on NASDAQ with the symbol FDRY.
Foundry Networks designed, manufactured and sold high-end enterprise and service provider switches and routers, as well as wireless, security, and traffic management solutions. It was best known for its Layer 2 & 3 Ethernet switches. Foundry Networks was the first company to build and ship a gigabit Ethernet switch in 1997; to build a Layer 3 switch, also in 1997; to build the first Layer 4-7 switch in 1998 and to include 10 Gigabit Ethernet single connectors in its boxes (since 2001).
Foundry Networks' early product lines consisted of the Workgroup, Backbone, and ServerIron products. The TurboIron all GigE switch and then router models were later introduced. Foundry Networks' later product lines consisted of the BigIron, EdgeIron, FastIron, IronPoint, NetIron, SecureIron, and ServerIron. After the early BigIron modular chassis, the Mucho Grande (MG) series chassis were introduced. Later came the RX series in 4, 8, 16, and 32 slot versions. The largest and final product, the XMR, was a full rack sized switch/router. Their software products included IronView and ServerIron TrafficWorks.
According to a Dell’Oro report published in 1Q2006, Foundry Networks ranked number 4 by market share in a total market of over US$3,659 million, and its ServerIron application switch ranked first for total port shipments.
Acquisition
O |
https://en.wikipedia.org/wiki/Lenovo | Lenovo Group Limited, often shortened to Lenovo ( , ), is a Chinese multinational technology company specializing in designing, manufacturing, and marketing consumer electronics, personal computers, software, business solutions, and related services. Products manufactured by the company include desktop computers, laptops, tablet computers, smartphones, workstations, servers, supercomputers, data storage devices, IT management software, and smart televisions. Its best-known brands include its ThinkPad business line of laptop computers (acquired from IBM), the IdeaPad, Yoga, and Legion consumer lines of laptop computers, and the IdeaCentre and ThinkCentre lines of desktop computers. As of 2021, Lenovo is the world's largest personal computer vendor by unit sales.
Lenovo has operations in over 60 countries and sells its products in around 180 countries. It was incorporated in Hong Kong, with global headquarters in Beijing and in Morrisville, North Carolina, United States, and operational centres in Singapore and Morrisville. It has research centres in Beijing, Chengdu, Yamato (Kanagawa Prefecture, Japan), Singapore, Shanghai, Shenzhen, and Morrisville, and also has Lenovo NEC Holdings, a joint venture with NEC that produces personal computers for the Japanese market.
History
1984–1993: Founding and early history
Lenovo was founded in Beijing on 1 November 1984 as Legend by a team of engineers led by Liu Chuanzhi and Danny Lui. Initially specializing in televisions, the company migrated towards manufacturing and marketing computers.
Liu Chuanzhi and his group of ten experienced engineers, teaming up with Danny Lui, officially founded Lenovo in Beijing on November 1, 1984, with 200,000 yuan. The Chinese government approved Lenovo's incorporation on the same day. Jia Xufu (贾续福), one of the founders of Lenovo, indicated that the first meeting in preparation for starting the company was held on October 17 the same year. Eleven people, the entirety of t |
https://en.wikipedia.org/wiki/Specific%20orbital%20energy | In the gravitational two-body problem, the specific orbital energy ε (or vis-viva energy) of two orbiting bodies is the constant sum of their mutual potential energy (ε_p) and their total kinetic energy (ε_k), divided by the reduced mass. According to the orbital energy conservation equation (also referred to as vis-viva equation), it does not vary with time:
ε = ε_k + ε_p = v²/2 − μ/r = −(μ²/(2h²))(1 − e²) = −μ/(2a)
where
v is the relative orbital speed;
r is the orbital distance between the bodies;
μ is the sum of the standard gravitational parameters of the bodies;
h is the specific relative angular momentum in the sense of relative angular momentum divided by the reduced mass;
e is the orbital eccentricity;
a is the semi-major axis.
It is typically expressed in MJ/kg (megajoule per kilogram) or km²/s² (squared kilometer per squared second). For an elliptic orbit the specific orbital energy is the negative of the additional energy required to accelerate a mass of one kilogram to escape velocity (parabolic orbit). For a hyperbolic orbit, it is equal to the excess energy compared to that of a parabolic orbit. In this case the specific orbital energy is also referred to as characteristic energy.
Equation forms for different orbits
For an elliptic orbit, the specific orbital energy equation, when combined with conservation of specific angular momentum at one of the orbit's apsides, simplifies to:
ε = −μ/(2a)
where
μ is the standard gravitational parameter;
a is the semi-major axis of the orbit.
For a parabolic orbit this equation simplifies to
ε = 0.
For a hyperbolic trajectory this specific orbital energy is either given by
ε = μ/(2a)
or the same as for an ellipse, depending on the convention for the sign of a.
In this case the specific orbital energy is also referred to as characteristic energy (or C3) and is equal to the excess specific energy compared to that for a parabolic orbit.
It is related to the hyperbolic excess velocity (the orbital velocity at infinity) by
2ε = C3 = v∞²
It is relevant for interplanetary missions.
Thus, if orbital position vector () and orbital velocity vec |
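A short sketch of the computation the truncated sentence above begins to describe: given position and velocity vectors, evaluate ε = v²/2 − μ/r and recover the semi-major axis from a = −μ/(2ε). The orbit used is an illustrative near-circular low Earth orbit.

```python
import math

MU_EARTH = 3.986004418e14      # Earth's standard gravitational parameter, m^3/s^2

def specific_orbital_energy(r_vec, v_vec, mu=MU_EARTH) -> float:
    """epsilon = v^2/2 - mu/r, from position and velocity vectors (metres, m/s)."""
    r = math.sqrt(sum(c * c for c in r_vec))
    v2 = sum(c * c for c in v_vec)
    return v2 / 2.0 - mu / r

if __name__ == "__main__":
    # Illustrative near-circular low Earth orbit at ~6778 km radius.
    r_vec = (6.778e6, 0.0, 0.0)
    v_vec = (0.0, 7668.0, 0.0)
    eps = specific_orbital_energy(r_vec, v_vec)
    a = -MU_EARTH / (2.0 * eps)          # semi-major axis for an elliptic orbit
    print(f"epsilon = {eps:.3e} J/kg, a = {a / 1000.0:.1f} km")
```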
https://en.wikipedia.org/wiki/FTP%20bounce%20attack | FTP bounce attack is an exploit of the FTP protocol whereby an attacker is able to use the PORT command to request access to ports indirectly through the use of the victim machine, which serves as a proxy for the request, similar to an Open mail relay using SMTP.
This technique can be used to port scan hosts discreetly, and to potentially bypass a network's Access-control list to access specific ports that the attacker cannot access through a direct connection, for example with the nmap port scanner.
Nearly all modern FTP server programs are configured by default to refuse commands that would connect to any host but the originating host, thwarting FTP bounce attacks.
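For context, the PORT command encodes the target address as six decimal bytes, four for the IP address and two for the port (port = p1·256 + p2); in a bounce attack the attacker supplies a third party's address here instead of its own. A minimal sketch of that encoding follows; it only formats and parses the string and does not open any connection.

```python
# Minimal sketch of how the FTP PORT argument encodes an address (RFC 959):
# six decimal bytes h1,h2,h3,h4,p1,p2 where port = p1*256 + p2. In a bounce
# attack the attacker supplies a third party's address here instead of its own.

def port_argument(ip: str, port: int) -> str:
    octets = ip.split(".")
    return ",".join(octets + [str(port // 256), str(port % 256)])

def parse_port_argument(arg: str) -> tuple[str, int]:
    parts = arg.split(",")
    return ".".join(parts[:4]), int(parts[4]) * 256 + int(parts[5])

if __name__ == "__main__":
    arg = port_argument("192.0.2.10", 25)     # documentation address, SMTP port
    print("PORT " + arg)                      # PORT 192,0,2,10,0,25
    print(parse_port_argument(arg))           # ('192.0.2.10', 25)
```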
See also
Confused deputy problem
References
External links
CERT Advisory on FTP Bounce Attack
CERT Article on FTP Bounce Attack
Original posting describing the attack
File Transfer Protocol
Computer network security |
https://en.wikipedia.org/wiki/Feller%20buncher | A feller buncher is a type of harvester used in logging. It is a motorized vehicle with an attachment that can rapidly gather and cut a tree before felling it.
Feller is a traditional name for someone who cuts down trees, and bunching is the skidding and assembly of two or more trees. A feller buncher performs both of these harvesting functions and consists of a standard heavy equipment base with a tree-grabbing device furnished with a chain-saw, circular saw or a shear—a pinching device designed to cut small trees off at the base. The machine then places the cut tree on a stack suitable for a skidder, forwarder, or yarder for transport to further processing such as delimbing, bucking, loading, or chipping.
Some wheeled feller bunchers lack an articulated arm, and must drive close to a tree to grasp it.
In cut-to-length logging a harvester performs the tasks of a feller buncher and additionally does delimbing and bucking.
Components and Felling attachment
A feller buncher is either tracked or wheeled, has a self-levelling cabin, and can be matched with different felling heads. For steep terrain, tracked feller bunchers are used because they provide a high level of traction on steep slopes and a high level of stability. On flat terrain, wheeled feller bunchers are more efficient than tracked ones. Levelling cabins are commonly fitted to both wheeled and tracked feller bunchers for steep terrain, as they improve operator comfort and help maintain felling productivity. The size and type of the trees determine which type of felling head is used.
Types of felling heads
Disc Saw Head – It cuts at high speed as the head is pushed against the tree; the clamp arms then hold the tree as the cut is completed. It is able to cut and gather multiple trees in the felling head. A disc saw head combined with good ground speed provides high production, which allows it to keep more than one skidde |
https://en.wikipedia.org/wiki/Administrative%20share | Administrative shares are hidden network shares created by the Windows NT family of operating systems that allow system administrators to have remote access to every disk volume on a network-connected system. These shares may not be permanently deleted but may be disabled. Administrative shares cannot be accessed by users without administrative privileges.
Share names
Administrative shares are a collection of automatically shared resources including the following:
Disk volumes: Every disk volume on the system is shared as an administrative share. The name of these shares consists of the drive letters of shared volume plus a dollar sign ($). For example, a system that has volumes C, D and E has three administrative shares named C$, D$ and E$. (NetBIOS is not case sensitive.)
OS folder: The folder in which Windows is installed is shared as admin$
Fax cache: The folder in which faxed pages and cover pages are cached is shared as fax$
IPC shares: This area, which is used for inter-process communication via named pipes and is not part of the file system, is shared as ipc$
Printers folder: This virtual folder, which contains objects that represent installed printers is shared as print$
Domain controller shares: The Windows Server family of operating systems creates two domain controller-specific shares called sysvol and netlogon which do not have dollar signs ($) appended to their names.
Characteristics
Administrative shares have the following characteristics:
Hidden: The "$" appended to the end of the share name means that it is a hidden share. Windows will not list such shares among those it defines in typical queries by remote clients to obtain the list of shares. One needs to know the name of an administrative share in order to access it. Not every hidden share is an administrative share; in other words, ordinary hidden shares may be created at user's discretion.
Automatically created: Administrative shares are created by Windows, not a network administrator. |
https://en.wikipedia.org/wiki/Node%20%28networking%29 | In telecommunications networks, a node (from Latin nodus, ‘knot’) is either a redistribution point or a communication endpoint. The definition of a node depends on the network and protocol layer referred to. A physical network node is an electronic device that is attached to a network, and is capable of creating, receiving, or transmitting information over a communication channel. A passive distribution point such as a distribution frame or patch panel is consequently not a node.
Computer networks
In data communication, a physical network node may either be data communication equipment (DCE) such as a modem, hub, bridge or switch; or data terminal equipment (DTE) such as a digital telephone handset, a printer or a host computer.
If the network in question is a local area network (LAN) or wide area network (WAN), every LAN or WAN node that participates on the data link layer must have a network address, typically one for each network interface controller it possesses. Examples are computers, a DSL modem with Ethernet interface and wireless access point. Equipment, such as an Ethernet hub or modem with serial interface, that operates only below the data link layer does not require a network address.
If the network in question is the Internet or an intranet, many physical network nodes are host computers, also known as Internet nodes, identified by an IP address, and all hosts are physical network nodes. However, some data-link-layer devices such as switches, bridges and wireless access points do not have an IP host address (except sometimes for administrative purposes), and are not considered to be Internet nodes or hosts, but are considered physical network nodes and LAN nodes.
Telecommunications
In the fixed telephone network, a node may be a public or private telephone exchange, a remote concentrator or a computer providing some intelligent network service. In cellular communication, switching points and databases such as the base station controller, home location registe |
https://en.wikipedia.org/wiki/Atom%20%28order%20theory%29 | In the mathematical field of order theory, an element a of a partially ordered set with least element 0 is an atom if 0 < a and there is no x such that 0 < x < a.
Equivalently, one may define an atom to be an element that is minimal among the non-zero elements, or alternatively an element that covers the least element 0.
Atomic orderings
Let <: denote the covering relation in a partially ordered set.
A partially ordered set with a least element 0 is atomic if every element b > 0 has an atom a below it, that is, there is some a such that b ≥ a :> 0. Every finite partially ordered set with 0 is atomic, but the set of nonnegative real numbers (ordered in the usual way) is not atomic (and in fact has no atoms).
A partially ordered set is relatively atomic (or strongly atomic) if for all a < b there is an element c such that a <: c ≤ b or, equivalently, if every interval [a, b] is atomic. Every relatively atomic partially ordered set with a least element is atomic. Every finite poset is relatively atomic.
A partially ordered set with least element 0 is called atomistic (not to be confused with atomic) if every element is the least upper bound of a set of atoms. The linear order with three elements is not atomistic (see Fig. 2).
Atoms in partially ordered sets are abstract generalizations of singletons in set theory (see Fig. 1). Atomicity (the property of being atomic) provides an abstract generalization in the context of order theory of the ability to select an element from a non-empty set.
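A small sketch that computes atoms directly from the definition above (0 < a with no x such that 0 < x < a), using the powerset of {1, 2, 3} ordered by inclusion, whose atoms are exactly the singletons mentioned above. The poset representation and function names are illustrative.

```python
from itertools import combinations

def powerset(universe):
    s = list(universe)
    return [frozenset(c) for k in range(len(s) + 1) for c in combinations(s, k)]

def atoms(elements, leq):
    """Return the elements that cover the least element of the finite poset."""
    bottom = next(e for e in elements if all(leq(e, f) for f in elements))
    def strictly_between(x, a):
        return x != bottom and x != a and leq(bottom, x) and leq(x, a)
    return [a for a in elements
            if a != bottom and leq(bottom, a)
            and not any(strictly_between(x, a) for x in elements)]

if __name__ == "__main__":
    elems = powerset({1, 2, 3})
    found = atoms(elems, lambda x, y: x <= y)   # <= is the subset order on frozensets
    print(sorted(sorted(a) for a in found))     # [[1], [2], [3]]
```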
Coatoms
The terms coatom, coatomic, and coatomistic are defined dually. Thus, in a partially ordered set with greatest element 1, one says that
a coatom is an element covered by 1,
the set is coatomic if every b < 1 has a coatom c above it, and
the set is coatomistic if every element is the greatest lower bound of a set of coatoms.
References
External links
Order theory |
https://en.wikipedia.org/wiki/Wikispecies | Wikispecies is a wiki-based online project supported by the Wikimedia Foundation. Its aim is to create a comprehensive open content catalogue of all species; the project is directed at scientists, rather than at the general public. Jimmy Wales stated that editors are not required to fax in their degrees, but that submissions will have to pass muster with a technical audience. Wikispecies is available under the GNU Free Documentation License and CC BY-SA 3.0.
Started in September 2004, with biologists around the world invited to contribute, the project had grown to a framework encompassing the Linnaean taxonomy with links to Wikipedia articles on individual species by April 2005.
History
Benedikt Mandl coordinated the efforts of several people who were interested in getting involved with the project and contacted potential supporters in the early summer of 2004. Databases were evaluated and the administrators contacted; some of them have agreed on providing their data for Wikispecies. Mandl defined two major tasks:
Figure out how the contents of the data base would need to be presented—by asking experts, potential non-professional users and comparing that with existing databases
Figure out how to do the software, which hardware is required and how to cover the costs—by asking experts, looking for fellow volunteers and potential sponsors
Advantages and disadvantages were widely discussed on the wikimedia-l mailing list. The board of directors of the Wikimedia Foundation voted by 4 to 0 in favor of the establishment of Wikispecies. The project was launched in August 2004 and is hosted at species.wikimedia.org. It officially became a sister project of the Wikimedia Foundation on September 14, 2004.
On October 10, 2006, the project exceeded 75,000 articles.
On May 20, 2007, the project exceeded 100,000 articles.
On September 8, 2008, the project exceeded 150,000 articles.
On October 23, 2011, the project reached 300,000 articles.
On June 16, 2014, the |
https://en.wikipedia.org/wiki/Data%20hierarchy | Data hierarchy refers to the systematic organization of data, often in a hierarchical form. Data organization involves characters, fields, records, files and so on. This concept is a starting point when trying to see what makes up data and whether data has a structure. For example, how does a person make sense of data such as 'employee', 'name', 'department', 'Marcy Smith', 'Sales Department' and so on, assuming that they are all related? One way to understand them is to see these terms as smaller or larger components in a hierarchy. One might say that Marcy Smith is one of the employees in the Sales Department, or an example of an employee in that Department. The data we want to capture about all our employees, and not just Marcy, is the name, ID number, address etc.
Purpose of the data hierarchy
"Data hierarchy" is a basic concept in data and database theory and helps to show the relationships between smaller and larger components in a database or data file. It is used to give a better sense of understanding about the components of data and how they are related.
It is particularly important in databases with referential integrity, third normal form, or perfect key. "Data hierarchy" is the result of proper arrangement of data without redundancy. Avoiding redundancy eventually leads to proper "data hierarchy" representing the relationship between data, and revealing its relational structure.
Components of the data hierarchy
The components of the data hierarchy are listed below.
A data field holds a single fact or attribute of an entity. Consider a date field, e.g. "19 September 2004". This can be treated as a single date field (e.g. birthdate), or three fields, namely, day of month, month and year.
A record is a collection of related fields. An Employee record may contain a name field(s), address fields, birthdate field and so on.
A file is a collection of related records. If there are 100 employees, then each employee would have a record (e.g. called Emp |
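A minimal sketch of the field, record, and file levels just described, with illustrative names echoing the examples above (the class and field names are hypothetical).

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal sketch of the field -> record -> file hierarchy described above.
# Class, field, and employee names are illustrative.

@dataclass
class EmployeeRecord:            # a record: a collection of related fields
    employee_id: int             # each attribute is a single field
    name: str
    department: str
    birthdate: date

@dataclass
class EmployeeFile:              # a file: a collection of related records
    records: list[EmployeeRecord] = field(default_factory=list)

if __name__ == "__main__":
    emp_file = EmployeeFile()
    emp_file.records.append(
        EmployeeRecord(1001, "Marcy Smith", "Sales Department", date(2004, 9, 19)))
    print(len(emp_file.records), emp_file.records[0].name)
```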
https://en.wikipedia.org/wiki/Kinetochore | A kinetochore is a disc-shaped protein structure associated with duplicated chromatids in eukaryotic cells where the spindle fibers attach during cell division to pull sister chromatids apart. The kinetochore assembles on the centromere and links the chromosome to microtubule polymers from the mitotic spindle during mitosis and meiosis. The term kinetochore was first used in a footnote in a 1934 Cytology book by Lester W. Sharp and commonly accepted in 1936. Sharp's footnote reads: "The convenient term kinetochore (= movement place) has been suggested to the author by J. A. Moore", likely referring to John Alexander Moore who had joined Columbia University as a freshman in 1932.
Monocentric organisms, including vertebrates, fungi, and most plants, have a single centromeric region on each chromosome which assembles a single, localized kinetochore. Holocentric organisms, such as nematodes and some plants, assemble a kinetochore along the entire length of a chromosome.
Kinetochores start, control, and supervise the striking movements of chromosomes during cell division. During mitosis, which occurs after the amount of DNA is doubled in each chromosome (while maintaining the same number of chromosomes) in S phase, two sister chromatids are held together by a centromere. Each chromatid has its own kinetochore, which face in opposite directions and attach to opposite poles of the mitotic spindle apparatus. Following the transition from metaphase to anaphase, the sister chromatids separate from each other, and the individual kinetochores on each chromatid drive their movement to the spindle poles that will define the two new daughter cells. The kinetochore is therefore essential for the chromosome segregation that is classically associated with mitosis and meiosis.
Structure of Kinetochore
The kinetochore contains two regions:
an inner kinetochore, which is tightly associated with the centromere DNA and assembled in a specialized form of chromatin that persists t |
https://en.wikipedia.org/wiki/BlueJ | BlueJ is an integrated development environment (IDE) for the Java programming language, developed mainly for educational purposes, but also suitable for small-scale software development. It runs with the help of Java Development Kit (JDK).
BlueJ was developed to support the learning and teaching of object-oriented programming, and its design differs from other development environments as a result. The main screen graphically shows the class structure of an application under development (in a UML-like diagram), and objects can be interactively created and tested. This interaction facility, combined with a clean, simple user interface, allows easy experimentation with objects under development. Object-oriented concepts (classes, objects, communication through method calls) are represented visually and in its interaction design in the interface.
History
The development of BlueJ was started in 1999 by Michael Kölling and John Rosenberg at Monash University, as a successor to the Blue system. BlueJ is an IDE (Integrated Development Environment). Blue was an integrated system with its own programming language and environment, and was a relative of the Eiffel language. BlueJ implements the Blue environment design for the Java programming language.
In March 2009, the BlueJ project became free and open source software, and licensed under GPL-2.0-or-later with the Classpath exception.
BlueJ is currently being maintained by a team at King's College London, England, where Kölling works.
Supported language
BlueJ supports programming in Java and in Stride. Java support has been provided in BlueJ since its inception, while Stride support was added in 2017.
See also
Greenfoot
DrJava
Educational programming language
References
Bibliography
External links
BlueJ textbook
Integrated development environments
Free integrated development environments
Cross-platform free software
Free software programmed in Java (programming language)
Java development tools
Java pl |
https://en.wikipedia.org/wiki/ASCII%20Corporation | ASCII Corporation was a Japanese publishing company based in Chiyoda, Tokyo. It became a subsidiary of Kadokawa Group Holdings in 2004, and merged with another Kadokawa subsidiary MediaWorks on April 1, 2008, becoming ASCII Media Works. The company published Monthly ASCII as the main publication. ASCII is best known for creating the Derby Stallion video game series, the MSX computer, and the RPG Maker line of programming software.
History
1977–1990: Founding and first projects
ASCII was founded in 1977 by Kazuhiko Nishi and Keiichiro Tsukamoto, originally as the publisher of a magazine with the same name, ASCII. Talks between Bill Gates and Nishi led to the creation of Microsoft's first overseas sales office, ASCII Microsoft, in 1978. In 1980, ASCII made 1.2 billion yen of sales from licensing Microsoft BASIC. It was 40 percent of Microsoft's sales, and Nishi became Microsoft's Vice President of Sales for the Far East. In 1983, ASCII and Microsoft introduced the MSX, a standardized specification for 8-bit home computers. In 1984, ASCII entered the semiconductor business, followed by a further expansion into commercial online service in 1985 under the brand of ASCII-NET. As the popularity of home video game systems soared in the 1980s, ASCII became active in the development and publishing of software and peripherals for popular consoles such as the Family Computer and Mega Drive. After Microsoft's public stock offering in 1986, Microsoft founded its own Japanese subsidiary, Microsoft Kabushiki Kaisha (MSKK), and dissolved its partnership with ASCII. At around the same time, the company was also obliged to reform itself as a result of its aggressive diversification in the first half of the 1980s. The company went public in 1989.
1989–2000: Satellites and later projects
ASCII's revenue in its fiscal year ending March 1996 was 56 billion yen, broken down by sectors: publications (52.5% or 27.0 billion yen), game entertainment (27.8% or 14.3 billion yen), systems and semiconductors (10.8% or |
https://en.wikipedia.org/wiki/Centre%20for%20Applied%20Cryptographic%20Research | The Centre for Applied Cryptographic Research (CACR) is a group of industrial representatives, professors, and students at the University of Waterloo in Waterloo, Ontario, Canada who work and do research in the field of cryptography.
The CACR aims to facilitate leading-edge cryptographic research, to educate students at postgraduate levels, to host conferences and research visits, and to partner with various industries. It was officially opened on June 19, 1998.
The CACR involves students and professors from four departments at the school: Combinatorics & Optimization, Computer Science, Electrical and Computer Engineering, and Pure Math. It does not have a physical location, but utilizes resources from all the aforementioned departments.
The CACR plays a part in many conferences and workshops, including the following:
CACR Information Security Workshop
Privacy and Security Workshop
Workshop on Elliptic Curve Cryptography (ECC)
Workshop on Selected Areas in Cryptography (SAC)
The CACR includes the following notable faculty:
Scott Vanstone, professor, co-author of the Handbook of Applied Cryptography, founder of Certicom
Alfred Menezes, professor, co-author of the Handbook of Applied Cryptography
Neal Koblitz, adjunct professor, creator of elliptic curve cryptography and hyperelliptic curve cryptography
Doug Stinson, professor, author of Cryptography: Theory and Practice
Ian Goldberg, assistant professor, creator of Off-the-Record Messaging
External links
Centre for Applied Cryptographic Research homepage
University of Waterloo
Cryptography organizations
1998 establishments in Ontario |
https://en.wikipedia.org/wiki/Splenocyte | A splenocyte can be any one of the different white blood cell types as long as it is situated in the spleen or purified from splenic tissue.
Splenocytes consist of a variety of cell populations such as T and B lymphocytes, dendritic cells and macrophages, which have different immune functions.
References
Spleen (anatomy)
Mononuclear phagocytes
Leukocytes
Cell biology |
https://en.wikipedia.org/wiki/Computer%20security%20policy | A computer security policy defines the goals and elements of an organization's computer systems. The definition can be highly formal or informal. Security policies are enforced by organizational policies or security mechanisms. A technical implementation defines whether a computer system is secure or insecure. Formal policy models can be categorized into the core security principles of Confidentiality, Integrity, and Availability. For example, the Bell-La Padula model is a confidentiality policy model, whereas the Biba model is an integrity policy model.
Formal description
If a system is regarded as a finite-state automaton with a set of transitions (operations) that change the system's state, then a security policy can be seen as a statement that partitions these states into authorized and unauthorized ones.
Given this simple definition, one can define a secure system as one that starts in an authorized state and will never enter an unauthorized state.
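As a minimal sketch of this definition, the check below models a toy system as a finite-state automaton and searches every reachable state; all state and transition names here are illustrative assumptions, not part of any standard.

# Toy model of the definition above: a system is secure if no state
# reachable from its start state is unauthorized. All names illustrative.
AUTHORIZED = {"logged_out", "logged_in"}
TRANSITIONS = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"logout": "logged_out", "escalate": "root_shell"},
    "root_shell": {},
}

def is_secure(start):
    """Depth-first search: every reachable state must be authorized."""
    seen, stack = {start}, [start]
    while stack:
        state = stack.pop()
        if state not in AUTHORIZED:
            return False  # an unauthorized state is reachable
        for nxt in TRANSITIONS.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return True

print(is_secure("logged_out"))  # False: "root_shell" is reachable but unauthorized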
Formal policy models
Confidentiality policy model
Bell-La Padula model
Integrity policies model
Biba model
Clark-Wilson model
Hybrid policy model
Chinese Wall (Also known as Brewer and Nash model)
Policy languages
To represent a concrete policy, especially for automated enforcement, a language representation is needed. Many application-specific policy languages exist that are closely coupled with the security mechanisms enforcing the policy in that application.
In contrast, abstract policy languages, e.g. the Domain Type Enforcement Language, are independent of the concrete mechanism.
See also
Anti-virus
Information Assurance - CIA Triad
Firewall (computing)
Protection mechanisms
Separation of protection and security
ITU Global Cybersecurity Agenda
References
Clark, D.D. and Wilson, D.R., 1987, April. A comparison of commercial and military computer security policies. In 1987 IEEE Symposium on Security and Privacy (pp. 184–194). IEEE.
Computer security procedures
|
https://en.wikipedia.org/wiki/PLATO%20%28computer%20system%29 | PLATO (Programmed Logic for Automatic Teaching Operations), also known as Project Plato and Project PLATO, was the first generalized computer-assisted instruction system. Starting in 1960, it ran on the University of Illinois' ILLIAC I computer. By the late 1970s, it supported several thousand graphics terminals distributed worldwide, running on nearly a dozen different networked mainframe computers. Many modern concepts in multi-user computing were first developed on PLATO, including forums, message boards, online testing, email, chat rooms, picture languages, instant messaging, remote screen sharing, and multiplayer video games.
PLATO was designed and built by the University of Illinois and functioned for four decades, offering coursework (elementary through university) to UIUC students, local schools, prison inmates, and other universities. Courses were taught in a range of subjects, including Latin, chemistry, education, music, Esperanto, and primary mathematics. The system included a number of features useful for pedagogy, including text overlaying graphics, contextual assessment of free-text answers, depending on the inclusion of keywords, and feedback designed to respond to alternative answers.
Rights to market PLATO as a commercial product were licensed by Control Data Corporation (CDC), the manufacturer on whose mainframe computers the PLATO IV system was built. CDC President William Norris planned to make PLATO a force in the computer world, but found that marketing the system was not as easy as hoped. PLATO nevertheless built a strong following in certain markets, and the last production PLATO system was in use until 2006.
Innovations
PLATO was either the first instance of or an early example of many now-common technologies:
Hardware
Plasma display panel (Donald Bitzer)
Touch panel (Donald Bitzer)
Display Graphics
Character set memory storing user-defined characters in downloadable fonts.
Online communities
Notesfiles (precursor to newsgroups), 1973.
Term-talk (1:1 chat)
Screen software sharing, used by instructors t |
https://en.wikipedia.org/wiki/NesC | nesC (pronounced "NES-see") is a component-based, event-driven programming language used to build applications for the TinyOS platform. TinyOS is an operating environment designed to run on embedded devices used in distributed wireless sensor networks. nesC is built as an extension to the C programming language with components "wired" together to run applications on TinyOS. The name nesC is an abbreviation of "network embedded systems C".
Components and interfaces
nesC programs are built out of components, which are assembled ("wired") to form whole programs. Components have internal concurrency in the form of tasks. Threads of control may pass into a component through its interfaces. These threads are rooted either in a task or a hardware interrupt.
Interfaces may be provided or used by components. The provided interfaces are intended to represent the functionality that the component provides to its user; the used interfaces represent the functionality the component needs to perform its job.
In nesC, interfaces are bidirectional: They specify a set of functions to be implemented by the interface's provider (commands) and a set to be implemented by the interface's user (events). This allows a single interface to represent a complex interaction between components (e.g., registration of interest in some event, followed by a callback when that event happens). This is critical because all lengthy commands in TinyOS (e.g. send packet) are non-blocking; their completion is signaled through an event (send done). By specifying interfaces, a component cannot call the send command unless it provides an implementation of the sendDone event. Typically commands call downwards, i.e., from application components to those closer to the hardware, while events call upwards. Certain primitive events are bound to hardware interrupts.
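nesC itself wires such interfaces declaratively, so the sketch below is only a Python analogue of the split-phase control flow described above (a non-blocking send command answered later by a sendDone event); every name in it is invented for the example.

# Python analogue of nesC's split-phase command/event pattern:
# the command returns immediately, completion arrives as an event.
import threading

class PacketSender:
    """Provider side: offers a non-blocking send command and signals
    a send-done event to whatever user component is wired to it."""
    def __init__(self, user):
        self.user = user              # the component using this interface

    def send(self, packet):           # "command": implemented by the provider
        # Return immediately; finish the work asynchronously.
        threading.Timer(0.01, self._complete, args=(packet,)).start()

    def _complete(self, packet):
        self.user.send_done(packet, error=None)  # "event": implemented by the user

class App:
    """User side: must implement send_done to be allowed to call send."""
    def __init__(self):
        self.radio = PacketSender(self)

    def run(self):
        self.radio.send(b"hello")     # does not block

    def send_done(self, packet, error):
        print("send completed:", packet, error)

App().run()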
Components are statically linked to each other via their interfaces. This increases runtime efficiency, encourages robust design, and allows for be |
https://en.wikipedia.org/wiki/Cognitive%20radio | A cognitive radio (CR) is a radio that can be programmed and configured dynamically to use the best wireless channels in its vicinity to avoid user interference and congestion. Such a radio automatically detects available channels in wireless spectrum, then accordingly changes its transmission or reception parameters to allow more concurrent wireless communications in a given spectrum band at one location. This process is a form of dynamic spectrum management.
Description
In response to the operator's commands, the cognitive engine is capable of configuring radio-system parameters. These parameters include "waveform, protocol, operating frequency, and networking". This functions as an autonomous unit in the communications environment, exchanging information about the environment with the networks it accesses and other cognitive radios (CRs). A CR "monitors its own performance continuously", in addition to "reading the radio's outputs"; it then uses this information to "determine the RF environment, channel conditions, link performance, etc.", and adjusts the "radio's settings to deliver the required quality of service subject to an appropriate combination of user requirements, operational limitations, and regulatory constraints".
Some "smart radio" proposals combine wireless mesh network—dynamically changing the path messages take between two given nodes using cooperative diversity; cognitive radio—dynamically changing the frequency band used by messages between two consecutive nodes on the path; and software-defined radio—dynamically changing the protocol used by message between two consecutive nodes.
History
The concept of cognitive radio was first proposed by Joseph Mitola III in a seminar at KTH Royal Institute of Technology in Stockholm in 1998 and published in an article by Mitola and Gerald Q. Maguire, Jr. in 1999. It was a novel approach in wireless communications, which Mitola later described as:
The point in which wireless personal digital assistants ( |
https://en.wikipedia.org/wiki/Quad%20Data%20Rate%20SRAM | Quad Data Rate (QDR) SRAM is a type of static RAM computer memory that can transfer up to four words of data in each clock cycle. Like Double Data-Rate (DDR) SDRAM, QDR SRAM transfers data on both rising and falling edges of the clock signal. The main purpose of this capability is to enable reads and writes to occur at high clock frequencies without the loss of bandwidth due to bus-turnaround cycles incurred in DDR SRAM. QDR SRAM uses two clocks, one for read data and one for write data and has separate read and write data buses (also known as Separate I/O), whereas DDR SRAM uses a single clock and has a single common data bus used for both reads and writes (also known as Common I/O). This helps to eliminate problems caused by the propagation delay of the clock wiring, and allows the illusion of concurrent reads and writes (as seen on the bus, although internally the memory still has a conventional single port - operations are pipelined but sequential).
When all data I/O signals are accounted for, QDR SRAM is not 2x faster than DDR SRAM but is 100% efficient when reads and writes are interleaved. In contrast, DDR SRAM is most efficient when only one request type is continually repeated, e.g. only read cycles. When write cycles are interleaved with read cycles, one or more cycles are lost for bus turnaround to avoid data contention, which reduces bus efficiency. Most SRAM manufacturers constructed QDR and DDR SRAM using the same physical silicon, differentiated by a post-manufacturing selection (e.g. blowing a fuse on chip).
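A back-of-the-envelope model of that efficiency difference, assuming one bus-turnaround cycle per read/write switch and strictly alternating traffic (both assumptions are illustrative; real parts differ):

# Bus-efficiency model for the interleaved read/write case above.
def ddr_efficiency(transfers, turnaround=1):
    # Common-I/O DDR SRAM: every read<->write switch costs turnaround cycles.
    useful = transfers
    wasted = (transfers - 1) * turnaround  # alternating R/W: switch every transfer
    return useful / (useful + wasted)

def qdr_efficiency(transfers):
    # Separate-I/O QDR SRAM: reads and writes use separate buses, no turnaround.
    return 1.0

print(f"DDR, alternating R/W: {ddr_efficiency(100):.0%}")  # ~50%
print(f"QDR, alternating R/W: {qdr_efficiency(100):.0%}")  # 100%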
QDR SRAM was designed for high-speed communications and networking applications, where data throughput is more important than cost, power efficiency or density. The technology was created by Micron and Cypress, later followed by IDT, then NEC, Samsung and Renesas. Quad Data Rate II+ Memory is currently being designed by Cypress Semiconductor for Radiation Hardened Environments.
I/O
Clock inputs
4 clock lines:
Input clock:
K
not-K, or /K
Ou |
https://en.wikipedia.org/wiki/Irreducibility%20%28mathematics%29 | In mathematics, the concept of irreducibility is used in several ways.
A polynomial over a field is an irreducible polynomial if it cannot be factored over that field into polynomials of smaller degree.
In abstract algebra, irreducible can be an abbreviation for irreducible element of an integral domain; for example an irreducible polynomial.
In representation theory, an irreducible representation is a nontrivial representation with no nontrivial proper subrepresentations. Similarly, an irreducible module is another name for a simple module.
Absolutely irreducible is a term applied to mean irreducible, even after any finite extension of the field of coefficients. It applies in various situations, for example to irreducibility of a linear representation, or of an algebraic variety; where it means just the same as irreducible over an algebraic closure.
In commutative algebra, a commutative ring R is irreducible if its prime spectrum, that is, the topological space Spec R, is an irreducible topological space.
A matrix is irreducible if it is not similar via a permutation to a block upper triangular matrix (that has more than one block of positive size). (Replacing non-zero entries in the matrix by one, and viewing the matrix as the adjacency matrix of a directed graph, the matrix is irreducible if and only if such directed graph is strongly connected.)
Also, a Markov chain is irreducible if there is a non-zero probability of transitioning (even if in more than one step) from any state to any other state.
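A small sketch of the directed-graph criterion above, applicable equally to a non-negative matrix or to a Markov chain's transition matrix (the two-state examples are illustrative):

# Check matrix irreducibility via the directed-graph criterion:
# replace non-zero entries by edges and test strong connectivity.
def is_irreducible(A):
    n = len(A)

    def reachable(start, transpose=False):
        seen, stack = {start}, [start]
        while stack:
            i = stack.pop()
            for j in range(n):
                edge = A[j][i] if transpose else A[i][j]
                if edge != 0 and j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen

    # Strongly connected <=> every node is reachable from node 0,
    # and node 0 is reachable from every node (the transposed graph).
    full = set(range(n))
    return reachable(0) == full and reachable(0, transpose=True) == full

print(is_irreducible([[0, 1], [1, 0]]))  # True: 0 <-> 1
print(is_irreducible([[1, 1], [0, 1]]))  # False: no path from 1 to 0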
In the theory of manifolds, an n-manifold is irreducible if any embedded (n − 1)-sphere bounds an embedded n-ball. Implicit in this definition is the use of a suitable category, such as the category of differentiable manifolds or the category of piecewise-linear manifolds. The notions of irreducibility in algebra and manifold theory are related. An n-manifold is called prime, if it cannot be written as a connected sum of two n-manifolds (neither of |
https://en.wikipedia.org/wiki/Class%20function | In mathematics, especially in the fields of group theory and representation theory of groups, a class function is a function on a group G that is constant on the conjugacy classes of G. In other words, it is invariant under the conjugation map on G. Such functions play a basic role in representation theory.
Characters
The character of a linear representation of G over a field K is always a class function with values in K. The class functions form the center of the group ring K[G]. Here a class function f is identified with the element $\sum_{g \in G} f(g)\, g$ of $K[G]$.
Inner products
The set of class functions of a group G with values in a field K form a K-vector space. If G is finite and the characteristic of the field does not divide the order of G, then there is an inner product defined on this space by
$$\langle \phi, \psi \rangle = \frac{1}{|G|} \sum_{g \in G} \phi(g) \overline{\psi(g)},$$
where |G| denotes the order of G and the bar is conjugation in the field K. The set of irreducible characters of G forms an orthogonal basis, and if K is a splitting field for G, for instance if K is algebraically closed, then the irreducible characters form an orthonormal basis.
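As a small worked instance of this inner product (a standard example, spelled out here for concreteness): for the cyclic group of order 2 over K = C,

% G = Z/2Z = {e, g}, |G| = 2, with irreducible characters
%   \chi_0(e) = \chi_0(g) = 1  (trivial), and
%   \chi_1(e) = 1, \chi_1(g) = -1  (sign):
\langle \chi_0, \chi_1 \rangle = \tfrac{1}{2}\bigl(1 \cdot \overline{1} + 1 \cdot \overline{(-1)}\bigr) = 0,
\qquad
\langle \chi_1, \chi_1 \rangle = \tfrac{1}{2}\bigl(1 \cdot \overline{1} + (-1) \cdot \overline{(-1)}\bigr) = 1,

so the two irreducible characters are orthonormal, as stated above.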
In the case of a compact group and K = C the field of complex numbers, the notion of Haar measure allows one to replace the finite sum above with an integral: $\langle \phi, \psi \rangle = \int_G \phi(t) \overline{\psi(t)}\, dt$.
When K is the real numbers or the complex numbers, the inner product is a non-degenerate Hermitian bilinear form.
See also
Brauer's theorem on induced characters
References
Jean-Pierre Serre, Linear representations of finite groups, Graduate Texts in Mathematics 42, Springer-Verlag, Berlin, 1977.
Group theory |
https://en.wikipedia.org/wiki/Electrodermal%20activity | Electrodermal activity (EDA) is the property of the human body that causes continuous variation in the electrical characteristics of the skin. Historically, EDA has also been known as skin conductance, galvanic skin response (GSR), electrodermal response (EDR), psychogalvanic reflex (PGR), skin conductance response (SCR), sympathetic skin response (SSR) and skin conductance level (SCL). The long history of research into the active and passive electrical properties of the skin by a variety of disciplines has resulted in an excess of names, now standardized to electrodermal activity (EDA).
The traditional theory of EDA holds that skin resistance varies with the state of sweat glands in the skin. Sweating is controlled by the sympathetic nervous system, and skin conductance is an indication of psychological or physiological arousal. If the sympathetic branch of the autonomic nervous system is highly aroused, then sweat gland activity also increases, which in turn increases skin conductance. In this way, skin conductance can be a measure of emotional and sympathetic responses. More recent research and additional phenomena (resistance, potential, impedance, electrochemical skin conductance, and admittance, sometimes responsive and sometimes apparently spontaneous) suggest that EDA is more complex than it seems, and research continues into the source and significance of EDA.
History
In 1849, Dubois-Reymond in Germany first observed that human skin was electrically active. He immersed the limbs of his subjects in a zinc sulfate solution and found that electric current flowed between a limb with muscles contracted and one that was relaxed. He therefore attributed his EDA observations to muscular phenomena. Thirty years later, in 1878 in Switzerland, Hermann and Luchsinger demonstrated a connection between EDA and sweat glands. Hermann later demonstrated that the electrical effect was strongest in the palms of the hands, suggesting that sweat was an important factor.
Vig |
https://en.wikipedia.org/wiki/Semisimple%20module | In mathematics, especially in the area of abstract algebra known as module theory, a semisimple module or completely reducible module is a type of module that can be understood easily from its parts. A ring that is a semisimple module over itself is known as an Artinian semisimple ring. Some important rings, such as group rings of finite groups over fields of characteristic zero, are semisimple rings. An Artinian ring is initially understood via its largest semisimple quotient. The structure of Artinian semisimple rings is well understood by the Artin–Wedderburn theorem, which exhibits these rings as finite direct products of matrix rings.
For a group-theory analog of the same notion, see Semisimple representation.
Definition
A module over a (not necessarily commutative) ring is said to be semisimple (or completely reducible) if it is the direct sum of simple (irreducible) submodules.
For a module M, the following are equivalent:
M is semisimple; i.e., a direct sum of irreducible modules.
M is the sum of its irreducible submodules.
Every submodule of M is a direct summand: for every submodule N of M, there is a complement P such that $M = N \oplus P$.
For the proof of the equivalences, see .
The most basic example of a semisimple module is a module over a field, i.e., a vector space. On the other hand, the ring of integers is not a semisimple module over itself, since the submodule $2\mathbb{Z}$ is not a direct summand.
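A worked instance contrasting the two examples above (standard facts, spelled out here for concreteness):

% Z/6Z is a semisimple Z-module: by the Chinese remainder theorem,
\mathbb{Z}/6\mathbb{Z} \;\cong\; \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z},
% a direct sum of two simple modules. By contrast, Z is not semisimple
% over itself: any two non-zero submodules m\mathbb{Z} and n\mathbb{Z}
% intersect non-trivially (both contain mn\mathbb{Z}), so no proper
% non-zero submodule of Z has a complement.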
Semisimple is stronger than completely decomposable,
which is a direct sum of indecomposable submodules.
Let A be an algebra over a field K. Then a left module M over A is said to be absolutely semisimple if, for any field extension F of K, $F \otimes_K M$ is a semisimple module over $F \otimes_K A$.
Properties
If M is semisimple and N is a submodule, then N and M/N are also semisimple.
An arbitrary direct sum of semisimple modules is semisimple.
A module M is finitely generated and semisimple if and only if it is Artinian and its radical is zero.
Endomorphism rings
A semisimple modu |
https://en.wikipedia.org/wiki/Convex%20conjugate | In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). It allows in particular for a far reaching generalization of Lagrangian duality.
Definition
Let $X$ be a real topological vector space and let $X^{*}$ be the dual space to $X$. Denote by
$$\langle \cdot, \cdot \rangle : X^{*} \times X \to \mathbb{R}$$
the canonical dual pairing, which is defined by $\langle x^{*}, x \rangle \mapsto x^{*}(x)$.
For a function $f : X \to \mathbb{R} \cup \{-\infty, +\infty\}$ taking values on the extended real number line, its convex conjugate is the function
$$f^{*} : X^{*} \to \mathbb{R} \cup \{-\infty, +\infty\}$$
whose value at $x^{*} \in X^{*}$ is defined to be the supremum:
$$f^{*}(x^{*}) := \sup \left\{ \langle x^{*}, x \rangle - f(x) : x \in X \right\},$$
or, equivalently, in terms of the infimum:
$$f^{*}(x^{*}) := -\inf \left\{ f(x) - \langle x^{*}, x \rangle : x \in X \right\}.$$
This definition can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes.
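A standard worked example: for $f(x) = \tfrac{1}{2}x^{2}$ on $X = \mathbb{R}$, the supremand $x^{*}x - \tfrac{1}{2}x^{2}$ is maximized at $x = x^{*}$, so

f^{*}(x^{*}) = \sup_{x \in \mathbb{R}} \left( x^{*} x - \tfrac{1}{2} x^{2} \right)
             = (x^{*})^{2} - \tfrac{1}{2} (x^{*})^{2}
             = \tfrac{1}{2} (x^{*})^{2},

and this f is its own convex conjugate.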
Examples
The convex conjugate of an affine function $f(x) = \langle a, x \rangle - b$ is
$$f^{*}(x^{*}) = \begin{cases} b, & x^{*} = a \\ +\infty, & x^{*} \ne a. \end{cases}$$
The convex conjugate of a power function $f(x) = \frac{1}{p}|x|^{p}$, $1 < p < \infty$, is
$$f^{*}(x^{*}) = \frac{1}{q}|x^{*}|^{q}, \qquad \frac{1}{p} + \frac{1}{q} = 1.$$
The convex conjugate of the absolute value function $f(x) = |x|$ is
$$f^{*}(x^{*}) = \begin{cases} 0, & |x^{*}| \le 1 \\ +\infty, & |x^{*}| > 1. \end{cases}$$
The convex conjugate of the exponential function $f(x) = e^{x}$ is
$$f^{*}(x^{*}) = \begin{cases} x^{*} \ln x^{*} - x^{*}, & x^{*} > 0 \\ 0, & x^{*} = 0 \\ +\infty, & x^{*} < 0. \end{cases}$$
The convex conjugate and Legendre transform of the exponential function agree except that the domain of the convex conjugate is strictly larger as the Legendre transform is only defined for positive real numbers.
Connection with expected shortfall (average value at risk)
Let F denote a cumulative distribution function of a random variable X. Then (integrating by parts),
$$f(x) := \int_{-\infty}^{x} F(u)\, du = \operatorname{E}\left[\max(0, x - X)\right] = x - \operatorname{E}\left[\min(x, X)\right]$$
has the convex conjugate
$$f^{*}(p) = \int_{0}^{p} F^{-1}(q)\, dq = (p - 1) F^{-1}(p) + \operatorname{E}\left[\min(F^{-1}(p), X)\right] = p F^{-1}(p) - \operatorname{E}\left[\max(0, F^{-1}(p) - X)\right].$$
Ordering
A particular interpretation has the transform $f^{\text{inc}}$, as this is a nondecreasing rearrangement of the initial function f; in particular, $f^{\text{inc}} = f$ for f nondecreasing.
Properties
The convex conjugate of a closed convex function is again a closed convex function. The convex conjugate of a polyhedral convex function (a convex function with polyhedral epigraph) is again a polyhedral convex function.
Order reversing
Declare that $f \le g$ if and only if $f(x) \le g(x)$ for all $x$. The |
https://en.wikipedia.org/wiki/Power%20electronics | Power electronics is the application of electronics to the control and conversion of electric power.
The first high-power electronic devices were made using mercury-arc valves. In modern systems, the conversion is performed with semiconductor switching devices such as diodes, thyristors, and power transistors such as the power MOSFET and IGBT. In contrast to electronic systems concerned with the transmission and processing of signals and data, substantial amounts of electrical energy are processed in power electronics. An AC/DC converter (rectifier) is the most typical power electronics device found in many consumer electronic devices, e.g. television sets, personal computers, battery chargers, etc. The power range is typically from tens of watts to several hundred watts. In industry, a common application is the variable speed drive (VSD) that is used to control an induction motor. The power range of VSDs starts from a few hundred watts and ends at tens of megawatts.
The power conversion systems can be classified according to the type of the input and output power:
AC to DC (rectifier)
DC to AC (inverter)
DC to DC (DC-to-DC converter)
AC to AC (AC-to-AC converter)
History
Power electronics started with the development of the mercury arc rectifier. Invented by Peter Cooper Hewitt in 1902, it was used to convert alternating current (AC) into direct current (DC). From the 1920s on, research continued on applying thyratrons and grid-controlled mercury arc valves to power transmission. Uno Lamm developed a mercury valve with grading electrodes, making it suitable for high voltage direct current power transmission. In 1933 selenium rectifiers were invented.
Julius Edgar Lilienfeld proposed the concept of a field-effect transistor in 1926, but it was not possible to actually construct a working device at that time. In 1947, the bipolar point-contact transistor was invented by Walter H. Brattain and John Bardeen under the direction of William Shockley at Bell Labs |
https://en.wikipedia.org/wiki/PhpLDAPadmin | phpLDAPadmin is a web app for administering Lightweight Directory Access Protocol (LDAP) servers. It is written in the PHP programming language, and is licensed under the GNU General Public License. The application is available in 14 languages and supports UTF-8 encoded directory strings.
History
The project began in Fall of 2002 when Dave Smith, a student from Brigham Young University (BYU) and lead developer, needed a robust web application to manage his LDAP servers. Originally, phpLDAPadmin was called DaveDAP, but in August 2003, the name was changed to phpLDAPadmin. Since that time, the software has been downloaded approximately 150 times per day, and is commonly used throughout the world. Two other developers have contributed to the code base: Xavier Renard and Uwe Ebel. Xavier has focused on LDIF imports/exports and Samba software integration. Uwe has focused on internationalizing the application.
In Spring of 2005, Deon George took over maintenance of phpLDAPadmin.
Because no new pull requests were merged into the master project and no further releases were made over a long period beginning in 2016, several forks exist that implement new compatibility fixes and functionality. Since spring 2019, development has resumed, and many pull requests have been merged into the project, restoring compatibility with recent PHP releases.
Distributions
The following Linux distributions include phpLDAPadmin in their official software repositories:
Ubuntu
Debian
Gentoo Linux
Arch Linux
It is available in the Extra Packages for Enterprise Linux (EPEL) repository, allowing managed installation to distributions such as Red Hat Enterprise Linux, Fedora, CentOS and Scientific Linux, and is included in the M23 software distribution system, which manages and distributes software for the Debian, Ubuntu, Kubuntu, Xubuntu, Linux Mint, Fedora, CentOS and openSUSE distributions.
It is also available in repositories for FreeBSD, OpenBSD, and Solaris.
References
External li |
https://en.wikipedia.org/wiki/E-Science | E-Science or eScience is computationally intensive science that is carried out in highly distributed network environments, or science that uses immense data sets that require grid computing; the term sometimes includes technologies that enable distributed collaboration, such as the Access Grid. The term was created by John Taylor, the Director General of the United Kingdom's Office of Science and Technology in 1999 and was used to describe a large funding initiative starting in November 2000. E-science has been more broadly interpreted since then, as "the application of computer technology to the undertaking of modern scientific investigation, including the preparation, experimentation, data collection, results dissemination, and long-term storage and accessibility of all materials generated through the scientific process. These may include data modeling and analysis, electronic/digitized laboratory notebooks, raw and fitted data sets, manuscript production and draft versions, pre-prints, and print and/or electronic publications." In 2014, IEEE eScience Conference Series condensed the definition to "eScience promotes innovation in collaborative, computationally- or data-intensive research across all disciplines, throughout the research lifecycle" in one of the working definitions used by the organizers. E-science encompasses "what is often referred to as big data [which] has revolutionized science... [such as] the Large Hadron Collider (LHC) at CERN... [that] generates around 780 terabytes per year... highly data intensive modern fields of science...that generate large amounts of E-science data include: computational biology, bioinformatics, genomics" and the human digital footprint for the social sciences.
Turing Award winner Jim Gray imagined "data-intensive science" or "e-science" as a "fourth paradigm" of science (empirical, theoretical, computational and now data-driven) and asserted that "everything about science is changing because of the impact of informati |
https://en.wikipedia.org/wiki/Giant%20magnetoresistance | Giant magnetoresistance (GMR) is a quantum mechanical magnetoresistance effect observed in multilayers composed of alternating ferromagnetic and non-magnetic conductive layers. The 2007 Nobel Prize in Physics was awarded to Albert Fert and Peter Grünberg for the discovery of GMR.
The effect is observed as a significant change in the electrical resistance depending on whether the magnetization of adjacent ferromagnetic layers are in a parallel or an antiparallel alignment. The overall resistance is relatively low for parallel alignment and relatively high for antiparallel alignment. The magnetization direction can be controlled, for example, by applying an external magnetic field. The effect is based on the dependence of electron scattering on spin orientation.
The main application of GMR is in magnetic field sensors, which are used to read data in hard disk drives, biosensors, microelectromechanical systems (MEMS) and other devices. GMR multilayer structures are also used in magnetoresistive random-access memory (MRAM) as cells that store one bit of information.
In literature, the term giant magnetoresistance is sometimes confused with colossal magnetoresistance of ferromagnetic and antiferromagnetic semiconductors, which is not related to a multilayer structure.
Formulation
Magnetoresistance is the dependence of the electrical resistance of a sample on the strength of an external magnetic field. Numerically, it is characterized by the value
$$\delta_H = \frac{R(H) - R(0)}{R(0)},$$
where R(H) is the resistance of the sample in a magnetic field H, and R(0) corresponds to H = 0. Alternative forms of this expression may use electrical resistivity instead of resistance, a different sign for δH, and are sometimes normalized by R(H) rather than R(0).
The term "giant magnetoresistance" indicates that the value δH for multilayer structures significantly exceeds the anisotropic magnetoresistance, which has a typical value within a few percent.
History
GMR was discovered in 1988 independently by the groups |
https://en.wikipedia.org/wiki/Axiom%20of%20determinacy | In mathematics, the axiom of determinacy (abbreviated as AD) is a possible axiom for set theory introduced by Jan Mycielski and Hugo Steinhaus in 1962. It refers to certain two-person topological games of length ω. AD states that every game of a certain type is determined; that is, one of the two players has a winning strategy.
Steinhaus and Mycielski motivated AD by its interesting consequences and suggested that AD could be true in the smallest natural model L(R) of a set theory, which accepts only a weak form of the axiom of choice (AC) but contains all real and all ordinal numbers. Some consequences of AD followed from theorems proved earlier by Stefan Banach and Stanisław Mazur, and Morton Davis. Mycielski and Stanisław Świerczkowski contributed another one: AD implies that all sets of real numbers are Lebesgue measurable. Later Donald A. Martin and others proved more important consequences, especially in descriptive set theory. In 1988, John R. Steel and W. Hugh Woodin concluded a long line of research. Assuming the existence of some uncountable cardinal numbers analogous to $\aleph_0$, they proved the original conjecture of Mycielski and Steinhaus that AD is true in L(R).
Types of game that are determined
The axiom of determinacy refers to games of the following specific form:
Consider a subset A of the Baire space $\omega^{\omega}$ of all infinite sequences of natural numbers. Two players, I and II, alternately pick natural numbers
n0, n1, n2, n3, ...
After infinitely many moves, a sequence is generated. Player I wins the game if and only if the sequence generated is an element of A. The axiom of determinacy is the statement that all such games are determined.
Not all games require the axiom of determinacy to prove them determined. If the set A is clopen, the game is essentially a finite game, and is therefore determined. Similarly, if A is a closed set, then the game is determined. It was shown in 1975 by Donald A. Martin that games whose winning set is a Borel set |
https://en.wikipedia.org/wiki/Out-of-order%20execution | In computer engineering, out-of-order execution (or more formally dynamic execution) is a paradigm used in most high-performance central processing units to make use of instruction cycles that would otherwise be wasted. In this paradigm, a processor executes instructions in an order governed by the availability of input data and execution units, rather than by their original order in a program. In doing so, the processor can avoid being idle while waiting for the preceding instruction to complete and can, in the meantime, process the next instructions that are able to run immediately and independently.
History
Out-of-order execution is a restricted form of data flow computation, which was a major research area in computer architecture in the 1970s and early 1980s.
Early use in supercomputers
The first machine to use out-of-order execution was the CDC 6600 (1964), designed by James E. Thornton, which uses a scoreboard to avoid conflicts. It permits an instruction to execute if its source operand (read) addresses are not to be written to by any unexecuted earlier instruction (true dependency) and the destination (write) address is not an address used by any unexecuted earlier instruction (false dependency). The 6600 lacks the means to avoid stalling an execution unit on false dependencies (write after write (WAW) and write after read (WAR) conflicts, respectively termed first order conflict and third order conflict by Thornton, who termed true dependencies (read after write (RAW)) second order conflicts) because each address has only a single location referable by it. The WAW is worse than the WAR for the 6600, because when an execution unit encounters a WAR, the other execution units still receive and execute instructions, but upon a WAW the assignment of instructions to execution units stops, and they cannot receive any further instructions until the WAW-causing instruction's destination register has been written to by the earlier instruction.
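The three conflict classes above can be stated compactly in terms of each instruction's read and write register sets; the classifier below is a minimal sketch with an invented instruction encoding:

# Classify the dependency between two instructions in program order,
# using the read/write register sets described above.
def hazards(earlier, later):
    """Each instruction is a pair (writes: set, reads: set) of register names."""
    w1, r1 = earlier
    w2, r2 = later
    found = []
    if w1 & r2:
        found.append("RAW (true dependency)")   # later reads what earlier writes
    if r1 & w2:
        found.append("WAR (false dependency)")  # later overwrites what earlier reads
    if w1 & w2:
        found.append("WAW (false dependency)")  # both write the same register
    return found

i1 = ({"r1"}, {"r2", "r3"})  # r1 <- r2 + r3
i2 = ({"r4"}, {"r1", "r5"})  # r4 <- r1 * r5
print(hazards(i1, i2))       # ['RAW (true dependency)']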
About two years later, |
https://en.wikipedia.org/wiki/Gabor%20Herman | Gabor Tamas Herman is a Hungarian-American professor of computer science. He is Emeritus Professor of Computer Science at The Graduate Center, City University of New York (CUNY), where he was Distinguished Professor until 2017. He is known for his work on computerized tomography. He is a fellow of the Institute of Electrical and Electronics Engineers (IEEE).
Early life and education
Herman studied mathematics at the University of London, receiving his B.Sc. in 1963 and M.Sc. in 1964. In 1966, he received his M.S. in electrical engineering from the University of California, Berkeley, and in 1968 his Ph.D. in mathematics from the University of London.
Career
In 1969, Herman joined the department of computer science at Buffalo State College as an assistant professor. He became an associate professor in 1970 and a full professor in 1974. In 1976, he formed the Medical Image Processing Group. In 1980, he published the first edition of Reconstruction from Projections, his textbook on computerized tomography.
Herman moved the Medical Image Processing Group to the University of Pennsylvania in 1981. He was a professor in the radiology department from 1981 to 2000. In 1991, he was elected fellow of the IEEE. The citation reads: "For contributions to medical imaging, particularly in the theory and development of techniques for the reconstruction and display of computed tomographic images". In 1997, he was elected fellow of the American Institute for Medical and Biological Engineering. The citation reads: "For development implementation and evaluation of methods of reconstruction and 3D display of human organs based on transmitted or emitted radiation."
In 2001, Herman joined the faculty of CUNY as Distinguished Professor in the department of computer science, holding that position until his retirement in 2017. The second edition of his computerized tomography textbook, now titled Fundamentals of Computerized Tomography, was published in 2009.
Scientific Work
Together wit |
https://en.wikipedia.org/wiki/Freeze%20drying | Freeze drying, also known as lyophilization or cryodesiccation, is a low temperature dehydration process that involves freezing the product and lowering pressure, removing the ice by sublimation. This is in contrast to dehydration by most conventional methods that evaporate water using heat.
Because of the low temperature used in processing, the rehydrated product retains much of its original qualities. When solid objects like strawberries are freeze dried, the original shape of the product is maintained. If the product to be dried is a liquid, as often seen in pharmaceutical applications, the properties of the final product are optimized by the combination of excipients (i.e., inactive ingredients). Primary applications of freeze drying include biological (e.g., bacteria and yeasts), biomedical (e.g., surgical transplants), food processing (e.g., coffee) and preservation.
History
The Inca were freeze drying potatoes into chuño from the 13th century. The process involved multiple cycles of exposing potatoes to below freezing temperatures on mountain peaks in the Andes during the evening, and squeezing water out and drying them in the sunlight during the day. The Inca people also used the unique climate of the Altiplano to freeze dry meat.
Modern freeze drying began as early as 1890 by Richard Altmann who devised a method to freeze dry tissues (either plant or animal), but went virtually unnoticed until the 1930s. In 1909, L. F. Shackell independently created the vacuum chamber by using an electrical pump. No further freeze drying information was documented until Tival in 1927 and Elser in 1934 had patented freeze drying systems with improvements to freezing and condenser steps.
A significant turning point for freeze drying occurred during World War II when blood plasma and penicillin were needed to treat the wounded in the field. Because of the lack of refrigerated transport, many serum supplies spoiled before reaching their recipients. The freeze-drying process |
https://en.wikipedia.org/wiki/Java%203D | Java 3D is a scene graph-based 3D application programming interface (API) for the Java platform. Until version 1.6.0 it ran on top of either OpenGL or Direct3D; version 1.6.0 runs on top of Java OpenGL (JOGL). Since version 1.2, Java 3D has been developed under the Java Community Process. A Java 3D scene graph is a directed acyclic graph (DAG).
Compared to other solutions, Java 3D is not only a wrapper around these graphics APIs, but an interface that encapsulates the graphics programming using a true object-oriented approach. Here a scene is constructed using a scene graph that is a representation of the objects that have to be shown. This scene graph is structured as a tree containing several elements that are necessary to display the objects. Additionally, Java 3D offers extensive spatialized sound support.
Java 3D and its documentation are available for download separately. They are not part of the Java Development Kit (JDK).
History
Intel, Silicon Graphics, Apple, and Sun all had retained mode scene graph APIs under development in 1996. Since they all wanted to make a Java version, they decided to collaborate in making it. That project became Java 3D. Development was underway already in 1997. A public beta version was released in March 1998. The first version was released in December 1998. From mid-2003 through summer 2004, the development of Java 3D was discontinued. In the summer of 2004, Java 3D was released as a community source project, and Sun and volunteers have since been continuing its development.
On January 29, 2008, it was announced that improvements to Java 3D would be put on hold to produce a 3D scene graph for JavaFX. JavaFX with 3D support was eventually released with Java 8. The JavaFX 3D graphics functionality has more or less come to supersede Java 3D.
Since February 28, 2008, the entire Java 3D source code is released under the GPL version 2 license with GPL linking exception.
Since February 10, 2012, Java 3D uses JOGL 2.0 for its hardware acc |
https://en.wikipedia.org/wiki/S%20transform | The S transform as a time–frequency distribution was developed in 1994 for analyzing geophysics data. In this way, the S transform is a generalization of the short-time Fourier transform (STFT), extending the continuous wavelet transform and overcoming some of its disadvantages. For one, modulation sinusoids are fixed with respect to the time axis; this localizes the scalable Gaussian window dilations and translations in the S transform. Moreover, the S transform does not have a cross-term problem and yields a better signal clarity than the Gabor transform. However, the S transform has its own disadvantages: the clarity is worse than that of the Wigner distribution function and Cohen's class distribution functions.
A fast S transform algorithm was invented in 2010. It reduces the computational complexity from O[N²·log(N)] to O[N·log(N)] and makes the transform one-to-one, where the transform has the same number of points as the source signal or image, compared to a storage complexity of N² for the original formulation. An implementation is available to the research community under an open source license.
A general formulation of the S transform makes clear the relationship to other time frequency transforms such as the Fourier, short time Fourier, and wavelet transforms.
Definition
There are several ways to represent the idea of the S transform. Here, the S transform is derived as the phase correction of the continuous wavelet transform with the window being the Gaussian function.
S-Transform
$$S_x(\tau, f) = \int_{-\infty}^{\infty} x(t)\, \frac{|f|}{\sqrt{2\pi}}\, e^{-\frac{(\tau - t)^{2} f^{2}}{2}}\, e^{-i 2 \pi f t}\, dt$$
Inverse S-Transform
$$x(t) = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} S_x(\tau, f)\, d\tau \right] e^{i 2 \pi f t}\, df$$
Modified form
Spectrum Form
The above definition implies that the S-transform function can be expressed as the convolution of $\left( x(t)\, e^{-i 2 \pi f t} \right)$ and $\left( \frac{|f|}{\sqrt{2\pi}}\, e^{-\frac{t^{2} f^{2}}{2}} \right)$.
Applying the Fourier transform to both factors gives
$$S_x(\tau, f) = \int_{-\infty}^{\infty} X(\alpha + f)\, e^{-\frac{2 \pi^{2} \alpha^{2}}{f^{2}}}\, e^{i 2 \pi \alpha \tau}\, d\alpha.$$
Discrete-time S-transform
From the spectrum form of the S-transform, we can derive the discrete-time S-transform.
Let $\tau = n\Delta_T$ and $f = \frac{m}{N\Delta_T}$, where $\Delta_T$ is the sampling interval and $\frac{1}{\Delta_T}$ is the sampling frequency.
The discrete-time S-transform can then be expressed as:
$$S_x\!\left[n, \frac{m}{N\Delta_T}\right] = \sum_{k=0}^{N-1} X\!\left[\frac{k + m}{N\Delta_T}\right]\, e^{-\frac{2 \pi^{2} k^{2}}{m^{2}}}\, e^{\frac{i 2 \pi k n}{N}}$$
Implementation of discrete-time S-transform
Below is the Pseudo code |
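A minimal NumPy sketch of the discrete S-transform, computed voice-by-voice from the spectrum form above rather than by the 2010 fast algorithm (the array layout and variable names are this sketch's own choices):

import numpy as np

def s_transform(x):
    """Discrete S-transform via the spectrum form: shift the FFT of x by
    the voice frequency m, multiply by the Fourier transform of the
    scaled Gaussian window, and inverse-FFT. Direct method, not the
    fast O[N log N] variant."""
    N = len(x)
    X = np.fft.fft(x)
    k = np.fft.fftfreq(N) * N                      # 0, 1, ..., -2, -1
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = np.mean(x)                           # zero voice: the signal mean
    for m in range(1, N // 2 + 1):
        G = np.exp(-2.0 * np.pi**2 * k**2 / m**2)  # FT of the Gaussian window
        S[m, :] = np.fft.ifft(np.roll(X, -m) * G)  # spectrum shifted by voice m
    return S

# Example: a 64-point pure tone concentrates energy at its own voice.
t = np.arange(64)
S = s_transform(np.cos(2 * np.pi * 8 * t / 64))
print(np.argmax(np.abs(S).max(axis=1)))            # -> 8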
https://en.wikipedia.org/wiki/FTAM | FTAM, ISO standard 8571, is the OSI application layer protocol for file transfer, access and management.
The goal of FTAM is to combine into a single protocol both file transfer, similar in concept to the Internet FTP, and remote access to open files, similar to NFS. However, like the other OSI protocols, FTAM has not been widely adopted, and the TCP/IP based Internet has become the dominant global network.
The FTAM protocol was used in the German banking sector to transfer clearing information. The Banking Communication Standard (BCS) over FTAM access (short BCS-FTAM) was standardized in the DFÜ-Abkommen (EDI-agreement) enacted in Germany on 15 March 1995. The BCS-FTAM transmission protocol was supposed to be replaced by the Electronic Banking Internet Communication Standard (EBICS) in 2010. The obligatory support for BCS over FTAM was ceased in December 2010.
RFC 1415 provides an FTP-FTAM gateway specification, but attempts to define an Internet-scale file transfer protocol have instead focused on Server Message Block, NFS, or the Andrew File System as models.
ISO 8571 parts
ISO 8571, Information processing systems — Open Systems Interconnection — File Transfer, Access and Management, is split into five parts:
ISO 8571-1:1988 Part 1: General introduction
ISO 8571-2:1988 Part 2: Virtual Filestore Definition
ISO 8571-3:1988 Part 3: File Service Definition
ISO 8571-4:1988 Part 4: File Protocol Specification
ISO/IEC 8571-5:1990 Part 5: Protocol Implementation Conformance Statement Proforma
References
Networking standards
Computer file systems
ITU-T recommendations
OSI protocols
Network file transfer protocols
File transfer protocols
Application layer protocols |
https://en.wikipedia.org/wiki/Netfilter | Netfilter is a framework provided by the Linux kernel that allows various networking-related operations to be implemented in the form of customized handlers. Netfilter offers various functions and operations for packet filtering, network address translation, and port translation, which provide the functionality required for directing packets through a network and prohibiting packets from reaching sensitive locations within a network.
Netfilter represents a set of hooks inside the Linux kernel, allowing specific kernel modules to register callback functions with the kernel's networking stack. Those functions, usually applied to the traffic in the form of filtering and modification rules, are called for every packet that traverses the respective hook within the networking stack.
History
Rusty Russell started the netfilter/iptables project in 1998; he had also authored the project's predecessor, ipchains. As the project grew, he founded the Netfilter Core Team (or simply coreteam) in 1999. The software they produced (called netfilter hereafter) uses the GNU General Public License (GPL) license, and in March 2000 it was merged into version 2.4.x of the Linux kernel mainline.
In August 2003 Harald Welte became chairman of the coreteam. In April 2004, following a crack-down by the project on those distributing the project's software embedded in routers without complying with the GPL, a German court granted Welte a historic injunction against Sitecom Germany, which refused to follow the GPL's terms (see GPL-related disputes). In September 2007 Patrick McHardy, who had led development for the past years, was elected as the new chairman of the coreteam.
Prior to iptables, the predominant software packages for creating Linux firewalls were ipchains in Linux kernel 2.2.x and ipfwadm in Linux kernel 2.0.x, which in turn was based on BSD's ipfw. Both ipchains and ipfwadm alter the networking code so they can manipulate packets, as the Linux kernel lacked a general packet control framewor |
https://en.wikipedia.org/wiki/Telecommunications%20Management%20Network | The Telecommunications Management Network is a protocol model defined by ITU-T for managing open systems in a communications network. It is part of the ITU-T Recommendation series M.3000 and is based on the OSI management specifications in ITU-T Recommendation series X.700.
TMN provides a framework for achieving interconnectivity and communication across heterogeneous operations systems and telecommunication networks. To achieve this, TMN defines a set of interface points for elements that perform the actual communications processing (such as a call processing switch) to be accessed by other elements, such as management workstations, to monitor and control them. The standard interface allows elements from different manufacturers to be incorporated into a network under a single management control.
For communication between Operations Systems and NEs (Network Elements), it uses the Common Management Information Protocol (CMIP), or mediation devices when the Q3 interface is used.
The TMN layered organization is used as fundamental basis for the management software of ISDN, B-ISDN, ATM, SDH/SONET and GSM networks. It is not as commonly used for purely packet-switched data networks.
Modern telecom networks offer automated management functions and are run by operations support system (OSS) software. These manage modern telecom networks and provide the data that is needed in the day-to-day running of a telecom network. OSS software is also responsible for issuing commands to the network infrastructure to activate new service offerings, commence services for new customers, and detect and correct network faults.
Architecture
According to ITU-T M.3010 TMN has 3 architectures:
Physical architecture
Security architecture
Logical layered architecture
Logical layers
The framework identifies four logical layers of network management:
Business management: includes the functions related to business aspects, analyzing trends and quality issues, for example, or providing a bas |
https://en.wikipedia.org/wiki/Line-out%20code | A line-out code is a coded piece of information, used to communicate intentions about a line-out within one team in a rugby union match without giving information away to the other team. A line-out is a manoeuvre used to restart play when the ball has left the pitch. The right to throw in the ball will be awarded to one team or the other but, in theory at least, the throw will be straight down the middle whichever team is making it. The advantage comes from knowing in advance how the throw will be made — whether short and fast to the front of the line or looping slowly to the back.
Encoded information
Receiver selection
The most important piece of information to be encoded is to where in the line the ball is to be thrown. This allows the receiving players to concentrate their effort in lifting the relevant catcher, whereas the opposition must attempt to cover the whole line.
Post-catch action
As well as the length of the throw, some teams will attempt to specify what the catcher should do with the ball when he has it: whether to simply knock it back towards his own team, catch it and then pass it while still up in the air (supported by his team mates), or catch it and bring it down to form a maul. Such a call can only be advice to the catcher, since he may not get a clean catch, and the final choice of what to do rests with him.
Set pieces
Finally, the code may have a means of calling for a specific pre-planned move. This is usually just a particular word - the play won't be used often enough in a match for the opposition to work out what the word means. For example, the code-word "postman" might indicate that the ball is to be caught by the jumper (typically number four) and held briefly while a player from the back of the line-out runs along the line. As he passes the catcher the ball is passed down to him, he continues on to the front of the line, and slips through the gap between the front of the line and the edge of the pitch.
Encoding methods
There is a wealth of different co |
https://en.wikipedia.org/wiki/Wave%20soldering | Wave soldering is a bulk soldering process used for the manufacturing of printed circuit boards. The circuit board is passed over a pan of molten solder in which a pump produces an upwelling of solder that looks like a standing wave. As the circuit board makes contact with this wave, the components become soldered to the board. Wave soldering is used for both through-hole printed circuit assemblies, and surface mount. In the latter case, the components are glued onto the surface of a printed circuit board (PCB) by placement equipment, before being run through the molten solder wave. Wave soldering is mainly used in soldering of through hole components.
As through-hole components have been largely replaced by surface mount components, wave soldering has been supplanted by reflow soldering methods in many large-scale electronics applications. However, there is still significant wave soldering where surface-mount technology (SMT) is not suitable (e.g., large power devices and high pin count connectors), or where simple through-hole technology prevails (certain major appliances).
Wave solder process
There are many types of wave solder machines; however, the basic components and principles of these machines are the same. The basic equipment used during the process is a conveyor that moves the PCB through the different zones, a pan of solder used in the soldering process, a pump that produces the actual wave, the sprayer for the flux and the preheating pad. The solder is usually a mixture of metals. A typical leaded solder is composed of 50% tin, 49.5% lead, and 0.5% antimony. The Restriction of Hazardous Substances Directive (RoHS) has led to an ongoing transition away from 'traditional' leaded solder in modern manufacturing in favor of lead-free alternatives. Both tin-silver-copper and tin-copper-nickel alloys are commonly used, with one common alloy (SN100C) being 99.25% tin, 0.7% copper, 0.05% nickel and <0.01% germanium.
Fluxing
Flux in the wave soldering p |
https://en.wikipedia.org/wiki/List%20of%20theorems%20called%20fundamental | In mathematics, a fundamental theorem is a theorem which is considered to be central and conceptually important for some topic. For example, the fundamental theorem of calculus gives the relationship between differential calculus and integral calculus. The names are mostly traditional, so that for example the fundamental theorem of arithmetic is basic to what would now be called number theory. Some of these are classification theorems of objects which are mainly dealt with in the field. For instance, the fundamental theorem of curves describes the classification of regular curves in space up to translation and rotation.
Likewise, the mathematical literature sometimes refers to the fundamental lemma of a field. The term lemma is conventionally used to denote a proven proposition which is used as a stepping stone to a larger result, rather than as a useful statement in-and-of itself.
Fundamental theorems of mathematical topics
Fundamental theorem of algebra
Fundamental theorem of algebraic K-theory
Fundamental theorem of arithmetic
Fundamental theorem of Boolean algebra
Fundamental theorem of calculus
Fundamental theorem of calculus for line integrals
Fundamental theorem of curves
Fundamental theorem of cyclic groups
Fundamental theorem of dynamical systems
Fundamental theorem of equivalence relations
Fundamental theorem of exterior calculus
Fundamental theorem of finitely generated abelian groups
Fundamental theorem of finitely generated modules over a principal ideal domain
Fundamental theorem of finite distributive lattices
Fundamental theorem of Galois theory
Fundamental theorem of geometric calculus
Fundamental theorem on homomorphisms
Fundamental theorem of ideal theory in number fields
Fundamental theorem of Lebesgue integral calculus
Fundamental theorem of linear algebra
Fundamental theorem of linear programming
Fundamental theorem of noncommutative algebra
Fundamental theorem of projective geometry
Fundamental theorem of random fields
Fu |
https://en.wikipedia.org/wiki/Squalene | Squalene is an organic compound. It is a triterpenoid with the formula C₃₀H₅₀. It is a colourless oil, although impure samples appear yellow. It was originally obtained from shark liver oil (hence its name, as Squalus is a genus of sharks). An estimated 12% of bodily squalene in humans is found in sebum. Squalene has a role in topical skin lubrication and protection.
Most plants, fungi, and animals produce squalene as a biochemical precursor in sterol biosynthesis, including cholesterol and steroid hormones in the human body. It is also an intermediate in the biosynthesis of hopanoids in many bacteria.
Squalene is an important ingredient in some vaccine adjuvants: The Novartis and GlaxoSmithKline adjuvants are called MF59 and AS03, respectively.
Role in triterpenoid synthesis
Squalene is a biochemical precursor to both steroids and hopanoids. For sterols, the squalene conversion begins with oxidation (via squalene monooxygenase) of one of its terminal double bonds, resulting in 2,3-oxidosqualene. It then undergoes an enzyme-catalysed cyclisation to produce lanosterol, which can be elaborated into other steroids such as cholesterol and ergosterol in a multistep process by the removal of three methyl groups, the reduction of one double bond by NADPH and the migration of the other double bond. In many plants, this is then converted into stigmasterol, while in many fungi, it is the precursor to ergosterol.
The biosynthetic pathway is found in many bacteria and most eukaryotes, though it has not been found in archaea.
Production
Biosynthesis
Squalene is biosynthesised by coupling two molecules of farnesyl pyrophosphate. The condensation requires NADPH and the enzyme squalene synthase.
Industry
Synthetic squalene is prepared commercially from geranylacetone.
Shark conservation
In 2020, conservationists raised concerns about the potential slaughter of sharks to obtain squalene for a COVID-19 vaccine.
Environmental and other concerns over shark hunting have motivated |
https://en.wikipedia.org/wiki/Apomorphy%20and%20synapomorphy | In phylogenetics, an apomorphy (or derived trait) is a novel character or character state that has evolved from its ancestral form (or plesiomorphy). A synapomorphy is an apomorphy shared by two or more taxa and is therefore hypothesized to have evolved in their most recent common ancestor. In cladistics, synapomorphy implies homology.
Examples of apomorphies are the erect gait, fur, three middle ear bones, and mammary glands of mammals, traits not found in other vertebrate animals such as amphibians or reptiles, which have retained their ancestral traits of a sprawling gait and lack of fur. Thus, these derived traits are also synapomorphies of mammals in general as they are not shared by other vertebrate animals.
Etymology
The word synapomorphy—coined by German entomologist Willi Hennig—is derived from the Ancient Greek words σύν (sún), meaning "with, together"; ἀπό (apó), meaning "away from"; and μορφή (morphḗ), meaning "shape, form".
Clade analysis
The concept of synapomorphy depends on a given clade in the tree of life. Cladograms are diagrams that depict evolutionary relationships within groups of taxa. These illustrations are accurate predictive devices in modern genetics. They are usually depicted in either tree or ladder form. Synapomorphies then provide evidence for historical relationships and their associated hierarchical structure. Evolutionarily, a synapomorphy is the marker for the most recent common ancestor of the monophyletic group consisting of a set of taxa in a cladogram. What counts as a synapomorphy for one clade may well be a primitive character or plesiomorphy at a less inclusive or nested clade. For example, the presence of mammary glands is a synapomorphy for mammals in relation to tetrapods but is a symplesiomorphy for mammals in relation to one another—rodents and primates, for example. So the concept can be understood as well in terms of "a character newer than" (autapomorphy) and "a character older than" (plesiomorphy) the apomorphy: mamm |
https://en.wikipedia.org/wiki/Unix%20time | Unix time is a date and time representation widely used in computing. It measures time by the number of seconds that have elapsed since 00:00:00 UTC on 1 January 1970, the Unix epoch, without adjustments made due to leap seconds. In modern computing, values are sometimes stored with higher granularity, such as microseconds or nanoseconds.
Unix time originated as the system time of Unix operating systems. It has come to be widely used in other computer operating systems, file systems, programming languages, and databases.
Definition
Unix time is currently defined as the number of non-leap seconds which have passed since 00:00:00 UTC on Thursday, 1 January 1970, which is referred to as the Unix epoch. Unix time is typically encoded as a signed integer.
The Unix time value is exactly 0 at midnight UTC on 1 January 1970, with Unix time incrementing by 1 for every non-leap second after this. For example, 00:00:00 UTC on 1 January 1971 is represented in Unix time as 31,536,000. Negative values, on systems that support them, indicate times before the Unix epoch, with the value decreasing by 1 for every non-leap second before the epoch. For example, 00:00:00 UTC on 1 January 1969 is represented in Unix time as −31,536,000. Every day in Unix time consists of exactly 86,400 seconds.
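These conversions are plain integer arithmetic, as a short sketch can verify (a minimal Python example using only the standard library; the dates chosen are those from the text above):

from datetime import datetime, timezone

# 1971-01-01T00:00:00Z lies 365 days of 86,400 seconds after the epoch.
assert datetime(1971, 1, 1, tzinfo=timezone.utc).timestamp() == 31_536_000

# Negative values denote times before the epoch, where supported.
assert datetime(1969, 1, 1, tzinfo=timezone.utc).timestamp() == -31_536_000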
Unix time is sometimes referred to as Epoch time. This can be misleading since Unix time is not the only time system based on an epoch and the Unix epoch is not the only epoch used by other time systems.
Leap seconds
Unix time differs from both Coordinated Universal Time (UTC) and International Atomic Time (TAI) in its handling of leap seconds. UTC includes leap seconds that adjust for the discrepancy between precise time, as measured by atomic clocks, and solar time, relating to the position of the earth in relation to the sun. International Atomic Time (TAI), in which every day is precisely 86,400 seconds long, ignores solar time and gradually loses synchronization with the Earth's rotation at a rate of roughly o |
https://en.wikipedia.org/wiki/Biorobotics | Biorobotics is an interdisciplinary science that combines the fields of biomedical engineering, cybernetics, and robotics to develop new technologies that integrate biology with mechanical systems, with the aims of enabling more efficient communication, altering genetic information, and creating machines that imitate biological systems.
Cybernetics
Cybernetics focuses on communication and control in living organisms and machines, and can be applied to and combined with multiple fields of study such as biology, mathematics, computer science, and engineering.
This discipline falls under the branch of biorobotics because it combines the study of biological bodies and mechanical systems. Studying these two systems allows for advanced analysis of the functions and processes of each system, as well as the interactions between them.
History
Cybernetic theory is a concept that has existed for centuries, dating back to the era of Plato, who applied the term to the "governance of people". The term cybernétique was used in the mid-1800s by the physicist André-Marie Ampère. The term cybernetics was popularized in the late 1940s to refer to a discipline that touched on, but was separate from, established disciplines such as electrical engineering, mathematics, and biology.
Science
Cybernetics is often misunderstood because of the breadth of disciplines it covers. In the mid-20th century, it was coined as an interdisciplinary field of study combining biology, network theory, and engineering. Today, it covers all scientific fields with system-related processes. The goal of cybernetics is to analyze the systems and processes of any system or systems in an attempt to make them more efficient and effective.
Applications
Cybernetics is used as an umbrella term so applications extend to all systems related scientific fields such as biology, mathematics, computer science, engineering, management, psychology, sociology, art, and more. Cybernetics is us |
https://en.wikipedia.org/wiki/Creode | Creode or chreod is a neologistic portmanteau term coined by the English 20th century biologist C. H. Waddington to represent the developmental pathway followed by a cell as it grows to form part of a specialized organ. Combining the Greek roots for "necessary" and "path," the term was inspired by the property of regulation. When development is disturbed by external forces, the embryo attempts to regulate its growth and differentiation by returning to its normal developmental trajectory.
Developmental biology
Waddington used the term along with canalisation and homeorhesis, which describes a system that returns to a steady trajectory, in contrast to homeostasis, which describes a system which returns to a steady state. Waddington explains development with the metaphor of a ball rolling down a hillside, where the hill's contours channel the ball in a particular direction. In the case of a pathway or creode which is deeply carved in the hillside, external disturbance is unlikely to prevent normal development. He notes that creodes tend to have steeper sides earlier in development, when external disturbance rarely suffices to alter the developmental trajectory. Small differences in placement atop the hill can lead to dramatically different results by the time the ball reaches the bottom. This represents the tendency of neighboring regions of the early embryo to develop into different organs with radically different structures. Since intermediate structures rarely exist between organs, each ball that rolls down the hill is "canalised" to a region distinct from other regions, just as an eye, for instance, is distinct from an ear.
Waddington refers to the network of creodes carved into the hillside as an "epigenetic landscape," meaning that the formation of the body depends on not only its genetic makeup but the different ways genes are expressed in different regions of the embryo. He expands his metaphor by describing the underside of the epigenetic landscape. He |
https://en.wikipedia.org/wiki/Privacy%20International | Privacy International (PI) is a UK-based registered charity that defends and promotes the right to privacy across the world. First formed in 1990, registered as a non-profit company in 2002 and as a charity in 2012, PI is based in London. Its current executive director, since 2012, is Dr Gus Hosein.
Formation, background and objectives
During 1990, in response to increasing awareness about the globalization of surveillance, more than a hundred privacy experts and human rights organizations from forty countries took steps to form an international organization for the protection of privacy.
Members of the new body, including computer professionals, academics, lawyers, journalists, jurists, and activists, had a common interest in promoting an international understanding of the importance of privacy and data protection. Meetings of the group, which took the name Privacy International (PI), were held throughout that year in North America, Europe, Asia, and the South Pacific, and members agreed to work toward the establishment of new forms of privacy advocacy at the international level. The initiative was convened and personally funded by British privacy activist Simon Davies who served as director of the organization until June 2012.
At the time, privacy advocacy within the non-government sector was fragmented and regionalized, while at the regulatory level there was little communication between privacy officials outside the European Union. Awareness of privacy issues at the international level was generated primarily through academic publications and international news reports but privacy campaigning at an international level until that time had not been feasible.
While there had for some years existed an annual international meeting of privacy regulators, the formation of Privacy International was the first successful attempt to establish a global focus on this emerging area of human rights. PI evolved as an independent, non-government network with the primary role |
https://en.wikipedia.org/wiki/Pinion | A pinion is a round gear—usually the smaller of two meshed gears—used in several applications, including drivetrain and rack and pinion systems.
Applications
Drivetrain
Drivetrains usually feature a gear known as the pinion, which may vary in different systems, including
the typically smaller gear in a gear drive train (although in the first commercially successful steam locomotive—the Salamanca—the pinion was rather large). In many cases, such as remote controlled toys, the pinion is also the drive gear for a reduction in speed, since electric motors operate at higher speed and lower torque than desirable at the wheels. However the reverse is true in watches, where gear trains commence with a high-torque, low-speed spring and terminate in the fast-and-weak escapement.
the smaller gear that drives in a 90-degree angle towards a crown gear in a differential drive.
the small front sprocket on a chain driven motorcycle.
the clutch bell gear when paired with a centrifugal clutch, in radio-controlled cars with an engine (e.g., nitro).
Rack and pinion
In a rack and pinion system, the pinion is the round gear that engages and moves along the linear rack.
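One pinion revolution advances the rack by the pinion's pitch circumference, so the travel is straightforward to compute. A minimal sketch (Python; metric module gearing and the function name are illustrative assumptions):

import math

def rack_travel_mm(module_mm, teeth, revolutions):
    # Pitch diameter of a metric gear is module * tooth count;
    # one revolution advances the rack by pi * pitch diameter.
    return math.pi * module_mm * teeth * revolutions

# A 20-tooth, module-2 pinion advances the rack about 125.7 mm per turn.
print(rack_travel_mm(2.0, 20, 1.0))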
See also
List of gear nomenclature
References
Gears |
https://en.wikipedia.org/wiki/Analytic%20proof | In mathematics, an analytic proof is a proof of a theorem in analysis that only makes use of methods from analysis, and which does not predominantly make use of algebraic or geometrical methods. The term was first used by Bernard Bolzano, who first provided a non-analytic proof of his intermediate value theorem and then, several years later, provided a proof of the theorem that was free from intuitions concerning lines crossing each other at a point, and so felt happy calling it analytic (Bolzano 1817).
Bolzano's philosophical work encouraged a more abstract reading of when a demonstration could be regarded as analytic, where a proof is analytic if it does not go beyond its subject matter (Sebastik 2007). In proof theory, an analytic proof has come to mean a proof whose structure is simple in a special way, due to conditions on the kind of inferences that ensure none of them go beyond what is contained in the assumptions and what is demonstrated.
Structural proof theory
In proof theory, the notion of analytic proof provides the fundamental concept that brings out the similarities between a number of essentially distinct proof calculi, so defining the subfield of structural proof theory. There is no uncontroversial general definition of analytic proof, but for several proof calculi there is an accepted notion. For example:
In Gerhard Gentzen's natural deduction calculus the analytic proofs are those in normal form; that is, no formula occurrence is both the principal premise of an elimination rule and the conclusion of an introduction rule;
In Gentzen's sequent calculus the analytic proofs are those that do not use the cut rule.
However, it is possible to extend the inference rules of both calculi so that there are proofs that satisfy the condition but are not analytic. A particularly tricky example is the analytic cut rule, used widely in the tableau method, which is a special case of the cut rule where the cut formula is a subform |
https://en.wikipedia.org/wiki/Self-verifying%20theories | Self-verifying theories are consistent first-order systems of arithmetic, much weaker than Peano arithmetic, that are capable of proving their own consistency. Dan Willard was the first to investigate their properties, and he has described a family of such systems. According to Gödel's incompleteness theorem, these systems cannot contain the theory of Peano arithmetic nor its weak fragment Robinson arithmetic; nonetheless, they can contain strong theorems.
In outline, the key to Willard's construction of his system is to formalise enough of the Gödel machinery to talk about provability internally without being able to formalise diagonalisation. Diagonalisation depends upon being able to prove that multiplication is a total function (and in the earlier versions of the result, addition also). Addition and multiplication are not function symbols of Willard's language; instead, subtraction and division are, with the addition and multiplication predicates being defined in terms of these. Here, one cannot prove the sentence expressing totality of multiplication:
$$(\forall u)(\forall v)(\exists w)\, M(u, v, w),$$
where $M$ is the three-place predicate which stands for $w = u \cdot v$.
When the operations are expressed in this way, provability of a given sentence can be encoded as an arithmetic sentence describing termination of an analytic tableau. Provability of consistency can then simply be added as an axiom. The resulting system can be proven consistent by means of a relative consistency argument with respect to ordinary arithmetic.
One can further add any true sentence of arithmetic to the theory while still retaining consistency of the theory.
References
External links
Dan Willard's home page.
Proof theory
Theories of deduction |
https://en.wikipedia.org/wiki/Generalized%20singular%20value%20decomposition | In linear algebra, the generalized singular value decomposition (GSVD) is the name of two different techniques based on the singular value decomposition (SVD). The two versions differ because one version decomposes two matrices (somewhat like the higher-order or tensor SVD) and the other version uses a set of constraints imposed on the left and right singular vectors of a single-matrix SVD.
First version: two-matrix decomposition
The generalized singular value decomposition (GSVD) is a matrix decomposition on a pair of matrices which generalizes the singular value decomposition. It was introduced by Van Loan in 1976 and later developed by Paige and Saunders, which is the version described here. In contrast to the SVD, the GSVD decomposes simultaneously a pair of matrices with the same number of columns. The SVD and the GSVD, as well as some other possible generalizations of the SVD, are extensively used in the study of the conditioning and regularization of linear systems with respect to quadratic semi-norms. In the following, let $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$.
Definition
The generalized singular value decomposition of matrices $A_1 \in \mathbb{F}^{m_1 \times n}$ and $A_2 \in \mathbb{F}^{m_2 \times n}$ is
$$A_1 = U_1 \Sigma_1 [W^* D, 0_D] Q^*, \qquad A_2 = U_2 \Sigma_2 [W^* D, 0_D] Q^*,$$
where
$U_1 \in \mathbb{F}^{m_1 \times m_1}$ is unitary,
$U_2 \in \mathbb{F}^{m_2 \times m_2}$ is unitary,
$Q \in \mathbb{F}^{n \times n}$ is unitary,
$W \in \mathbb{F}^{k \times k}$ is unitary,
$D \in \mathbb{R}^{k \times k}$ is real diagonal with positive diagonal, and contains the non-zero singular values of $C = [A_1; A_2]$ in decreasing order,
$0_D = 0 \in \mathbb{R}^{k \times (n - k)}$,
$\Sigma_1 = \operatorname{diag}(I_A, S_1, 0_A) \in \mathbb{R}^{m_1 \times k}$ is real non-negative block-diagonal, where $S_1 = \operatorname{diag}(\alpha_{r+1}, \ldots, \alpha_{r+s})$ with $1 > \alpha_{r+1} \geq \cdots \geq \alpha_{r+s} > 0$, $I_A = I_r$, and $0_A \in \mathbb{R}^{(m_1 - r - s) \times (k - r - s)}$,
$\Sigma_2 = \operatorname{diag}(0_B, S_2, I_B) \in \mathbb{R}^{m_2 \times k}$ is real non-negative block-diagonal, where $S_2 = \operatorname{diag}(\beta_{r+1}, \ldots, \beta_{r+s})$ with $0 < \beta_{r+1} \leq \cdots \leq \beta_{r+s} < 1$, $I_B = I_{k - r - s}$, and $0_B \in \mathbb{R}^{(m_2 - k + r) \times r}$,
$\Sigma_1^{\mathsf{T}} \Sigma_1 = \operatorname{diag}(\alpha_1^2, \ldots, \alpha_k^2)$,
$\Sigma_2^{\mathsf{T}} \Sigma_2 = \operatorname{diag}(\beta_1^2, \ldots, \beta_k^2)$,
$\Sigma_1^{\mathsf{T}} \Sigma_1 + \Sigma_2^{\mathsf{T}} \Sigma_2 = I_k$,
$k = \operatorname{rank}(C)$.
We denote the resulting pairs $(\alpha_i, \beta_i)$, $i = 1, \ldots, k$; the ratios $\sigma_i = \alpha_i / \beta_i$ are the generalized singular values. While $\Sigma_1$ is diagonal, $\Sigma_2$ is not always diagonal, because of the leading rectangular zero matrix; instead $\Sigma_2$ is "bottom-right-diagonal".
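The decomposition is easiest to see in the special case where $A_2$ is square and invertible: there the generalized singular values are the ordinary singular values of $A_1 A_2^{-1}$. A minimal numerical sketch of that special case (NumPy; general routines such as MATLAB's gsvd or LAPACK's xGGSVD3, mentioned below, also handle rank-deficient pairs):

import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((5, 3))
A2 = rng.standard_normal((3, 3))  # square, invertible with probability 1

# Generalized singular values sigma_i = alpha_i / beta_i of the pair (A1, A2).
sigma = np.linalg.svd(A1 @ np.linalg.inv(A2), compute_uv=False)

# Recover pairs normalized so that alpha_i^2 + beta_i^2 = 1.
beta = 1.0 / np.sqrt(1.0 + sigma**2)
alpha = sigma * beta
assert np.allclose(alpha**2 + beta**2, 1.0)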
Variations
There are many variations of the GSVD. These variations are related to the fact that it is always possible to multiply $Q^*$ from the left by $E E^* = I$, where $E$ is an arbitrary unitary matrix. We denote
, where is upper-triangular and invertible, and is unitary. Such matrices exist by RQ-decomposition.
. Then is invertible.
Here are some variations of the GSVD:
MATLAB (gsvd):
LAPACK (L |
https://en.wikipedia.org/wiki/Identification%20%28information%29 | For data storage, identification is the capability to find, retrieve, report, change, or delete specific data without ambiguity. This applies especially to information stored in databases. Unambiguous identification is central to database normalisation, the process of organizing the fields and tables of a relational database to minimize redundancy and dependency.
See also
Authentication
Identification (disambiguation)
Forensic profiling
Profiling (information science)
Unique identifier
References
Data modeling |
https://en.wikipedia.org/wiki/Iterative%20reconstruction | Iterative reconstruction refers to iterative algorithms used to reconstruct 2D and 3D images in certain imaging techniques.
For example, in computed tomography an image must be reconstructed from projections of an object. Here, iterative reconstruction techniques are usually a better, but computationally more expensive, alternative to the common filtered back projection (FBP) method, which directly calculates the image in a single reconstruction step. Recent research has shown that extremely fast computations and massive parallelism are possible for iterative reconstruction, which makes iterative reconstruction practical for commercialization.
Basic concepts
The reconstruction of an image from the acquired data is an inverse problem. Often, it is not possible to solve the inverse problem exactly. In this case, a direct algorithm has to approximate the solution, which might cause visible reconstruction artifacts in the image. Iterative algorithms approach the correct solution using multiple iteration steps, which allows a better reconstruction to be obtained at the cost of a higher computation time.
There are a large variety of algorithms, but each starts with an assumed image, computes projections from the image, compares the original projection data and updates the image based upon the difference between the calculated and the actual projections.
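That loop can be written down in a few lines. The following Landweber-style sketch is illustrative only: forward stands for the scanner's projection operator and back for its adjoint, both placeholders assumed for this sketch rather than any particular product's API.

import numpy as np

def reconstruct(measured, forward, back, shape, n_iters=50, step=0.1):
    image = np.zeros(shape)              # start from an assumed image
    for _ in range(n_iters):
        simulated = forward(image)       # compute projections from the image
        residual = measured - simulated  # compare with the acquired data
        image += step * back(residual)   # update the image from the difference
    return image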
Algebraic reconstruction
The Algebraic Reconstruction Technique (ART) was the first iterative reconstruction technique used for computed tomography by Hounsfield.
Iterative Sparse Asymptotic Minimum Variance
The iterative Sparse Asymptotic Minimum Variance algorithm is an iterative, parameter-free superresolution tomographic reconstruction method inspired by compressed sensing, with applications in synthetic-aperture radar, computed tomography scan, and magnetic resonance imaging (MRI).
Statistical reconstruction
There are typically five components to statistical iterative image reconstr |
https://en.wikipedia.org/wiki/Sudo | sudo ( or ) is a program for Unix-like computer operating systems that enables users to run programs with the security privileges of another user, by default the superuser. It originally stood for "superuser do", as that was all it did, and it is its most common usage; however, the official Sudo project page lists it as "su 'do'". The current Linux manual pages for su define it as "substitute user", making the correct meaning of sudo "substitute user, do", because sudo can run a command as other users as well.
Unlike the similar command su, users must, by default, supply their own password for authentication, rather than the password of the target user. After authentication, and if the configuration file (typically /etc/sudoers) permits the user access, the system invokes the requested command. The configuration file offers detailed access permissions, including enabling commands only from the invoking terminal; requiring a password per user or group; requiring re-entry of a password every time or never requiring a password at all for a particular command line. It can also be configured to permit passing arguments or multiple commands.
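As an illustration of the kinds of rules the file can express, a short hypothetical /etc/sudoers excerpt follows (the user, group, and command are invented for the example; such files are normally edited with the visudo utility):

# alice may run any command as any user, after giving her own password.
alice   ALL=(ALL:ALL) ALL

# Members of group wheel may restart services without a password.
%wheel  ALL=(root) NOPASSWD: /usr/bin/systemctl restart *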
History
Robert Coggeshall and Cliff Spencer wrote the original subsystem around 1980 at the Department of Computer Science at SUNY/Buffalo. Robert Coggeshall brought sudo with him to the University of Colorado Boulder. Between 1986 and 1993, the code and features were substantially modified by the IT staff of the University of Colorado Boulder Computer Science Department and the College of Engineering and Applied Science, including Todd C. Miller. The current version has been publicly maintained by OpenBSD developer Todd C. Miller since 1994, and has been distributed under an ISC-style license since 1999.
In November 2009 Thomas Claburn, in response to concerns that Microsoft had patented sudo, characterized such suspicions as overblown. The claims were narrowly framed to a particular GUI, rather than to the sudo concept.
The logo |
https://en.wikipedia.org/wiki/Derived%20object | In computer programming, derived objects are files (intermediate or not) that are not directly maintained, but get created.
The most typical context is that of compilation, linking, and packaging of source files.
Depending on the revision control (SCM) system, they may be
completely ignored,
managed as second class citizens or
potentially considered the archetype of configuration items.
The second case assumes a reproducible process to produce them. The third case implies that this process is itself being managed, or in practice audited. Currently, only builds are typically audited, but nothing in principle prevents the extension of this to more general patterns of production. Derived objects may then have a real identity. Different instances of the same derived object may be discriminated generically from each other on the basis of their dependency tree.
Version control |
https://en.wikipedia.org/wiki/GameSpy | GameSpy was an American provider of online multiplayer and matchmaking middleware for video games founded in 1999 by Mark Surfas. After the release of a multiplayer server browser for the game, QSpy, Surfas licensed the software under the GameSpy brand to other video game publishers through a newly established company, GameSpy Industries, which also incorporated his Planet Network of video game news and information websites, and GameSpy.com.
GameSpy merged with IGN in 2004; by 2014, its services had been used by over 800 video game publishers and developers since its launch. In August 2012, the GameSpy Industries division (which remained responsible for the GameSpy service) was acquired by mobile video game developer Glu Mobile. IGN (then owned by News Corporation) retained ownership of the GameSpy.com website. In February 2013, IGN's new owner, Ziff Davis, shut down IGN's "secondary" sites, including GameSpy's network. This was followed by the announcement in April 2014 that GameSpy's service platform would be shut down on May 31, 2014.
History
The 1996 release of id Software's video game Quake, one of the first 3D multiplayer action games to allow play over the Internet, furthered the concept of players creating and releasing "mods" or modifications of games. Mark Surfas saw the need for hosting and distribution of these mods and created PlanetQuake, a Quake-related hosting and news site. The massive success of mods catapulted PlanetQuake to huge traffic and a central position in the burgeoning game website scene.
Quake also marked the beginning of the Internet multiplayer real-time action game scene. However, finding a Quake server on the Internet proved difficult, as players could only share IP addresses of known servers between themselves or post them on websites. To solve this problem, a team of three programmers (consisting of Joe "QSpy" Powell, Tim Cook, and Jack "morbid" Matthews) formed Spy Software and created QSpy (or QuakeSpy). This allowed the list |
https://en.wikipedia.org/wiki/Play-by-post%20role-playing%20game | A play-by-post role-playing game (or sim) is an online text-based role-playing game in which players interact with each other and a predefined environment via text. It is a subset of the online role-playing community which caters to both gamers and creative writers. Play-by-post games may be based on other role-playing games, non-game fiction including books, television and movies, or original settings. This activity is closely related to both interactive fiction and collaborative writing. Compared to other roleplaying game formats, this type tends to have the loosest rulesets.
History
Play-by-post roleplaying has its origins on the large computer networks and bulletin board systems of major universities in the United States in the 1980s. It drew heavily upon the traditions of fanzines and off-line role-playing games. The introduction of IRC enabled users to engage in real-time chat-based role-playing and resulted in the establishment of open communities.
Development of forum hosting software and browser-based chat services such as AOL and Yahoo Chat increased the availability of these mediums and improved their accessibility to the general public.
Rules
Unlike other forms of online role-playing games such as MUDs or MMORPGs, the events in play-by-post games are rarely handled by software and instead rely on participants or moderators to make decisions or improvise. Players create their own characters and descriptions of events and their surroundings during play. Results of combat, which may include Player versus player encounters, may be determined by chance through dice rolls or software designed to provide a random result. The results of random chance may need to be provided to the players in order to avoid disputes that may be a result of cheating or favoritism. Alternatively a forum may be diceless and rely on cooperation among players to agree on outcomes of events and thus forgo the use of randomisers.
In the latter case, combat and other measure |
https://en.wikipedia.org/wiki/Headshell | A headshell is a head piece designed to be attached to the end of a turntable's or record player's tonearm, which holds the cartridge. Standard cartridges are secured to the headshell by a couple of 2.5 mm bolts spaced 1/2" apart. Older, non-metric cartridges used #2 (3/32") bolts.
Some headshells are designed to allow variable weights to be attached. For example, the H4-S Stanton headshell comes with 2g and 4g screw-in weights. Extra weight can be useful to prevent skipping if the DJ is scratching the record.
H-4 Bayonet Mount
Most headshells use a standard H-4 Bayonet Mount, which will fit all S shape tonearms. The bayonet has a standard barrel whose dimensions are 8 mm diameter and 12 mm length, with its four pins connected to the four colour-coded head-shell lead wires.
Headshell lead wires colours
The colour standards for the contact connections are as follows:
White: Left channel cartridge positive.
Blue: Left channel cartridge negative.
Red: Right channel cartridge positive.
Green: Right channel cartridge negative.
References
External links
Audio engineering |
https://en.wikipedia.org/wiki/Callus%20%28cell%20biology%29 | Plant callus (plural calluses or calli) is a growing mass of unorganized plant parenchyma cells. In living plants, callus cells are those cells that cover a plant wound. In biological research and biotechnology callus formation is induced from plant tissue samples (explants) after surface sterilization and plating onto tissue culture medium in vitro (in a closed culture vessel such as a Petri dish). The culture medium is supplemented with plant growth regulators, such as auxin, cytokinin, and gibberellin, to initiate callus formation or somatic embryogenesis. Callus initiation has been described for all major groups of land plants.
Callus induction and tissue culture
Plant species representing all major land plant groups have been shown to be capable of producing callus in tissue culture. A callus cell culture is usually sustained on gel medium. Callus induction medium consists of agar and a mixture of macronutrients and micronutrients for the given cell type. There are several types of basal salt mixtures used in plant tissue culture, most notably modified Murashige and Skoog medium, White's medium, and woody plant medium. Vitamins are also provided to enhance growth, such as Gamborg B5 vitamins. For plant cells, enrichment with nitrogen, phosphorus, and potassium is especially important. Plant callus is usually derived from somatic tissues. The tissues used to initiate callus formation depend on the plant species and which tissues are available for explant culture. The cells that give rise to callus and somatic embryos usually undergo rapid division or are partially undifferentiated, such as meristematic tissue. In alfalfa (Medicago truncatula), however, callus and somatic embryos are derived from mesophyll cells that undergo dedifferentiation. Plant hormones are used to initiate callus growth. After the callus has formed, the concentration of hormones in the medium may be altered to shift the development from callus to root formation, shoot growth or so |
https://en.wikipedia.org/wiki/Actinometer | An actinometer is an instrument that can measure the heating power of radiation. Actinometers are used in meteorology to measure solar radiation as pyranometers, pyrheliometers and net radiometers.
An actinometer is a chemical system or physical device which determines the number of photons in a beam integrally or per unit time. This name is commonly applied to devices used in the ultraviolet and visible wavelength ranges. For example, solutions of iron(III) oxalate can be used as a chemical actinometer, while bolometers, thermopiles, and photodiodes are physical devices giving a reading that can be correlated to the number of photons detected.
History
The actinometer was invented by John Herschel in 1825; he introduced the term actinometer, the first of many uses of the prefix actin for scientific instruments, effects, and processes.
The actinograph is a related device for estimating the actinic power of lighting for photography.
Chemical actinometry
Chemical actinometry involves measuring radiant flux via the yield from a chemical reaction. This process requires a chemical with a known quantum yield and easily analyzed reaction products.
Choosing an actinometer
Potassium ferrioxalate is commonly used, as it is simple to use and sensitive over a wide range of relevant wavelengths (254 nm to 500 nm). Other actinometers include malachite green leucocyanides, vanadium(V)–iron(III) oxalate and monochloroacetic acid; however, all of these actinometers undergo dark reactions, that is, they react in the absence of light. This is undesirable, since the effect has to be corrected for. Organic actinometers such as butyrophenone or piperylene are analysed by gas chromatography. Other actinometers are more specific in terms of the range of wavelengths at which quantum yields have been determined. Reinecke's salt K[Cr(NH3)2(NCS)4] reacts in the near-UV region although it is thermally unstable. Uranyl oxalate has been used historically but is very toxic and cumbersome to ana |
https://en.wikipedia.org/wiki/Pyranometer | A pyranometer () is a type of actinometer used for measuring solar irradiance on a planar surface and it is designed to measure the solar radiation flux density (W/m2) from the hemisphere above within a wavelength range 0.3 μm to 3 μm.
A typical pyranometer does not require any power to operate. However, recent technical development includes use of electronics in pyranometers, which do require (low) external power (see heat flux sensor).
Explanation
The solar radiation spectrum that reaches earth's surface extends its wavelength approximately from 300 nm to 2800 nm.
Depending on the type of pyranometer used, irradiance measurements with different degrees of spectral sensitivity will be obtained.
To make a measurement of irradiance, it is required by definition that the response to "beam" radiation varies with the cosine of the angle of incidence. This ensures a full response when the solar radiation hits the sensor perpendicularly (normal to the surface, sun at zenith, 0° angle of incidence), zero response when the sun is at the horizon (90° angle of incidence, 90° zenith angle), and 0.5 at a 60° angle of incidence. It follows that a pyranometer should have a so-called "directional response" or "cosine response" that is as close as possible to the ideal cosine characteristic.
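Written as a formula, with $E_\text{beam}$ denoting the beam irradiance and $\theta$ the angle of incidence (symbols chosen here for illustration), the ideal response is
$$E_\text{detected} = E_\text{beam} \cos\theta,$$
giving a factor of 1 at $\theta = 0^\circ$, 0.5 at $\theta = 60^\circ$, and 0 at $\theta = 90^\circ$.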
Types
Following the definitions noted in the ISO 9060, three types of pyranometer can be recognized and grouped in two different technologies: thermopile technology and silicon semiconductor technology.
The light sensitivity, known as 'spectral response', depends on the type of pyranometer. The figure above shows the spectral responses of the three types of pyranometer in relation to the solar radiation spectrum. The solar radiation spectrum represents the spectrum of sunlight that reaches the Earth's surface at sea level, at midday with A.M. (air mass) = 1.5.
The latitude and altitude influence this spectrum. The spectrum is influenced also by aerosol and pollution.
Thermopile py |
https://en.wikipedia.org/wiki/PCI%20configuration%20space | PCI configuration space is the underlying way that the Conventional PCI, PCI-X and PCI Express perform auto configuration of the cards inserted into their bus.
Overview
PCI devices have a set of registers referred to as configuration space and PCI Express introduces extended configuration space for devices. Configuration space registers are mapped to memory locations. Device drivers and diagnostic software must have access to the configuration space, and operating systems typically use APIs to allow access to device configuration space. When the operating system does not have access methods defined or APIs for memory mapped configuration space requests, the driver or diagnostic software has the burden to access the configuration space in a manner that is compatible with the operating system's underlying access rules. In all systems, device drivers are encouraged to use APIs provided by the operating system to access the configuration space of the device.
Technical information
One of the major improvements the PCI Local Bus had over other I/O architectures was its configuration mechanism. In addition to the normal memory-mapped and I/O port spaces, each device function on the bus has a configuration space, which is 256 bytes long, addressable by knowing the eight-bit PCI bus, five-bit device, and three-bit function numbers for the device (commonly referred to as the BDF or B/D/F, as abbreviated from bus/device/function). This allows up to 256 buses, each with up to 32 devices, each supporting eight functions. A single PCI expansion card can respond as a device and must implement at least function number zero. The first 64 bytes of configuration space are standardized; the remainder are available for vendor-defined purposes. Some high-end computers support more than one PCI domain (or PCI segment); each PCI domain supports up to 256 buses.
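On x86 PCs, the legacy "configuration mechanism #1" packs the B/D/F numbers and a register offset into one 32-bit word written to I/O port 0xCF8, with data then transferred through port 0xCFC. A sketch of the packing arithmetic (Python is used here purely to show the bit layout):

def pci_config_address(bus, device, function, register):
    assert bus < 256 and device < 32 and function < 8
    return (1 << 31               # enable bit
            | bus << 16           # eight-bit bus number
            | device << 11        # five-bit device number
            | function << 8       # three-bit function number
            | (register & 0xFC))  # dword-aligned register offset

# Bus 0, device 3, function 0, register 0 (vendor and device ID).
print(hex(pci_config_address(0, 3, 0, 0)))  # 0x80001800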
In order to allow more parts of configuration space to be standardized without conflicting with existing uses, there can be |
https://en.wikipedia.org/wiki/T.50%20%28standard%29 | ITU-T recommendation T.50 specifies the International Reference Alphabet (IRA), formerly International Alphabet No. 5 (IA5), a character encoding. ASCII is the U.S. variant of that character set.
The original version from November 1988 corresponds to ISO 646. The current version is from September 1992.
History
The starting point was the International Telegraph Alphabet No. 2 (ITA2), a five-bit code. IA5 is an improvement based on seven-bit bytes.
Recommendation V.3 IA5 (1968) Initial version, superseded
Recommendation V.3 IA5 (1972) Superseded
Recommendation V.3 IA5 (1976-10) Superseded
Recommendation V.3 IA5 (1980-11) Superseded
Recommendation T.50 IA5 (1984-10) Superseded
Recommendation T.50 IA5 (1988-11-25) Superseded
Recommendation T.50 IRA (1992-09-18) In force
Use
This standard is referenced by other standards, such as RFC 3966 and RFC 3939 (Calling Line Identification for Voice Mail Messages). It is also used by some analog modems, such as Cisco ones.
Character set
The following table shows the IA5 character set. Each character is shown with the hex code of its Unicode equivalent.
Standardisation
Identical standard: ISO/IEC 646:1991 (Twinned)
See also
ITU T.51
References
External links
Official ITU-T T.50 page
Tech Info - Character Codes (IA5 and ISO 646)
Character encoding
Character sets
ITU-T recommendations
ITU-T T Series Recommendations |
https://en.wikipedia.org/wiki/ITU%20T.61 | T.61 is an ITU-T Recommendation for a Teletex character set. T.61 predated Unicode,
and was the primary character set in ASN.1 used in early versions of X.500 and X.509
for encoding strings containing characters used in Western European languages. It is also used by older versions of LDAP. While T.61 continues to be supported in modern versions of X.500 and X.509, it has been deprecated in favor of Unicode. It is also called Code page 1036, CP1036, or IBM 01036.
While ASN.1 does see wide use and the T.61 character set is used on some standards using ASN.1 (for example in RSA Security's PKCS #9), the 1988-11 version of the T.61 standard itself was superseded by a never-published 1993-03 version; the 1993-03 version was withdrawn by the ITU-T. The 1988-11 version is still available.
T.61 was one of the encodings supported by Mozilla software in email and HTML until 2014, when the supported encodings were limited to those in the WHATWG Encoding Standard (although T.61 remained supported for LDAP).
Code page layout
The following table maps the T.61 characters to their equivalent Unicode code points.
See ITU T.51 for a description of how the accents at 0xC0..CF worked. They prefix the letters they modify, as opposed to the postfix combining characters used by Unicode.
See also
ITU T.51
Footnotes
References
External links
ITU-T Recommendation T.61 at ITU-T
ISO-IR-103 (ISO-IR registration of right-hand part)
Character sets
ASN.1
T.61
T.61 |
https://en.wikipedia.org/wiki/Teletex | Teletex was ITU-T specification F.200 for a text and document communications service that could be provided over telephone lines. It was rapidly superseded by e-mail but the name Teletex lives on in several of the X.500 standard attributes used in Lightweight Directory Access Protocol.
Overview
Teletex was designed as an upgrade to the conventional telex service. The terminal-to-terminal communication service of telex would be turned into an office-to-office document transmission system by teletex. Teletex envisaged direct communication between electronic typewriters, word processors and personal computers. These units had storage for transmitting and receiving messages. The use of such equipment considerably enhanced the character set available for document preparation.
Features
Character sets
In addition to the standard character set, a rich set of graphic symbols and a comprehensive set of control characters were supported in teletex. The set of control characters helped in preparation and reproduction of documents. In particular, they permitted the positioning of the printing element, specification of page orientation, left and right margins, vertical spacing and the use of underlining. The page control feature allowed standard A4 size papers to be used for receiving messages instead of the continuous stationery used in conventional telex systems.
Transmission and reception
A background/foreground operation was envisaged in teletex. Transmission/reception of messages should proceed in the background without affecting the work which the user might be carrying out in the foreground with the equipment. In other words, a user might be preparing a new document, while another document was being transmitted or received. The teletex would also maintain compatibility with the present telex system and inter-operate with it. Telex procedures called for the exchange of header information before the actual document transfer took place. The header information consisted |
https://en.wikipedia.org/wiki/Socket%203 | Socket 3 was a series of CPU sockets for various x86 microprocessors. It was sometimes found alongside a secondary socket designed for a math coprocessor chip, such as the 487. Socket 3 resulted from Intel's creation of lower voltage microprocessors. An upgrade to Socket 2, it rearranged the pin layout. Socket 3 is compatible with 168-pin socket CPUs.
Socket 3 was a 237-pin low insertion force (LIF) or zero insertion force (ZIF) 19×19 pin grid array (PGA) socket suitable for the 3.3 V and 5 V, 25–50 MHz Intel 486 SX, 486 DX, 486 DX2, 486 DX4, 486 OverDrive and Pentium OverDrive processors as well as AMD Am486, Am5x86 and Cyrix Cx5x86 processors.
See also
List of Intel microprocessors
List of AMD microprocessors
References
Socket 003 |
https://en.wikipedia.org/wiki/Gilbreath%27s%20conjecture | Gilbreath's conjecture is a conjecture in number theory regarding the sequences generated by applying the forward difference operator to consecutive prime numbers and leaving the results unsigned, and then repeating this process on consecutive terms in the resulting sequence, and so forth. The statement is named after Norman L. Gilbreath who, in 1958, presented it to the mathematical community after observing the pattern by chance while doing arithmetic on a napkin. In 1878, eighty years before Gilbreath's discovery, François Proth had, however, published the same observations along with an attempted proof, which was later shown to be false.
Motivating arithmetic
Gilbreath observed a pattern while playing with the ordered sequence of prime numbers
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, ...
Computing the absolute value of the difference between term n + 1 and term n in this sequence yields the sequence
1, 2, 2, 4, 2, 4, 2, 4, 6, 2, ...
If the same calculation is done for the terms in this new sequence, then for the sequence that results, and so on ad infinitum for each successive sequence of differences, the next five sequences in this list are
1, 0, 2, 2, 2, 2, 2, 2, 4, ...
1, 2, 0, 0, 0, 0, 0, 2, ...
1, 2, 0, 0, 0, 0, 2, ...
1, 2, 0, 0, 0, 2, ...
1, 2, 0, 0, 2, ...
What Gilbreath—and François Proth before him—noticed is that the first term in each series of differences appears to be 1.
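The observation is easy to reproduce by machine; a minimal sketch (Python, using SymPy's prime-number helper):

from sympy import prime

row = [prime(i) for i in range(1, 101)]  # the first 100 primes
for _ in range(10):
    row = [abs(b - a) for a, b in zip(row, row[1:])]
    print(row[0])  # the leading term of each difference row is 1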
The conjecture
Stating Gilbreath's observation formally is significantly easier to do after devising a notation for the sequences in the previous section. Toward this end, let $(p_n)$ denote the ordered sequence of prime numbers, and define each term in the sequence $(d_n^1)$ by
$$d_n^1 = |p_{n+1} - p_n|,$$
where $n$ is positive. Also, for each integer $k$ greater than 1, let the terms in $(d_n^k)$ be given by
$$d_n^k = |d_{n+1}^{k-1} - d_n^{k-1}|.$$
Gilbreath's conjecture states that every term in the sequence $a_k = d_1^k$ for positive $k$ is equal to 1.
Verification and attempted proofs
, no valid proof of the conjecture has been publ |
https://en.wikipedia.org/wiki/Disjunction%20and%20existence%20properties | In mathematical logic, the disjunction and existence properties are the "hallmarks" of constructive theories such as Heyting arithmetic and constructive set theories (Rathjen 2005).
Definitions
The disjunction property is satisfied by a theory if, whenever a sentence A ∨ B is a theorem, then either A is a theorem, or B is a theorem.
The existence property or witness property is satisfied by a theory if, whenever a sentence $(\exists x)A(x)$ is a theorem, where $A(x)$ has no other free variables, then there is some term $t$ such that the theory proves $A(t)$.
Related properties
Rathjen (2005) lists five properties that a theory may possess. These include the disjunction property (DP), the existence property (EP), and three additional properties:
The numerical existence property (NEP) states that if the theory proves $(\exists x \in \mathbb{N})\varphi(x)$, where $\varphi$ has no other free variables, then the theory proves $\varphi(\bar{n})$ for some natural number $n$. Here $\bar{n}$ is a term in the theory representing the number $n$.
Church's rule (CR) states that if the theory proves $(\forall x \in \mathbb{N})(\exists y \in \mathbb{N})\varphi(x, y)$ then there is a natural number $e$ such that, letting $f_e$ be the computable function with index $e$, the theory proves $(\forall x \in \mathbb{N})\varphi(x, f_e(x))$.
A variant of Church's rule, CR1, states that if the theory proves $(\exists f \colon \mathbb{N} \to \mathbb{N})\psi(f)$ then there is a natural number $e$ such that the theory proves $f_e$ is total and proves $\psi(f_e)$.
These properties can only be directly expressed for theories that have the ability to quantify over natural numbers and, for CR1, quantify over functions from $\mathbb{N}$ to $\mathbb{N}$. In practice, one may say that a theory has one of these properties if a definitional extension of the theory has the property stated above (Rathjen 2005).
Results
Non-examples and examples
Almost by definition, a theory that accepts excluded middle while having independent statements does not have the disjunction property. So all classical theories expressing Robinson arithmetic do not have it. Most classical theories, such as Peano arithmetic and ZFC, in turn do not validate the existence property either, e.g. because they validate the least number principle existence claim. Bu |
https://en.wikipedia.org/wiki/Reflected-wave%20switching | Reflected-wave switching is a signalling technique used in backplane computer buses such as PCI.
A backplane computer bus is a type of multilayer printed circuit board that has at least one (almost) solid layer of copper called the ground plane, and at least one layer of copper tracks that are used as wires for the signals. Each signal travels along a transmission line formed by its track and the narrow strip of ground plane directly beneath it. This structure is known in radio engineering as microstrip line.
Each signal travels from a transmitter to one or more receivers. Most computer buses use binary digital signals, which are sequences of pulses of fixed amplitude. In order to receive the correct data, the receiver must detect each pulse once, and only once. To ensure this, the designer must take the high-frequency characteristics of the microstrip into account.
When a pulse is launched into the microstrip by the transmitter, its amplitude depends on the ratio of the impedances of the transmitter and the microstrip. The impedance of the transmitter is simply its output resistance. The impedance of the microstrip is its characteristic impedance, which depends on its dimensions and on the materials used in the backplane's construction. As the leading edge of the pulse (the incident wave) passes the receiver, it may or may not have sufficient amplitude to be detected. If it does, then the system is said to use incident-wave switching. This is the system used in most computer buses predating PCI, such as the VME bus.
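Quantitatively, the launched amplitude follows from a resistive divider between the transmitter's output resistance and the line's characteristic impedance. A worked example of the arithmetic (Python; the component values are invented for illustration):

Rs, Z0, Vdrive = 25.0, 65.0, 3.3      # ohms, ohms, volts (illustrative)
V_incident = Vdrive * Z0 / (Z0 + Rs)  # divider between source and line
print(round(V_incident, 2))           # 2.38 V, perhaps too weak to detect

# At an unterminated (open) end the reflection coefficient is +1, so the
# reflected wave doubles the voltage seen as it travels back up the line.
print(round(2 * V_incident, 2))       # 4.77 V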
When the pulse reaches the end of the microstrip, its behaviour depends on the circuit conditions at this point. If the microstrip is correctly terminated (usually with a combination of resistors), the pulse is absorbed and its energy is converted to heat. This is the case in an incident-wave switching bus. If, on the other hand, there is no termination at the end of the microstrip, and the pulse encounters an open circuit, it is reflec |
https://en.wikipedia.org/wiki/Method%20overriding | Method overriding, in object-oriented programming, is a language feature that allows a subclass or child class to provide a specific implementation of a method that is already provided by one of its superclasses or parent classes. It allows for a specific type of polymorphism (subtyping). The implementation in the subclass overrides (replaces) the implementation in the superclass by providing a method that has the same name, same parameters or signature, and same return type as the method in the parent class. The version of a method that is executed will be determined by the object that is used to invoke it. If an object of a parent class is used to invoke the method, then the version in the parent class will be executed, but if an object of the subclass is used to invoke the method, then the version in the child class will be executed. Some languages allow a programmer to prevent a method from being overridden.
Language-specific examples
Ada
Ada provides method overriding by default. To favor early error detection (e.g. of a misspelling), it is possible to specify when a method is expected to be actually overriding, or not; this will be checked by the compiler.
type T is new Controlled with ......;
procedure Op(Obj: in out T; Data: in Integer);
type NT is new T with null record;
overriding -- overriding indicator
procedure Op(Obj: in out NT; Data: in Integer);
overriding -- overriding indicator
procedure Op(Obj: in out NT; Data: in String);
-- ^ compiler issues an error: subprogram "Op" is not overriding
C#
C# does support method overriding, but only if explicitly requested using the modifiers override and virtual or abstract.
abstract class Animal
{
public string Name { get; set; }
|
https://en.wikipedia.org/wiki/Double%20dispatch | In software engineering, double dispatch is a special form of multiple dispatch, and a mechanism that dispatches a function call to different concrete functions depending on the runtime types of two objects involved in the call. In most object-oriented systems, the concrete function that is called from a function call in the code depends on the dynamic type of a single object and therefore they are known as single dispatch calls, or simply virtual function calls.
Dan Ingalls first described how to use double dispatching in Smalltalk, calling it multiple polymorphism.
Overview
The general problem addressed is how to dispatch a message to different methods depending not only on the receiver but also on the arguments.
To that end, systems like CLOS implement multiple dispatch. Double dispatch is another solution that gradually reduces the polymorphism on systems that do not support multiple dispatch.
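In a single-dispatch language, the technique is typically written as two chained virtual calls: the first dispatches on the receiver, the second on the argument. A minimal sketch (Python; the spaceship and asteroid classes are invented to echo the collision use case listed below):

class Asteroid:
    def collide_with(self, other):           # first dispatch: on self
        return other.collide_with_asteroid(self)
    def collide_with_asteroid(self, other):  # second dispatch: on the argument
        return "asteroid hits asteroid"
    def collide_with_ship(self, other):
        return "ship hits asteroid"

class Spaceship:
    def collide_with(self, other):
        return other.collide_with_ship(self)
    def collide_with_asteroid(self, other):
        return "asteroid hits ship"
    def collide_with_ship(self, other):
        return "ship hits ship"

print(Asteroid().collide_with(Spaceship()))  # asteroid hits ship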
Use cases
Double dispatch is useful in situations where the choice of computation depends on the runtime types of its arguments. For example, a programmer could use double dispatch in the following situations:
Sorting a mixed set of objects: algorithms require that a list of objects be sorted into some canonical order. Deciding if one element comes before another element requires knowledge of both types and possibly some subset of the fields.
Adaptive collision algorithms usually require that collisions between different objects be handled in different ways. A typical example is in a game environment where the collision between a spaceship and an asteroid is computed differently from the collision between a spaceship and a spacestation.
Painting algorithms that require the intersection points of overlapping sprites to be rendered in a different manner.
Personnel management systems may dispatch different types of jobs to different personnel. A schedule algorithm that is given a person object typed as an accountant and a job object typed as engineering rejects the sc |
https://en.wikipedia.org/wiki/Radeon%20R400%20series | The R420 GPU, developed by ATI Technologies, was the company's basis for its 3rd-generation DirectX 9.0/OpenGL 2.0-capable graphics cards. Used first on the Radeon X800, the R420 was produced on a 0.13 micrometer (130 nm) low-K photolithography process and used GDDR-3 memory. The chip was designed for AGP graphics cards.
Driver support of this core was discontinued as of Catalyst 9.4, and as a result there is no official Windows 7 support for any of the X700 - X850 products.
Development
In terms of supported DirectX features, R420 (codenamed Loki) was very similar to the R300. R420 basically takes a "wider is better" approach to the previous architecture, with some small tweaks thrown in to enhance it in various ways. The chip came equipped with over double the pixel and vertex pushing resources compared to the Radeon 9800 XT's R360 (a minor evolution of the R350), with 16 DirectX 9.0b pixel pipelines and 16 ROPs. One would not be far off seeing the X800 XT basically as a pair of Radeon 9800 cores connected together and also running with a ~30% higher clock speed.
The R420 design was a 4 "quad" arrangement (4 pipelines per quad.) This organization internally allowed ATI to disable defective "quads" and sell chips with 12, 8 or even 4 pixel pipelines, an evolution of the technique used with Radeon 9500/9700 and 9800SE/9800. The separation into "quads" also allowed ATI to design a system to optimize the efficiency of the overall chip. Coined the "quad dispatch system", the screen is tiled and work is spread out evenly among the separate "quads" to optimize their throughput. This is how the R300-series chips performed their tasks as well, but R420 refined this by allowing programmable tile sizes in order to control work flow on a finer level of granularity. Apparently by reducing tile sizes, ATI was able to optimize for different triangle sizes.
When ATI doubled the number of pixel pipelines, they also raised the number of vertex shader engines from 4 to 6. This |
https://en.wikipedia.org/wiki/Value%20investing | Value investing is an investment paradigm that involves buying securities that appear underpriced by some form of fundamental analysis. The various forms of value investing derive from the investment philosophy first taught by Benjamin Graham and David Dodd at Columbia Business School in 1928, and subsequently developed in their 1934 text Security Analysis.
The early value opportunities identified by Graham and Dodd included stock in public companies trading at discounts to book value or tangible book value, those with high dividend yields, and those having low price-to-earnings multiples or low price-to-book ratios.
High-profile proponents of value investing, including Berkshire Hathaway chairman Warren Buffett, have argued that the essence of value investing is buying stocks at less than their intrinsic value. The discount of the market price to the intrinsic value is what Benjamin Graham called the "margin of safety". For 25 years, under the influence of Charlie Munger, Buffett expanded the value investing concept with a focus on "finding an outstanding company at a sensible price" rather than generic companies at a bargain price. Hedge fund manager Seth Klarman has described value investing as rooted in a rejection of the efficient-market hypothesis (EMH). While the EMH proposes that securities are accurately priced based on all available data, value investing proposes that some equities are not accurately priced.
Graham never used the phrase value investing – the term was coined later to help describe his ideas and has resulted in significant misinterpretation of his principles, the foremost being that Graham simply recommended cheap stocks. The Heilbrunn Center at Columbia Business School is the current home of the Value Investing Program.
History
While managing the endowment of King's College, Cambridge starting in the 1920s, economist John Maynard Keynes first attempted a strategy based on market timing, or predicting the movement of the finance market |
https://en.wikipedia.org/wiki/Bourbaki%E2%80%93Witt%20theorem | In mathematics, the Bourbaki–Witt theorem in order theory, named after Nicolas Bourbaki and Ernst Witt, is a basic fixed point theorem for partially ordered sets. It states that if X is a non-empty chain complete poset, and $f \colon X \to X$ is a function such that $f(x) \geq x$ for all $x \in X$, then $f$ has a fixed point. Such a function $f$ is called inflationary or progressive.
Special case of a finite poset
If the poset X is finite then the statement of the theorem has a clear interpretation that leads to the proof. The sequence of successive iterates, $x_{n+1} = f(x_n)$, where $x_0$ is any element of X, is monotone increasing. By the finiteness of X, it stabilizes: $x_n = x_\infty$ for $n$ sufficiently large. It follows that $x_\infty$ is a fixed point of $f$.
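The finite argument is directly executable; a small illustration (Python, with an invented inflationary function on the poset {0, ..., 5} under the usual order):

def f(x):                  # inflationary: f(x) >= x on {0, ..., 5}
    return min(x + 2, 5)

x = 0
while f(x) != x:           # iterate until the sequence stabilizes
    x = f(x)
print(x)                   # 5, a fixed point of f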
Proof of the theorem
Pick some $y \in X$. Define a function $K$ recursively on the ordinals as follows: $K(0) = y$ and $K(\alpha + 1) = f(K(\alpha))$. If $\beta$ is a limit ordinal, then by construction $\{K(\alpha) : \alpha < \beta\}$ is a chain in X. Define $K(\beta) = \sup\{K(\alpha) : \alpha < \beta\}$. This is now an increasing function from the ordinals into X. It cannot be strictly increasing, as if it were we would have an injective function from the ordinals into a set, violating Hartogs' lemma. Therefore the function must be eventually constant, so for some $\alpha$, $K(\alpha + 1) = K(\alpha)$; that is, $f(K(\alpha)) = K(\alpha)$. So letting $x = K(\alpha)$, we have our desired fixed point. Q.E.D.
Applications
The Bourbaki–Witt theorem has various important applications. One of the most common is in the proof that the axiom of choice implies Zorn's lemma. We first prove it for the case where X is chain complete and has no maximal element. Let $g$ be a choice function on the collection of non-empty subsets of $X$. Define a function $f \colon X \to X$ by $f(x) = g(\{y \in X : y > x\})$. This is allowed as, by assumption, the set $\{y \in X : y > x\}$ is non-empty. Then $f(x) > x$, so $f$ is an inflationary function with no fixed point, contradicting the theorem.
This special case of Zorn's lemma is then used to prove the Hausdorff maximality principle, that every poset has a maximal chain, which is easily seen to be equivalent to Zorn's Lemma.
Bourbaki–Witt has other applications. In particular in computer science, it is used in the theory of computable functions.
It is also used to define recursive data type |
https://en.wikipedia.org/wiki/Predicate%20variable | In mathematical logic, a predicate variable is a predicate letter which functions as a "placeholder" for a relation (between terms), but which has not been specifically assigned any particular relation (or meaning). Common symbols for denoting predicate variables include capital roman letters such as , and , or lower case roman letters, e.g., . In first-order logic, they can be more properly called metalinguistic variables. In higher-order logic, predicate variables correspond to propositional variables which can stand for well-formed formulas of the same logic, and such variables can be quantified by means of (at least) second-order quantifiers.
Notation
Predicate variables should be distinguished from predicate constants, which could be represented either with a different (exclusive) set of predicate letters, or by their own symbols which really do have their own specific meaning in their domain of discourse: e.g. the equality sign =.
If letters are used for both predicate constants and predicate variables, then there must be a way of distinguishing between them. One possibility is to use letters W, X, Y, Z to represent predicate variables and letters A, B, C,..., U, V to represent predicate constants. If these letters are not enough, then numerical subscripts can be appended after the letter in question (as in X1, X2, X3).
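For example (an illustrative schema, not drawn from the source), under this convention the expression ∀x (A(x) → X(x)) contains the predicate constant A and the predicate variable X; replacing X by any particular predicate produces an instance of the schema.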
Another option is to use Greek lower-case letters to represent such metavariable predicates. Then, such letters could be used to represent entire well-formed formulae (wff) of the predicate calculus: any free variable terms of the wff could be incorporated as terms of the Greek-letter predicate. This is the first step towards creating a higher-order logic.
Usage
If the predicate variables are not defined as belonging to the vocabulary of the predicate calculus, then they are predicate metavariables, whereas the rest of the predicates are just called "predicate letters". The metavariables are thus understood to be used to code for axiom schema and theorem |
https://en.wikipedia.org/wiki/Multilevel%20security | Multilevel security or multiple levels of security (MLS) is the application of a computer system to process information with incompatible classifications (i.e., at different security levels), permit access by users with different security clearances and needs-to-know, and prevent users from obtaining access to information for which they lack authorization. There are two contexts for the use of multilevel security. One is to refer to a system that is adequate to protect itself from subversion and has robust mechanisms to separate information domains, that is, trustworthy. Another context is to refer to an application of a computer that will require the computer to be strong enough to protect itself from subversion and possess adequate mechanisms to separate information domains, that is, a system we must trust. This distinction is important because systems that need to be trusted are not necessarily trustworthy.
Trusted operating systems
An MLS operating environment often requires a highly trustworthy information processing system, typically built on an MLS operating system (OS), though an MLS OS is not strictly necessary. Most MLS functionality can be supported by a system composed entirely of untrusted computers, although this requires multiple independent computers linked by hardware security-compliant channels (see section B.6.2 of the Trusted Network Interpretation, NCSC-TG-005). An example of hardware-enforced MLS is asymmetric isolation. If one computer is being used in MLS mode, then that computer must use a trusted operating system (OS). Because all information in an MLS environment is physically accessible by the OS, strong logical controls must exist to ensure that access to information is strictly controlled. Typically this involves mandatory access control that uses security labels, like the Bell–LaPadula model.
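As an illustration of label-based mandatory access control, here is a minimal sketch of the Bell–LaPadula rules ("no read up, no write down") over linearly ordered levels; real MLS labels also carry compartment sets, which this sketch omits:

```python
# Simplified Bell-LaPadula check: linear security levels only.
# Real MLS labels also carry category/compartment sets (omitted here).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def may_read(subject_level: str, object_level: str) -> bool:
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level: str, object_level: str) -> bool:
    # *-property: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

assert may_read("SECRET", "CONFIDENTIAL")       # read down: allowed
assert not may_read("CONFIDENTIAL", "SECRET")   # read up: denied
assert not may_write("SECRET", "CONFIDENTIAL")  # write down: denied
```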
Customers that deploy trusted operating systems typically require that the product complete a formal computer security evaluation. The evaluation is stricter for a |
https://en.wikipedia.org/wiki/National%20Information%20Assurance%20Glossary | Committee on National Security Systems Instruction No. 4009, National Information Assurance Glossary, published by the United States federal government, is an unclassified glossary of Information security terms intended to provide a common vocabulary for discussing Information Assurance concepts.
The glossary was previously published as the National Information Systems Security Glossary (NSTISSI No. 4009) by the National Security Telecommunications and Information Systems Security Committee (NSTISSC). Under Executive Order (E.O.) 13231 of October 16, 2001, Critical Infrastructure Protection in the Information Age, President George W. Bush redesignated the National Security Telecommunications and Information Systems Security Committee (NSTISSC) as the Committee on National Security Systems (CNSS).
The most recent version was revised April 26, 2010.
See also
Encryption
References
External links
National Information Assurance (IA) Review
National Information Assurance Glossary Terms
Cryptography publications
Glossaries
Publications of the United States government
Reference works in the public domain |
https://en.wikipedia.org/wiki/Fixed-point%20theorem | In mathematics, a fixed-point theorem is a result saying that a function F will have at least one fixed point (a point x for which F(x) = x), under some conditions on F that can be stated in general terms.
In mathematical analysis
The Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, the procedure of iterating a function yields a fixed point.
By contrast, the Brouwer fixed-point theorem (1911) is a non-constructive result: it says that any continuous function from the closed unit ball in n-dimensional Euclidean space to itself must have a fixed point, but it doesn't describe how to find the fixed point (See also Sperner's lemma).
For example, the cosine function is continuous in [−1,1] and maps it into [−1, 1], and thus must have a fixed point. This is clear when examining a sketched graph of the cosine function; the fixed point occurs where the cosine curve y = cos(x) intersects the line y = x. Numerically, the fixed point (known as the Dottie number) is approximately x = 0.73908513321516 (thus x = cos(x) for this value of x).
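A quick numerical illustration (the snippet is our own, not from the source): iterating cos from any starting point in the interval converges to the Dottie number.

```python
import math

# Iterate x -> cos(x); by the contraction-mapping argument this
# converges to the unique fixed point of cos, the Dottie number.
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)  # ~0.7390851332151607
```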
The Lefschetz fixed-point theorem (and the Nielsen fixed-point theorem) from algebraic topology is notable because it gives, in some sense, a way to count fixed points.
There are a number of generalisations of the Banach fixed-point theorem and further results; these are applied in PDE theory. See fixed-point theorems in infinite-dimensional spaces.
The collage theorem in fractal compression proves that, for many images, there exists a relatively small description of a function that, when iteratively applied to any starting image, rapidly converges on the desired image.
In algebra and discrete mathematics
The Knaster–Tarski theorem states that any order-preserving function on a complete lattice has a fixed point, and indeed a smallest fixed point. See also Bourbaki–Witt theorem.
The theorem has applications in abstract interpretation, a form of static program analysis.
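On a finite lattice the least fixed point guaranteed by Knaster–Tarski can be computed by iterating from the bottom element, which is essentially how an abstract interpreter solves its dataflow equations. A minimal sketch (the lattice and the monotone function here are invented for illustration):

```python
def least_fixed_point(f, bottom):
    """Kleene iteration: bottom, f(bottom), f(f(bottom)), ...

    For a monotone f on a finite lattice this ascending chain
    stabilizes at the least fixed point (Knaster-Tarski).
    """
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# Powerset lattice of {0,1,2,3} ordered by inclusion; bottom = empty set.
# f(S) = {0} union {n+1 for n in S if n+1 <= 3} is monotone.
f = lambda S: frozenset({0}) | frozenset(n + 1 for n in S if n + 1 <= 3)
print(sorted(least_fixed_point(f, frozenset())))  # -> [0, 1, 2, 3]
```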
A common theme in lambd |
https://en.wikipedia.org/wiki/Hexapawn | Hexapawn is a deterministic two-player game invented by Martin Gardner. It is played on a rectangular board of variable size, for example on a 3×3 board or on a regular chessboard. On a board of size n×m, each player begins with m pawns, one for each square in the row closest to them. The goal of each player is to either advance a pawn to the opposite end of the board or leave the other player with no legal moves, either by stalemate or by having all of their pieces captured.
Hexapawn on the 3×3 board is a solved game; with perfect play, White will always lose in 3 moves (1.b2 axb2 2.cxb2 c2 3.a2 c1#). Indeed, Gardner specifically constructed it as a game with a small game tree in order to demonstrate how it could be played by a heuristic AI implemented by a mechanical computer based on Donald Michie's Matchbox Educable Noughts and Crosses Engine.
A variant of this game is octopawn, which is played on a 4×4 board with 4 pawns on each side. It is a forced win for White.
Rules
As in chess, a pawn may be moved in two different ways: it may be moved one square vertically forward, or it may capture a pawn one square diagonally ahead of it. A pawn may not be moved forward if there is a pawn in the next square. Unlike chess, the first move of a pawn may not advance it by two spaces. A player loses if they have no legal moves or one of the other player's pawns reaches the end of the board.
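A move generator for the 3×3 game is small enough to sketch. The board encoding below (three strings of 'W', 'B' and '.', with White moving toward row 0) is an assumption made for this example:

```python
# 3x3 hexapawn move generation for White ('W' moves from higher
# row index toward row 0; the board is a tuple of 3 strings).
def white_moves(board):
    moves = []
    for r in range(3):
        for c in range(3):
            if board[r][c] != 'W':
                continue
            if r > 0 and board[r-1][c] == '.':   # advance one square
                moves.append(((r, c), (r-1, c)))
            for dc in (-1, 1):                    # diagonal captures
                nc = c + dc
                if r > 0 and 0 <= nc < 3 and board[r-1][nc] == 'B':
                    moves.append(((r, c), (r-1, nc)))
    return moves

start = ("BBB", "...", "WWW")
print(white_moves(start))  # three single-square advances from the start
```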
Dawson's chess
Whenever a player advances a pawn to the penultimate rank and attacks an opposing pawn, there is a threat to proceed to the final rank by capture. The opponent's only sensible responses, therefore, are to either capture the advanced pawn or advance the threatened one, the latter only being sensible in the case that there is one threatened pawn rather than two. If one restricts 3×n hexapawn with the additional rule that capturing is always compulsory, the result is the game Dawson's chess. The game was invented by Thomas Rayner Dawson in 1935.
Dawson's chess reduc |
https://en.wikipedia.org/wiki/Thermal%20design%20power | The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often a CPU, GPU or system on a chip) that the cooling system in a computer is designed to dissipate under any workload.
Some sources state that the peak power rating for a microprocessor is usually 1.5 times the TDP rating.
Intel has introduced a new metric called scenario design power (SDP) for some Ivy Bridge Y-series processors.
Calculation
The average CPU power (ACP) is the power consumption of central processing units, especially server processors, under "average" daily usage as defined by Advanced Micro Devices (AMD) for use in its line of processors based on the K10 microarchitecture (Opteron 8300 and 2300 series processors). Intel's thermal design power (TDP), used for Pentium and Core 2 processors, measures the energy consumption under high workload; it is numerically somewhat higher than the "average" ACP rating of the same processor.
According to AMD the ACP rating includes the power consumption when running several benchmarks, including TPC-C, SPECcpu2006, SPECjbb2005 and STREAM Benchmark (memory bandwidth),
which AMD said is an appropriate method of power consumption measurement for data centers and server-intensive workload environments. AMD said that the ACP and TDP values of the processors will both be stated and do not replace one another. Barcelona and later server processors have the two power figures.
In some cases the TDP of a CPU has been underestimated, so that real applications (typically strenuous ones, such as video encoding or games) caused the CPU to exceed its specified TDP and overload the computer's cooling system. In this case, CPUs either cause a system failure (a "therm-trip") or throttle their speed down. Most modern processors will cause a therm-trip only upon a catastrophic cooling failure, such as a fan that is no longer operational or an incorrectly mounted hea |
https://en.wikipedia.org/wiki/Whetstone%20%28benchmark%29 | The Whetstone benchmark is a synthetic benchmark for evaluating the performance of computers. It was first written in Algol 60 in 1972 at the Technical Support Unit of the Department of Trade and Industry (later part of the Central Computer and Telecommunications Agency) in the United Kingdom. It was derived from statistics on program behaviour gathered on the KDF9 computer at the National Physical Laboratory (NPL), using a modified version of its Whetstone ALGOL 60 compiler. The workload on the machine was represented as a set of frequencies of execution of the 124 instructions of the Whetstone Code. The Whetstone Compiler was built at the Atomic Power Division of the English Electric Company in Whetstone, Leicestershire, England, hence its name. Dr. B.A. Wichmann at NPL produced a set of 42 simple ALGOL 60 statements, which in a suitable combination matched the execution statistics.
To make a more practical benchmark Harold Curnow of TSU wrote a program incorporating the 42 statements. This program worked in its ALGOL 60 version, but when translated into FORTRAN it was not executed correctly by the IBM optimizing compiler: calculations whose results were not output were omitted. He then produced a set of program fragments which were more like real code and which collectively matched the original 124 Whetstone instructions. Timing this program gave a measure of the machine's speed in thousands of Whetstone instructions per second (kWIPS). The Fortran version became the first general-purpose benchmark that set industry standards of computer system performance. Further development was carried out by Roy Longbottom, also of TSU/CCTA, who became the official design authority. The Algol 60 program ran under the Whetstone compiler in July 2010, for the first time since the last KDF9 was shut down in 1980, but now executed by a KDF9 emulator. Following increased computer speeds, performance measurement was changed to Millions of Whetstone Instructions Per Second (MWIPS).
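As a toy illustration of how a MWIPS-style figure is derived (this is not the Whetstone kernel itself; the synthetic "instruction" below is invented for the example), one times a known count of operations and divides by the elapsed time:

```python
import math
import time

# Toy illustration of MWIPS-style reporting (not the real kernel):
# execute a known count of synthetic floating-point "instructions",
# then divide by elapsed time to get millions of them per second.
N = 1_000_000
t0 = time.perf_counter()
x = 1.0
for _ in range(N):
    x = math.sin(x) + 1.0   # stand-in for one synthetic instruction
elapsed = time.perf_counter() - t0
print(f"{N / elapsed / 1e6:.2f} million synthetic instructions/second")
```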
Source cod |
https://en.wikipedia.org/wiki/Solovay%E2%80%93Strassen%20primality%20test | The Solovay–Strassen primality test, developed by Robert M. Solovay and Volker Strassen in 1977, is a probabilistic test to determine if a number is composite or probably prime. The idea behind the test was discovered by M. M. Artjuhov in 1967
(see Theorem E in the paper). This test has been largely superseded by the Baillie–PSW primality test and the Miller–Rabin primality test, but has great historical importance in showing the practical feasibility of the RSA cryptosystem. The Solovay–Strassen test is essentially an Euler–Jacobi probable prime test.
Concepts
Euler proved that for any odd prime number p and any integer a,
a^((p−1)/2) ≡ (a/p) (mod p),
where (a/p) is the Legendre symbol. The Jacobi symbol (a/n) is a generalisation of the Legendre symbol, where n can be any odd integer. The Jacobi symbol can be computed in time O((log n)²) using Jacobi's generalization of the law of quadratic reciprocity.
Given an odd number n we can contemplate whether or not the congruence
a^((n−1)/2) ≡ (a/n) (mod n)
holds for various values of the "base" a, given that a is relatively prime to n. If n is prime then this congruence is true for all a. So if we pick values of a at random and test the congruence, then
as soon as we find an a which doesn't fit the congruence we know that n is not prime (but this does not tell us a nontrivial factorization of n). This base a is called an Euler witness for n; it is a witness for the compositeness of n. The base a is called an Euler liar for n if the congruence is true while n is composite.
For every composite odd n, at least half of all bases a coprime to n are (Euler) witnesses, as the set of Euler liars is a proper subgroup of (Z/nZ)*. For example, for n = 65, the set of Euler liars has order 8, while (Z/65Z)* has order 48.
This contrasts with the Fermat primality test, for which the proportion of witnesses may be much smaller. Therefore, there are no (odd) composite n without many witnesses, unlike the case of Carmichael numbers for Fermat's test.
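The test is short to implement. Below is a minimal sketch (function names are our own; the Jacobi symbol is computed with the standard binary quadratic-reciprocity algorithm). Because at least half of the bases for a composite n are witnesses, the error after k rounds is at most 2^−k.

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):   # (2/n) = -1 when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a               # quadratic reciprocity (both now odd)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # 0 means gcd(a, n) > 1

def solovay_strassen(n, rounds=20):
    """Return False if n is composite, True if n is probably prime."""
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        x = jacobi(a, n)
        # a is an Euler witness if (a/n) = 0 or the congruence fails.
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False
    return True  # probably prime; error probability <= 2**-rounds

print(solovay_strassen(221))  # False: 221 = 13 * 17
print(solovay_strassen(101))  # True
```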
Example
Suppose we wish to determine if n = 221 is prime. We write (n−1 |
https://en.wikipedia.org/wiki/Branch%20%28computer%20science%29 | A branch is an instruction in a computer program that can cause a computer to begin executing a different instruction sequence and thus deviate from its default behavior of executing instructions in order. Branch (or branching, branched) may also refer to the act of switching execution to a different instruction sequence as a result of executing a branch instruction. Branch instructions are used to implement control flow in program loops and conditionals (i.e., executing a particular sequence of instructions only if certain conditions are satisfied).
A branch instruction can be either an unconditional branch, which always results in branching, or a conditional branch, which may or may not cause branching depending on some condition. Also, depending on how it specifies the address of the new instruction sequence (the "target" address), a branch instruction is generally classified as direct, indirect or relative, meaning that the instruction contains the target address, or it specifies where the target address is to be found (e.g., a register or memory location), or it specifies the difference between the current and target addresses.
Implementation
Branch instructions can alter the contents of the CPU's Program Counter (or PC) (or Instruction Pointer on Intel microprocessors). The PC maintains the memory address of the next machine instruction to be fetched and executed. Therefore, a branch, if executed, causes the CPU to execute code from a new memory address, changing the program logic according to the algorithm planned by the programmer.
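To make the effect on the program counter concrete, here is a toy instruction interpreter (the instruction set and encoding are invented for this illustration): ordinary instructions fall through to PC + 1, an unconditional branch always loads the PC with its target, and a conditional branch does so only when its condition holds.

```python
# Toy interpreter: a taken branch overwrites the PC instead of
# letting it advance to the next instruction in sequence.
def run(program):
    pc, acc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "add":
            acc += arg
            pc += 1                           # default: fall through
        elif op == "jmp":
            pc = arg                          # unconditional branch
        elif op == "jnz":
            pc = arg if acc != 0 else pc + 1  # conditional branch
        elif op == "halt":
            break
    return acc

# Loop: set acc = 3, then subtract 1 and branch back while acc != 0.
print(run([("add", 3), ("add", -1), ("jnz", 1), ("halt", 0)]))  # -> 0
```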
One type of machine level branch is the jump instruction. These may or may not result in the PC being loaded or modified with some new, different value other than what it ordinarily would have been (being incremented past the current instruction to point to the following, next instruction). Jumps typically have unconditional and conditional forms where the latter may be taken or not taken (the PC is modified or not) depending |
https://en.wikipedia.org/wiki/MSI%20protocol | In computing, the MSI protocol, a basic cache-coherence protocol, operates in multiprocessor systems. As with other cache coherency protocols, the letters of the protocol name identify the possible states in which a cache line can be.
Overview
In MSI, each block contained inside a cache can have one of three possible states:
Modified: The block has been modified in the cache. The data in the cache is then inconsistent with the backing store (e.g. memory). A cache with a block in the "M" state has the responsibility to write the block to the backing store when it is evicted.
Shared: This block is unmodified and exists in read-only state in at least one cache. The cache can evict the data without writing it to the backing store.
Invalid: This block is either not present in the current cache or has been invalidated by a bus request, and must be fetched from memory or another cache if the block is to be stored in this cache.
These coherency states are maintained through communication between the caches and the backing store. The caches have different responsibilities when blocks are read or written, or when they learn of other caches issuing reads or writes for a block.
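The per-line transitions described in this section can be sketched as a small state machine. The following is an illustrative simplification (the event names and reduced event set are assumptions of this sketch; a full implementation also models write-backs and bus transactions explicitly):

```python
from enum import Enum

class State(Enum):
    M = "Modified"
    S = "Shared"
    I = "Invalid"

# Simplified MSI transitions for one cache line; "bus" events model
# what this cache does when it observes another cache's request.
def on_processor_read(state):
    # I -> S requires fetching the block (a bus read); M and S hit.
    return State.S if state == State.I else state

def on_processor_write(state):
    # Writing requires exclusive ownership: I or S upgrade to M.
    return State.M

def on_bus_read(state):
    # Another cache reads: an M line must be written back, then shared.
    return State.S if state == State.M else state

def on_bus_write(state):
    # Another cache writes: our copy (M or S) becomes invalid.
    return State.I

line = State.I
line = on_processor_read(line)   # I -> S (fetch from memory)
line = on_processor_write(line)  # S -> M (invalidate other copies)
line = on_bus_read(line)         # M -> S (write back, then share)
print(line)                      # State.S
```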
When a read request arrives at a cache for a block in the "M" or "S" states, the cache supplies the data. If the block is not in the cache (in the "I" state), it must verify that the block is not in the "M" state in any other cache. Different caching architectures handle this differently. For example, bus architectures often perform snooping, where the read request is broadcast to all of the caches. Other architectures include cache directories which have agents (directories) that know which caches last had copies of a particular cache block. If another cache has the block in the "M" state, it must write back the data to the backing store and go to the "S" or "I" states. Once any "M" line is written back, the cache obtains the block from either the backing store, or another cache with th |