source | text |
|---|---|
https://en.wikipedia.org/wiki/133%20%28number%29 | 133 (one hundred [and] thirty-three) is the natural number following 132 and preceding 134.
In mathematics
133 is a number n whose divisors (excluding n itself) sum to a number that divides φ(n). It is an octagonal number and a happy number.
133 is a Harshad number, because it is divisible by the sum of its digits.
133 is a repdigit in base 11 (111) and base 18 (77), whilst in base 20 it is a cyclic number formed from the reciprocal of the number three.
133 is a semiprime: a product of two prime numbers, namely 7 and 19. Since those prime factors are Gaussian primes, this means that 133 is a Blum integer.
133 is the number of compositions of 13 into distinct parts.
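All of the number-theoretic claims above are small enough to verify directly; a short Python check (the helper names are ours):

```python
from math import factorial

n = 133

# Harshad: the digit sum 1 + 3 + 3 = 7 divides 133 (133 = 7 * 19)
assert n % sum(int(d) for d in str(n)) == 0

# Octagonal: 133 = k(3k - 2) with k = 7
assert 7 * (3 * 7 - 2) == n

# Happy: iterating the sum of squared digits reaches 1
m = n
for _ in range(100):
    m = sum(int(d) ** 2 for d in str(m))
assert m == 1

# Semiprime 7 * 19; both factors are congruent to 3 mod 4, so 133 is a Blum integer
assert n == 7 * 19 and 7 % 4 == 3 and 19 % 4 == 3

# Compositions of 13 into distinct parts: choose a set of distinct parts
# summing to 13, then count its orderings (k parts give k! compositions).
def distinct_partitions(total, minimum=1):
    """Yield partitions of `total` into strictly increasing distinct parts."""
    if total == 0:
        yield []
        return
    for part in range(minimum, total + 1):
        for rest in distinct_partitions(total - part, part + 1):
            yield [part] + rest

count = sum(factorial(len(p)) for p in distinct_partitions(13))
print(count)  # 133
```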
In the military
Douglas C-133 Cargomaster was a United States cargo aircraft built between 1956 and 1961
is a heavy landing craft which launched in 1972
was a United States Navy Mission Buenaventura-class fleet oiler during World War II
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy General G. O. Squier-class transport ship during World War II
was a United States Navy during World War I
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy S-class submarine during World War II
was a United States Navy during World War II
was a United States Navy heavy cruiser during the Korean War
Frontstalag 133 was a temporary German prisoner of war camp during World War II located near Rennes, northern France
Caproni Ca.133 was a three-engine transport/bomber aircraft used by the Italian Regia Aeronautica from the Second Italo-Abyssinian War until World War II
Naval Mobile Construction Battalion 133 is an active duty Seabee battalion originally commissioned during World War II as the 133rd Naval Construction Battalion(NCB)
In transportation
London Buses route 133 is a Transport for London contracted bus route in London
RATB route 133 is a bus route run by RATB in Buchar |
https://en.wikipedia.org/wiki/1%2C000%2C000 | One million (1,000,000), or one thousand thousand, is the natural number following 999,999 and preceding 1,000,001. The word is derived from the early Italian millione (milione in modern Italian), from mille, "thousand", plus the augmentative suffix -one.
It is commonly abbreviated in British English as m (not to be confused with the metric prefix "m", milli, for 10⁻³, or with metre), M, MM ("thousand thousands", from Latin "Mille"; not to be confused with the Roman numeral MM = 2,000), mm (not to be confused with millimetre), or mn in financial contexts.
In scientific notation, it is written as 1×10⁶ or simply 10⁶. Physical quantities can also be expressed using the SI prefix mega (M), when dealing with SI units; for example, 1 megawatt (1 MW) equals 1,000,000 watts.
The meaning of the word "million" is common to the short scale and long scale numbering systems, unlike the larger numbers, which have different names in the two systems.
The million is sometimes used in the English language as a metaphor for a very large number, as in "Not in a million years" and "You're one in a million", or a hyperbole, as in "I've walked a million miles" and "You've asked a million-dollar question".
1,000,000 is also the square of 1000 and the cube of 100.
Visualizing one million
Even though it is often stressed that counting to precisely a million would be an exceedingly tedious task due to the time and concentration required, there are many ways to bring the number "down to size" in approximate quantities, ignoring irregularities or packing effects.
Information: Not counting spaces, the text printed on 136 pages of an Encyclopædia Britannica, or on 600 pages of pulp paperback fiction, contains approximately one million characters.
Length: There are one million millimetres in a kilometre, and roughly a million sixteenths of an inch in a mile (1 sixteenth = 0.0625). A typical car tire might rotate a million times in a trip, while the engine would do several times that number of revolution |
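The length figures above reduce to plain unit arithmetic; a quick check using standard conversion factors:

```python
# 1,000,000 as a square, a cube, and a power of ten
assert 1000 ** 2 == 100 ** 3 == 10 ** 6 == 1_000_000

# Millimetres in a kilometre: 1000 mm per metre, 1000 m per kilometre
mm_per_km = 1000 * 1000
print(mm_per_km)            # 1000000

# Sixteenths of an inch in a mile: 12 in/ft * 5280 ft/mi * 16
sixteenths_per_mile = 12 * 5280 * 16
print(sixteenths_per_mile)  # 1013760, roughly a million
```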
https://en.wikipedia.org/wiki/Formal%20equivalence%20checking | Formal equivalence checking process is a part of electronic design automation (EDA), commonly used during the development of digital integrated circuits, to formally prove that two representations of a circuit design exhibit exactly the same behavior.
Equivalence checking and levels of abstraction
In general, there is a wide range of possible definitions of functional equivalence covering comparisons between different levels of abstraction and varying granularity of timing details.
The most common approach is to consider the problem of machine equivalence which defines two synchronous design specifications functionally equivalent if, clock by clock, they produce exactly the same sequence of output signals for any valid sequence of input signals.
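Clock-by-clock machine equivalence can be illustrated with two toy synchronous machines, invented for this sketch: a specification and an implementation with a different state encoding. Real equivalence checkers prove equivalence symbolically; a bounded exhaustive check over short input sequences conveys the definition:

```python
import itertools

# A "machine" here is a reset state plus a step function mapping
# (state, input bit) -> (next state, output bit). Both encodings below
# are illustrative, not taken from any real design flow.
def run(machine, inputs):
    state = machine["reset"]
    outputs = []
    for x in inputs:
        state, y = machine["step"](state, x)
        outputs.append(y)
    return outputs

# Specification: output 1 exactly when the last two inputs were both 1.
spec = {
    "reset": (0, 0),
    "step": lambda s, x: ((s[1], x), 1 if (s[1] and x) else 0),
}

# "Implementation": same behaviour with a smaller state encoding.
impl = {
    "reset": 0,
    "step": lambda s, x: (1 if x else 0, 1 if (s and x) else 0),
}

# Bounded machine-equivalence check: identical outputs, clock by clock,
# for every input sequence up to a given length.
def equivalent_up_to(a, b, length):
    return all(
        run(a, seq) == run(b, seq)
        for n in range(length + 1)
        for seq in itertools.product([0, 1], repeat=n)
    )

print(equivalent_up_to(spec, impl, 8))  # True
```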
Microprocessor designers use equivalence checking to compare the functions specified for the instruction set architecture (ISA) with a register transfer level (RTL) implementation, ensuring that any program executed on both models will cause an identical update of the main memory content. This is a more general problem than synchronous machine equivalence, since the two models need not produce matching outputs on a clock-by-clock basis.
A system design flow requires comparison between a transaction-level model (TLM), e.g., written in SystemC, and its corresponding RTL specification. Such a check is of increasing interest in a system-on-a-chip (SoC) design environment.
Synchronous machine equivalence
The register transfer level (RTL) behavior of a digital chip is usually described with a hardware description language, such as Verilog or VHDL. This description is the golden reference model that describes in detail which operations will be executed during which clock cycle and by which pieces of hardware. Once the logic designers, by simulations and other verification methods, have verified the register-transfer description, the design is usually converted into a netlist by a logic synthesis tool. Equivalence is not to be confused with functional correctness, which must be determined by functional verification.
The initial netlist |
https://en.wikipedia.org/wiki/Projection-valued%20measure | In mathematics, particularly in functional analysis, a projection-valued measure (PVM) is a function defined on certain subsets of a fixed set and whose values are self-adjoint projections on a fixed Hilbert space. Projection-valued measures are formally similar to real-valued measures, except that their values are self-adjoint projections rather than real numbers. As in the case of ordinary measures, it is possible to integrate complex-valued functions with respect to a PVM; the result of such an integration is a linear operator on the given Hilbert space.
Projection-valued measures are used to express results in spectral theory, such as the important spectral theorem for self-adjoint operators, in which case the PVM is sometimes referred to as the spectral measure. The Borel functional calculus for self-adjoint operators is constructed using integrals with respect to PVMs. In quantum mechanics, PVMs are the mathematical description of projective measurements. They are generalized by positive operator valued measures (POVMs) in the same sense that a mixed state or density matrix generalizes the notion of a pure state.
Formal definition
A projection-valued measure π on a measurable space (X, M), where M is a σ-algebra of subsets of X, is a mapping π from M to the set of self-adjoint projections on a Hilbert space H (i.e. the orthogonal projections) such that

π(X) = id_H

(where id_H is the identity operator of H) and for every ξ, η ∈ H, the following function

E ↦ ⟨π(E)ξ, η⟩

is a complex measure on M (that is, a complex-valued countably additive function).
We denote this measure by

μ_{ξ,η}(E) = ⟨π(E)ξ, η⟩.

Note that μ_{ξ,ξ} is a real-valued measure, and a probability measure when ξ has length one.
If π is a projection-valued measure and E ∩ F = ∅,
then the images π(E), π(F) are orthogonal to each other. From this it follows that in general,

π(E ∩ F) = π(E) π(F),

and they commute.
Example. Suppose (X, M, μ) is a measure space. Let, for every measurable subset E in M, π(E)
be the operator of multiplication by the indicator function 1_E on L2(X). Then π is a projection-valued measure. F |
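The multiplication-operator example can be checked numerically in a finite-dimensional toy model of our own construction: take X to be a four-point set with counting measure, so that L2(X) is C⁴ and multiplication by an indicator function is a diagonal 0/1 matrix.

```python
import numpy as np

# Finite model: X = {0, 1, 2, 3}, H = L2(X) = C^4.
# pi(A) is multiplication by the indicator of A: a diagonal 0/1 projection.
X = range(4)

def pi(A):
    return np.diag([1.0 if x in A else 0.0 for x in X])

A, B = {0, 1}, {1, 2}

# pi(X) is the identity operator
assert np.allclose(pi(set(X)), np.eye(4))

# each value is a self-adjoint projection: P = P* = P^2
P = pi(A)
assert np.allclose(P, P.T.conj()) and np.allclose(P @ P, P)

# multiplicativity: pi(A ∩ B) = pi(A) pi(B), and the values commute
assert np.allclose(pi(A & B), pi(A) @ pi(B))
assert np.allclose(pi(A) @ pi(B), pi(B) @ pi(A))

# disjoint sets give orthogonal projections (their product is zero)
assert np.allclose(pi({0}) @ pi({3}), np.zeros((4, 4)))

print("PVM axioms hold in the toy model")
```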
https://en.wikipedia.org/wiki/Quillaia | Quillaia is the milled inner bark or small stems and branches of the soapbark (Quillaja saponaria, Molina). Other names include Murillo bark extract, Panama bark extract, Quillaia extract, Quillay bark extract, and Soapbark extract. Quillaia contains high concentrations of saponins that can be increased further by processing. Highly purified saponins from quillaia are used as adjuvants to enhance the effectiveness of vaccines. Other compounds in the crude extract include tannins and other polyphenols, and calcium oxalate.
Quillaia is used in the manufacture of food additives, and it is listed as an ingredient in root beer and cream soda. The extract also is used as a humectant in baked goods, frozen dairy products, and puddings and as a foaming agent in soft drinks. It is used in agriculture for some "natural" spray adjuvant formulations.
Saponin adjuvants
The saponins from Quillaja saponaria are used in several approved veterinary vaccines (e.g., foot-and-mouth disease vaccines). Initially a crude preparation was used, but more recently purified products have been developed. Two of these (Quil A and Matrix-M) have been shown to be more effective and cause less local irritation.
Quil A is still a mixture of more than 25 different saponin molecules. One of them, the saponin QS21, has been investigated as an adjuvant for human vaccines.
Novavax uses a highly purified quillaja extract as an adjuvant in its veterinary and human vaccines. The adjuvant, Matrix-M, is made at facilities in Sweden and Denmark.
References
External links
WHO report |
https://en.wikipedia.org/wiki/PDTV | PDTV is an abbreviation short for Pure Digital Television. Often seen as part of the filename of TV shows shared through P2P , The Scene, and FTP servers on the Internet. In this case, PDTV refers not to container, bitrate or dimensions of the video, but the digital nature of the capture source. Non Scene European rippers often use the label DVBRip or DVB-rip to specify a purely digital rip of a Digital Video Broadcast (DVB), however all Scene groups use standardized labeling.
PDTV encompasses a broad array of capture methods and sources, but generally it involves the capture of SD or non-HD digital television broadcasts without any analog-to-digital conversion, instead relying on directly ripping MPEG streams. PDTV sources can be captured by a variety of digital TV tuner cards from a digital feed such as ClearQAM unencrypted cable, Digital Terrestrial Television, Digital Video Broadcast or other satellite sources. Just as with Freeview (DVB-T) in the United Kingdom, broadcast television in the United States has no barriers to PDTV capture. Hardware such as the HDHomeRun when connected to an ATSC (Antenna) or unencrypted ClearQAM cable feed allows lossless digital capture of MPEG-2 streams (Pure Digital Television), without monthly fees or other restrictions normally implemented by a Set-top box. Although different from the analog hole, Pure Digital Television capture imposes no technological restriction on what is done with the stream; playback, Mash-Ups and even recompression/pirated distribution are possible without the permission of the rights holder.
A publisher of fan-made DVD releases also uses the name PDTV, but with no connection to the more common usage explained above. The "PD" in this case refers to "planet dust" with an additional connotation of Public Domain, even though the material offered is more often the video equivalent of abandonware as opposed to anything where copyright has actually expired. Whereas PDTV content online (as described abov |
https://en.wikipedia.org/wiki/Equal-loudness%20contour | An equal-loudness contour is a measure of sound pressure level, over the frequency spectrum, for which a listener perceives a constant loudness when presented with pure steady tones. The unit of measurement for loudness levels is the phon and is arrived at by reference to equal-loudness contours. By definition, two sine waves of differing frequencies are said to have equal-loudness level measured in phons if they are perceived as equally loud by the average young person without significant hearing impairment.
The Fletcher–Munson curves are one of many sets of equal-loudness contours for the human ear, determined experimentally by Harvey Fletcher and Wilden A. Munson, and reported in a 1933 paper entitled "Loudness, its definition, measurement and calculation" in the Journal of the Acoustical Society of America. Fletcher–Munson curves have been superseded and incorporated into newer standards. The definitive curves are those defined in ISO 226 from the International Organization for Standardization, which are based on a review of modern determinations made in various countries.
Amplifiers often feature a "loudness" button, known technically as loudness compensation, that boosts low and high-frequency components of the sound. These are intended to offset the apparent loudness fall-off at those frequencies, especially at lower volume levels. Boosting these frequencies produces a flatter equal-loudness contour that appears to be louder even at low volume, preventing the perceived sound from being dominated by the mid-frequencies where the ear is most sensitive.
Fletcher–Munson curves
The first research on the topic of how the ear hears different frequencies at different levels was conducted by Fletcher and Munson in 1933. Until recently, it was common to see the term Fletcher–Munson used to refer to equal-loudness contours generally, even though a re-determination was carried out by Robinson and Dadson in 1956, which became the basis for an ISO 226 standard.
It is |
https://en.wikipedia.org/wiki/Habitat%20%28video%20game%29 | Habitat is a massively multiplayer online role-playing game (MMORPG) developed by LucasArts. It is the first attempt at a large-scale commercial virtual community that was graphic based. Initially created in 1985 by Randy Farmer, Chip Morningstar, Aric Wilmunder and Janet Hunter, the game was made available as a beta test in 1986 by Quantum Link, an online service for the Commodore 64 computer and the corporate progenitor to AOL. Both Farmer and Morningstar were given a First Penguin Award at the 2001 Game Developers Choice Awards for their innovative work on Habitat. As a graphical MUD it is considered a forerunner of modern MMORPGs, unlike other online communities of the time (i.e. MUDs and massively multiplayer games with text-based interfaces). Habitat had a GUI and a large user base of consumer-oriented users, and those elements in particular have made Habitat a much-cited project and an acknowledged benchmark for the design of today's online communities that incorporate accelerated 3D computer graphics and immersive elements into their environments.
Culture
Users in the virtual world were represented by onscreen avatars, meaning that individual users had a third-person perspective of themselves, making it rather like a videogame. Players in the same region (denoted by all objects and elements shown on a particular screen) could see, speak (through onscreen text output from the users), and interact with one another. Habitat was governed by its citizenry. The only off-limits portions were those concerning the underlying software constructs and physical components of the system. The users were responsible for laws and acceptable behavior within Habitat. The authors of Habitat were greatly concerned with allowing the broadest range of interaction possible, since they felt that interaction, not technology or information, truly drove cyberspace. Avatars had to barter for resources within Habitat, and could even be robbed or "killed" by other avatars. Initially, this le |
https://en.wikipedia.org/wiki/American%20Nuclear%20Society | The American Nuclear Society (ANS) is an international, not-for-profit organization of scientists, engineers, and industry professionals that promotes the field of nuclear engineering and related disciplines.
ANS is composed of three communities: professional divisions, local sections/plant branches, and student sections. Individual members consist of fellows, professional members, and student members. The Society also includes organization members such as corporations, governmental agencies, educational institutions, and associations.
As of spring 2020, ANS is composed of more than 10,000 members from more than 40 countries. ANS is also a member of the International Nuclear Societies Council (INSC).
Professional Divisions within the American Nuclear Society focus on specific technical domains, encompassing 18 areas and the Young Members Group. They provide members with specialized engagement opportunities in nuclear science and technology. ANS members can join any number of these divisions. Their activities are coordinated by the Professional Divisions Committee. Topics covered by the divisions range from Accelerator Applications to Fusion Energy and more.
The main objectives of ANS are to provide professional development opportunities for members, engage and inform the public and students about the benefits of nuclear technology, encourage innovation in the nuclear field, and advocate effectively for nuclear technology at both domestic and international levels.
History
The American Nuclear Society was founded in 1954 as a not-for-profit association to promote the growing nuclear field. Shortly thereafter, in 1955, ANS held its first annual meeting and elected Walter Zinn as its first president. Originally headquartered in space provided by the Oak Ridge Institute of Nuclear Studies (ORINS), the Society's headquarters moved to various locations over the years until 1977, when the Society settled into its own headquarters building in La |
https://en.wikipedia.org/wiki/Integrator | An integrator in measurement and control applications is an element whose output signal is the time integral of its input signal. It accumulates the input quantity over a defined time to produce a representative output.
Integration is an important part of many engineering and scientific applications. Mechanical integrators are the oldest type and are still used for metering water flow or electrical power. Electronic analogue integrators are the basis of analog computers and charge amplifiers. Integration can also be performed by algorithms in digital computers.
In signal processing circuits
An electronic integrator is a form of first-order low-pass filter, which can be realized in the continuous-time (analog) domain or approximated (simulated) in the discrete-time (digital) domain. An integrator has a low-pass filtering effect, but when given a constant offset at its input it accumulates a growing value until it reaches a limit of the system or overflows.
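The accumulation behaviour just described can be sketched with a discrete-time integrator y[n] = y[n-1] + T·x[n]; the sample period and saturation limit below are invented for illustration.

```python
# Discrete-time integrator with saturation standing in for the
# "limit of the system" (supply rails, register overflow, etc.).
T = 0.001          # sample period, arbitrary for this sketch
LIMIT = 1.0        # saturation limit

def integrate(samples, T=T, limit=LIMIT):
    y, out = 0.0, []
    for x in samples:
        y = min(max(y + T * x, -limit), limit)  # accumulate, then clip
        out.append(y)
    return out

# A zero-mean input integrates back to zero...
balanced = integrate([1.0, -1.0] * 5000)
print(abs(balanced[-1]) < 1e-9)   # True

# ...but a small constant offset accumulates until the integrator saturates.
offset = integrate([0.2] * 10000)
print(offset[-1])                  # 1.0 (clipped at the limit)
```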
A current integrator is an electronic device performing a time integration of an electric current, thus measuring a total electric charge. A capacitor's current–voltage relation makes it a very simple current integrator:

v(t) = v(0) + (1/C) ∫₀ᵗ i(τ) dτ
More sophisticated current integrator circuits build on this relation, such as the charge amplifier. A current integrator is also used to measure the electric charge on a Faraday cup in a residual gas analyzer to measure partial pressures of gases in a vacuum. Another application of current integration is in ion beam deposition, where the measured charge directly corresponds to the number of ions deposited on a substrate, assuming the charge state of the ions is known. The two current-carrying electrical leads must be connected to the ion source and the substrate, closing the electric circuit which in part is given by the ion beam.
A voltage integrator is an electronic device performing a time integration of an electric voltage, thus measuring the total volt-second product. A simple resistor–capa |
https://en.wikipedia.org/wiki/Fantasy%20Zone | Fantasy Zone is a 1986 arcade video game by Sega, and the first game in the Fantasy Zone series. It was later ported to a wide variety of consoles, including the Master System. The player controls a sentient spaceship named Opa-Opa who fights an enemy invasion in the titular group of planets. The game contains a number of features atypical of the traditional scrolling shooter. The main character, Opa-Opa, is sometimes referred to as Sega's first mascot character.
The game design and main character have many similarities to the earlier TwinBee, and both are credited with establishing the cute 'em up subgenre. It also popularized the concept of a boss rush, a stage where the player faces multiple previous bosses again in succession. Numerous sequels were made over the years.
Gameplay
In the game, the player's ship is placed in a level with a number of bases to destroy. When all the bases are gone, the stage boss appears, who must be defeated in order to move on to the next stage. There are eight stages, and in all of them, except the final one, the scroll is not fixed; the player can move either left or right, although the stage loops. The final level consists of a rematch against all of the previous bosses in succession before facing the final boss.
Opa-Opa uses two different attacks: the standard weapon (initially bullets) and bombs. He can also move down to land on the ground by sprouting feet and walking around until he flies again.
It is possible to upgrade Opa-Opa's weapons, get bombs and a flying engine to increase speed, and get extra lives. To do this, the player must earn money by defeating enemies, bases or bosses, and access a shop by touching a marked balloon. Prices rise with each purchase. When the player chooses to exit or the time runs out, another screen appears to equip these upgrades; only one engine, weapon and bomb can be equipped at a time.
Some of the new weapons have a time limit that starts as soon as the shop is left. Some of the bombs can be used at |
https://en.wikipedia.org/wiki/Commando%20%28video%20game%29 | Commando, released as in Japan, is a vertically scrolling run and gun video game released by Capcom for arcades in 1985. The game was designed by Tokuro Fujiwara. It was distributed in North America by Data East, and in Europe by several companies including Capcom, Deith Leisure and Sega, S.A. SONIC. Versions were released for various home computers and video game consoles. It is unrelated to the 1985 film of the same name, which was released six months after the game.
Commando was a critical and commercial success, becoming one of the highest-grossing arcade video games of 1985 and one of the best-selling home video games of 1986. It was highly influential, spawning numerous clones following its release, while popularizing the run-and-gun shooter genre. Its influence can be seen in many later shooter games, especially those released during the late 1980s to early 1990s.
The game later appeared on Capcom Classics Collection, Activision Anthology, and on the Wii Virtual Console Arcade, as well as Capcom Arcade Cabinet for PlayStation 3 and Xbox 360. A sequel, Mercs, was released in 1989.
Gameplay
The player takes control of a soldier named Super Joe, who starts by being dropped off in a jungle by a helicopter, and has to fight his way out singlehandedly, fending off a massive assault of enemy soldiers.
Super Joe is armed with an assault rifle (which has unlimited ammunition) as well as a limited supply of hand grenades. While Joe can fire his gun in any of the eight directions that he faces, his grenades can only be thrown vertically towards the top of the screen, irrespective of the direction Joe is facing. Unlike his assault rifle bullets, grenades can be thrown to clear obstacles, and explosions from well-placed grenades can kill several enemies at once.
At the end of each level, the screen stops, and the player must fight several soldiers streaming from a gate or fortress. They are ordered out by a cowardly officer, who immediately runs away, although sho |
https://en.wikipedia.org/wiki/Recursive%20definition | In mathematics and computer science, a recursive definition, or inductive definition, is used to define the elements in a set in terms of other elements in the set (Aczel 1977:740ff). Some examples of recursively-definable objects include factorials, natural numbers, Fibonacci numbers, and the Cantor ternary set.
A recursive definition of a function defines values of the function for some inputs in terms of the values of the same function for other (usually smaller) inputs. For example, the factorial function n! is defined by the rules

0! = 1
n! = n · (n − 1)!  for n > 0.

This definition is valid for each natural number n, because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure for computing the value of the function n!, starting from n = 0 and proceeding onwards with n = 1, n = 2, n = 3, and so on.
The recursion theorem states that such a definition indeed defines a function that is unique. The proof uses mathematical induction.
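The two rules above translate directly into code; a minimal recursive rendering:

```python
def factorial(n: int) -> int:
    """Recursive definition: 0! = 1 (base case), n! = n * (n - 1)! otherwise."""
    if n == 0:
        return 1                     # base case the recursion eventually reaches
    return n * factorial(n - 1)      # value defined via a smaller input

print([factorial(n) for n in range(6)])  # [1, 1, 2, 6, 24, 120]
```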
An inductive definition of a set describes the elements in a set in terms of other elements in the set. For example, one definition of the set ℕ of natural numbers is:

1. 1 is in ℕ.
2. If an element n is in ℕ, then n + 1 is in ℕ.
3. ℕ is the intersection of all sets satisfying (1) and (2).

There are many sets that satisfy (1) and (2) – for example, the set of all real numbers greater than or equal to 1 satisfies the definition. However, condition (3) specifies the set of natural numbers by removing the sets with extraneous members. Note that this definition assumes that ℕ is contained in a larger set (such as the set of real numbers) in which the operation + is defined.
Properties of recursively defined functions and sets can often be proved by an induction principle that follows the recursive definition. For example, the definition of the natural numbers presented here directly implies the principle of mathematical induction for natural numbers: if a property holds of the natural number 0 (or 1), and the property holds of n + 1 whenever it holds of n, then the property holds of all natural numbers (Aczel 1977:742 |
https://en.wikipedia.org/wiki/PWM%20rectifier | A PWM rectifier is an AC-to-DC power converter implemented using forced-commutated power electronic semiconductor switches. Conventional PWM converters are used for wind turbines that have a permanent-magnet alternator.
Today, insulated gate bipolar transistors are typical switching devices. In contrast to diode bridge rectifiers, PWM rectifiers achieve bidirectional power flow. In frequency converters this property makes it possible to perform regenerative braking. PWM rectifiers are also used in distributed power generation applications, such as micro turbines, fuel cells and windmills.
The major advantage of the pulse-width modulation technique is the reduction of higher-order harmonics. It also makes it possible to control the magnitude of the output voltage and to improve the power factor, by forcing the switches to follow the input voltage waveform using a phase-locked loop (PLL). This reduces the total harmonic distortion (THD).
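As a sketch of why pulse-width modulation makes harmonics easy to remove, the fragment below (all parameter values invented for illustration) generates a sinusoidal PWM waveform and shows that a crude low-pass filter recovers the 50 Hz reference, because the unwanted content sits near the much higher switching frequency.

```python
import numpy as np

# Sinusoidal PWM sketch: compare a 50 Hz reference with a 5 kHz triangular
# carrier; the comparison result is the switch state.
fs = 1_000_000                                        # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)                        # one fundamental period
reference = 0.8 * np.sin(2 * np.pi * 50 * t)          # modulating wave
carrier = 2 * np.abs(2 * ((5000 * t) % 1) - 1) - 1    # triangle in [-1, 1]
pwm = np.where(reference > carrier, 1.0, -1.0)        # switch output

# Moving average over one carrier period acts as a crude low-pass filter;
# it removes the switching-frequency harmonics and leaves the 50 Hz sine.
window = fs // 5000
filtered = np.convolve(pwm, np.ones(window) / window, mode="same")
error = filtered[window:-window] - reference[window:-window]
print(float(np.max(np.abs(error))))  # small compared with the 0.8 amplitude
```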
References |
https://en.wikipedia.org/wiki/Bastion%20host | A bastion host is a special-purpose computer on a network specifically designed and configured to withstand attacks, so named by analogy to the bastion, a military fortification. The computer generally hosts a single application or process, for example, a proxy server or load balancer, and all other services are removed or limited to reduce the threat to the computer. It is hardened in this manner primarily due to its location and purpose, which is either on the outside of a firewall or inside of a demilitarized zone (DMZ) and usually involves access from untrusted networks or computers. These computers are also equipped with special networking interfaces to withstand high-bandwidth attacks through the internet.
Definitions
The term is generally attributed to a 1990 article discussing firewalls by Marcus J. Ranum, who defined a bastion host as "a system identified by the firewall administrator as a critical strong point in the network security. Generally, bastion hosts will have some degree of extra attention paid to their security, may undergo regular audits, and may have modified software".
It has also been described as "any computer that is fully exposed to attack by being on the public side of the DMZ, unprotected by a firewall or filtering router. Firewalls and routers, anything that provides perimeter access control security can be considered bastion hosts. Other types of bastion hosts can include web, mail, DNS, and FTP servers. Due to their exposure, a great deal of effort must be put into designing and configuring bastion hosts to minimize the chances of penetration".
Placement
There are two common network configurations that include bastion hosts and their placement. The first requires two firewalls, with bastion hosts sitting between the first "outside world" firewall, and an inside firewall, in a DMZ. Often, smaller networks do not have multiple firewalls, so if only one firewall exists in a network, bastion hosts are commonly placed outside the fir |
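A common concrete instance of the bastion pattern is an SSH jump host: clients reach internal machines only by first authenticating to the bastion. The OpenSSH client configuration below is a hypothetical sketch (host names and addresses invented); the ProxyJump directive it relies on is available in OpenSSH 7.3 and later.

```
# ~/.ssh/config : all names and addresses here are placeholders
Host bastion
    HostName bastion.example.com
    User admin

Host internal-db
    HostName 10.0.2.15
    ProxyJump bastion    # tunnel through the bastion to reach the internal host
```

With this in place, `ssh internal-db` opens a connection through the bastion without exposing the internal host directly to untrusted networks.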
https://en.wikipedia.org/wiki/SmealSearch | SmealSearch (now BizSeer) was a web portal, search engine and digital library for academic business documents that was originally hosted at the defunct eBusiness Research Center at the Pennsylvania State University. It was based on the CiteSeer digital library and search engine technology. Due to lack of support, it moved to the College of Information Sciences and Technology and became BizSeer. It was enhanced and modified by many including Lee Giles (project manager), Yang Sun (technical lead), Sandip Debnath, Isaac Councill, Arvind Rangaswamy, Nirmal Pal, Yves Petinot and Pradeep Teregowda.
BizSeer's goal was to intelligently crawl and harvest academic business documents on the web and use autonomous citation indexing to permit querying by citation or by document. It was publicly available on the World Wide Web at the College of Information Sciences and Technology at the Pennsylvania State University, and had over 150,000 documents, primarily in the fields of business and related areas. The goal was to have the document collection continuously grow. BizSeer harvested business school information from the web and created a large database of business schools from around the world. BizSeer's goals were to assist students, professors, researchers and others with business-related research and other information needs.
BizSeer, formerly SmealSearch, grew from and absorbed a similar search engine, eBizSearch, designed to harvest and index academic documents related to e-business. BizSeer is no longer supported.
See also
CiteSeerX
CiteSeer
Citebase
Google Scholar
Scopus
References |
https://en.wikipedia.org/wiki/Sunwise | In Scottish folklore, sunwise, deosil or sunward (clockwise) was considered the “prosperous course”, turning from east to west in the direction of the sun. The opposite course, anticlockwise, was known as widdershins (Lowland Scots), or tuathal (Scottish Gaelic). In the Northern Hemisphere, "sunwise" and "clockwise" run in the same direction, because sundials were used to tell time, and their features were transferred to clock faces. Another influence may have been the right-handed bias in many cultures.
Irish culture
During the days of Gaelic Ireland and of the Irish clans, the Psalter known as the Cathach was used as both a rallying cry and protector in battle by the Chiefs of Clan O'Donnell. Before a battle it was customary for a chosen monk or holy man (usually attached to the Clan McGroarty, and who was in a state of grace) to wear the Cathach and the cumdach, or book shrine, around his neck and then walk three times sunwise around the warriors of Clan O'Donnell.
According to folklorist Kevin Danaher, on St. John's Eve in Ulster and Connaught, it was customary to light a bonfire at sunset and to walk sunwise around the fire while praying the rosary. Those who could not afford a rosary would keep tally by holding a small pebble during each prayer and throwing it into the bonfire as each prayer was completed.
Scottish culture
This is descriptive of the ceremony observed by the druids, of walking round their temples by the south, in the course of their directions, always keeping their temples on their right. This course (diasil or deiseal) was deemed propitious, while the contrary course was perceived as fatal, or at least unpropitious. From this ancient superstition are derived several Gaelic customs which were still observed around the turn of the twentieth century, such as drinking over the left thumb, as Toland expresses it, or according to the course of the sun.
Martin Martin says:
"Deosil" and other spellings
Wicca uses the spelling deosil, which violates the Gaeli |
https://en.wikipedia.org/wiki/Pirbright%20Institute | The Pirbright Institute (formerly the Institute for Animal Health) is a research institute in Surrey, England, dedicated to the study of infectious diseases of farm animals. It forms part of the UK government's Biotechnology and Biological Sciences Research Council (BBSRC). The institute employs scientists, vets, PhD students and operations staff.
History
The Pirbright site began in 1914 as a facility for testing cattle for tuberculosis. More buildings were added in 1925. The Compton site was established by the Agricultural Research Council in 1937. Pirbright became a research institute in 1939 and Compton in 1942. The Houghton Poultry Research Station at Houghton, Cambridgeshire was established in 1948. In 1963 Pirbright became the Animal Virus Research Institute and Compton became the Institute for Research on Animal Diseases. The Neuropathogenesis Unit (NPU) was established in Edinburgh in 1981. This became part of the Roslin Institute in 2007.
In 1987, Compton, Houghton and Pirbright became the Institute for Animal Health, funded by the BBSRC. Houghton closed in 1992, and operations at Compton ended in 2015.
The Edward Jenner Institute for Vaccine Research was sited at Compton until October 2005, when it merged with the vaccine programmes of the University of Oxford and the Institute for Animal Health.
The Pirbright site was implicated in the 2007 United Kingdom foot-and-mouth outbreak, with the Health and Safety Executive (HSE) concluding that a local case of the disease was a result of contaminated effluent release either from the Pirbright Institute or the neighbouring Merial Animal Health laboratory.
Significant investment (over £170 million) took place at Pirbright with the development of new world-class laboratory and animal facilities. The institute has been known as "The Pirbright Institute" since October 2012.
On 14 June 2019 the largest stock of the rinderpest virus was destroyed at the Pirbright Institute.
Directors of note
Dr John Burns Brooksby 1964 until 1980
Structure
The work previou |
https://en.wikipedia.org/wiki/Winepress | A winepress is a device used to extract juice from crushed grapes during winemaking. There are a number of different styles of presses that are used by wine makers but their overall functionality is the same. Each style of press exerts controlled pressure in order to free the juice from the fruit (most often grapes). The pressure must be controlled, especially with grapes, in order to avoid crushing the seeds and releasing a great deal of undesirable tannins into the wine. Wine was being made at least as long ago as 4000 BC; in 2011, a winepress was unearthed in Armenia with red wine dated 6,000 years old.
Press types
Basket
A basket press consists of a large basket filled with the crushed grapes. Pressure is applied through a plate that is forced down onto the fruit. The mechanism to lower the plate is often either a screw or a hydraulic device. The juice flows through openings in the basket. The basket style press was the first type of mechanized press to be developed, and its basic design has not changed in nearly 1000 years.
Horizontal screw
A horizontal screw press works using the same principle as the basket press. Instead of a plate being brought down to put pressure on the grapes, plates from either side of a closed cylinder are brought together to squeeze the grapes. Generally the volume of grapes handled is significantly greater than that of a basket press.
Bladder
A bladder press consists of a large cylinder, closed at each end, into which the fruit is loaded. To press the grapes, a large bladder expands and pushes the grapes against the sides. The juice then flows out through small openings in the cylinder. The cylinder rotates during the process to help homogenize the pressure that is placed on the grapes.
Continuous screw
A continuous screw press differs from the above presses in that it does not process a single batch of grapes at a time. Instead it uses an Archimedes' screw to continuously force grapes up against the wall of the device. Juice is |
https://en.wikipedia.org/wiki/Allograph | In graphemics and typography, the term allograph is used of a glyph that is a design variant of a letter or other grapheme, such as a letter, a number, an ideograph, a punctuation mark or other typographic symbol. In graphemics, an obvious example in English (and many other writing systems) is the distinction between uppercase and lowercase letters. Allographs can vary greatly, without affecting the underlying identity of the grapheme. Even if the word "cat" is rendered as "cAt", it remains recognizable as the sequence of the three graphemes , , .
Letters and other graphemes can also have significant variations that may be missed by many readers. The letter g, for example, has two common forms (glyphs) in different typefaces, and a wide variety in people's handwriting. A positional example of allography is the long s (ſ), a symbol which was once widely used as a non-final allograph of the lowercase letter s.
A grapheme variant can acquire a separate meaning in a specialized writing system, such as the International Phonetic Alphabet used in linguistics. Several such variants have distinct code points in Unicode and thus are not allographs for some applications.
Han characters
In the Han script, there exist several graphemes that have more than one written representation. Han typefaces often contain many variants of some graphemes. Different regional standards have adopted certain character variants. For instance:
{| class=wikitable
!Standard!!Allograph!!Definition
|-
|Mainland China||lang="zh-Hans-CN" align="center"|户||户
|-
|Japan||lang="ja" align="center"| 戸||戸
|-
|Taiwan||lang="zh-Hant-TW" align="center"| 戶||戶
|}
Typography
In typography, the term 'allograph' is used more specifically to describe the different representations of the same grapheme or character in different typefaces. The resulting font elements may look quite different in shape and style from the reference character or each other, but nevertheless their meaning remains the same.
In U |
https://en.wikipedia.org/wiki/Algebraic%20normal%20form | In Boolean algebra, the algebraic normal form (ANF), ring sum normal form (RSNF or RNF), Zhegalkin normal form, or Reed–Muller expansion is a way of writing propositional logic formulas in one of three subforms:
The entire formula is purely true or false: 1 or 0
One or more variables are combined into a term by AND (∧), then one or more terms are combined by XOR (⊕) together into ANF. Negations are not permitted: a ⊕ b ⊕ ab
The previous subform with a purely true term: 1 ⊕ a ⊕ b ⊕ ab
Formulas written in ANF are also known as Zhegalkin polynomials and Positive Polarity (or Parity) Reed–Muller expressions (PPRM).
Common uses
ANF is a canonical form, which means that two logically equivalent formulas will convert to the same ANF, easily showing whether two formulas are equivalent for automated theorem proving. Unlike other normal forms, it can be represented as a simple list of lists of variable names—conjunctive and disjunctive normal forms also require recording whether each variable is negated or not. Negation normal form is unsuitable for determining equivalence, since on negation normal forms, equivalence does not imply equality: a ∨ ¬a is not reduced to the same thing as 1, even though they are logically equivalent.
Putting a formula into ANF also makes it easy to identify linear functions (used, for example, in linear-feedback shift registers): a linear function is one that is a sum of single literals. Properties of nonlinear-feedback shift registers can also be deduced from certain properties of the feedback function in ANF.
Performing operations within algebraic normal form
There are straightforward ways to perform the standard boolean operations on ANF inputs in order to get ANF results.
XOR (logical exclusive disjunction) is performed directly:
(1 ⊕ x) ⊕ (1 ⊕ x ⊕ y)
= 1 ⊕ 1 ⊕ x ⊕ x ⊕ y
= y
NOT (logical negation) is XORing 1:
¬(1 ⊕ x ⊕ y)
= 1 ⊕ 1 ⊕ x ⊕ y
= x ⊕ y
AND (logical conjunction) is distributed algebraically:
(1 ⊕ x)(1 ⊕ x ⊕ y)
= (1 ⊕ x ⊕ y) ⊕ (x ⊕ x ⊕ xy)
= 1 ⊕ x ⊕ x ⊕ x ⊕ y ⊕ xy
= 1 ⊕ x ⊕ y ⊕ xy
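Because an ANF formula is simply an XOR of AND-terms, it can be represented as a set of terms, each term being a set of variable names (with the empty term standing for the constant 1). The operations above then reduce to set manipulations. A minimal sketch in Python — the helper names and representation are illustrative, not part of any standard library:

```python
# ANF as a set of terms; each term is a frozenset of variable names.
# The empty frozenset is the constant 1; the empty set of terms is 0.

def anf_xor(f, g):
    """XOR: identical terms cancel in pairs, since t ⊕ t = 0."""
    return f ^ g  # symmetric difference of the term sets

def anf_not(f):
    """NOT is XORing 1, i.e. toggling the constant (empty) term."""
    return anf_xor(f, {frozenset()})

def anf_and(f, g):
    """AND distributes over XOR; idempotence x·x = x makes each
    product of terms a set union."""
    out = set()
    for t1 in f:
        for t2 in g:
            out ^= {t1 | t2}  # duplicate products cancel pairwise
    return out

# Example operands: 1 ⊕ x and 1 ⊕ x ⊕ y
one_x = {frozenset(), frozenset({"x"})}
one_x_y = {frozenset(), frozenset({"x"}), frozenset({"y"})}

print(anf_xor(one_x, one_x_y))  # {frozenset({'y'})}, i.e. y
print(anf_not(one_x_y))         # terms for x ⊕ y
print(anf_and(one_x, one_x_y))  # terms for 1 ⊕ x ⊕ y ⊕ xy
```

Representing terms as sets also makes the canonical-form property concrete: two formulas are equivalent exactly when these term sets are equal.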
|
https://en.wikipedia.org/wiki/Die%20preparation | Die preparation is a step of semiconductor device fabrication during which a wafer is prepared for IC packaging and IC testing. The process of die preparation typically consists of two steps: wafer mounting and wafer dicing.
Wafer mounting
Wafer mounting is a step that is performed during the die preparation of a wafer as part of the process of semiconductor fabrication. During this step, the wafer is mounted on a plastic tape that is attached to a ring. Wafer mounting is performed right before the wafer is cut into separate dies. The adhesive film upon which the wafer is mounted ensures that the individual dies remain firmly in place during 'dicing', as the process of cutting the wafer is called.
The picture on the right shows a 300 mm wafer after it was mounted and diced. The blue plastic is the adhesive tape. The wafer is the round disc in the middle. In this case, a large number of dies were already removed.
Semiconductor-die cutting
In the manufacturing of micro-electronic devices, die cutting, dicing or singulation is a process of reducing a wafer containing multiple identical integrated circuits to individual dies each containing one of those circuits.
During this process, a wafer with up to thousands of circuits is cut into rectangular pieces, each called a die. In between those functional parts of the circuits, a thin non-functional spacing is provided where a saw can safely cut the wafer without damaging the circuits. This spacing is called the scribe line or saw street. The width of the scribe line is very small, typically around 100 μm. A very thin and accurate saw is therefore needed to cut the wafer into pieces. Usually the dicing is performed with a water-cooled circular saw with diamond-tipped teeth.
Types of blades
The most common make up of blade used is either a metal or resin bond containing abrasive grit of natural or more commonly synthetic diamond, or borazon in various forms. Alternatively, the bond and grit may be applied as a coating |
https://en.wikipedia.org/wiki/Apollonian%20gasket | In mathematics, an Apollonian gasket or Apollonian net is a fractal generated by starting with a triple of circles, each tangent to the other two, and successively filling in more circles, each tangent to another three. It is named after Greek mathematician Apollonius of Perga.
Construction
The construction of the Apollonian gasket starts with three circles , , and (black in the figure), that are each tangent to the other two, but that do not have a single point of triple tangency. These circles may be of different sizes to each other, and it is allowed for two to be inside the third, or for all three to be outside each other. As Apollonius discovered, there exist two more circles and (red) that are tangent to all three of the original circles – these are called Apollonian circles. These five circles are separated from each other by six curved triangular regions, each bounded by the arcs from three pairwise-tangent circles. The construction continues by adding six more circles, one in each of these six curved triangles, tangent to its three sides. These in turn create 18 more curved triangles, and the construction continues by again filling these with tangent circles, ad infinitum.
Continued stage by stage in this way, the construction adds 2·3^n new circles at stage n, giving a total of 3^(n+1) + 2 circles after n stages. In the limit, this set of circles is an Apollonian gasket. In it, each pair of tangent circles has an infinite Pappus chain of circles tangent to both circles in the pair.
The size of each new circle is determined by Descartes' theorem, which states that, for any four mutually tangent circles, the curvatures k₁, k₂, k₃, k₄ of the circles (the signed reciprocals of their radii) obey the equation
(k₁ + k₂ + k₃ + k₄)² = 2(k₁² + k₂² + k₃² + k₄²).
This equation may have a solution with a negative curvature; this means that one of the circles (the one with negative curvature) surrounds the other three.
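Descartes' theorem is quadratic in the fourth curvature, giving k₄ = k₁ + k₂ + k₃ ± 2√(k₁k₂ + k₂k₃ + k₃k₁); this is how each new circle's size can be computed in practice. A small sketch in Python (the function name is my own):

```python
import math

def fourth_curvatures(k1, k2, k3):
    """Both solutions of Descartes' theorem
    (k1 + k2 + k3 + k4)^2 = 2(k1^2 + k2^2 + k3^2 + k4^2),
    solved as a quadratic in k4."""
    s = k1 + k2 + k3
    r = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + r, s - r

# Three mutually tangent circles with curvatures 2, 2, 3:
print(fourth_curvatures(2, 2, 3))  # (15.0, -1.0)
```

The solution −1 is a circle of radius 1 surrounding the other three; (−1, 2, 2, 3) is the starting configuration of a classic integral Apollonian gasket.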
One or two of the initial circles of this construction, or the circles resulting from this construction, can degenerate to a straight line, which can be thought of as a circl |
https://en.wikipedia.org/wiki/Hollow%20structural%20section | A hollow structural section (HSS) is a type of metal profile with a hollow cross section. The term is used predominantly in the United States, or other countries which follow US construction or engineering terminology.
HSS members can be circular, square, or rectangular sections, although other shapes such as elliptical are also available. Under applicable design codes, HSS is composed only of structural steel.
HSS is sometimes mistakenly referenced as hollow structural steel. Rectangular and square HSS are also commonly called tube steel or box section. Circular HSS are sometimes mistakenly called steel pipe, although true steel pipe is actually dimensioned and classed differently from HSS. (HSS dimensions are based on exterior dimensions of the profile; pipes are also manufactured to an exterior tolerance, albeit to a different standard.) The corners of HSS are heavily rounded, having a radius which is approximately twice the wall thickness. The wall thickness is uniform around the section.
In the UK, or other countries which follow British construction or engineering terminology, the term HSS is not used. Rather, the three basic shapes are referenced as CHS, SHS, and RHS, being circular, square, and rectangular hollow sections. Typically, these designations will also relate to metric sizes, thus the dimensions and tolerances differ slightly from HSS.
Use in structures
HSS, especially rectangular sections, are commonly used in welded steel frames where members experience loading in multiple directions. Square and circular HSS have very efficient shapes for this multiple-axis loading as they have uniform geometry along two or more cross-sectional axes, and thus uniform strength characteristics. This makes them good choices for columns. They also have excellent resistance to torsion.
HSS can also be used as beams, although wide flange or I-beam shapes are in many cases a more efficient structural shape for this application. However, the HSS has superior resistance to lateral tors |
https://en.wikipedia.org/wiki/YellowTAB | yellowTAB was a German software firm that produced an operating system called "yellowTAB ZETA". While the operating system was based on BeOS 5.1.0, the company never publicly confirmed that it has the BeOS source code or what their licensing agreement with BeOS's owners PalmSource was. The company went insolvent and ceased trading in 2006. Later, David Schlesinger, directory of Open Source technologies at ACCESS, Inc., which had meanwhile become the owner of the BeOS source code, stated that there had never been a license agreement covering yellowTAB's use of the source code and that ZETA was therefore an infringed copy.
The company's offices were in Mannheim, and its corporate motto was Assume The Power. Following their closure, the OS was taken over by magnussoft, who started selling it as "magnussoft ZETA".
yellowTAB has come under some criticism from the BeOS userbase, who claim that the company did not give back what it took from the Haiku project and other open source BeOS projects. In many cases, open source programmers have recreated yellowTAB's extensions to BeOS, most notably their SVG graphics extensions to OpenTracker. However, yellowTAB's actions to date have not violated the BSD/MIT licences under which most open source BeOS projects exist.
In March 2006, yellowTAB donated their "Intel Extreme" driver to one of the Haiku developers for integration into the Haiku source tree where further development was to take place. Both yellowTAB and Haiku developers were to collaborate on Intel Extreme Graphics driver development, but to date this code has not yet been committed to the repository.
In April 2006, insolvency protection proceedings were filed for the company, although employees denied that the filing was made by the company itself, suggesting it may have been filed maliciously. However, the firm transferred development and support of ZETA to a third party, magnussoft.
References
External links
yellowTAB - official site
Software companies of Germany
BeO |
https://en.wikipedia.org/wiki/Logic%20family | In computer engineering, a logic family is one of two related concepts:
A logic family of monolithic digital integrated circuit devices is a group of electronic logic gates constructed using one of several different designs, usually with compatible logic levels and power supply characteristics within a family. Many logic families were produced as individual components, each containing one or a few related basic logical functions, which could be used as "building-blocks" to create systems or as so-called "glue" to interconnect more complex integrated circuits.
A logic family may also be a set of techniques used to implement logic within VLSI integrated circuits such as central processors, memories, or other complex functions. Some such logic families use static techniques to minimize design complexity. Other such logic families, such as domino logic, use clocked dynamic techniques to minimize size, power consumption and delay.
Before the widespread use of integrated circuits, various solid-state and vacuum-tube logic systems were used but these were never as standardized and interoperable as the integrated-circuit devices. The most common logic family in modern semiconductor devices is metal–oxide–semiconductor (MOS) logic, due to low power consumption, small transistor sizes, and high transistor density.
Technologies
The list of packaged building-block logic families can be divided into categories, listed here in roughly chronological order of introduction, along with their usual abbreviations:
Resistor–transistor logic (RTL)
Direct-coupled transistor logic (DCTL)
Direct-coupled unipolar transistor logic (DCUTL)
Resistor–capacitor–transistor logic (RCTL)
Emitter-coupled logic (ECL)
Positive emitter-coupled logic (PECL)
Low-voltage PECL (LVPECL)
Complementary transistor micrologic (CTuL)
Diode–transistor logic (DTL)
Complemented transistor diode logic (CTDL)
High-threshold logic (HTL)
Transistor–transistor logic (TTL)
Metal–oxide–semiconductor (MO |
https://en.wikipedia.org/wiki/Cray%20XD1 | The Cray XD1 was an entry-level supercomputer range, made by Cray Inc.
The XD1 used AMD Opteron 64-bit CPUs and utilized the Direct Connect Architecture over HyperTransport to remove the bottleneck at the PCI bus and contention at the memory. Its MPI latency was a quarter that of InfiniBand and 1/30 that of Gigabit Ethernet.
The XD1 was originally designed by OctigaBay Systems Corp. of Vancouver, British Columbia, Canada as the OctigaBay 12K system. The company was acquired by Cray Inc. in February 2004.
Announced on 4 October 2004, the Cray XD1 range incorporated Xilinx Virtex-II Pro FPGAs for application acceleration. With 12 CPUs in a chassis, and up to 12 chassis installable in a rack, XD1 systems could scale in multiples of 144 CPUs in multirack configurations. The operating system used on the XD1 was a customized version of Linux, and the machine's load balancing / resource management system was an enhanced version of Sun Microsystems' Sun Grid Engine.
External links
Cray Legacy Products
Xd1
X86 supercomputers |
https://en.wikipedia.org/wiki/Laver%20table | In mathematics, Laver tables (named after Richard Laver, who discovered them towards the end of the 1980s in connection with his works on set theory) are tables of numbers that have certain properties of algebraic and combinatorial interest. They occur in the study of racks and quandles.
Definition
For any nonnegative integer n, the n-th Laver table is the 2^n × 2^n table whose entry in the cell at row p and column q (1 ≤ p, q ≤ 2^n) is defined as
L_n(p, q) := p ⋆ q,
where ⋆ is the unique binary operation that satisfies the following two equations for all p, q in {1,...,2^n}:
p ⋆ 1 = (p + 1) mod 2^n   (1)
and
p ⋆ (q ⋆ 1) = (p ⋆ q) ⋆ (p ⋆ 1)   (2)
Note: Equation (1) uses the notation x mod 2^n to mean the unique member of {1,...,2^n} congruent to x modulo 2^n.
Equation (2) is known as the (left) self-distributive law, and a set endowed with any binary operation satisfying this law is called a shelf. Thus, the n-th Laver table is just the multiplication table for the unique shelf ({1,...,2^n}, ⋆) that satisfies Equation (1).
Examples: Following are the first five Laver tables, i.e. the multiplication tables for the shelves ({1,...,2^n}, ⋆), n = 0, 1, 2, 3, 4:
There is no known closed-form expression to calculate the entries of a Laver table directly, but Patrick Dehornoy provides a simple algorithm for filling out Laver tables.
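The defining equations yield a direct memoized recursion: since q = (q − 1) ⋆ 1 for q > 1, self-distributivity gives p ⋆ q = (p ⋆ (q − 1)) ⋆ (p ⋆ 1). A sketch in Python along these lines (the function name and layout are illustrative; this is not necessarily Dehornoy's exact algorithm):

```python
def laver_table(n):
    """Multiplication table of the n-th Laver shelf ({1,...,2^n}, *).

    Uses the recursion implicit in the defining equations:
      p * 1 = p + 1 (with 2^n wrapping back to 1), and
      p * q = (p * (q - 1)) * (p * 1)   for q > 1,
    the latter following from self-distributivity since q = (q - 1) * 1.
    """
    size = 2 ** n
    memo = {}

    def star(p, q):
        if (p, q) not in memo:
            if q == 1:
                memo[(p, q)] = p % size + 1  # p + 1, wrapping 2^n to 1
            else:
                memo[(p, q)] = star(star(p, q - 1), p % size + 1)
        return memo[(p, q)]

    return [[star(p, q) for q in range(1, size + 1)]
            for p in range(1, size + 1)]
```

For n = 2 this produces the 4 × 4 table whose row for p = 3 is constant 4 and whose bottom row is 1 2 3 4, and the row periods are visibly powers of two.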
Properties
For all p, q in {1,...,2^n}: .
For all p in {1,...,2^n}: the row sequence q ↦ p ⋆ q is periodic with period πn(p) equal to a power of two.
For all p in {1,...,2^n}: the initial segment p ⋆ 1, p ⋆ 2, …, p ⋆ πn(p) is strictly increasing from p + 1 to 2^n.
For all p,q:
Are the first-row periods unbounded?
Looking at just the first row in the n-th Laver table, for n = 0, 1, 2, ..., the entries in each first row are seen to be periodic with a period that's always a power of two, as mentioned in Property 2 above. The first few periods are 1, 1, 2, 4, 4, 8, 8, 8, 8, 16, 16, ... . This sequence is nondecreasing, and in 1995 Richard Laver proved, under the assumption that there exists a rank-into-rank (a large cardinal property), that it actually increases without bound. (It is not known whether this is also prov |
https://en.wikipedia.org/wiki/Multiple-criteria%20decision%20analysis | Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making (both in daily life and in settings such as business, government and medicine). Conflicting criteria are typical in evaluating options: cost or price is usually one of the main criteria, and some measure of quality is typically another criterion, easily in conflict with the cost. In purchasing a car, cost, comfort, safety, and fuel economy may be some of the main criteria we consider – it is unusual that the cheapest car is the most comfortable and the safest one. In portfolio management, managers are interested in getting high returns while simultaneously reducing risks; however, the stocks that have the potential of bringing high returns typically carry high risk of losing money. In a service industry, customer satisfaction and the cost of providing service are fundamental conflicting criteria.
In their daily lives, people usually weigh multiple criteria implicitly and may be comfortable with the consequences of such decisions that are made based on only intuition. On the other hand, when stakes are high, it is important to properly structure the problem and explicitly evaluate multiple criteria. In making the decision of whether to build a nuclear power plant or not, and where to build it, there are not only very complex issues involving multiple criteria, but there are also multiple parties who are deeply affected by the consequences.
Structuring complex problems well and considering multiple criteria explicitly leads to more informed and better decisions. There have been important advances in this field since the start of the modern multiple-criteria decision-making discipline in the early 1960s. A variety of approaches and methods, many implemented by specialized decision-making software, have been developed for their application in an array of disciplines |
https://en.wikipedia.org/wiki/Pythagorean%20trigonometric%20identity | The Pythagorean trigonometric identity, also called simply the Pythagorean identity, is an identity expressing the Pythagorean theorem in terms of trigonometric functions. Along with the sum-of-angles formulae, it is one of the basic relations between the sine and cosine functions.
The identity is
sin²θ + cos²θ = 1.
As usual, sin²θ means (sin θ)².
Proofs and their relationships to the Pythagorean theorem
Proof based on right-angle triangles
Any similar triangles have the property that if we select the same angle in all of them, the ratio of the two sides defining the angle is the same regardless of which similar triangle is selected, regardless of its actual size: the ratios depend upon the three angles, not the lengths of the sides. Thus for either of the similar right triangles in the figure, the ratio of its horizontal side to its hypotenuse is the same, namely cos θ.
The elementary definitions of the sine and cosine functions in terms of the sides of a right triangle are:
sin θ = opposite/hypotenuse = a/c,  cos θ = adjacent/hypotenuse = b/c.
The Pythagorean identity follows by squaring both definitions above, and adding; the left-hand side of the identity then becomes
a²/c² + b²/c² = (a² + b²)/c²,
which by the Pythagorean theorem is equal to 1. This definition is valid for all angles, because cos θ and sin θ can be taken as the x and y coordinates of a point on the unit circle (and x = c cos θ, y = c sin θ for a circle of radius c), with reflection in the y-axis handling angles outside the first quadrant.
Alternatively, the identities found at Trigonometric symmetry, shifts, and periodicity may be employed. By the periodicity identities we can say if the formula is true for −π < θ ≤ π then it is true for all real θ. Next we prove the identity in the range π/2 < θ ≤ π; to do this we let t = θ − π/2, so that t will now be in the range 0 < t ≤ π/2. We can then make use of squared versions of some basic shift identities (squaring conveniently removes the minus signs): sin²θ = sin²(t + π/2) = cos²t and cos²θ = cos²(t + π/2) = sin²t.
All that remains is to prove it for negative angles; this can be done by squaring the symmetry identities to get sin²θ = sin²(−θ) and cos²θ = cos²(−θ).
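The shift and symmetry identities invoked in this argument are easy to spot-check numerically; the following is a sanity check, not a substitute for the proof:

```python
import math

# Spot-check sin^2 + cos^2 = 1 together with the squared shift and
# symmetry identities, at assorted angles in several quadrants.
for theta in [0.1, 1.0, 2.5, -0.7, math.pi / 3]:
    s, c = math.sin(theta), math.cos(theta)
    assert abs(s**2 + c**2 - 1) < 1e-12            # the Pythagorean identity
    t = theta - math.pi / 2
    assert abs(s**2 - math.cos(t)**2) < 1e-12      # sin^2(t + pi/2) = cos^2 t
    assert abs(c**2 - math.sin(t)**2) < 1e-12      # cos^2(t + pi/2) = sin^2 t
    assert abs(s**2 - math.sin(-theta)**2) < 1e-12 # squared symmetry identity
print("identities verified")
```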
Related identities
The identities
1 + tan²θ = sec²θ
and
1 + cot²θ = csc²θ
are also called Pythagorean trigonometric identities. If one leg of a right triangle has length |
https://en.wikipedia.org/wiki/Memorandum | A memorandum (: memoranda; from the Latin memorandum, "(that) which is to be remembered"), also known as a briefing note, is a written message that is typically used in a professional setting. Commonly abbreviated memo, these messages are usually brief and are designed to be easily and quickly understood. Memos can thus communicate important information efficiently in order to make dynamic and effective changes.
In law, a memorandum is a record of the terms of a transaction or contract, such as a policy memo, memorandum of understanding, memorandum of agreement, or memorandum of association. In business, a memo is typically used by firms for internal communication, while letters are typically for external communication.
Other memorandum formats include briefing notes, reports, letters, and binders. They may be considered grey literature. Memorandum formatting may vary by office or institution. For example, if the intended recipient is a cabinet minister or a senior executive, the format might be rigidly defined and limited to one or two pages. If the recipient is a colleague, the formatting requirements are usually more flexible.
Policy briefing note
A specific type of memorandum is the policy briefing note (alternatively referred to in various jurisdictions and governing traditions as policy issues paper, policy memoranda, or cabinet submission amongst other terms), a document for transmitting policy analysis into the political decision making sphere. Typically, a briefing note may be denoted as either “for information” or “for decision”.
Origins of term
The origins of the term “briefing” lie in legal “briefs” and the derivative “military briefings”. The plural form of the Latin noun memorandum so derived is properly memoranda, but if the word is deemed to have become a word of the English language, the plural memorandums, abbreviated to memos, may be used. (See also Agenda, Corrigenda, Addenda).
Purpose
There are many important purposes of a memorandum. Bri |
https://en.wikipedia.org/wiki/Online%20game | An online game is a video game that is either partially or primarily played through the Internet or any other computer network available. Online games are ubiquitous on modern gaming platforms, including PCs, consoles and mobile devices, and span many genres, including first-person shooters, strategy games, and massively multiplayer online role-playing games (MMORPG). In 2019, revenue in the online games segment reached $16.9 billion, with $4.2 billion generated by China and $3.5 billion in the United States. Since the 2010s, a common trend among online games has been to operate them as games as a service, using monetization schemes such as loot boxes and battle passes as purchasable items atop freely-offered games. Unlike purchased retail games, online games have the problem of not being permanently playable, as they require special servers in order to function.
The design of online games can range from simple text-based environments to the incorporation of complex graphics and virtual worlds. The existence of online components within a game can range from being minor features, such as an online leaderboard, to being part of core gameplay, such as directly playing against other players. Many online games create their own online communities, while other games, especially social games, integrate the players' existing real-life communities. Some online games can receive a massive influx of popularity due to many well-known Twitch streamers and YouTubers playing them.
Online gaming has drastically increased the scope and size of video game culture. Online games have attracted players of a variety of ages, nationalities, and occupations. The online game content is now being studied in the scientific field, especially gamers' interactions within virtual societies in relation to the behavior and social phenomena of everyday life. As in other cultures, the community has developed a gamut of slang words or phrases that can be used for communication in or outside of games. |
https://en.wikipedia.org/wiki/Response%20bias | Response bias is a general term for a wide range of tendencies for participants to respond inaccurately or falsely to questions. These biases are prevalent in research involving participant self-report, such as structured interviews or surveys. Response biases can have a large impact on the validity of questionnaires or surveys.
Response bias can be induced or caused by numerous factors, all relating to the idea that human subjects do not respond passively to stimuli, but rather actively integrate multiple sources of information to generate a response in a given situation. Because of this, almost any aspect of an experimental condition may potentially bias a respondent. Examples include the phrasing of questions in surveys, the demeanor of the researcher, the way the experiment is conducted, and the desire of participants to be good experimental subjects and to provide socially desirable responses. All of these "artifacts" of survey and self-report research may have the potential to damage the validity of a measure or study. Compounding this issue is that surveys affected by response bias still often have high reliability, which can lure researchers into a false sense of security about the conclusions they draw.
Because of response bias, it is possible that some study results are due to a systematic response bias rather than the hypothesized effect, which can have a profound effect on psychological and other types of research using questionnaires or surveys. It is therefore important for researchers to be aware of response bias and the effect it can have on their research so that they can attempt to prevent it from impacting their findings in a negative manner.
History of research
Awareness of response bias has been present in psychology and sociology literature for some time because self-reporting features significantly in those fields of research. However, researchers were initially unwilling to admit the degree to which |
https://en.wikipedia.org/wiki/Monocrete%20construction | Monocrete is a building construction method utilising modular bolt-together pre-cast concrete wall panels.
Monocrete construction was widely used in the construction of government housing in the 1940s and 1950s in Canberra, Australia. The expansion of the new capital was exceeding the ability of the Government to build houses, so alternative construction methods were investigated.
The Canberra monocrete homes are built on brick piers and a surrounding brick footing. All of the walls are of monocrete construction, including interior ones. They are precast with steel windows and door frames set directly into the concrete. Steel plates in the ceiling space bolt the individual wall panels together. The floor and roof are of normal construction - wood and tile respectively. The gaps between the wall panels are filled with a flexible gap-filling compound and covered with tape on the interior. It has been suggested that the panels tend to move independently of one another, opening up cracks between them, and that the houses also tend to be susceptible to condensation build-up and mold growth on the inside of the walls.
A similar technique is used in the construction of some modern commercial buildings.
References
Building engineering
Building materials
Prefabricated houses |
https://en.wikipedia.org/wiki/Monounsaturated%20fat | In biochemistry and nutrition, a monounsaturated fat is a fat that contains a monounsaturated fatty acid (MUFA), a subclass of fatty acid characterized by having a double bond in the fatty acid chain with all of the remaining carbon atoms being single-bonded. By contrast, polyunsaturated fatty acids (PUFAs) have more than one double bond.
Molecular description
Monounsaturated fats are triglycerides containing one unsaturated fatty acid. Almost invariably that fatty acid is oleic acid (18:1 n−9). Palmitoleic acid (16:1 n−7) and cis-vaccenic acid (18:1 n−7) occur in small amounts in fats.
Health
Studies have shown that substituting dietary monounsaturated fat for saturated fat is associated with increased daily physical activity and resting energy expenditure. More physical activity was associated with a higher-oleic-acid diet than with a palmitic-acid diet, and the same research associated higher monounsaturated fat intake with less anger and irritability.
Foods containing monounsaturated fats may affect low-density lipoprotein (LDL) cholesterol and high-density lipoprotein (HDL) cholesterol.
Levels of oleic acid along with other monounsaturated fatty acids in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. Monounsaturated fats and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d).
In children, consumption of monounsaturated oils is associated with healthier serum lipid profiles.
The Mediterranean diet is one heavily influenced by monounsaturated fats. People in Mediterranean countries consume more total fat than Northern European countries, but most of the fat is in the form of monounsaturated fatty acids from olive oil and omega-3 fatty acids from fish, vegetables, and certain meats like lamb, while consumption of satur |
https://en.wikipedia.org/wiki/Probabilistic%20encryption | Probabilistic encryption is the use of randomness in an encryption algorithm, so that when encrypting the same message several times it will, in general, yield different ciphertexts. The term "probabilistic encryption" is typically used in reference to public key encryption algorithms; however, various symmetric key encryption algorithms achieve a similar property (e.g., block ciphers when used in a chaining mode such as CBC), as do stream ciphers such as Freestyle, which are inherently random. To be semantically secure, that is, to hide even partial information about the plaintext, an encryption algorithm must be probabilistic.
History
The first provably secure probabilistic public-key encryption scheme was proposed by Shafi Goldwasser and Silvio Micali, based on the hardness of the quadratic residuosity problem; it has a message expansion factor equal to the public key size. More efficient probabilistic encryption algorithms include Elgamal, Paillier, and various constructions under the random oracle model, including OAEP.
Security
Probabilistic encryption is particularly important when using public key cryptography. Suppose that the adversary observes a ciphertext, and suspects that the plaintext is either "YES" or "NO", or has a hunch that the plaintext might be "ATTACK AT CALAIS". When a deterministic encryption algorithm is used, the adversary can simply try encrypting each of his guesses under the recipient's public key, and compare each result to the target ciphertext. To combat this attack, public key encryption schemes must incorporate an element of randomness, ensuring that each plaintext maps into one of a large number of possible ciphertexts.
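The guessing attack and its randomized fix can be sketched in a few lines. This is a toy illustration only, not a real cryptosystem: the function names (`det_encrypt`, `prob_encrypt`) and the use of a hash as a stand-in for deterministic public-key encryption are assumptions for demonstration.

```python
import os
import hashlib

def det_encrypt(pk: bytes, msg: bytes) -> str:
    # toy stand-in for a deterministic public-key encryption: same inputs,
    # same ciphertext every time
    return hashlib.sha256(pk + msg).hexdigest()

def prob_encrypt(pk: bytes, msg: bytes) -> str:
    # prepend a 16-byte random pad, so repeated encryptions of the same
    # plaintext yield different ciphertexts
    return det_encrypt(pk, os.urandom(16) + msg)

pk = b"recipient-public-key"
target = det_encrypt(pk, b"YES")

# the attack: re-encrypt each guess under the public key and compare
print(det_encrypt(pk, b"YES") == target)   # True  -> guess confirmed
print(det_encrypt(pk, b"NO") == target)    # False -> guess ruled out
# with randomness the comparison tells the adversary nothing
print(prob_encrypt(pk, b"YES") == prob_encrypt(pk, b"YES"))  # False
```

The comparison against `target` succeeds exactly because deterministic encryption leaks plaintext equality; the random pad destroys that equality.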
An intuitive approach to converting a deterministic encryption scheme into a probabilistic one is to simply pad the plaintext with a random string before encrypting with the deterministic algorithm. Conversely, decryption involves applying a deterministic algorithm and ignoring the random padding. However |
https://en.wikipedia.org/wiki/Pyraminx | The Pyraminx () is a regular tetrahedron puzzle in the style of Rubik's Cube. It was made and patented by Uwe Mèffert after the original 3 layered Rubik's Cube by Ernő Rubik, and introduced by Tomy Toys of Japan (then the 3rd largest toy company in the world) in 1981.
Description
The Pyraminx was first conceived by Mèffert in 1970. He did nothing with his design until 1981, when he applied for a patent on 27 March (EP0042695, 30 December 1981) and brought it to Hong Kong for production. Uwe is fond of saying that had it not been for Ernő Rubik's invention of the cube, his Pyraminx would never have been produced. Some 40 days earlier, in the Soviet Union, the chief technologist of the Kishinev Tractor Plant, Alexander Alexandrovich Ordynets, had filed his own application for an invention (patent SU980739 of 15 December 1982, with a filing date of 18 February 1981); for this reason, in Russia many people call the puzzle "Молдавская пирамидка" (the Moldavian pyramid).
The Pyraminx is a puzzle in the shape of a regular tetrahedron, divided into 4 axial pieces, 6 edge pieces, and 4 trivial tips. It can be twisted along its cuts to permute its pieces. The axial pieces are octahedral in shape, although this is not immediately obvious, and can only rotate around the axis they are attached to. The 6 edge pieces can be freely permuted. The trivial tips are so called because they can be twisted independently of all other pieces, making them trivial to place in solved position. Meffert also produces a similar puzzle called the Tetraminx, which is the same as the Pyraminx except that the trivial tips are removed, turning the puzzle into a truncated tetrahedron.
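The piece description above fixes the size of the puzzle's state space, which can be checked with a standard counting argument (the reachability constraints stated in the comments are assumptions taken from the usual analysis of the puzzle, not derived here):

```python
from math import factorial

tips       = 3 ** 4            # each of the 4 trivial tips has 3 orientations
axials     = 3 ** 4            # each of the 4 axial pieces has 3 orientations
edge_perms = factorial(6) // 2 # only even permutations of the 6 edges occur
edge_flips = 2 ** 5            # edge orientations; the last is forced by the rest

without_tips = axials * edge_perms * edge_flips
print(without_tips)            # positions ignoring the trivial tips
print(without_tips * tips)     # total positions
```

This gives 933,120 positions ignoring the tips and 75,582,720 in total, which is why the tips are called trivial: they multiply the count without adding any difficulty.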
The purpose of the Pyraminx is to scramble the colors, and then restore them to their original configuration.
The 4 trivial tips can be easily rotated to line up with the axial piece they are respectively attached to, and the axial pieces are also easily rotated so that their colors line up with each other. This leaves only the 6 edge pieces as a |
https://en.wikipedia.org/wiki/Szemer%C3%A9di%E2%80%93Trotter%20theorem | The Szemerédi–Trotter theorem is a mathematical result in the field of discrete geometry. It asserts that given n points and m lines in the Euclidean plane, the number of incidences (i.e., the number of point–line pairs such that the point lies on the line) is O(n^(2/3) m^(2/3) + n + m).
This bound cannot be improved, except in terms of the implicit constants.
As for the implicit constants, it was shown by János Pach, Radoš Radoičić, Gábor Tardos, and Géza Tóth that the upper bound 2.5 n^(2/3) m^(2/3) + n + m holds. Since then, better constants have been obtained from improved crossing lemma constants; the current best is 2.44. On the other hand, Pach and Tóth showed that the statement does not hold true if one replaces the coefficient 2.5 with 0.42.
An equivalent formulation of the theorem is the following. Given n points and an integer k ≥ 2, the number of lines which pass through at least k of the points is O(n²/k³ + n/k).
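The incidence count in the theorem can be computed exactly for small configurations. Below is a toy sketch (the example grid and line list are hypothetical, chosen for illustration) that counts point–line incidences with exact integer arithmetic:

```python
def incidences(points, lines):
    # count (point, line) pairs with a*x + b*y == c, using exact integers
    return sum(a * x + b * y == c
               for (x, y) in points
               for (a, b, c) in lines)

# the 3x3 integer grid and eight lines through it
points = [(x, y) for x in range(3) for y in range(3)]
lines = ([(1, 0, c) for c in range(3)]     # vertical lines x = c (3 pts each)
       + [(0, 1, c) for c in range(3)]     # horizontal lines y = c (3 pts each)
       + [(1, -1, 0), (1, 1, 2)])          # the two main diagonals (3 pts each)

print(incidences(points, lines))  # 3*3 + 3*3 + 3 + 3 = 24
```

Grids like this one are essentially the extremal examples showing that the n^(2/3) m^(2/3) term in the bound cannot be improved.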
The original proof of Endre Szemerédi and William T. Trotter was somewhat complicated, using a combinatorial technique known as cell decomposition. Later, László Székely discovered a much simpler proof using the crossing number inequality for graphs. (See below.)
The Szemerédi–Trotter theorem has a number of consequences, including Beck's theorem in incidence geometry and the Erdős–Szemerédi sum-product problem in additive combinatorics.
Proof of the first formulation
We may discard the lines which contain two or fewer of the points, as they can contribute at most 2m incidences to the total number. Thus we may assume that every line contains at least three of the points.
If a line contains k points, then it will contain k − 1 line segments which connect two consecutive points along the line. Because k ≥ 3 after discarding the two-point lines, it follows that k − 1 ≥ k/2, so the number of these line segments on each line is at least half the number of incidences on that line. Summing over all of the lines, the number of these line segments is again at least half the total number of incidences. Thus if e denotes the number of such line segments, i |
https://en.wikipedia.org/wiki/Rollerball%20%28video%20game%29 | Rollerball is a video game produced by HAL Laboratory in 1984 for the MSX. A Nintendo Entertainment System version of the game was released in 1988. It is designed to be played by one to four players, in turn. It is an emulation of a pinball machine.
The pinball machine rendered in Rollerball is composed of four screens, which, by proportion, would be about as long as two standard pinball tables if it were a real table. The graphics on two of the four screens are based on various aspects of the New York City skyline. The topmost screen (hereafter called the bonus screen) merely shows some clouds and a blimp. The second screen (the main screen) shows the top of the Empire State Building, while the third screen (intermediate) shows the lower skyline and the Statue of Liberty. The lowest (final) screen shows only a blue backdrop, representing the Ocean.
Gameplay
Main screen
In the main mode, each player launches the ball directly into the main screen. To the top is a small loop marked SLOT; if the player sends the ball through this loop, a slot machine display in the center of the screen cycles, with various penalties or rewards given when one of the three symbols (an eggplant, a pair of cherries, or a bell) appears three times. The center of the SLOT loop is open, allowing the player to enter the bonus screen.
There is a pair of kickback holes in the upper left of the main screen, with a bumper in proximity to the upper one. The lower one will shoot the ball straight up, relying on the curvature of the wall to direct it into the upper hole if the player is lucky. If the ball falls in the upper hole, one of two things can happen. If the kickback slot in the bonus screen is empty, the ball will be transferred to that slot and another ball will be released to the plunger to enter play. If the kickback slot is already filled, the ball is relaunched from the lower hole.
Intermediate screen
When in the intermediate screen, the most viable way of returning to the main sc |
https://en.wikipedia.org/wiki/Logarithmic%20form | In algebraic geometry and the theory of complex manifolds, a logarithmic differential form is a differential form with poles of a certain kind. The concept was introduced by Pierre Deligne. In short, logarithmic differentials have the mildest possible singularities needed in order to give information about an open submanifold (the complement of the divisor of poles). (This idea is made precise by several versions of de Rham's theorem discussed below.)
Let X be a complex manifold, D ⊂ X a reduced divisor (a sum of distinct codimension-1 complex subspaces), and ω a holomorphic p-form on X−D. If both ω and dω have a pole of order at most 1 along D, then ω is said to have a logarithmic pole along D. ω is also known as a logarithmic p-form. The p-forms with log poles along D form a subsheaf of the meromorphic p-forms on X, denoted
The name comes from the fact that in complex analysis, d(log z) = dz/z; here dz/z is a typical example of a 1-form on the complex numbers C with a logarithmic pole at the origin. Differential forms such as dz/z make sense in a purely algebraic context, where there is no analog of the logarithm function.
Logarithmic de Rham complex
Let X be a complex manifold and D a reduced divisor on X. By definition of Ω^p_X(log D) and the fact that the exterior derivative d satisfies d² = 0, one has
d(Ω^p_X(log D)(U)) ⊆ Ω^(p+1)_X(log D)(U)
for every open subset U of X. Thus the logarithmic differentials form a complex of sheaves Ω^•_X(log D), known as the logarithmic de Rham complex associated to the divisor D. This is a subcomplex of the direct image j_*(Ω^•_{X−D}), where j: X−D → X is the inclusion and Ω^•_{X−D} is the complex of sheaves of holomorphic forms on X−D.
Of special interest is the case where D has normal crossings: that is, D is locally a sum of codimension-1 complex submanifolds that intersect transversely. In this case, the sheaf of logarithmic differential forms is the subalgebra of the meromorphic differential forms generated by the holomorphic differential forms together with the 1-forms df/f for holomorphic functions f that are nonzero outside D.
Concretely, if D is a divisor wi |
https://en.wikipedia.org/wiki/Newton%20OS | Newton OS is a discontinued operating system for the Apple Newton PDAs produced by Apple Computer, Inc. between 1993 and 1997. It was written entirely in C++ and trimmed to consume little power and to use the available memory efficiently. Many applications were pre-installed in the ROM of the Newton (making for quick start-up) to save RAM and flash memory storage for user applications.
Features
Newton OS features many interface elements that the Macintosh system software did not have at the time, such as drawers and the "poof" animation. A similar animation is found in Mac OS X, and parts of the Newton's handwriting recognition system have been implemented as Inkwell in Mac OS X.
Sound responsive — Clicking menus and icons makes a sound; this feature was later introduced in Mac OS 8.
Icons — Similar to the Macintosh desktop metaphor, Newton OS uses icons to open applications.
Tabbed documents — Similar to tabbed browsing in today's browsers and Apple's At Ease interface, document titles appear in a small tab at the top right hand of the screen.
Screen rotation — In Newton 2.0, the screen can be rotated to be used for drawing or word processing.
File documents — Notes and Drawings can be categorized. E.g. Fun, Business, Personal, etc.
Print documents — Documents on the Newton can be printed.
Send documents — Documents can be sent to another Newton via Infrared technology or sent using the Internet by E-Mail, or faxed.
Menus — Similar to menus seen in Mac OS, but menu titles are instead presented at the bottom of the screen in small rectangles, making them similar to buttons with attached "pop-up" menus.
Many features of the Newton are best appreciated in the context of the history of Pen computing.
Software
Shortly after the Newton PDA's release in 1993, developers were not paying much attention to the new Newton OS API and were still more interested in developing for the Macintosh and Windows platforms. It was not until two years later that developers |
https://en.wikipedia.org/wiki/Sylvester%E2%80%93Gallai%20theorem | The Sylvester–Gallai theorem in geometry states that every finite set of points in the Euclidean plane has a line that passes through exactly two of the points or a line that passes through all of them. It is named after James Joseph Sylvester, who posed it as a problem in 1893, and Tibor Gallai, who published one of the first proofs of this theorem in 1944.
A line that contains exactly two of a set of points is known as an ordinary line. Another way of stating the theorem is that every finite set of points that is not collinear has an ordinary line. According to a strengthening of the theorem, every finite point set (not all on one line) has at least a linear number of ordinary lines. An algorithm can find an ordinary line in a set of n points in time O(n log n).
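For small point sets, ordinary lines can also be found by brute force. The sketch below is a hypothetical O(n³) illustration of the definition, not the O(n log n) algorithm mentioned above; it uses an exact cross-product collinearity test on integer coordinates:

```python
from itertools import combinations

def ordinary_lines(points):
    # return the point pairs spanning lines that contain exactly two points
    result = []
    for p, q in combinations(points, 2):
        (x1, y1), (x2, y2) = p, q
        others = sum(
            (x2 - x1) * (y - y1) == (y2 - y1) * (x - x1)  # exact collinearity
            for (x, y) in points if (x, y) not in (p, q))
        if others == 0:
            result.append((p, q))
    return result

# three collinear points plus one off the line: the theorem guarantees
# at least one ordinary line for any non-collinear set
pts = [(0, 0), (1, 0), (2, 0), (0, 1)]
print(len(ordinary_lines(pts)))  # the 3 lines joining (0, 1) to each other point
```

Each ordinary line is reported exactly once, since by definition it contains only one pair of the points.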
History
The Sylvester–Gallai theorem was posed as a problem by J. J. Sylvester in 1893. It has been suggested that Sylvester may have been motivated by a related phenomenon in algebraic geometry, in which the inflection points of a cubic curve in the complex projective plane form a configuration of nine points and twelve lines (the Hesse configuration) in which each line determined by two of the points contains a third point. The Sylvester–Gallai theorem implies that it is impossible for all nine of these points to have real coordinates.
Woodall claimed to have a short proof of the Sylvester–Gallai theorem, but it was already noted to be incomplete at the time of publication. Melchior proved the theorem (and actually a slightly stronger result) in an equivalent formulation, its projective dual. Unaware of Melchior's proof, Erdős again stated the conjecture, which was subsequently proved by Tibor Gallai, and soon afterwards by other authors.
In a 1951 review, Erdős called the result "Gallai's theorem", but it was already called the Sylvester–Gallai theorem in a 1954 review by Leonard Blumenthal. It is one of many mathematical topics named after Sylvester.
Equivalent versions
The question of the existence of an ordinary line can also be posed for points in the real |
https://en.wikipedia.org/wiki/Copac | Copac (originally an acronym of Consortium of Online Public Access Catalogues) was a union catalogue which provided free access to the merged online catalogues of many major research libraries and specialist libraries in the United Kingdom and Ireland, plus the British Library, the National Library of Scotland and the National Library of Wales. It had over 40 million records from around 90 libraries as of 2019, representing a wide range of materials across all subject areas. Copac was freely available to all, and was widely used, with users mainly coming from Higher Education institutions in the United Kingdom, but also worldwide. Copac was valued by users as a research tool.
Copac was searchable with a web browser or a Z39.50 client. It was also accessible through OpenURL and Search/Retrieve via URL (SRU) interfaces. These interfaces could be used to provide links to items on Copac from external sites, such as those used on the Institute of Historical Research website.
Copac was a Jisc service provided for the UK community on the basis of an agreement with Research Libraries UK (RLUK). The service used records supplied by RLUK members, as well as an increasing range of specialist libraries with collections of national research interest. A full list of contributors is available including the National Trust for Places of Historic Interest or Natural Beauty, the Royal Botanic Gardens, Kew, the Middle Temple library and Institution of Mechanical Engineers (IMechE) Library.
In July 2019, Jisc replaced Copac with Library Hub Discover.
See also
OPAC
SUNCAT
Talis Group
References
Academic libraries in the United Kingdom
Bibliographic databases and indexes
Databases in the United Kingdom
Higher education in the United Kingdom
Jisc
Library cataloging and classification
Online databases |
https://en.wikipedia.org/wiki/Statistical%20learning%20theory | Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis. Statistical learning theory deals with the statistical inference problem of finding a predictive function based on data. Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.
Introduction
The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood. Supervised learning involves learning from a training set of data. Every point in the training set is an input–output pair, where the input maps to an output. The learning problem consists of inferring the function that maps between the input and the output, such that the learned function can be used to predict the output from future input.
Depending on the type of output, supervised learning problems are either problems of regression or problems of classification. If the output takes a continuous range of values, it is a regression problem. Using Ohm's law as an example, a regression could be performed with voltage as input and current as an output. The regression would find the functional relationship between voltage and current to be f(V) = V/R, so that the current for any new voltage can be predicted from the learned resistance.
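The Ohm's-law regression can be sketched in a few lines of least squares. The data below are hypothetical noisy measurements of a 5-ohm resistor, generated for illustration:

```python
import random

R_true = 5.0
random.seed(0)
# training set: (voltage, current) pairs with small measurement noise
data = [(v, v / R_true + random.gauss(0, 0.01)) for v in range(1, 11)]

# least-squares slope through the origin: slope = sum(V*I) / sum(V*V),
# which estimates 1/R for the linear model I = V/R
slope = sum(v * i for v, i in data) / sum(v * v for v, i in data)
R_est = 1 / slope
print(round(R_est, 1))  # ≈ 5.0, recovering the resistance from data
```

The learned function f(V) = slope · V can then predict the current for voltages that were not in the training set, which is exactly the supervised-learning setup described above.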
Classification problems are those for which the output will be an element from a discrete set of labels. Classification is very common for machine learning applications. In facial recognition, for instance, a picture of a person's face would be the input, and the output label would be that person's name. The input would be represented by a large multidimensional vector whose elements represent pixels in the picture.
After learning a function based on the training set data, that function is validated on a test set of data, data that did not appear in t |
https://en.wikipedia.org/wiki/GraphicsMagick | GraphicsMagick is a fork of ImageMagick, emphasizing stability of both programming API and command-line options. It was branched off ImageMagick's version 5.5.2 in 2002 after irreconcilable differences emerged in the developers' group.
In addition to the programming language APIs available with ImageMagick, GraphicsMagick also includes a Tcl API, called TclMagick.
GraphicsMagick is used by several websites to process large numbers of uploaded photographs. As of 2023, GraphicsMagick had 4 active code contributors while ImageMagick had 27 active contributors.
References
External links
Slides from Web 2.0 Expo 2009.
Batch Processing Millions and Millions of Images
Command-line software
Free graphics software
Free raster graphics editors
Graphics libraries
Graphics software
Software forks |
https://en.wikipedia.org/wiki/DNA%20extraction | The first isolation of deoxyribonucleic acid (DNA) was done in 1869 by Friedrich Miescher. DNA extraction is the process of isolating DNA from the cells of an organism in a sample, typically a biological sample such as blood, saliva, or tissue. It involves breaking open the cells, removing proteins and other contaminants, and purifying the DNA so that it is free of other cellular components. The purified DNA can then be used for downstream applications such as PCR, sequencing, or cloning. Currently, it is a routine procedure in molecular biology and forensic analyses.
This process can be done in several ways, depending on the type of sample and the downstream application. The most common approaches combine mechanical, chemical, or enzymatic lysis with precipitation, purification, and concentration steps. The specific method used to extract the DNA may be, for example, phenol–chloroform extraction, alcohol precipitation, or silica-based purification.
For the chemical method, many different kits are available for extraction, and selecting the correct one saves time on kit optimization and extraction procedures. PCR detection sensitivity is commonly used to compare the performance of commercial kits.
There are many different methods for extracting DNA, but some common steps include:
Lysis: This step involves breaking open the cells to release the DNA. For example, in the case of bacterial cells, a solution of detergent and salt (such as SDS) can be used to disrupt the cell membrane and release the DNA. For plant and animal cells, mechanical or enzymatic methods are often used.
Precipitation: Once the DNA is released, proteins and other contaminants must be removed. This is typically done by adding a precipitating agent, such as alcohol (such as ethanol or isopropanol), or a salt (such as ammonium acetate). The DNA will form a pellet at the bottom of the solution, while the contaminants will remain in the liquid.
Purification: After the DNA is precipitated, it is usually fur |
https://en.wikipedia.org/wiki/TypeParameter | In computer programming languages, TypeParameter is a generic label used in generic programming to reference an unknown data type, data structure, or class. TypeParameter is most frequently used in C++ templates and Java generics. TypeParameter is similar to a metasyntactic variable (e.g., foo and bar), but distinct. It is not the name of a generic type variable, but the name of a generic type of variable.
The capitalisation varies according to programming language and programmer preference. TypeParameter, Typeparameter, TYPEPARAMETER, typeparameter, and type_parameter are all possible. Alternate labels are also used, especially in complex templates where more than one type parameter is necessary. X, Y, Foo, Bar, Item, Thing are typical alternate labels. Many programming languages are case-sensitive, making a consistent choice of labels important. The CamelCase TypeParameter is one of the most commonly used.
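The same labeling convention appears in Python's generics, where a type parameter is declared explicitly as a TypeVar. A minimal sketch (the names T and Stack are arbitrary labels chosen for illustration):

```python
from typing import Generic, TypeVar

T = TypeVar("T")  # the type parameter; any label (T, Item, Thing, ...) works

class Stack(Generic[T]):
    """A stack whose element type is fixed by the type parameter T."""
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

s: "Stack[int]" = Stack()  # here T is bound to int
s.push(42)
print(s.pop())  # 42
```

As in C++ or Java, the label is purely conventional; a type checker only requires that the same name be used consistently within one declaration.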
See also
Metasyntactic variable
Generic programming
External links
How to Think Like a Computer Scientist, Chapter 17: Templates
Programming constructs |
https://en.wikipedia.org/wiki/Functional%20genomics | Functional genomics is a field of molecular biology that attempts to describe gene (and protein) functions and interactions. Functional genomics make use of the vast data generated by genomic and transcriptomic projects (such as genome sequencing projects and RNA sequencing). Functional genomics focuses on the dynamic aspects such as gene transcription, translation, regulation of gene expression and protein–protein interactions, as opposed to the static aspects of the genomic information such as DNA sequence or structures. A key characteristic of functional genomics studies is their genome-wide approach to these questions, generally involving high-throughput methods rather than a more traditional "candidate-gene" approach.
Definition and goals of functional genomics
In order to understand functional genomics it is important to first define function. In their paper Graur et al. define function in two possible ways. These are "selected effect" and "causal role". The "selected effect" function refers to the function for which a trait (DNA, RNA, protein etc.) is selected for. The "causal role" function refers to the function that a trait is sufficient and necessary for. Functional genomics usually tests the "causal role" definition of function.
The goal of functional genomics is to understand the function of genes or proteins, eventually all components of a genome. The term functional genomics is often used to refer to the many technical approaches to study an organism's genes and proteins, including the "biochemical, cellular, and/or physiological properties of each and every gene product" while some authors include the study of nongenic elements in their definition. Functional genomics may also include studies of natural genetic variation over time (such as an organism's development) or space (such as its body regions), as well as functional disruptions such as mutations.
The promise of functional genomics is to generate and synthesize genomic and proteomic knowl |
https://en.wikipedia.org/wiki/Stevens%27s%20power%20law | Stevens' power law is an empirical relationship in psychophysics between an increased intensity or strength in a physical stimulus and the perceived magnitude increase in the sensation created by the stimulus. It is often considered to supersede the Weber–Fechner law, which is based on a logarithmic relationship between stimulus and sensation, because the power law describes a wider range of sensory comparisons, down to zero intensity.
The theory is named after psychophysicist Stanley Smith Stevens (1906–1973). Although the idea of a power law had been suggested by 19th-century researchers, Stevens is credited with reviving the law and publishing a body of psychophysical data to support it in 1957.
The general form of the law is ψ(I) = kI^a,
where I is the intensity or strength of the stimulus in physical units (energy, weight, pressure, mixture proportions, etc.), ψ(I) is the magnitude of the sensation evoked by the stimulus, a is an exponent that depends on the type of stimulation or sensory modality, and k is a proportionality constant that depends on the units used.
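The formula is straightforward to evaluate. In the sketch below, the exponent a = 0.67 is an assumption, roughly the value reported for loudness; the point is that with a < 1, doubling the stimulus multiplies the sensation by 2^a rather than by 2:

```python
def perceived_magnitude(I, k=1.0, a=0.67):
    # Stevens' power law: psi(I) = k * I**a
    # a = 0.67 is an assumed exponent (approximately that for loudness)
    return k * I ** a

# doubling the physical intensity scales the sensation by 2**a, not 2
ratio = perceived_magnitude(2.0) / perceived_magnitude(1.0)
print(round(ratio, 3))  # 2**0.67 ≈ 1.591
```

Note that the constant k cancels in any such ratio, which is why Stevens' methods emphasized ratio judgments between stimuli rather than absolute magnitudes.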
A distinction has been made between local psychophysics, where stimuli can only be discriminated with a probability around 50%, and global psychophysics, where the stimuli can be discriminated correctly with near certainty (Luce & Krumhansl, 1988). The Weber–Fechner law and methods described by L. L. Thurstone are generally applied in local psychophysics, whereas Stevens' methods are usually applied in global psychophysics.
The table to the right lists the exponents reported by Stevens.
Methods
The principal methods used by Stevens to measure the perceived intensity of a stimulus were magnitude estimation and magnitude production. In magnitude estimation with a standard, the experimenter presents a stimulus called a standard and assigns it a number called the modulus. For subsequent stimuli, subjects report numerically their perceived intensity relative to the standard so as to preserve the ratio between t |
https://en.wikipedia.org/wiki/Setuid | The Linux and Unix access rights flags setuid and setgid (short for set user identity and set group identity) allow users to run an executable with the file system permissions of the executable's owner or group respectively and to change behaviour in directories. They are often used to allow users on a computer system to run programs with temporarily elevated privileges to perform a specific task. While the assumed user id or group id privileges provided are not always elevated, at a minimum they are specific.
The flags setuid and setgid are needed for tasks that require privileges different from those the user is normally granted, such as the ability to alter the system files or databases that hold a user's login password. Some of the tasks that require additional privileges may not immediately be obvious, though, such as the ping command, which must send and listen for control packets on a network interface.
File modes
The setuid and setgid bits are normally represented as the values 4 for setuid and 2 for setgid in the high-order octal digit of the file mode. For example, 6711 has both the setuid and setgid bits set (4 + 2 = 6), and also makes the file readable, writable, and executable by the owner (7), and executable by the group (first 1) and others (second 1). Most implementations have a symbolic representation of these bits; in the previous example, this could be u=rwx,go=x,ug+s.
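The octal digits above can be composed from the named permission constants in Python's stat module, which makes the structure of a mode like 6711 explicit:

```python
import stat

# compose the file mode 6711 from its named permission bits
mode = (stat.S_ISUID | stat.S_ISGID      # 0o4000 + 0o2000 -> leading "6"
        | stat.S_IRWXU                   # 0o700: owner read/write/execute
        | stat.S_IXGRP                   # 0o010: group execute
        | stat.S_IXOTH)                  # 0o001: others execute
print(oct(mode))  # 0o6711
```

The same constants can be passed to os.chmod to apply such a mode to a file, mirroring what chmod 6711 does on the command line.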
Typically, chmod does not have a recursive mode restricted to directories, so modifying an existing directory tree must be done manually, with a command such as find /path/to/directory -type d -exec chmod g+s '{}' +.
Effects
The setuid and setgid flags have different effects depending on whether they are applied to a file or to a directory, and, for files, on whether the file is a binary executable or a non-binary executable. The setuid and setgid flags have an effect only on binary executable files and not on scripts (e.g., Bash, Perl, Python).
When set on an executable file
When the setuid or setgid attributes are set on an executable file, then any users able to execute the file will automatica |
https://en.wikipedia.org/wiki/Sweetness%20of%20wine | The subjective sweetness of a wine is determined by the interaction of several factors, including the amount of sugar in the wine, but also the relative levels of alcohol, acids, and tannins. Sugars and alcohol enhance a wine's sweetness, while acids cause sourness and bitter tannins cause bitterness. These principles are outlined in the 1987 work by Émile Peynaud, The Taste of Wine.
History
Vintage: The Story of Wine, a book authored by British wine writer Hugh Johnson, presents several methods that have been used throughout history to sweeten wine. The most common way was to harvest the grapes as late as possible. This method was advocated by Virgil and Martial in Roman times. In contrast, the ancient Greeks would harvest the grapes early, to preserve some of their acidity, and then leave them in the sun for a few days to allow them to shrivel and concentrate the sugar. In Crete, a similar effect was achieved by twisting the stalks of the grape to deprive them of sap and letting them dry on the vine—a method that produced passum and the modern Italian equivalent, passito.
Stopping the fermentation also enhanced a wine's potential sweetness. In ancient times, this was achieved by submerging the amphorae in cold water till winter.
Wine can also be sweetened by the addition of sugar in some form after fermentation is completed, as in the German Süssreserve method. In Roman times, this was done in preparing mulsum, wine freshly sweetened with honey and flavored with spices, used as an apéritif, and also in the manufacture of conditum, which had similar ingredients but was matured and stored before drinking.
It was also common from the Roman era until quite recently to sweeten wine with sugar of lead, a toxic substance that increases the apparent sweetness of wines and other beverages. The practice continued well into the 19th century, although it was mostly restricted to very cheap wines after the harmful nature of lead was demonstrated in the 17th c
https://en.wikipedia.org/wiki/Straight-fourteen%20engine | A straight-14 engine or inline-14 engine is a fourteen-cylinder piston engine with all fourteen cylinders mounted in a straight line along the crankcase. This design results in a very long engine, therefore it has only been used as marine propulsion engines in large ships.
The only straight-14 engine known to reach production is part of the Wärtsilä-Sulzer RTA96-C family of 6-cylinder to 14-cylinder two-stroke marine engines. This engine is used in the Emma Mærsk, which was the world's largest container ship when it was built in 2006. The engine produces and displaces , has a bore of and a stroke of . The engine is long, high and weighs .
References
Straight-14
14-cylinder engines
14 |
https://en.wikipedia.org/wiki/AT%20%28form%20factor%29 | In the era of IBM compatible personal computers, the AT form factor comprises the dimensions and layout (form factor) of the motherboard for the IBM AT. Baby AT motherboards are a little smaller, measuring 8.5" by 13". Like the IBM PC and IBM XT models before it, many third-party manufacturers produced motherboards compatible with the IBM AT form factor, allowing end users to upgrade their computers for faster processors. The IBM AT became a widely copied design in the booming home computer market of the 1980s. IBM clones made at the time began using AT compatible designs, contributing to its popularity. In the 1990s many computers still used AT and its variants. Since 1997, the AT form factor has been largely supplanted by ATX.
Design
The original AT motherboard, later known as "Full AT", is , which means it will not fit in "mini desktop" or "minitower" cases. The board's size also means that it takes up space behind the drive bays, making installation of new drives more difficult. (In IBM's original heavy-gauge steel case, the two " full-height drive bays overhang the front of the motherboard. More precisely, the left bay overhangs the motherboard, while the right bay is subdivided into two half-height bays and additionally extends downward toward the bottom of the chassis, allowing a second full-height fixed disk to be installed below a single half-height drive.)
The power connectors for AT motherboards are two nearly identical 6-pin plugs and sockets. As designed by IBM, the connectors are mechanically keyed so that each can only be inserted in its correct position, but some clone manufacturers cut costs and used unkeyed (interchangeable) connectors. Unfortunately, the two power connectors it requires are not easily distinguishable, leading many people to damage their boards when they were improperly connected; when plugged in, the two black wires on each connector must be adjacent to each other, making a row of four consecutive black wires (out of the total |
https://en.wikipedia.org/wiki/Oxo%20%28food%29 | Oxo (stylized OXO) is a brand of food products, including stock cubes, herbs and spices, dried gravy, and yeast extract. The original product was the beef stock cube, and the company now also markets chicken and other flavour cubes, including versions with Chinese and Indian spices. The cubes are broken up and used as flavouring in meals or gravy or dissolved into boiling water to produce a bouillon.
In the United Kingdom, the OXO brand belongs to Premier Foods. In South Africa, the Oxo brand is owned and manufactured by Mars, Incorporated and in Canada is owned and manufactured by Knorr.
History
Around 1840, Justus von Liebig developed a concentrated meat extract. Liebig's Extract of Meat Company (Lemco; established in the United Kingdom) promoted it, starting in 1866. The original product was a viscous liquid, containing only meat extract and 4% salt. In 1899, the company introduced the trademark Oxo; the origin of the name is unknown, but presumably comes from the word "ox." Since the cost of liquid Oxo remained beyond the reach of many families, the company launched a research project to develop a solid version that could be sold in cubes for a penny. After much research, the first Oxo cubes were produced in 1910, further increasing Oxo's popularity. During World War I, 100 million Oxo cubes were provided to the British armed forces, all of them individually hand-wrapped.
The Vestey Group acquired Lemco in 1924, and the factory was renamed El Anglo. Vestey merged with Brooke Bond in 1968, which was in turn acquired by Unilever in 1984. Unilever sold the Oxo brand to the Campbell Soup Company in 2001, and Premier Foods bought Campbell's UK operation in 2006. This sale included sites at both Worksop and Kings Lynn. The Worksop plant currently produces Oxo cubes.
In South Africa, Oxo is now a brand of Mars, Incorporated. The only product marketed under the Oxo brand in South Africa was a yeast-extract-based spread. The product also contained a small portion of |
https://en.wikipedia.org/wiki/Sheaf%20cohomology | In mathematics, sheaf cohomology is the application of homological algebra to analyze the global sections of a sheaf on a topological space. Broadly speaking, sheaf cohomology describes the obstructions to solving a geometric problem globally when it can be solved locally. The central work for the study of sheaf cohomology is Grothendieck's 1957 Tôhoku paper.
Sheaves, sheaf cohomology, and spectral sequences were introduced by Jean Leray at the prisoner-of-war camp Oflag XVII-A in Austria. From 1940 to 1945, Leray and other prisoners organized a "université en captivité" in the camp.
Leray's definitions were simplified and clarified in the 1950s. It became clear that sheaf cohomology was not only a new approach to cohomology in algebraic topology, but also a powerful method in complex analytic geometry and algebraic geometry. These subjects often involve constructing global functions with specified local properties, and sheaf cohomology is ideally suited to such problems. Many earlier results such as the Riemann–Roch theorem and the Hodge theorem have been generalized or understood better using sheaf cohomology.
Definition
The category of sheaves of abelian groups on a topological space X is an abelian category, and so it makes sense to ask when a morphism f: B → C of sheaves is injective (a monomorphism) or surjective (an epimorphism). One answer is that f is injective (respectively surjective) if and only if the associated homomorphism on stalks Bx → Cx is injective (respectively surjective) for every point x in X. It follows that f is injective if and only if the homomorphism B(U) → C(U) of sections over U is injective for every open set U in X. Surjectivity is more subtle, however: the morphism f is surjective if and only if for every open set U in X, every section s of C over U, and every point x in U, there is an open neighborhood V of x in U such that s restricted to V is the image of some section of B over V. (In words: every section of C lifts locally t |
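A standard illustration of the last point (not taken from this article, but a classical example) is the exponential map on the punctured plane, which is surjective as a sheaf morphism yet not surjective on global sections:

```latex
% Classic example: on $X=\mathbb{C}\setminus\{0\}$, consider the
% exponential sequence of sheaves
\[
  0 \longrightarrow \mathbb{Z} \longrightarrow \mathcal{O}_X
    \xrightarrow{\ \exp(2\pi i\,\cdot)\ } \mathcal{O}_X^{\times}
    \longrightarrow 0 .
\]
% The map $\mathcal{O}_X \to \mathcal{O}_X^{\times}$ is surjective as a
% morphism of sheaves, because every nonvanishing holomorphic function has a
% logarithm on a small enough neighborhood of each point.  It is not
% surjective on global sections: the coordinate function
% $z \in \mathcal{O}_X^{\times}(X)$ admits no global logarithm on $X$,
% an obstruction measured by $H^1(X,\mathbb{Z}) \neq 0$.
```

This is exactly the pattern sheaf cohomology quantifies: sections lift locally but not globally, and the failure is recorded by a cohomology group.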
https://en.wikipedia.org/wiki/Sangaku | Sangaku or san gaku () are Japanese geometrical problems or theorems on wooden tablets which were placed as offerings at Shinto shrines or Buddhist temples during the Edo period by members of all social classes.
History
The sangaku were painted in color on wooden tablets (ema) and hung in the precincts of Buddhist temples and Shinto shrines as offerings to the kami and buddhas, as challenges to the congregants, or as displays of the solutions to questions. Many of these tablets were lost during the period of modernization that followed the Edo period, but around nine hundred are known to remain.
Fujita Kagen (1765–1821), a Japanese mathematician of prominence, published the first collection of sangaku problems, his Shimpeki Sampo (Mathematical problems Suspended from the Temple) in 1790, and in 1806 a sequel, the Zoku Shimpeki Sampo.
During this period Japan strictly regulated commerce and foreign relations with western countries, so the tablets were created using Japanese mathematics, developed in parallel with western mathematics. For example, the connection between an integral and its derivative (the fundamental theorem of calculus) was unknown, so sangaku problems on areas and volumes were solved by expansions in infinite series and term-by-term calculation.
Select examples
A typical problem, which is presented on an 1824 tablet in Gunma Prefecture, covers the relationship of three touching circles with a common tangent, a special case of Descartes' theorem. Given the size of the two outer large circles, what is the size of the small circle between them? The answer is that the radii satisfy 1/√r_middle = 1/√r_left + 1/√r_right.
(See also Ford circle.)
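The tablet's relation, 1/√r_middle = 1/√r_left + 1/√r_right, can be checked numerically: two circles tangent to the same line and externally tangent to each other have contact points 2√(r1·r2) apart along that line. A short sketch (function names are mine):

```python
from math import sqrt, isclose

def middle_radius(r1, r2):
    """Radius r of the small circle tangent to the common line and to both
    outer circles, from 1/sqrt(r) = 1/sqrt(r1) + 1/sqrt(r2)."""
    return 1.0 / (1.0 / sqrt(r1) + 1.0 / sqrt(r2)) ** 2

def tangency_gap(r1, r2):
    """Distance along the common tangent line between the contact points
    of two externally tangent circles that both touch that line."""
    return 2.0 * sqrt(r1 * r2)

r1, r2 = 9.0, 4.0
r = middle_radius(r1, r2)  # 1.44
# The middle circle fits exactly: the two small gaps add up to the big one.
assert isclose(tangency_gap(r1, r2), tangency_gap(r1, r) + tangency_gap(r, r2))
```

With r1 = 9 and r2 = 4 the gaps are 12 = 7.2 + 4.8, confirming the relation for this case.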
Soddy's hexlet, thought previously to have been discovered in the west in 1937, had been discovered on a sangaku dating from 1822.
One sangaku problem from Sawa Masayoshi and another from Jihei Morikawa were solved only recently.
See also
Equal incircles theorem
Japanese theorem for concyclic polygons
Japanese theorem for concyclic quadrilaterals
Probl |
https://en.wikipedia.org/wiki/Lucas%20pseudoprime | Lucas pseudoprimes and Fibonacci pseudoprimes are composite integers that pass certain tests which all primes and very few composite numbers pass: in this case, criteria relative to some Lucas sequence.
Baillie-Wagstaff-Lucas pseudoprimes
Baillie and Wagstaff define Lucas pseudoprimes as follows: Given integers P and Q, where P > 0 and D = P² − 4Q,
let Uk(P, Q) and Vk(P, Q) be the corresponding Lucas sequences.
Let n be a positive integer and let (D/n) be the Jacobi symbol. We define δ(n) = n − (D/n).
If n is a prime that does not divide Q, then the following congruence condition holds: Uδ(n) ≡ 0 (mod n).
If this congruence does not hold, then n is not prime.
If n is composite, then this congruence usually does not hold. These are the key facts that make Lucas sequences useful in primality testing.
This congruence is one of the two congruences defining a Frobenius pseudoprime. Hence, every Frobenius pseudoprime is also a Baillie-Wagstaff-Lucas pseudoprime, but the converse does not always hold.
Some good references are chapter 8 of the book by Bressoud and Wagon (with Mathematica code), pages 142–152 of the book by Crandall and Pomerance, and pages 53–74 of the book by Ribenboim.
Lucas probable primes and pseudoprimes
A Lucas probable prime for a given (P, Q) pair is any positive integer n for which the congruence above holds (see, page 1398).
A Lucas pseudoprime for a given (P, Q) pair is a positive composite integer n for which the congruence holds (see, page 1391).
A Lucas probable prime test is most useful if D is chosen such that the Jacobi symbol (D/n) is −1
(see pages 1401–1409 of, page 1024 of, or pages 266–269 of
). This is especially important when combining a Lucas test with a strong pseudoprime test, such as the Baillie–PSW primality test. Typically, implementations use a parameter selection method that ensures this condition (e.g. the Selfridge method described below).
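As a sketch of how such a test fits together, the code below combines the Selfridge (method A) parameter search with the usual binary ladder for Lucas sequences. It is illustrative, not a specific published implementation; helper names are mine, and perfect squares are rejected up front because no D with (D/n) = −1 exists for them:

```python
from math import isqrt

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def selfridge_params(n):
    """Method A: first D in 5, -7, 9, -11, ... with (D/n) = -1;
    then P = 1 and Q = (1 - D)/4."""
    D = 5
    while jacobi(D, n) != -1:
        D = -(D + 2) if D > 0 else -(D - 2)
    return D, 1, (1 - D) // 4

def is_lucas_prp(n):
    """Lucas probable-prime test: with (D/n) = -1, a prime n satisfies
    U(n+1) == 0 (mod n).  Composites that pass are Lucas pseudoprimes."""
    if n < 2 or n % 2 == 0:
        return n == 2
    if isqrt(n) ** 2 == n:       # perfect square: no D with (D/n) = -1
        return False
    D, P, Q = selfridge_params(n)
    inv2 = pow(2, -1, n)
    U, V, Qk = 1, P, Q % n       # U_1, V_1, Q^1
    for bit in bin(n + 1)[3:]:   # binary ladder for index n + 1
        # doubling: index j -> 2j
        U, V, Qk = U * V % n, (V * V - 2 * Qk) % n, Qk * Qk % n
        if bit == '1':           # advance: index 2j -> 2j + 1
            U, V = (P * U + V) * inv2 % n, (D * U + P * V) * inv2 % n
            Qk = Qk * Q % n
    return U == 0

assert all(is_lucas_prp(p) for p in (3, 5, 7, 101, 10007))
assert not is_lucas_prp(91)
assert is_lucas_prp(323)  # 17 * 19: the smallest Lucas pseudoprime (method A)
```

The doubling and advance steps use the standard addition formulas U(m+k) = (U(m)V(k) + U(k)V(m))/2 and V(m+k) = (V(m)V(k) + D·U(m)U(k))/2, with division by 2 done modulo the odd n.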
If (D/n) = −1, then δ(n) = n + 1 and the congruence becomes Un+1 ≡ 0 (mod n).
If this congruence is false, this constitutes a proof that n is co
https://en.wikipedia.org/wiki/Deniable%20encryption | In cryptography and steganography, plausibly deniable encryption describes encryption techniques where the existence of an encrypted file or message is deniable in the sense that an adversary cannot prove that the plaintext data exists.
The users may convincingly deny that a given piece of data is encrypted, or that they are able to decrypt a given piece of encrypted data, or that some specific encrypted data exists. Such denials may or may not be genuine. For example, it may be impossible to prove that the data is encrypted without the cooperation of the users. If the data is encrypted, the users genuinely may not be able to decrypt it. Deniable encryption serves to undermine an attacker's confidence either that data is encrypted, or that the person in possession of it can decrypt it and provide the associated plaintext.
Function
Deniable encryption makes it impossible to prove the existence of the plaintext message without the proper decryption key. This may be done by allowing an encrypted message to be decrypted to different sensible plaintexts, depending on the key used. This allows the sender to have plausible deniability if compelled to give up their encryption key.
The notion of "deniable encryption" was used by Julian Assange and Ralf Weinmann in the Rubberhose filesystem and explored in detail in a paper by Ran Canetti, Cynthia Dwork, Moni Naor, and Rafail Ostrovsky in 1996.
Scenario
Deniable encryption allows the sender of an encrypted message to deny sending that message. This requires a trusted third party. A possible scenario works like this:
Bob suspects his wife Alice is engaged in adultery. That being the case, Alice wants to communicate with her secret lover Carl. She creates two keys, one intended to be kept secret, the other intended to be sacrificed. She passes the secret key (or both) to Carl.
Alice constructs an innocuous message M1 for Carl (intended to be revealed to Bob in case of discovery) and an incriminating love letter M2 to Carl. |
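The intuition behind this scenario can be illustrated with a one-time pad, where a single ciphertext can be "opened" with two different keys to two different plaintexts. This toy sketch is not a real cryptosystem; the messages and variable names are invented for illustration:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# The two messages must have equal length for this construction.
m1 = b"Meet for lunch Tuesday?"   # innocuous cover message (for Bob)
m2 = b"I love you -- yours, A."   # real message (for Carl)
assert len(m1) == len(m2)

k2 = os.urandom(len(m2))  # the key Alice actually keeps
c = xor(m2, k2)           # the single ciphertext that exists
k1 = xor(c, m1)           # sacrificial key, derived from c and the cover text

assert xor(c, k1) == m1   # the surrendered key reveals only the cover message
assert xor(c, k2) == m2   # the real key reveals the love letter
```

Because a one-time-pad ciphertext is consistent with every plaintext of the same length, nothing about c distinguishes the sacrificial key from the real one.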
https://en.wikipedia.org/wiki/Stroboscope | A stroboscope, also known as a strobe, is an instrument used to make a cyclically moving object appear to be slow-moving, or stationary. It consists of either a rotating disk with slots or holes or a lamp such as a flashtube which produces brief repetitive flashes of light. Usually, the rate of the stroboscope is adjustable to different frequencies. When a rotating or vibrating object is observed with the stroboscope at its vibration frequency (or a submultiple of it), it appears stationary. Thus stroboscopes are also used to measure frequency.
The principle is used for the study of rotating, reciprocating, oscillating or vibrating objects. Machine parts and vibrating string are common examples. A stroboscope used to set the ignition timing of internal combustion engines is called a timing light.
Mechanical
In its simplest mechanical form, a stroboscope can be a rotating cylinder (or bowl with a raised edge) with evenly spaced holes or slots placed in the line of sight between the observer and the moving object. The observer looks through the holes/slots on the near and far side at the same time, with the slots/holes moving in opposite directions. When the holes/slots are aligned on opposite sides, the object is visible to the observer.
Alternately, a single moving hole or slot can be used with a fixed/stationary hole or slot. The stationary hole or slot limits the light to a single viewing path and reduces glare from light passing through other parts of the moving hole/slot.
Viewing through a single line of holes/slots does not work, since the holes/slots appear to just sweep across the object without a strobe effect.
The rotational speed is adjusted so that it becomes synchronised with the movement of the observed system, which seems to slow and stop. The illusion is caused by temporal aliasing, commonly known as the stroboscopic effect.
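The aliasing can be made concrete with a small calculation: between flashes the object advances by 360·f_rot/f_flash degrees, and the eye perceives that step folded into the range (−180°, 180°]. A sketch (the function name and numbers are mine):

```python
def apparent_step_deg(f_rot, f_flash):
    """Angle in degrees a rotating object appears to advance per flash,
    folded into (-180, 180]; 0 means the object looks stationary."""
    step = (360.0 * f_rot / f_flash) % 360.0
    return step - 360.0 if step > 180.0 else step

# Flashing at exactly the rotation rate freezes the motion:
assert apparent_step_deg(50.0, 50.0) == 0.0
# A submultiple (here half) of the rotation rate also freezes it:
assert apparent_step_deg(50.0, 25.0) == 0.0
# Flashing slightly faster than the rotation makes it appear to creep backwards:
assert apparent_step_deg(50.0, 51.0) < 0.0
```

This is the same folding that produces the wagon-wheel effect in film.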
Electronic
In electronic versions, the perforated disc is replaced by a lamp capable of emitting brief and rapid fla |
https://en.wikipedia.org/wiki/DSTN | DSTN (double super twisted nematic), also known as dual-scan super twisted nematic or simply dual-scan, is an LCD technology in which a screen is divided in half, which are simultaneously refreshed giving faster refresh rate than traditional passive matrix screens. It is an improved form of supertwist nematic display that offers low power consumption but inferior sharpness and brightness compared to TFT screens.
History
For several years (early 1990s to early 2000s), TFT screens were found only in high-end laptops because of their cost, while lower-end laptops offered only DSTN screens. This was at a time when the screen was often the most expensive component of a laptop. The price difference between a laptop with DSTN and one with TFT could easily be $400 or more. However, TFT gradually became cheaper and has essentially captured the entire market.
DSTN display quality is poor compared to TFT, with visible noise, smearing, much lower contrast and slow response. Such screens are unsuitable for viewing movies or playing video games of any kind.
References
Liquid crystal displays |
https://en.wikipedia.org/wiki/NEC%20SX-6 | The SX-6 is a NEC SX supercomputer built by NEC Corporation that debuted in 2001; the SX-6 was sold under license by Cray Inc. in the U.S. Each SX-6 single-node system contains up to eight vector processors, which share up to 64 GB of computer memory. The SX-6 processor is a single chip implementation containing a vector processor unit and a scalar processor fabricated in a 0.15 μm CMOS process with copper interconnects, whereas the SX-5 was a multi-chip implementation. The Earth Simulator is based on the SX-6 architecture.
The vector processor is made up of eight vector pipeline units each with seventy-two 256-word vector registers. The vector unit performs add/shift, multiply, divide and logical operations. The scalar unit is 64 bits wide and contains a 64 KB cache. The scalar unit can decode, issue and complete four instructions per clock cycle. Branch prediction and speculative execution are supported. A multi-node system is configured by interconnecting up to 128 single-node systems via a high-speed, low-latency IXS (Internode Crossbar Switch).
The peak performance of the SX-6 series vector processors is 8 GFLOPS. Thus a single-node system provides a peak performance of 64 GFLOPS, while a multi-node system provides up to 8 TFLOPS of peak floating-point performance.
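The quoted peaks follow directly from the per-processor figure; as a quick check of the arithmetic:

```python
# Peak figures implied by the text: 8 GFLOPS per vector processor,
# up to 8 processors per node, up to 128 nodes over the IXS crossbar.
gflops_per_cpu = 8
node_peak = 8 * gflops_per_cpu   # 64 GFLOPS for a full single node
system_peak = 128 * node_peak    # 8192 GFLOPS = 8.192 TFLOPS ("up to 8 TFLOPS")
assert node_peak == 64
assert system_peak == 8192
```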
The SX-6 uses SUPER-UX, a Unix-like operating system developed by NEC. A SAN-based global file system (NEC's GFS) is available for a multinode installation. The default batch processing system is NQSII, but open source batch systems such as Sun Grid Engine are also supported.
See also
SUPER-UX
NEC SX
Earth Simulator
NEC Corporation
References
External links
SX-6 Specifications
Scalable Vector Supercomputer - SX Series Downloads
Sx-6
Vector supercomputers |
https://en.wikipedia.org/wiki/Oracle%20Grid%20Engine | Oracle Grid Engine, previously known as Sun Grid Engine (SGE), CODINE (Computing in Distributed Networked Environments) or GRD (Global Resource Director), was a grid computing computer cluster software system (otherwise known as a batch-queuing system), acquired as part of a purchase of Gridware, then improved and supported by Sun Microsystems and later Oracle. There have been open source versions and multiple commercial versions of this technology, initially from Sun, later from Oracle and then from Univa Corporation.
On October 22, 2013 Univa announced it acquired the intellectual property and trademarks for the Grid Engine technology and that Univa will take over support. Univa has since evolved the Grid Engine technology, e.g. improving scalability as demonstrated by a 1 million core cluster in Amazon Web Services (AWS) announced on June 24, 2018.
The original Grid Engine open-source project website closed in 2010, but versions of the technology are still available under its original Sun Industry Standards Source License (SISSL). Those projects were forked from the original project code and are known as Son of Grid Engine, Open Grid Scheduler and Univa Grid Engine.
Grid Engine is typically used on a computer farm or high-performance computing (HPC) cluster and is responsible for accepting, scheduling, dispatching, and managing the remote and distributed execution of large numbers of standalone, parallel or interactive user jobs. It also manages and schedules the allocation of distributed resources such as processors, memory, disk space, and software licenses.
Grid Engine was the foundation of the Sun Grid utility computing system, made available over the Internet in the United States in 2006 and later in many other countries; it was an early public cloud computing facility, predating AWS.
History
In 2000, Sun acquired Gridware, a privately owned commercial vendor of advanced computing resource managem
https://en.wikipedia.org/wiki/ORION%20%28research%20and%20education%20network%29 | The Ontario Research and Innovation Optical Network (ORION) is a high-speed optical research and education network in Ontario, Canada. It connects virtually all of Ontario's research and education institutions including every university, most colleges, several teaching hospitals, public research facilities and several school boards to one another and to the global grid of R&E networks using optical fibre.
History
ORION was founded in 2001 (then as The Optical Regional Advanced Network of Ontario, or ORANO) with the support of the Ontario Government. ORION is a self-sustaining not-for-profit organization kickstarted by the Ontario Government under Premier Harris.
ORION is owned and operated by a not-for-profit corporation governed by a board of directors, which includes representatives from the fields of education, research and business. It was initially funded by the Government of Ontario, with additional funding from CANARIE as well as private and public sector organizations and institutions, and is now self-sustaining, generating revenue from its connected institutions through user access fees and network-based services.
In April 2023, GTAnet merged with ORION.
Network
The network spans and connects 28 communities throughout Ontario. Almost 100 organizations and special projects connect to ORION directly, including 21 universities, 22 colleges, 34 school boards representing 2 million students, 13 teaching hospitals and medical research centres, and other research and educational and public library facilities.
ORION connects to research and education networks elsewhere in Canada and internationally through the national CANARIE network, which exchanges with ORION at the Toronto Internet Exchange.
References
External links
Academic computer network organizations
2002 establishments in Ontario
Organizations based in Toronto
Organizations established in 2002
Science and technology in Canada |
https://en.wikipedia.org/wiki/Leaky%20bucket | The leaky bucket is an algorithm based on an analogy of how a bucket with a constant leak will overflow if either the average rate at which water is poured in exceeds the rate at which the bucket leaks or if more water than the capacity of the bucket is poured in all at once. It can be used to determine whether some sequence of discrete events conforms to defined limits on their average and peak rates or frequencies, e.g. to limit the actions associated with these events to these rates or delay them until they do conform to the rates. It may also be used to check conformance or limit to an average rate alone, i.e. remove any variation from the average.
It is used in packet-switched computer networks and telecommunications networks in traffic policing, traffic shaping and the scheduling of data transmissions, in the form of packets, to defined limits on bandwidth and burstiness (a measure of the variations in the traffic flow).
A version of the leaky bucket, the generic cell rate algorithm, is recommended for Asynchronous Transfer Mode (ATM) networks in Usage/Network Parameter Control at user–network interfaces or inter-network interfaces or network-to-network interfaces to protect a network from excessive traffic levels on connections routed through it. The generic cell rate algorithm, or an equivalent, may also be used to shape transmissions by a network interface card onto an ATM network.
At least some implementations of the leaky bucket are a mirror image of the token bucket algorithm and will, given equivalent parameters, determine exactly the same sequence of events to conform or not conform to the same limits.
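A minimal sketch of the leaky bucket as a meter follows (class and parameter names are mine, not from a standard; the clock is injectable so the demo is deterministic):

```python
import time

class LeakyBucket:
    """Leaky bucket as a meter: each conforming event pours `amount` units
    into the bucket, which drains continuously at `rate` units per second;
    an event that would overflow `capacity` is non-conforming and adds
    nothing.  Illustrative sketch only."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = float(rate)          # leak rate (units per second)
        self.capacity = float(capacity)  # bucket depth (burst tolerance)
        self.level = 0.0
        self.now = now                   # injectable clock for testing
        self.last = now()

    def conforms(self, amount=1.0):
        t = self.now()
        # Drain whatever leaked out since the last event, then try to pour.
        self.level = max(0.0, self.level - (t - self.last) * self.rate)
        self.last = t
        if self.level + amount <= self.capacity:
            self.level += amount
            return True
        return False

# Deterministic demo with a fake clock: rate 1 unit/s, capacity 2.
clock = iter([0.0, 0.0, 0.0, 0.0, 10.0])
bucket = LeakyBucket(rate=1.0, capacity=2.0, now=lambda: next(clock))
print([bucket.conforms() for _ in range(3)])  # [True, True, False]
print(bucket.conforms())  # True: the bucket drained during the 10 s gap
```

The capacity bounds the burst size while the leak rate bounds the long-term average, which is exactly the policing behaviour described above.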
Overview
Two different methods of applying this leaky bucket analogy are described in the literature. These give what appear to be two different algorithms, both of which are referred to as the leaky bucket algorithm and generally without reference to the other method. This has resulted in confusion about what the leaky bucket algorithm is and |
https://en.wikipedia.org/wiki/Ant%20colony | An ant colony is a population of a single ant species able to maintain its complete lifecycle. Ant colonies are eusocial, communal, and efficiently organized and are very much like those found in other social Hymenoptera, though the various groups of these developed sociality independently through convergent evolution. The typical colony consists of one or more egg-laying queens, numerous sterile females (workers, soldiers) and, seasonally, many winged sexual males and females. In order to establish new colonies, ants undertake flights that occur at species-characteristic times of the day. Swarms of the winged sexuals (known as alates) depart the nest in search of other nests. The males die shortly thereafter, along with most of the females. A small percentage of the females survive to initiate new nests.
Names
The term "ant colony" refers to a population of workers, reproductive individuals, and brood that live together, cooperate, and treat one another non-aggressively. Often this comprises the genetically related progeny from a single queen, although this is not universal across ants. The name "ant farm" is commonly given to ant nests that are kept in formicaria, isolated from their natural habitat. Formicaria are built so that scientists can study ants by rearing or temporarily maintaining them. Another name is "formicary", which derives from the Medieval Latin word formīcārium, itself from formica, the Latin word for ant. "Ant nests" are the physical spaces in which the ants live. These can be underground, in trees, under rocks, or even inside a single acorn. The name "ant hill" (or "anthill") applies to aboveground nests where the workers pile sand or soil outside the entrance, forming a large mound.
Colony size
Colony size (the number of individuals that make up the colony) is very important to ants: it can affect how they forage, how they defend their nests, how they mate, and even their physical appearances. Body size is often seen as the most important factor in s |
https://en.wikipedia.org/wiki/Vmlinux | vmlinux is a statically linked executable file that contains the Linux kernel in one of the object file formats supported by Linux, which includes Executable and Linkable Format (ELF) and Common Object File Format (COFF). The vmlinux file might be required for kernel debugging, symbol table generation or other operations, but must be made bootable before being used as an operating system kernel by adding a multiboot header, bootsector and setup routines.
Etymology
Traditionally, UNIX platforms called the kernel image /unix. With the development of virtual memory, kernels that supported this feature were given the vm- prefix to differentiate them. The name vmlinux is a mutation of vmunix, while in vmlinuz the letter z at the end denotes that it is compressed (for example gzipped).
Location
Traditionally, the kernel was located in the root directory of the filesystem hierarchy; however, as the bootloader must use BIOS drivers to access the hard disk, limitations on some i386 systems meant only the first 1024 cylinders of the hard disk were addressable.
To overcome this, Linux distributors encouraged users to create a partition at the beginning of their drives specifically for storing bootloader and kernel-related files. GRUB, LILO and SYSLINUX are common bootloaders.
By convention, this partition is mounted on the filesystem hierarchy as /boot. This was later standardised by the Filesystem Hierarchy Standard (FHS), which now requires the Linux kernel image to be located in either / or /boot, although there is no technical restriction enforcing this.
Compression
Traditionally, when creating a bootable kernel image, the kernel is also compressed using gzip, or, since Linux 2.6.30, using LZMA or bzip2, which requires a very small decompression stub to be included in the resulting image. The stub decompresses the kernel code, on some systems printing dots to the console to indicate progress, and then continues the boot process. Support for LZO, xz, LZ4 and zstd compr |
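Each of the compression formats named above begins with distinctive magic bytes, which is how tools recognize the compressed payload. The sketch below sniffs a raw compressed stream (illustrative only: a real kernel image wraps the payload in a boot stub, and legacy LZMA streams, which lack a fixed magic, are omitted):

```python
import gzip

# Leading magic bytes of the compression containers named above.
MAGICS = [
    (b"\x1f\x8b", "gzip"),
    (b"BZh", "bzip2"),
    (b"\xfd7zXZ\x00", "xz"),
    (b"\x04\x22\x4d\x18", "lz4"),
    (b"\x28\xb5\x2f\xfd", "zstd"),
]

def sniff(data: bytes) -> str:
    """Guess the compression format of a byte stream from its magic bytes."""
    for magic, name in MAGICS:
        if data.startswith(magic):
            return name
    return "unknown"

assert sniff(gzip.compress(b"kernel")) == "gzip"
```

The same prefix test underlies the kernel's own decompression stub selection and the behaviour of tools like file(1).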
https://en.wikipedia.org/wiki/GC-content | In molecular biology and genetics, GC-content (or guanine-cytosine content) is the percentage of nitrogenous bases in a DNA or RNA molecule that are either guanine (G) or cytosine (C). This measure indicates the proportion of G and C bases out of an implied four total bases, also including adenine and thymine in DNA and adenine and uracil in RNA.
GC-content may be given for a certain fragment of DNA or RNA or for an entire genome. When it refers to a fragment, it may denote the GC-content of an individual gene or section of a gene (domain), a group of genes or gene clusters, a non-coding region, or a synthetic oligonucleotide such as a primer.
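Computing the measure for a fragment is a one-liner in spirit: count G and C out of the recognized bases. A sketch (the function name is mine; ambiguity codes and gaps are simply ignored):

```python
def gc_content(seq: str) -> float:
    """Percentage of G and C bases in a DNA or RNA sequence.
    Case-insensitive; characters outside A, C, G, T, U are ignored."""
    seq = seq.upper()
    gc = sum(seq.count(base) for base in "GC")
    total = sum(seq.count(base) for base in "ACGTU")
    return 100.0 * gc / total if total else 0.0

assert gc_content("ATGC") == 50.0
assert gc_content("GGCC") == 100.0
assert gc_content("auau") == 0.0   # RNA input, lowercase
```

The same function works for a primer, a gene, or a whole genome string, differing only in input length.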
Structure
Qualitatively, guanine (G) and cytosine (C) undergo a specific hydrogen bonding with each other, whereas adenine (A) bonds specifically with thymine (T) in DNA and with uracil (U) in RNA. Quantitatively, each GC base pair is held together by three hydrogen bonds, while AT and AU base pairs are held together by two hydrogen bonds. To emphasize this difference, the base pairings are often represented as "G≡C" versus "A=T" or "A=U".
DNA with low GC-content is less stable than DNA with high GC-content; however, the hydrogen bonds themselves do not have a particularly significant impact on molecular stability, which is instead caused mainly by molecular interactions of base stacking. In spite of the higher thermostability conferred to a nucleic acid with high GC-content, it has been observed that at least some species of bacteria with DNA of high GC-content undergo autolysis more readily, thereby reducing the longevity of the cell per se. Because of the thermostability of GC pairs, it was once presumed that high GC-content was a necessary adaptation to high temperatures, but this hypothesis was refuted in 2001. Even so, it has been shown that there is a strong correlation between the optimal growth of prokaryotes at higher temperatures and the GC-content of structural RNAs such as ribosomal RNA, transfer RNA, and many |
https://en.wikipedia.org/wiki/Mojave%20phone%20booth | The Mojave phone booth was a lone telephone booth in what is now the Mojave National Preserve in California. It attracted online attention in 1997 for its unusual location: at the intersection of two dirt roads in a remote part of the Mojave Desert, from the nearest paved road (Interstate 15 to the northeast, Kelbaker Road to the southwest) and miles from any buildings.
History
The phone booth was originally set up in 1948 to provide telephone service to local volcanic cinder miners and others living in the area, at the request of Emerson Ray, who owned the Cima Cinder Mine nearby. It was assigned the name Cinder Peak Policy Station, part of a network of "policy stations" placed by mandate of the California government to serve residents of isolated parts of the state. The Mojave phone booth probably replaced an earlier booth to the south. The original hand-cranked magneto phone was replaced with a payphone in the 1960s. The rotary phone was then replaced with a touch-tone model in the 1970s. The booth phone's original number was BAker-3-9969. After area codes were established in 1947, it shared the same area code as all of Southern California, 213. In 1951, the area code 714 was split off from 213, and 714 was the booth's area code until 1982 when parts of Riverside and San Bernardino counties were given the area code 619. The booth's area code for its final few years of existence was 760.
In 1997, a Los Angeles man spotted a telephone icon on a map of the Mojave Desert and decided to visit it. He wrote a letter about his adventure to an underground magazine and included the booth's telephone number. An Arizona man, Godfrey Daniels, read the letter and started a website devoted to the Mojave telephone booth. The booth became an Internet sensation after it was mentioned in a New York Times article. Soon other people began calling the booth, others made websites about it, and a few even took trips to the booth to answer, often camping out at t |
https://en.wikipedia.org/wiki/Microbial%20ecology | Microbial ecology (or environmental microbiology) is the ecology of microorganisms: their relationship with one another and with their environment. It concerns the three major domains of life—Eukaryota, Archaea, and Bacteria—as well as viruses.
Microorganisms, by their omnipresence, impact the entire biosphere. Microbial life plays a primary role in regulating biogeochemical systems in virtually all of our planet's environments, including some of the most extreme, from frozen environments and acidic lakes, to hydrothermal vents at the bottom of deepest oceans, and some of the most familiar, such as the human small intestine, nose, and mouth. As a consequence of the quantitative magnitude of microbial life (calculated as cells; eight orders of magnitude greater than the number of stars in the observable universe) microbes, by virtue of their biomass alone, constitute a significant carbon sink. Aside from carbon fixation, microorganisms' key collective metabolic processes (including nitrogen fixation, methane metabolism, and sulfur metabolism) control global biogeochemical cycling. The immensity of microorganisms' production is such that, even in the total absence of eukaryotic life, these processes would likely continue unchanged.
History
While microbes have been studied since the seventeenth century, this research was from a primarily physiological perspective rather than an ecological one. For instance, Louis Pasteur and his disciples were interested in the problem of microbial distribution both on land and in the ocean. Martinus Beijerinck invented the enrichment culture, a fundamental method of studying microbes from the environment. He is often incorrectly credited with framing the microbial biogeographic idea that "everything is everywhere, but, the environment selects", which was stated by Lourens Baas Becking. Sergei Winogradsky was one of the first researchers to attempt to understand microorganisms outside of the medical context—making him among the |
https://en.wikipedia.org/wiki/James%20Rumbaugh | James E. Rumbaugh (born August 22, 1947) is an American computer scientist and object-oriented methodologist who is best known for his work in creating the Object Modeling Technique (OMT) and the Unified Modeling Language (UML).
Biography
Born in Bethlehem, Pennsylvania, Rumbaugh received a B.S. in physics from the Massachusetts Institute of Technology (MIT), an M.S. in astronomy from the California Institute of Technology (Caltech), and a Ph.D. in computer science from MIT under Professor Jack Dennis.
Rumbaugh started his career in the 1960s at Digital Equipment Corporation (DEC) as a lead research scientist. From 1968 to 1994 he worked at the General Electric Research and Development Center developing technology, teaching, and consulting. At General Electric he also led the development of Object-modeling technique (OMT), an object modeling language for software modeling and designing.
In 1994, he joined Rational Software, where he worked with Ivar Jacobson and Grady Booch ("the Three Amigos") to develop the Unified Modeling Language (UML). Later they merged their software development methodologies, OMT, OOSE and Booch, into the Rational Unified Process (RUP). In 2003 he moved to IBM after its acquisition of Rational Software. He retired in 2006.
He has two grown-up children and (in 2009) lived in Saratoga, California, with his wife.
Work
Rumbaugh's main research interests are formal description languages, "semantics of computation, tools for programming productivity, and applications using complex algorithms and data structures".
In his graduate work at MIT, Rumbaugh contributed to the development of data flow computer architecture. His thesis described a parallel programming language, a parallel processor computer, and a basis for a data-flow-oriented network architecture. Rumbaugh made further contributions to the Object Modeling Technique, IDEF4, the Rational Unified Process, and the Unified Modeling Language.
Publications
Rumbaugh has written a n |
https://en.wikipedia.org/wiki/Programming%20game | A programming game is a video game that incorporates elements of computer programming, enabling the player to direct otherwise autonomous units within the game to follow commands in a domain-specific programming language, often represented as a visual language to simplify the programming metaphor. Programming games broadly fall into two areas: single-player games where the programming elements either make up part of or the whole of a puzzle game, and multiplayer games where the player's automated program is pitted against other players' programs.
As puzzle games
Early games in the genre include System 15000 and Hacker, released in 1984 and 1985 respectively.
Programming games have been used as part of puzzle games, challenging the player to achieve a specific result once the program starts operating. An example of such a game is SpaceChem, where the player must use the game's visual language to manipulate two "waldos" to disassemble and reassemble chemical molecules. In such games, players are able to test and debug their program as often as necessary until they find a solution that works. Many of these games encourage the player to find the most efficient program, measured by the number of timesteps or the number of commands required. Other similar games include Human Resource Machine, Infinifactory, and TIS-100. Zachtronics is a video game development company known for its programming-centric puzzle games.
Other games incorporate the elements of programming as portions of puzzles in the larger game. For example, Hack 'n' Slash includes a metaphor of being able to access the internal programs and variables of objects represented in the game world, pausing the rest of the game while the player engages this programming interface, and modifying an object's program to progress further; this might mean changing the state of an object from indestructible to destructible. Other similar games with this type of programming approach include Transistor, else Heart.Break(), G |
https://en.wikipedia.org/wiki/Port%20triggering | Port triggering is a configuration option on a NAT-enabled router that controls communication between internal and external host machines in an IP network. It is similar to port forwarding in that it enables incoming traffic to be forwarded to a specific internal host machine, although the forwarded port is not open permanently and the target internal host machine is chosen dynamically.
Description
When two networks communicate through a NAT-router, the host machines on the internal network behave as if they have the IP address of the NAT-router from the perspective of the host machines on the external network. Without any traffic forwarding rules, it is impossible for a host machine on an external network (host B) to open a connection to a host machine in the internal network (host A). This is because the connection can only be targeted to the IP of the NAT-router, since the internal network is hidden behind NAT. With port triggering, when some host A opens a connection to a host B using a predefined port or ports, then all incoming traffic that the router receives on some predefined port or ports is forwarded to host A. This is the 'triggering' event for the forwarding rule. The forwarding rule is disabled after a period of inactivity.
Port triggering is useful for network applications where the client and server roles must be switched for certain tasks, such as authentication for IRC chat and file downloading for FTP file sharing.
Example
As an example of how port triggering operates, when connecting to IRC (Internet Relay Chat), it is common to authenticate a username with the Ident protocol via port 113.
When connecting to IRC, the client computer typically makes an outgoing connection on port 6667 (or any port in the range 6660–7000), causing the IRC server to attempt to verify the username given by making a new connection back to the client computer on port 113. When the computer is behind NAT, the NAT device silently drops this connection because it |
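The triggering behaviour described above can be sketched as a toy model (hypothetical class and rule table, using the port numbers from the IRC example; real routers implement this in firmware):

```python
class PortTriggerRouter:
    """Toy model of port triggering: an outbound connection on a trigger
    port arms a temporary inbound-forwarding rule to the internal host."""

    def __init__(self, rules):
        self.rules = rules    # trigger port -> inbound port to open
        self.active = {}      # inbound port -> internal host to forward to

    def outbound(self, internal_host, dst_port):
        if dst_port in self.rules:
            self.active[self.rules[dst_port]] = internal_host  # arm the rule

    def inbound(self, dst_port):
        # Return the host to forward to, or None (the router drops the packet).
        return self.active.get(dst_port)

router = PortTriggerRouter({6667: 113})   # outbound IRC arms the Ident port
print(router.inbound(113))                # None: no trigger yet, packet dropped
router.outbound("192.168.1.10", 6667)     # host A connects out to an IRC server
print(router.inbound(113))                # 192.168.1.10: forwarded to host A
```

A real implementation would also expire the rule after a period of inactivity, as noted above; that timer is omitted here for brevity.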
https://en.wikipedia.org/wiki/COP8 | The National Semiconductor COP8 is an 8-bit CISC core microcontroller. COP8 is an enhancement to the earlier COP400 4-bit microcontroller family. The COP8's main features are:
Large amount of I/O pins
Up to 32 KB of Flash memory/ROM for code and data
Very low EMI (no known bugs)
Many integrated peripherals (meant as single chip design)
In-System Programming
Free assembler toolchain. Commercial C compilers available
Free Multitasking OS and TCP/IP stack
It has a machine cycle of up to 2M cycles per second, but most versions seem to be overclockable to up to 2.8M cycles per second (28 MHz clock).
Registers and memory map
The COP8 uses separate instruction and data spaces (Harvard architecture). Instruction address space is 15-bit (32 KiB maximum), while data addresses are 8-bit (256 bytes maximum, extended via bank-switching).
To allow software bugs to be caught, all invalid instruction addresses read as zero, which is a trap instruction. Invalid RAM above the stack reads as all-ones, which is an invalid address.
The CPU has an 8-bit accumulator and 15-bit program counter. 16 additional 8-bit registers (R0–R15) and an 8-bit program status word are memory mapped. There are special instructions to access them, but general RAM access instructions may also be used.
The memory map is divided into half RAM and half control registers as follows:
If RAM is not banked, then R15 (S) is just another general-purpose register. If RAM is banked, then the low half of the data address space (addresses 0x00–7F) is directed to a RAM bank selected by S. The special purpose registers in the high half of the data address space are always visible. The data registers at 0xFx can be used to copy data between banks.
RAM banks other than bank 0 have all 128 bytes available. The stack (addressed via the stack pointer) is always on bank 0, no matter how the S register is set.
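The banked address decode described above can be sketched as follows (a hypothetical helper, not National Semiconductor code):

```python
def decode_data_address(addr, s):
    """Map an 8-bit COP8 data address to (region, bank, offset).

    Low half (0x00-0x7F): RAM in the bank selected by the S register.
    High half (0x80-0xFF): special-purpose registers, always visible.
    """
    if not 0 <= addr <= 0xFF:
        raise ValueError("data addresses are 8-bit")
    if addr <= 0x7F:
        return ("ram", s, addr)   # banked RAM access
    return ("sfr", None, addr)    # always-visible register space

print(decode_data_address(0x10, 2))   # ('ram', 2, 16)
print(decode_data_address(0xF0, 2))   # ('sfr', None, 240)
```

Stack accesses, which always target bank 0 regardless of S, would need a separate path in a fuller model.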
Control transfers
In addition to 3-byte and instructions which can address the entire address space, |
https://en.wikipedia.org/wiki/CompactRISC | CompactRISC is a family of instruction set architectures from National Semiconductor.
The architectures are designed according to reduced instruction set computing principles, and are mainly used in microcontrollers.
The subarchitectures of this family are the 16-bit CR16 and CR16C and the 32-bit CRX.
Architectures
Features of the CR16 family: compact implementations (less than 1 mm² at 250 nm), addressing of 2 MB (2^21 bytes), frequencies up to 66 MHz, and a hardware multiplier for 16-bit integers.
It has complex instructions such as bit manipulation, saving/restoring, and push/pop of several registers with a single command.
CR16 has 16 general purpose registers of 16 bits, and address registers of 21 bits wide. There are 8 special registers: program counter, interrupt stack pointer ISP, interrupt vector address register INTBASE, status register PSR, configuration register and 3 debug registers. Status register implements flags: C, T, L, F, Z, N, E, P, I.
Instructions are encoded in two-address form in several formats; usually they have a 16-bit encoding, but there are two formats for medium-immediate instructions with a 32-bit length. Typical opcode length is 4 bits (bits 9–12 of most encoding types). Basic encoding formats are:
Register-to-register,
Short 5-bit immediate value to register,
Medium immediate of 16-bit value to register (32-bit encoding),
Load/store relative with short 5-bit displacement (2-bit opcode),
Load/store relative with medium 18-bit displacement (32-bit encoding, 2-bit opcode).
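For illustration, the common opcode field can be pulled out with a shift and mask (hypothetical helper, assuming the opcode occupies bits 9–12 of a 16-bit instruction word as stated above):

```python
def opcode(insn):
    """Extract the 4-bit opcode field (bits 9-12) from a 16-bit word."""
    return (insn >> 9) & 0xF

# Opcode 0b0101 with all lower encoding bits set (made-up word).
word = (0b0101 << 9) | 0x1FF
print(bin(opcode(word)))  # 0b101
```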
CR16C comes with a different opcode encoding format, has 23–32-bit-wide address registers and provides two 32-bit general purpose registers.
CR16 implements traps and interrupts. Implementations of CR16 have a three-stage pipeline: fetch, decode, execute.
CR16 products
CR16 was used in several National Semiconductor microcontrollers, and since 2001 integrated microcontrollers with built-in flash memory were available. Since 2007, CR16-based IP has been available for licensing.
Referen |
https://en.wikipedia.org/wiki/Semigroup%20action | In algebra and theoretical computer science, an action or act of a semigroup on a set is a rule which associates to each element of the semigroup a transformation of the set in such a way that the product of two elements of the semigroup (using the semigroup operation) is associated with the composite of the two corresponding transformations. The terminology conveys the idea that the elements of the semigroup are acting as transformations of the set. From an algebraic perspective, a semigroup action is a generalization of the notion of a group action in group theory. From the computer science point of view, semigroup actions are closely related to automata: the set models the state of the automaton and the action models transformations of that state in response to inputs.
An important special case is a monoid action or act, in which the semigroup is a monoid and the identity element of the monoid acts as the identity transformation of a set. From a category theoretic point of view, a monoid is a category with one object, and an act is a functor from that category to the category of sets. This immediately provides a generalization to monoid acts on objects in categories other than the category of sets.
Another important special case is a transformation semigroup. This is a semigroup of transformations of a set, and hence it has a tautological action on that set. This concept is linked to the more general notion of a semigroup by an analogue of Cayley's theorem.
(A note on terminology: the terminology used in this area varies, sometimes significantly, from one author to another. See the article for details.)
Formal definitions
Let S be a semigroup. Then a (left) semigroup action (or act) of S is a set X together with an operation • : S × X → X which is compatible with the semigroup operation ∗ as follows:
for all s, t in S and x in X, s • (t • x) = (s ∗ t) • x.
This is the analogue in semigroup theory of a (left) group action, and is equivalent to a semigroup homomorphism into the set of functions o |
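To make the connection to automata concrete, here is a small hand-rolled check (a toy two-state automaton, not from the article) that a word action satisfies the compatibility law; note that reading letters left to right gives the right-action form of the law:

```python
from itertools import product

# Toy automaton on states {0, 1}: input 'a' flips the state, 'b' keeps it.
# Words over {'a', 'b'} form a monoid under concatenation, acting on states.
def act(word, state):
    for ch in word:
        state = {"a": 1 - state, "b": state}[ch]
    return state

# Compatibility: acting by the concatenation s + t equals acting by s,
# then by t (right-action form, since letters are processed left to right).
words = ["", "a", "b", "ab", "ba", "aab"]
assert all(act(s + t, x) == act(t, act(s, x))
           for s, t in product(words, repeat=2) for x in (0, 1))
print("compatibility law holds")
```

The empty word acts as the identity transformation, so this is in fact a monoid act in the sense described above.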
https://en.wikipedia.org/wiki/Multipurpose%20community%20telecenters | Multipurpose Community Telecenters are telecentre facilities that provide public access to a variety of communication and information services, such as libraries and seminar rooms. They are promoted by many governments and organisations, including the International Telecommunication Union. They are generally introduced to bring access to information and communication technologies to rural communities, but often face significant obstacles in the high cost of connectivity, low digital literacy in the community, and high maintenance costs, and are thus forced to shut down. Of the 23 MCTs built in rural Mexico, only 5 were still working two years later.
References
Community networks
Internet access
Public phones
Telecommunications for development |
https://en.wikipedia.org/wiki/CK722 | The CK722 was the first low-cost junction transistor available to the general public. It was a PNP germanium small-signal unit. Developed by Norman Krim, it was introduced by Raytheon in early 1953 for $7.60 each; the price was reduced to $3.50 in late 1954 and to $0.99 in 1956. Krim selected Radio Shack to sell the CK721 and CK722 through their catalog; he had a long-standing personal and business relationship with Radio Shack. The CK722s were selected "fall-outs" from Raytheon's premium-priced CK721 (which were themselves fall-outs from CK718 hearing-aid transistors). Raytheon actively encouraged hobbyists with design contests and advertisements.
In the 1950s and 1960s, hundreds of hobbyist electronics projects based around the CK722 transistor were published in popular books and magazines. Raytheon also participated in expanding the role of the CK721/CK722 as a hobbyist electronics device by publishing "Transistor Applications" and "Transistor Applications Volume 2" during the mid-1950s.
Construction
The original CK722s were direct fall-outs from CK718 hearing-aid transistors that did not meet specifications. These fall-outs were later stamped with CK721 or CK722 numbers based on gain, noise, and other dynamic characteristics. Early CK722s were plastic-encapsulated and had a black body. As Raytheon improved its production of hearing-aid transistors with the introduction of the smaller CK78x series, the body of the CK721/CK722s was changed to a metal case. Raytheon, however, kept the basic body size and used a unique method of taking the smaller CK78x rejects, inserting them into the larger body, and sealing it. The first metal-cased CK721/CK722s were blue, and the later ones were silver. More details can be found on Jack Ward's website, the Semiconductor Museum, or the CK722 Museum; see the external link reference below.
Engineers associated with the CK722
Norman Krim – father of the transistor hobbyist market
In the late 1930s, Norm Krim, then an engineer for Rayt |
https://en.wikipedia.org/wiki/Orthogonal%20complement | In the mathematical fields of linear algebra and functional analysis, the orthogonal complement of a subspace W of a vector space V equipped with a bilinear form B is the set W⊥ of all vectors in V that are orthogonal to every vector in W. Informally, it is called the perp, short for perpendicular complement. It is a subspace of V.
Example
Let be the vector space equipped with the usual dot product (thus making it an inner product space), and let with
then its orthogonal complement can also be defined as being
The fact that every column vector in is orthogonal to every column vector in can be checked by direct computation. The fact that the spans of these vectors are orthogonal then follows by bilinearity of the dot product. Finally, the fact that these spaces are orthogonal complements follows from the dimension relationships given below.
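The direct computation mentioned above can be sketched in a few lines (hypothetical vectors in R³, not the article's specific example):

```python
def dot(u, v):
    """Usual dot product of two equal-length tuples."""
    return sum(a * b for a, b in zip(u, v))

# W spanned by one vector; a candidate basis for its orthogonal complement.
W_basis = [(1, 2, 0)]
Wperp_basis = [(2, -1, 0), (0, 0, 1)]

# Every basis vector of W is orthogonal to every candidate complement vector;
# by bilinearity of the dot product, the spans are then orthogonal.
assert all(dot(w, p) == 0 for w in W_basis for p in Wperp_basis)
print("spans are orthogonal")
```

As in the article, checking orthogonality on spanning vectors suffices; the dimension count (1 + 2 = 3) then shows the spans are actually orthogonal complements.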
General bilinear forms
Let V be a vector space over a field K equipped with a bilinear form B. We define u to be left-orthogonal to v, and v to be right-orthogonal to u, when B(u, v) = 0. For a subset W of V, define the left orthogonal complement W⊥ to be
W⊥ = { x ∈ V : B(x, w) = 0 for all w ∈ W }. There is a corresponding definition of right orthogonal complement. For a reflexive bilinear form, where B(u, v) = 0 implies B(v, u) = 0 for all u and v in V, the left and right complements coincide. This will be the case if B is a symmetric or an alternating form.
The definition extends to a bilinear form on a free module over a commutative ring, and to a sesquilinear form extended to include any free module over a commutative ring with conjugation.
Properties
An orthogonal complement W⊥ is a subspace of V;
If X ⊆ Y, then Y⊥ ⊆ X⊥;
The radical V⊥ of V is a subspace of every orthogonal complement;
W ⊆ (W⊥)⊥;
If B is non-degenerate and V is finite-dimensional, then dim(W) + dim(W⊥) = dim(V).
If L1, …, Lr are subspaces of a finite-dimensional space V and L∗ = L1 ∩ ⋯ ∩ Lr, then L∗⊥ = L1⊥ + ⋯ + Lr⊥.
Inner product spaces
This section considers orthogonal complements in an inner product space
Two vectors u and v are called orthogonal if ⟨u, v⟩ = 0, which happens if and only if ‖u‖ ≤ ‖u + sv‖ for all scalars s.
If is any subset of an inner produc |
https://en.wikipedia.org/wiki/National%20LambdaRail | National LambdaRail (NLR) was a high-speed national computer network owned and operated by the U.S. research and education community. In November 2011, control of NLR was purchased from its university membership by billionaire Patrick Soon-Shiong. NLR ceased operations in March 2014.
Goals
The goals of the National LambdaRail project are:
To bridge the gap between leading-edge optical network research and state-of-the-art applications research;
To push beyond the technical and performance limitations of today’s Internet backbones;
To provide the growing set of major computationally intensive science (often termed e-Science) projects, initiatives and experiments with the dedicated bandwidth, deterministic performance characteristics, and/or other advanced network capabilities they need; and
To enable creative experimentation and innovation that characterized facilities-based network research during the early years of the Internet.
Description
NLR uses fiber-optic lines, and is the first transcontinental 10 Gigabit Ethernet network. Its high capacity (up to 1.6 Tbit/s aggregate), high bitrate (40 Gbit/s implemented; planning for 100 Gbit/s underway) and high availability (99.99% or more), enable National LambdaRail to support demanding research projects. Users include NASA, the National Oceanic and Atmospheric Administration, Oak Ridge National Laboratory, and over 280 research universities and other laboratories. In 2009 National LambdaRail was selected to provide wide-area networking for U.S. laboratories participating in research related to the Large Hadron Collider project, based near Geneva, Switzerland.
It is primarily oriented to aid terascale computing efforts and to be used as a network testbed for experimentation with large-scale networks. National LambdaRail is a university-based and -owned initiative, in contrast with the Abilene Network and Internet2, which are university-corporate sponsorships. National LambdaRail does not impose any acc |
https://en.wikipedia.org/wiki/Secure%20Communications%20Interoperability%20Protocol | The Secure Communications Interoperability Protocol (SCIP) is a US standard for secure voice and data communication, for circuit-switched one-to-one connections, not packet-switched networks. SCIP derived from the US Government Future Narrowband Digital Terminal (FNBDT) project.
SCIP supports a number of different modes, including national and multinational modes which employ different cryptography. Many nations and industries develop SCIP devices to support the multinational and national modes of SCIP.
SCIP has to operate over the wide variety of communications systems, including commercial land line telephone, military radios, communication satellites, Voice over IP and the several different cellular telephone standards. Therefore, it was designed to make no assumptions about the underlying channel other than a minimum bandwidth of 2400 Hz. It is similar to a dial-up modem in that once a connection is made, two SCIP phones first negotiate the parameters they need and then communicate in the best way possible.
US SCIP or FNBDT systems have been in use since 2001, beginning with the CONDOR secure cell phone. The standard is designed to cover wideband as well as narrowband voice and data security.
SCIP was designed by the Department of Defense Digital Voice Processor Consortium (DDVPC) in cooperation with the U.S. National Security Agency and is intended to solve problems with earlier NSA encryption systems for voice, including STU-III and Secure Terminal Equipment (STE) which made assumptions about the underlying communication systems that prevented interoperability with more modern wireless systems. STE sets can be upgraded to work with SCIP, but STU-III cannot. This has led to some resistance since various government agencies already own over 350,000 STU-III telephones at a cost of several thousand dollars each.
There are several components to the SCIP standard: key management, voice compression, encryption and a signalling plan for voice, data and multimedia appl |
https://en.wikipedia.org/wiki/Film%20look | Film look (also known as filmizing or film-look) is a process in which video is altered in overall appearance to appear to have been shot on film stock. The process is usually electronic, although filmizing can sometimes occur as an unintentional by-product of some optical techniques, such as telerecording. The effect is the exact opposite of a process called VidFIRE.
Differences between video and film
Frame rate: 24 frames per second for film; 25 or 30 frames per second for old SD video (PAL and NTSC, respectively). Modern video cameras shoot at 24 frames per second and up as well.
Shutter angle: Shorter (90° to 210°) for film, often ~350° for old video. Modern video cameras have adjustable electronic, or – in Arri's video cameras – mechanical shutters.
Dynamic range: film and video systems have widely varying limits to the luminance dynamic ranges that they can capture. Modern video cameras are much closer to the dynamic range of film, and their use is better understood by directors.
Field of view and depth of field: Depth of field is only indirectly related to the size of the image plane; it is a popular misconception that image-plane size directly determines DOF. Smaller image planes (whether film or sensor) require a proportionally shorter lens to achieve a similar field of view. This means that a frame with a 12-degree horizontal field of view requires a 50 mm lens on 16 mm film, a 100 mm lens on 35 mm film, and a 250 mm lens on 65 mm film; and a 250 mm lens delivers much shallower DOF than a 50 mm lens does. It follows that standard lenses on most consumer video cameras with small sensors provide much greater depth of field than 35 mm film. Digital cinema cameras like the Red One or Panavision Genesis, as well as some digital SLR cameras with video capability (such as the Canon EOS 5D Mark II), have sensors roughly equal in size to 35 mm film frames and thus show the same field-of-view characteristics.
Photo-chemical color-timing/grading: only possible with film; white balance adjustment |
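The lens figures quoted in the field-of-view comparison above follow from f = (w/2) / tan(θ/2); the film-gate widths below are approximate assumptions, not values from the article:

```python
import math

def focal_length(gate_width_mm, hfov_deg):
    """Focal length giving a horizontal field of view hfov_deg on an
    image plane of the given width: f = (w/2) / tan(hfov/2)."""
    return (gate_width_mm / 2) / math.tan(math.radians(hfov_deg) / 2)

# Approximate (assumed) horizontal gate widths for each film format.
for fmt, width in [("16 mm", 10.3), ("35 mm", 21.0), ("65 mm", 52.5)]:
    print(f"{fmt} film: {focal_length(width, 12):.0f} mm lens for 12° HFOV")
```

With these assumed widths the results land close to the 50 mm / 100 mm / 250 mm figures cited in the comparison.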
https://en.wikipedia.org/wiki/Computational%20photography | Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, or introduce features that were not possible at all with film based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range images, and light field cameras. Light field cameras use novel optical elements to capture three dimensional scene information which can then be used to produce 3D images, enhanced depth-of-field, and selective de-focusing (or "post focus"). Enhanced depth-of-field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.
The definition of computational photography has evolved to cover a number of subject areas in computer graphics, computer vision, and applied optics. These areas are given below, organized according to a taxonomy proposed by Shree K. Nayar. Within each area is a list of techniques, and for each technique one or two representative papers or books are cited. Deliberately omitted from the taxonomy are image processing (see also digital image processing) techniques applied to traditionally captured images in order to produce better images. Examples of such techniques are image scaling, dynamic range compression (i.e. tone mapping), color management, image completion (a.k.a. inpainting or hole filling), image compression, digital watermarking, and artistic image effects.
Also omitted are techniques that produce range data, volume data, 3D models, 4D light fields, 4D, 6D, or 8D BRDFs, or other high-dimensional image-based representations. Epsilon photography is a sub-field of computational photography.
Effect on photography
Photos taken using computational photography can allow amateurs to produce photographs rivalling the quality of professional photographe |
https://en.wikipedia.org/wiki/Sun%20Zhiwei | Sun Zhiwei (, born October 16, 1965) is a Chinese mathematician, working primarily in number theory, combinatorics, and group theory. He is a professor at Nanjing University.
Biography
Sun Zhiwei was born in Huai'an, Jiangsu. Sun and his twin brother Sun Zhihong proved a theorem about what are now known as the Wall–Sun–Sun primes.
Sun proved Sun's curious identity in 2002. In 2003, he presented a unified approach to three topics of Paul Erdős in combinatorial number theory: covering systems, restricted sumsets, and zero-sum problems (such as the Erdős–Ginzburg–Ziv theorem).
With Stephen Redmond, he posed the Redmond–Sun conjecture in 2006.
In 2013, he published a paper containing many conjectures on primes, one of which states that for any positive integer there are consecutive primes not exceeding such that , where denotes the -th prime.
He is the Editor-in-Chief of the Journal of Combinatorics and Number Theory.
Notes
External links
Zhi-Wei Sun's homepage
1965 births
20th-century Chinese mathematicians
21st-century Chinese mathematicians
Mathematicians from Jiangsu
Combinatorialists
Living people
Academic staff of Nanjing University
Number theorists
Scientists from Huai'an
Squares in number theory
Educators from Huai'an
Chinese twins |
https://en.wikipedia.org/wiki/Wi-Fi%20hotspot | A hotspot is a physical location where people can obtain Internet access, typically using Wi-Fi technology, via a wireless local-area network (WLAN) using a router connected to an Internet service provider.
Public hotspots may be created by a business for use by customers, such as coffee shops or hotels. Public hotspots are typically created from wireless access points configured to provide Internet access, controlled to some degree by the venue. In its simplest form, venues that have broadband Internet access can create public wireless access by configuring an access point (AP), in conjunction with a router to connect the AP to the Internet. A single wireless router combining these functions may suffice.
A private hotspot, often called tethering, may be configured on a smartphone or tablet that has a network data plan, to allow Internet access to other devices via Bluetooth pairing, or through the RNDIS protocol over USB, or even when both the hotspot device and the device[s] accessing it are connected to the same Wi-Fi network but one which does not provide Internet access. Similarly, a Bluetooth or USB OTG can be used by a mobile device to provide Internet access via Wi-Fi instead of a mobile network, to a device that itself has neither Wi-Fi nor mobile network capability.
Uses
The public can use a laptop or other suitable portable device to access the wireless connection (usually Wi-Fi) provided. Most of the estimated 150 million laptops, 14 million PDAs, and other emerging Wi-Fi devices sold per year in recent years include the Wi-Fi feature.
The iPass 2014 interactive map, which shows data provided by the analysts Maravedis Rethink, indicated that in December 2014 there were 46,000,000 hotspots worldwide and more than 22,000,000 roamable hotspots. More than 10,900 hotspots were on trains, planes and airports (Wi-Fi in motion) and more than 8,500,000 were "branded" hotspots (retail, cafés, hotels). The region with the largest number of public hotspots is Eu |
https://en.wikipedia.org/wiki/Arpwatch | arpwatch is a computer software tool for monitoring Address Resolution Protocol traffic on a computer network. It generates a log of observed pairing of IP addresses with MAC addresses along with a timestamp when the pairing appeared on the network. It also has the option of sending an email to an administrator when a pairing changes or is added.
Network administrators monitor ARP activity to detect ARP spoofing, network flip-flops, changed and new stations and address reuse.
arpwatch was developed by Lawrence Berkeley National Laboratory, Network Research Group, as open-source software and is released under the BSD license.
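The pairings arpwatch records can be post-processed with a short script. The sketch below assumes a whitespace-separated `arp.dat` layout of `<mac> <ip> <unix-timestamp> [hostname]`; the exact field order varies between arpwatch builds, so treat the format here as an assumption to adjust:

```python
import datetime

def parse_arp_dat_line(line):
    """Parse one line of an arpwatch arp.dat database.

    Assumed format (whitespace-separated):
        <mac> <ip> <unix-timestamp> [hostname]
    Field order differs between arpwatch builds; adjust as needed.
    """
    fields = line.split()
    mac, ip, ts = fields[0], fields[1], int(fields[2])
    hostname = fields[3] if len(fields) > 3 else None
    return {
        "mac": mac,
        "ip": ip,
        "seen": datetime.datetime.fromtimestamp(ts, datetime.timezone.utc),
        "hostname": hostname,
    }

entry = parse_arp_dat_line("0:d0:b7:1f:fe:e5 192.168.0.1 1700000000 gateway")
print(entry["ip"], entry["mac"], entry["hostname"])
```

A dictionary keyed on IP built from such entries makes it easy to flag the "flip-flops" (one IP seen with two MACs) that arpwatch alerts on.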
See also
ArpON
arping
Ettercap
References
External links
Source files
Free network management software
Linux security software
Unix security software
Software using the BSD license |
https://en.wikipedia.org/wiki/Alienware | Alienware is an American computer hardware subsidiary of Dell. Their product range is dedicated to gaming computers and can be identified by their alien-themed designs. Alienware was founded in 1996 by Nelson Gonzalez and Alex Aguila. The development of the company is also associated with Frank Azor, Arthur Lewis, Joe Balerdi, and Michael S. Dell. The company's corporate headquarters is located in The Hammocks, Miami, Florida.
History
Overview
Established in 1996 as Saikai of Miami, Inc. by Nelson Gonzalez and Alex Aguila, two childhood friends, Alienware assembles desktops, notebooks, workstations, and PC gaming consoles. According to employees, the name "Alienware" was chosen because of the founders' fondness for the hit television series The X-Files, which also inspired the science-fiction themed names of product lines such as Area-51, Hangar 18, and Aurora. In 1997, it changed its name to Alienware.
Acquisition and current status
Dell had considered buying the Alienware company since 2002, but did not agree to purchase the company until March 22, 2006. As a subsidiary, it retains control of its design and marketing while benefiting from Dell's purchasing power, economies of scale, and supply chain, which lowered its operating costs.
Initially, Dell maintained its competing XPS line of gaming PCs, often selling computers with similar specifications, which may have hurt Alienware's market share within its market segment. Due to corporate restructuring in the spring of 2008, the XPS brand was scaled down and the desktop line was eliminated, leaving only the XPS notebooks, but XPS desktop models had returned by the end of the year.
Product development of gaming PCs was consolidated with Dell's gaming division, with Alienware becoming Dell's premier gaming brand. On June 2, 2009, the M17x was introduced as the first Alienware/Dell branded system. This launch also expanded Alienware's global reach from 6 to 35 countries while supporting 17 different languages.
C |
https://en.wikipedia.org/wiki/Damage%20tolerance | In engineering, damage tolerance is a property of a structure relating to its ability to sustain defects safely until repair can be effected. The approach to engineering design to account for damage tolerance is based on the assumption that flaws can exist in any structure and such flaws propagate with usage. This approach is commonly used in aerospace engineering, mechanical engineering, and civil engineering to manage the extension of cracks in structure through the application of the principles of fracture mechanics. A structure is considered to be damage tolerant if a maintenance program has been implemented that will result in the detection and repair of accidental damage, corrosion and fatigue cracking before such damage reduces the residual strength of the structure below an acceptable limit.
History
Structures upon which human life depends have long been recognized as needing an element of fail-safety. When describing his flying machine, Leonardo da Vinci noted that "In constructing wings one should make one chord to bear the strain and a looser one in the same position so that if one breaks under the strain, the other is in the position to serve the same function."
Prior to the 1970s, the prevailing engineering philosophy of aircraft structures was to ensure that airworthiness was maintained with a single part broken, a redundancy requirement known as fail-safety. However, advances in fracture mechanics, along with infamous catastrophic fatigue failures such as those in the de Havilland Comet, prompted a change in requirements for aircraft. It was discovered that a phenomenon known as multiple-site damage could cause many small cracks in the structure, which grow slowly by themselves, to join one another over time, creating a much larger crack, and significantly reducing the expected time until failure.
Safe-life structure
Not all structure must demonstrate detectable crack propagation to ensure safety of operation. Some structures operate under th |
https://en.wikipedia.org/wiki/Residual%20strength | Residual strength is the load or force (usually mechanical) that a damaged object or material can still carry without failing. Material toughness, fracture size and geometry as well as its orientation all contribute to residual strength.
References
Materials science |
https://en.wikipedia.org/wiki/Specific%20weight | The specific weight, also known as the unit weight (symbol γ, the Greek letter gamma) is a volume-specific quantity defined as the weight per unit volume of a material.
A commonly used value is the specific weight of water on Earth at 4 °C, which is 9.807 kN/m3.
Definition
The specific weight, γ, of a material is defined as the product of its density, ρ, and the standard gravity, g: γ = ρg.
The density of the material is defined as mass per unit volume, typically measured in kg/m3. The standard gravity is acceleration due to gravity, usually given in m/s2, and on Earth usually taken as 9.81 m/s2.
Unlike density, specific weight is not a fixed property of a material. It depends on the value of the gravitational acceleration, which varies with location. Pressure may also affect values, depending upon the bulk modulus of the material, but generally, at moderate pressures, has a less significant effect than the other factors.
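The defining relation and its dependence on location can be sketched directly; the function name is illustrative, and the lunar gravity value is an approximation:

```python
def specific_weight(density, gravity=9.80665):
    """Specific weight gamma = density (rho) * gravitational acceleration (g).

    density in kg/m^3, gravity in m/s^2 -> result in N/m^3.
    """
    return density * gravity

# Fresh water (density ~1000 kg/m^3) on Earth: about 9.81 kN/m^3.
print(specific_weight(1000.0))

# Same material, different location: on the Moon (g ~ 1.62 m/s^2)
# the density is unchanged but the specific weight drops sharply.
print(specific_weight(1000.0, gravity=1.62))
```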
Applications
Fluid mechanics
In fluid mechanics, specific weight represents the force exerted by gravity on a unit volume of a fluid. For this reason, units are expressed as force per unit volume (e.g., N/m3 or lbf/ft3). Specific weight can be used as a characteristic property of a fluid.
Soil mechanics
Specific weight is often used as a property of soil to solve earthwork problems.
In soil mechanics, specific weight may refer to any of several unit weights of a soil, such as the bulk (moist), dry, saturated, or submerged (buoyant) unit weight.
Civil and mechanical engineering
Specific weight can be used in civil engineering and mechanical engineering to determine the weight of a structure designed to carry certain loads while remaining intact and remaining within limits regarding deformation.
Specific weight of water
Specific weight of air
References
External links
Submerged weight calculator
Specific weight calculator
http://www.engineeringtoolbox.com/density-specific-weight-gravity-d_290.html
http://www.themeter.net/pesi-spec_e.htm
Soil mechanics
Fluid mechanics
Physical chemistry
Physical quantities
Density
Volume-specific quantities |
https://en.wikipedia.org/wiki/AVG%20AntiVirus | AVG AntiVirus (previously known as AVG, an abbreviation of Anti-Virus Guard) is a line of antivirus software developed by AVG Technologies, a subsidiary of Avast, a part of Gen Digital. It is available for Windows, macOS and Android.
History
The brand AVG comes from Grisoft's first product, Anti-Virus Guard, launched in 1992 in the Czech Republic. In 1997, the first AVG licenses were sold in Germany and the UK. AVG was introduced in the US in 1998.
The AVG Free Edition helped raise awareness of the AVG product line. In 2006, the AVG security package grew to include anti-spyware as AVG Technologies acquired ewido Networks, an anti-spyware group. AVG Technologies acquired Exploit Prevention Labs (XPL) in December 2007 and incorporated that company's LinkScanner safe search and surf technology into the AVG 8.0 security product range released in March 2008. In January 2009, AVG Technologies acquired Sana Security, a developer of identity theft prevention software. This software was incorporated into the AVG security product range in March 2009.
According to AVG Technologies, the company has more than 200 million active users worldwide, including more than 100 million who use their products and services on mobile devices.
On 7 July 2016, Avast announced an agreement to acquire AVG for $1.3 billion.
Platform support
AVG provides AVG AntiVirus Free for Windows, AVG AntiVirus for Mac for macOS and AVG AntiVirus for Android for Android devices. All are freemium products: They are free to download, install, update and use, but for technical support a premium plan must be purchased.
AVG stopped providing new features for Windows XP and Windows Vista in January 2019. New versions require Windows 7 or later; virus definitions are still provided for previous versions.
Features
AVG features most of the common functions available in modern antivirus and Internet security programs, including periodic scans, scans of sent and received emails (including adding footers to t |
https://en.wikipedia.org/wiki/First%20Department | The First Department () was in charge of secrecy and political security of the workplace of every enterprise or institution of the Soviet Union that dealt with any kind of technical or scientific information (plants, R&D institutions, etc.) or had printing capabilities (e.g., publishing houses).
Every branch of the Central Statistical Administration and its successor the State Statistics Committee (Goskomstat) also had a First Department to control access, distribution, and publication of official economic, population, and social statistics. Copies of especially sensitive documents were numbered and labeled or stamped as secret or "For official use only". In some cases, the "official use" version of documents mimicked the public use versions in format but provided much more detailed information.
The first departments were a part of the KGB and not subordinated to the management of the enterprise or institution. Among their functions was control of access to information considered a state secret, of foreign travel, and of publications. The First Department also kept account of the usage of copying equipment (photocopiers, printing presses, typewriters, etc.) to prevent unsanctioned copying, including samizdat.
See also
Censorship
Spetskhran
References
Soviet internal politics
Soviet phraseology
Data security
KGB |
https://en.wikipedia.org/wiki/FICON | FICON (Fibre Connection) is the IBM proprietary name for the ANSI FC-SB-3 Single-Byte Command Code Sets-3 Mapping Protocol for Fibre Channel (FC). It is an FC layer 4 protocol used to map both IBM's antecedent (either ESCON or parallel Bus and Tag) channel-to-control-unit cabling infrastructure and protocol onto standard FC services and infrastructure. The topology is a fabric utilizing FC switches or directors. Valid data rates include 1, 2, 4, 8 and 16 gigabits per second at distances up to 100 km.
FICON was introduced in 1998 as part of fifth generation of IBM System/390 mainframes. After 2011 FICON replaced ESCON in new IBM mainframe deployments because of FICON's technical superiority (especially its higher performance) and lower cost.
Protocol internals
Each FICON channel port is capable of multiple concurrent data exchanges (a maximum of 32) in full duplex mode. Information for active exchanges is transferred in Fibre Channel sequences mapped as FICON Information Units (IUs) which consist of one to four Fibre Channel frames, only the first of which carries 32 bytes of FICON (FC-SB-3) mapping protocol. Each FICON exchange may transfer one or many such IUs.
FICON channels use five classes of IUs to conduct information transfers between a channel and a control unit. They are: Data, Command, Status, Control, and lastly Link Control. Only a channel port may send Command or Command and Data IUs, while only a control unit port may send Status IUs.
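The direction rule stated above (only a channel port sends Command IUs, only a control-unit port sends Status IUs) can be modeled as a small validity check. This is an illustrative simplification, not the full FC-SB-3 rule set, and the role names are invented for the sketch:

```python
from enum import Enum

class IUType(Enum):
    DATA = "data"
    COMMAND = "command"
    STATUS = "status"
    CONTROL = "control"
    LINK_CONTROL = "link_control"

def may_send(port_role, iu):
    """Check the send rule sketched in the text.

    port_role is "channel" or "control_unit". Data, Control and
    Link Control IUs are allowed in either direction here; the real
    protocol imposes further sequencing constraints.
    """
    if iu is IUType.COMMAND:
        return port_role == "channel"
    if iu is IUType.STATUS:
        return port_role == "control_unit"
    return True

print(may_send("channel", IUType.COMMAND))       # True
print(may_send("control_unit", IUType.COMMAND))  # False
```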
As with prior Z channel protocols, there is a concept of a channel to control unit "connection". In its most primitive form, a connection is associated with a single channel program. In practice, a single channel program may result in the establishment of several sequential connections. This normally occurs during periods where data transfers become dormant while waiting for some type of independent device activity to complete (such as the physical positioning of tape or a disk access arm). In such |
https://en.wikipedia.org/wiki/Codress%20message | In military cryptography, a codress message is an encrypted message whose address is also encrypted. This is usually done to prevent traffic analysis.
References
Cryptography |
https://en.wikipedia.org/wiki/Multiple%20encryption | Multiple encryption is the process of encrypting an already encrypted message one or more times, either using the same or a different algorithm. It is also known as cascade encryption, cascade ciphering, multiple ciphering, and superencipherment. Superencryption refers to the outer-level encryption of a multiple encryption.
Some cryptographers, like Matthew Green of Johns Hopkins University, say multiple encryption addresses a problem that mostly doesn't exist: "Modern ciphers rarely get broken... You're far more likely to get hit by malware or an implementation bug than you are to suffer a catastrophic attack on AES." Yet in that quote lies the reason for multiple encryption, namely poor implementation: using two different cryptomodules and keying processes from two different vendors requires both vendors' wares to be compromised for security to fail completely.
Independent keys
Picking any two ciphers, if the key used is the same for both, the second cipher could possibly undo the first cipher, partly or entirely. This is true of ciphers where the decryption process is exactly the same as the encryption process—the second cipher would completely undo the first. If an attacker were to recover the key through cryptanalysis of the first encryption layer, the attacker could possibly decrypt all the remaining layers, assuming the same key is used for all layers.
To prevent that risk, one can use keys that are statistically independent for each layer (e.g. independent RNGs).
Ideally each key should have separate and different generation, sharing, and management processes.
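The layering with independent keys can be illustrated with a toy cascade. The construction below, a SHA-256 counter-mode XOR keystream, is for illustration only and is not a vetted cipher; the point is only that each layer uses its own independently generated key, so recovering one key undoes one layer, not both:

```python
import hashlib

def keystream_xor(key, data):
    """Toy stream cipher: XOR data with a SHA-256-based keystream.

    Illustrates layering only -- NOT a real cipher construction.
    Encryption and decryption are the same operation.
    """
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Two keys generated independently of each other (fixed here for determinism).
key1 = b"layer-one-key---"
key2 = b"layer-two-key---"

msg = b"attack at dawn"
ct = keystream_xor(key2, keystream_xor(key1, msg))  # cascade: inner then outer layer
pt = keystream_xor(key1, keystream_xor(key2, ct))   # peel the layers in reverse
print(pt == msg)
```

Had the same key been reused for both layers, the two identical XOR keystreams would cancel and the "double" encryption would return the plaintext, which is exactly the hazard the paragraph above describes.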
Independent Initialization Vectors
For en-/decryption processes that require sharing an initialization vector (IV) or nonce, these are typically openly shared or made known to the recipient (and everyone else). It is good security policy never to provide the same data in both plaintext and ciphertext when using the same key and IV. Therefore, it is recommended (although at this moment without spe |
https://en.wikipedia.org/wiki/Central%20Food%20Technological%20Research%20Institute | The Central Food Technological Research Institute (CFTRI) is an Indian food research institute and laboratory headquartered in Mysore, India. It is a constituent laboratory of the Council of Scientific and Industrial Research.
India is the world's second largest food grain, fruit and vegetable producer, and the institute is engaged in research in the production and handling of grains, pulses, oilseed, along with spices, fruits, vegetables, meat, fish, and poultry.
Establishment
CFTRI was established on 21 October 1950, soon after the Dominion of India was constituted into a republic, under the Council of Scientific and Industrial Research, a research and development organisation co-founded by Sir A. R. Mudaliyar.
Maharaja Jayachamaraja Wadiyar donated Cheluvambavilas Palace and its vast campus to house the institute, where it is headquartered. It also has its resource centres at Hyderabad, Lucknow, and Mumbai, rendering technical assistance to numerous entrepreneurs.
Institute
The institute has nearly two hundred scientists, technologists, and engineers, and over a hundred technicians, skilled workers, and support staff. There are sixteen research and development departments, including laboratories focussing on food engineering, food biotechnology, microbiology, grain sciences, sensory science, biochemistry, molecular nutrition, and food safety.
The institute has developed over 300 products, processes, and equipment designs, and most of these technologies have been released to over 4000 licensees for commercial application. The institute develops technologies to increase efficiency and reduce post-harvest losses, add convenience, increase export, find new sources of food products, integrate human resources in food industries, reduce costs, and modernise. It holds several patents and has published findings in reputed journals.
Notes
External links
https://web.archive.org/web/20130115030611/http://www.csir.res.in/External/Heads/aboutcsir/lab_directory.htm
|
https://en.wikipedia.org/wiki/Internet%20Storage%20Name%20Service | In computing, the proposed Internet Storage Name Service (iSNS) protocol allows automated discovery, management and configuration of iSCSI and Fibre Channel devices (using iFCP gateways) on a TCP/IP network.
Features
iSNS provides management services similar to those found in Fibre Channel networks, allowing a standard IP network to operate in much the same way that a Fibre Channel storage area network does. Because iSNS is able to emulate Fibre Channel fabric services and manage both iSCSI and Fibre Channel devices, an iSNS server can be used as a consolidated configuration point for an entire storage network. However, the use of iSNS is optional for iSCSI while it is required for iFCP. Additionally, an iSNS implementation is not required by the standard to provide support for both of these protocols.
Components
The iSNS standard defines four components:
The iSNS Protocol (iSNSP) is a protocol that specifies how iSNS clients and servers communicate. It is intended to be used by various platforms, including switches and targets as well as server hosts.
iSNS Clients iSNS clients are part of iSNSP-aware storage devices. iSNS clients initiate transactions with iSNS servers using the iSNSP, register device attribute information in a common Discovery Domain (DD), download information about other registered clients and receive asynchronous notification of events that occur in their DD(s).
iSNS Servers iSNS servers respond to iSNS protocol queries and requests made by iSNS clients using the iSNSP. iSNS servers initiate iSNSP State Change Notifications and store properly authenticated information submitted by a registration request in an iSNS database.
iSNS Databases iSNS databases are the information repositories for iSNS server(s). They maintain information about iSNS client attributes; while implementations will vary, a directory-enabled implementation of iSNS, for example, might store client attributes in an LDAP directory.
Services
An iSNS implementation |
https://en.wikipedia.org/wiki/KCDSA | KCDSA (Korean Certificate-based Digital Signature Algorithm) is a digital signature algorithm created by a team led by the Korea Internet & Security Agency (KISA). It is an ElGamal variant, similar to the Digital Signature Algorithm and GOST R 34.10-94. The standard algorithm is implemented over GF(p), but an elliptic curve variant (EC-KCDSA) is also specified.
KCDSA requires a collision-resistant cryptographic hash function that can produce a variable-sized output (from 128 to 256 bits, in 32-bit increments). HAS-160, another Korean standard, is the suggested choice.
Domain parameters
p: a large prime such that |p| = 512 + 256i for i = 0, 1, ..., 6.
q: a prime factor of p − 1 such that |q| = 128 + 32j for j = 0, 1, ..., 4.
g: a base element of order q in GF(p).
The revised version of the spec additionally requires either that (p − 1)/2q be prime or that all of its prime factors are greater than q.
User parameters
x: signer's private signature key, chosen such that 0 < x < q.
y: signer's public verification key, computed by y = g^x̄ mod p, where x̄ = x^(-1) mod q.
z: a hash-value of Cert Data, i.e., z = h(Cert Data).
The 1998 spec is unclear about the exact format of the "Cert Data". In the revised spec, z is defined as being the bottom B bits of the public key y, where B is the block size of the hash function in bits (typically 512 or 1024). The effect is that the first input block corresponds to y mod 2^B.
z: the lower B bits of y, i.e., z = y mod 2^B.
Hash Function
h: a collision-resistant hash function producing |q|-bit digests.
Signing
To sign a message m:
Signer randomly picks an integer k with 0 < k < q and computes w = g^k mod p.
Then computes the first part: r = h(w).
Then computes the second part: s = x(k − e) mod q, where e = r ⊕ h(z || m).
If s = 0, the process must be repeated from the start.
The signature is (r, s).
The specification is vague about how the integer w is to be reinterpreted as a byte-string input to the hash function. In the example in section C.1 the interpretation is consistent with using the definition of I2OSP from PKCS#1/RFC3447.
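A toy-scale sketch of KCDSA signing and verification, with tiny insecure parameters, a truncated SHA-256 standing in for the |q|-bit hash, and a fixed nonce for determinism; every concrete value here is illustrative only:

```python
import hashlib

# Toy domain parameters -- far too small for real use.
p, q = 607, 101               # q divides p - 1 (606 = 6 * 101)
g = pow(2, (p - 1) // q, p)   # base element of order q in GF(p)

def h(data):
    """Stand-in hash: SHA-256 truncated to 7 bits (q = 101 < 2^7)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % 128

x = 37                        # private key, 0 < x < q
xbar = pow(x, -1, q)          # x^(-1) mod q
y = pow(g, xbar, p)           # public key y = g^(x^(-1)) mod p
z = y.to_bytes(2, "big")      # stand-in for the Cert Data hash value

def sign(m, k):
    w = pow(g, k, p)
    r = h(w.to_bytes(2, "big"))
    e = (r ^ h(z + m)) % q
    s = (x * (k - e)) % q     # retry with a fresh k if s == 0
    return r, s

def verify(m, r, s):
    e = (r ^ h(z + m)) % q
    w = (pow(y, s, p) * pow(g, e, p)) % p
    return h(w.to_bytes(2, "big")) == r

m = b"hello"
r, s = sign(m, k=55)
print(verify(m, r, s))
```

The check works because y^s · g^e = g^(x̄·x(k−e) + e) = g^k = w, so hashing the recomputed w reproduces r.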
Verifying
To verify a signature (r, s) on a message m:
Verifier checks that 0 ≤ r < 2^|q| and 0 < s < q, and rejects the signature as invalid if not.
Verifier computes e = r ⊕ h(z || m) and w′ = y^s · g^e mod p.
Verifier checks if h(w′) = r. If so |
https://en.wikipedia.org/wiki/Serology | Serology is the scientific study of serum and other body fluids. In practice, the term usually refers to the diagnostic identification of antibodies in the serum. Such antibodies are typically formed in response to an infection (against a given microorganism), against other foreign proteins (in response, for example, to a mismatched blood transfusion), or to one's own proteins (in instances of autoimmune disease). In each case, the procedure is simple.
Serological tests
Serological tests are diagnostic methods that are used to identify antibodies and antigens in a patient's sample. Serological tests may be performed to diagnose infections and autoimmune illnesses, to check if a person has immunity to certain diseases, and in many other situations, such as determining an individual's blood type. Serological tests may also be used in forensic serology to investigate crime scene evidence. Several methods can be used to detect antibodies and antigens, including ELISA, agglutination, precipitation, complement-fixation, and fluorescent antibodies and more recently chemiluminescence.
Applications
Microbiology
In microbiology, serologic tests are used to determine if a person has antibodies against a specific pathogen, or to detect antigens associated with a pathogen in a person's sample. Serologic tests are especially useful for organisms that are difficult to culture by routine laboratory methods, like Treponema pallidum'' (the causative agent of syphilis), or viruses.
The presence of antibodies against a pathogen in a person's blood indicates that they have been exposed to that pathogen |
https://en.wikipedia.org/wiki/T-schema | The T-schema ("truth schema", not to be confused with "Convention T") is used to check if an inductive definition of truth is valid, which lies at the heart of any realisation of Alfred Tarski's semantic theory of truth. Some authors refer to it as the "Equivalence Schema", a synonym introduced by Michael Dummett.
The T-schema is often expressed in natural language, but it can be formalized in many-sorted predicate logic or modal logic; such a formalisation is called a "T-theory." T-theories form the basis of much fundamental work in philosophical logic, where they are applied in several important controversies in analytic philosophy.
As expressed in semi-natural language (where 'S' is the name of the sentence abbreviated to S):
'S' is true if and only if S.
Example: 'snow is white' is true if and only if snow is white.
The inductive definition
By using the schema one can give an inductive definition for the truth of compound sentences. Atomic sentences are assigned truth values disquotationally. For example, the sentence "'Snow is white' is true" becomes materially equivalent with the sentence "snow is white", i.e. 'snow is white' is true if and only if snow is white. The truth of more complex sentences is defined in terms of the components of the sentence:
A sentence of the form "A and B" is true if and only if A is true and B is true
A sentence of the form "A or B" is true if and only if A is true or B is true
A sentence of the form "if A then B" is true if and only if A is false or B is true; see material implication.
A sentence of the form "not A" is true if and only if A is false
A sentence of the form "for all x, A(x)" is true if and only if, for every possible value of x, A(x) is true.
A sentence of the form "for some x, A(x)" is true if and only if, for some possible value of x, A(x) is true.
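The inductive clauses above translate directly into a recursive evaluator. In this sketch the tuple encoding, function name, and use of one-place predicates over a finite domain for the quantifier clauses are all illustrative choices:

```python
def is_true(sentence, valuation, domain=()):
    """Truth of a sentence via the inductive clauses above.

    Atomic sentences are strings looked up in `valuation`; compound
    sentences are tuples; quantified sentences carry a one-place
    predicate evaluated over a finite `domain`.
    """
    if isinstance(sentence, str):          # atomic: assigned disquotationally
        return valuation[sentence]
    op, *args = sentence
    if op == "not":
        return not is_true(args[0], valuation, domain)
    if op == "and":
        return is_true(args[0], valuation, domain) and is_true(args[1], valuation, domain)
    if op == "or":
        return is_true(args[0], valuation, domain) or is_true(args[1], valuation, domain)
    if op == "if":                         # material implication
        return (not is_true(args[0], valuation, domain)) or is_true(args[1], valuation, domain)
    if op == "forall":
        return all(args[0](v) for v in domain)
    if op == "exists":
        return any(args[0](v) for v in domain)
    raise ValueError(f"unknown connective: {op}")

v = {"snow is white": True, "grass is red": False}
print(is_true(("and", "snow is white", ("not", "grass is red")), v))  # True
print(is_true(("if", "grass is red", "snow is white"), v))            # True
print(is_true(("exists", lambda n: n > 1), v, domain=range(3)))       # True
```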
Predicates for truth that meet all of these criteria are called a "satisfaction classes", a notion often defined with respect to a fixed language (such a |
https://en.wikipedia.org/wiki/Wide%20character | A wide character is a computer character datatype that generally has a size greater than the traditional 8-bit character. The increased datatype size allows for the use of larger coded character sets.
History
During the 1960s, mainframe and mini-computer manufacturers began to standardize around the 8-bit byte as their smallest datatype. The 7-bit ASCII character set became the industry standard method for encoding alphanumeric characters for teletype machines and computer terminals. The extra bit was used for parity, to ensure the integrity of data storage and transmission. As a result, the 8-bit byte became the de facto datatype for computer systems storing ASCII characters in memory.
Later, computer manufacturers began to make use of the spare bit to extend the ASCII character set beyond its limited set of English alphabet characters. 8-bit extensions such as IBM code page 37, PETSCII and ISO 8859 became commonplace, offering terminal support for Greek, Cyrillic, and many others. However, such extensions were still limited in that they were region specific and often could not be used in tandem. Special conversion routines had to be used to convert from one character set to another, often resulting in destructive translation when no equivalent character existed in the target set.
In 1989, the International Organization for Standardization began work on the Universal Character Set (UCS), a multilingual character set that could be encoded using either a 16-bit (2-byte) or 32-bit (4-byte) value. These larger values required the use of a datatype larger than 8-bits to store the new character values in memory. Thus the term wide character was used to differentiate them from traditional 8-bit character datatypes.
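The storage consequences are easy to observe in Python, whose str type abstracts over code points; the euro sign (U+20AC) lies outside any 8-bit set's range of values, so a wider datatype, or a multi-byte encoding, is required:

```python
ch = "\u20ac"  # EURO SIGN, code point U+20AC

# A 7-bit ASCII char cannot hold this value at all.
try:
    ch.encode("ascii")
except UnicodeEncodeError:
    print("not representable in 7-bit ASCII")

# Bytes needed per character under common encodings: UTF-8 is
# variable-width, UTF-16 uses 16-bit code units, UTF-32 fixed 32-bit.
for enc in ("utf-8", "utf-16-le", "utf-32-le"):
    print(enc, len(ch.encode(enc)), "bytes")
```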
Relation to UCS and Unicode
A wide character refers to the size of the datatype in memory. It does not state how each value in a character set is defined. Those values are instead defined using character sets, with UCS and Unicode simply being two |