https://en.wikipedia.org/wiki/Maximal%20and%20minimal%20elements
In mathematics, especially in order theory, a maximal element of a subset S of some preordered set P is an element of S that is not smaller than any other element in S. A minimal element of a subset S of some preordered set is defined dually as an element of S that is not greater than any other element in S. The notions of maximal and minimal elements are weaker than those of greatest element and least element, which are also known, respectively, as maximum and minimum. The maximum of a subset S of a preordered set is an element of S which is greater than or equal to any other element of S, and the minimum of S is again defined dually. In the particular case of a partially ordered set, while there can be at most one maximum and at most one minimum, there may be multiple maximal or minimal elements. Specializing further to totally ordered sets, the notions of maximal element and maximum coincide, and the notions of minimal element and minimum coincide. As an example, in the collection S = {{d, o}, {d, o, g}, {g, o, a, d}, {o, a, f}} ordered by containment, the element {d, o} is minimal as it contains no sets in the collection, the element {g, o, a, d} is maximal as there are no sets in the collection which contain it, the element {d, o, g} is neither, and the element {o, a, f} is both minimal and maximal. By contrast, neither a maximum nor a minimum exists for S. Zorn's lemma states that every partially ordered set for which every totally ordered subset has an upper bound contains at least one maximal element. This lemma is equivalent to the well-ordering theorem and the axiom of choice and implies major results in other mathematical areas like the Hahn–Banach theorem, the Kirszbraun theorem, Tychonoff's theorem, the existence of a Hamel basis for every vector space, and the existence of an algebraic closure for every field. Definition Let (P, ≤) be a preordered set and let S ⊆ P. A maximal element of S is an element m ∈ S such that if some s ∈ S satisfies m ≤ s, then necessarily s ≤ m. Similarly, a minimal element of S is an element m ∈ S such that if some s ∈ S satisfies s ≤ m, then necessarily m ≤ s. Equivalently, when the order is a partial order, m ∈ S is a maximal element of S if there is no s ∈ S with s > m.
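The containment example above can be checked mechanically. The following is a small sketch (code and function names mine, not from the article), using Python's proper-subset operator `<` on frozensets as the strict containment order:

```python
# The article's example collection, ordered by set containment.
collection = [
    frozenset("do"),    # {d, o}
    frozenset("dog"),   # {d, o, g}
    frozenset("goad"),  # {g, o, a, d}
    frozenset("oad"),   # placeholder? no -- use the article's {o, a, f}:
]
collection[3] = frozenset("oaf")  # {o, a, f}

def is_maximal(s, coll):
    # s is maximal if no element of coll strictly contains it.
    return not any(s < t for t in coll)

def is_minimal(s, coll):
    # s is minimal if it strictly contains no element of coll.
    return not any(t < s for t in coll)

maximal = [set(s) for s in collection if is_maximal(s, collection)]
minimal = [set(s) for s in collection if is_minimal(s, collection)]
print(maximal)  # {g, o, a, d} and {o, a, f}
print(minimal)  # {d, o} and {o, a, f}
```

Note that {o, a, f} appears in both lists, while {d, o, g} appears in neither, matching the article; and since no single element is above (or below) all others, the collection has no maximum or minimum.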
https://en.wikipedia.org/wiki/Universal%20set
In set theory, a universal set is a set which contains all objects, including itself. In set theory as usually formulated, it can be proven in multiple ways that a universal set does not exist. However, some non-standard variants of set theory include a universal set. Reasons for nonexistence Many set theories do not allow for the existence of a universal set. There are several different arguments for its non-existence, based on different choices of axioms for set theory. Regularity In Zermelo–Fraenkel set theory, the axiom of regularity and axiom of pairing prevent any set from containing itself. For any set A, the set {A} (constructed using pairing) necessarily contains an element disjoint from {A}, by regularity. Because its only element is A, it must be the case that A is disjoint from {A}, and therefore that A does not contain itself. Because a universal set would necessarily contain itself, it cannot exist under these axioms. Russell's paradox Russell's paradox prevents the existence of a universal set in set theories that include Zermelo's axiom of comprehension. This axiom states that, for any formula φ(x) and any set A, there exists a set {x ∈ A : φ(x)} that contains exactly those elements x of A that satisfy φ. As a consequence of this axiom, to every set A there corresponds another set R = {x ∈ A : x ∉ x}, consisting of the elements of A that do not contain themselves. R cannot contain itself, because it consists only of sets that do not contain themselves. It cannot be a member of A, because if it were it would be included as a member of itself, by its definition, contradicting the fact that it cannot contain itself. Therefore, every set A is non-universal: there exists a set R that it does not contain. This indeed holds even with predicative comprehension and over intuitionistic logic. Cantor's theorem Another difficulty with the idea of a universal set concerns the power set of the set of all sets. Because this power set is a set of sets, it would necessarily be a subset of the set of all sets, provided that both exist.
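The Russell-set argument can be illustrated concretely. In the sketch below (an assumption of mine, not from the article), hereditarily finite sets are modeled by Python frozensets, which can never contain themselves; for any family A, the set of members of A that do not contain themselves is never itself a member of A:

```python
def russell(family):
    # R = {x in A : x not in x} -- elements of the family that
    # do not contain themselves.
    return frozenset(x for x in family if x not in x)

# A small family: { {}, {{}} }
A = frozenset({frozenset(), frozenset({frozenset()})})
R = russell(A)

# The paradox argument guarantees R is never a member of A.
print(R in A)  # False
```

The same check succeeds for any family built this way, mirroring the article's conclusion that every set fails to contain some set, and hence no set is universal.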
https://en.wikipedia.org/wiki/Completing%20the%20square
In elementary algebra, completing the square is a technique for converting a quadratic polynomial of the form ax² + bx + c to the form a(x − h)² + k for some values of h and k. In other words, completing the square places a perfect square trinomial inside of a quadratic expression. Completing the square is used in solving quadratic equations, deriving the quadratic formula, graphing quadratic functions, evaluating integrals in calculus (such as Gaussian integrals with a linear term in the exponent), and finding Laplace transforms. In mathematics, completing the square is often applied in any computation involving quadratic polynomials. History The technique of completing the square was known in the Old Babylonian Empire. Muhammad ibn Musa Al-Khwarizmi, a famous polymath who wrote the early algebraic treatise Al-Jabr, used the technique of completing the square to solve quadratic equations. Overview Background The formula in elementary algebra for computing the square of a binomial is: (x + p)² = x² + 2px + p². For example: (x + 3)² = x² + 6x + 9, where p = 3. In any perfect square, the coefficient of x is twice the number p, and the constant term is equal to p². Basic example Consider the following quadratic polynomial: x² + 10x + 28. This quadratic is not a perfect square, since 28 is not the square of 5: (x + 5)² = x² + 10x + 25. However, it is possible to write the original quadratic as the sum of this square and a constant: x² + 10x + 28 = (x + 5)² + 3. This is called completing the square. General description Given any monic quadratic x² + bx + c, it is possible to form a square that has the same first two terms: (x + b/2)² = x² + bx + b²/4. This square differs from the original quadratic only in the value of the constant term. Therefore, we can write x² + bx + c = (x + b/2)² + k, where k = c − b²/4. This operation is known as completing the square. For example: x² + 6x + 11 = (x + 3)² + 2. Non-monic case Given a quadratic polynomial of the form ax² + bx + c, it is possible to factor out the coefficient a, and then complete the square for the resulting monic polynomial. Example: 3x² + 12x + 27 = 3(x² + 4x + 9) = 3((x + 2)² + 5) = 3(x + 2)² + 15. This process of factoring out the coefficient a can further be simplified by only factorising it out of the first 2 terms.
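The general description reduces to two closed-form values, h = −b/(2a) and k = c − b²/(4a). A short sketch (my own code, not from the article) computing and checking them:

```python
def complete_the_square(a, b, c):
    # Rewrite a*x^2 + b*x + c as a*(x - h)^2 + k.
    h = -b / (2 * a)
    k = c - b * b / (4 * a)
    return h, k

# The basic example above: x^2 + 10x + 28 = (x + 5)^2 + 3
h, k = complete_the_square(1, 10, 28)
print(h, k)  # -5.0 3.0, i.e. (x + 5)^2 + 3

# Verify the identity at a few sample points.
for x in (-3.0, 0.0, 2.5):
    assert abs((x * x + 10 * x + 28) - ((x - h) ** 2 + k)) < 1e-9
```

The same function handles the non-monic case, since a is carried through both formulas.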
https://en.wikipedia.org/wiki/Distress%20signal
A distress signal, also known as a distress call, is an internationally recognized means for obtaining help. Distress signals are communicated by transmitting radio signals, displaying a visually observable item or illumination, or making a sound audible from a distance. A distress signal indicates that a person or group of people, watercraft, aircraft, or other vehicle is threatened by a serious or imminent danger and requires immediate assistance. Use of distress signals in other circumstances may be against local or international law. An urgency signal is available to request assistance in less critical situations. For distress signalling to be most effective, two parameters must be communicated: alert or notification of an emergency in progress, and position or location (or localization or pinpointing) of the party in distress. For example, a single aerial flare alerts observers to the existence of a vessel in distress somewhere in the general direction of the flare sighting on the horizon, but extinguishes within a minute or less. A hand-held flare burns for three minutes and can be used to pinpoint more precisely the exact location of the party in trouble. An EPIRB both alerts authorities and provides position information. Maritime Distress signals at sea are defined in the International Regulations for Preventing Collisions at Sea and in the International Code of Signals. Mayday signals must only be used where there is grave and imminent danger to life. Otherwise, urgent signals such as pan-pan can be sent. Most jurisdictions have large penalties for false, unwarranted, or prank distress signals. Distress can be indicated by any of the following officially sanctioned methods: transmitting a spoken Mayday message by radio over very high frequency channel 16 (156.8 MHz) or medium frequency on 2182 kHz, or transmitting a digital distress signal by activating (or pressing) the distress button (or key) on a marine radio equipped with digital selective calling (DSC).
https://en.wikipedia.org/wiki/Broadcast%20auxiliary%20service
A broadcast auxiliary service or BAS is any radio frequency system used by a radio station or TV station which is not part of its direct broadcast to listeners or viewers. These are essentially internal-use backhaul channels, not intended for actual reception by the public but part of the airchain required to get those signals back to the broadcast studio from the field, usually to be integrated into a live production. Examples include: studio/transmitter link (STL), transmitter/studio link (TSL), remote pickup unit (RPU), and electronic news gathering (ENG). Several of these bands exist, but the most frequently used band is the 2 GHz microwave BAS band for point-to-point transmission from mobile newsgathering units to mountaintop receivers. Seven 12 MHz-wide channels exist in the band. In North America, DVB-T (the same modulation technique used for European broadcast television) is used, with a constellation of QPSK, 16QAM, or 64QAM, enabling sufficient digital bandwidth within a 6 MHz channel for transmission of an MPEG transport stream at 10 or more megabits per second; three overlapping 6 MHz channels ("lower", "center", and "upper") fit within each 12 MHz channel. 2 GHz relocation In the United States between 2005 and 2010, the Federal Communications Commission (FCC) moved TV channels in the 2 GHz TV BAS band at the request of Sprint Nextel, so that it could use a portion which was adjacent to PCS frequencies it already uses. The report and order resulting from this rulemaking specified that Sprint/Nextel must pay for every TV station using the band to buy and install new BAS equipment to work in the new band structure. Previously, there had been seven analog TV channels, each 17 or 18 MHz wide, between 1990 and 2110 MHz. The new allocation created seven digital TV channels, each 12 MHz wide, from 2025.5 to 2109.5 MHz. (There was also a "narrowed in place" bandplan used as an interim measure, as the two bands overlap.) Begun in 2005, the relocation was 94% complete.
https://en.wikipedia.org/wiki/Comcast
Comcast Corporation (formerly known as American Cable Systems and Comcast Holdings, stylized in all caps), incorporated and headquartered in Philadelphia, is the largest American multinational telecommunications and media conglomerate. The corporation is the second-largest broadcasting and cable television company in the world by revenue (behind AT&T), and is also the largest pay-TV company, the largest cable TV company, and the largest home Internet service provider in the United States. Comcast is additionally the nation's third-largest home telephone service provider. It provides services to U.S. residential and commercial customers in 40 states and the District of Columbia. As the owner of the international media company NBCUniversal since 2011, Comcast is also a high-volume producer of feature films for theatrical exhibition and television programming, and a theme park operator. Comcast owns and operates the Xfinity residential cable communications business segment and division; Comcast Business, a commercial services provider; and Xfinity Mobile, an MVNO of Verizon. Through NBCUniversal, it also owns and operates over-the-air national broadcast network channels such as NBC, Telemundo, TeleXitos, and Cozi TV; multiple cable-only channels such as MSNBC, CNBC, USA Network, Syfy, Oxygen, Bravo, and E!; the film studio Universal Pictures; the VOD streaming service Peacock; animation studios DreamWorks Animation, Illumination, and Universal Animation Studios; and Universal Destinations & Experiences. It also has significant holdings in digital distribution, such as thePlatform, which it acquired in 2006, and ad-tech company FreeWheel, which it acquired in 2014. Since October 2018, it has also been the parent company of Sky Group. Comcast has been criticized and put under intense public scrutiny for a variety of reasons. Its customer satisfaction ratings were among the lowest in the cable industry during the years 2008–2010. It has violated net neutrality practices in the past.
https://en.wikipedia.org/wiki/James%20Mercer%20%28mathematician%29
James Mercer FRS (15 January 1883 – 21 February 1932) was a mathematician, born in Bootle, near Liverpool, England. He was educated at the University of Manchester and then the University of Cambridge. He became a Fellow of Trinity College, Cambridge, saw active service at the Battle of Jutland in World War I and, after decades of ill health, died in London. He proved Mercer's theorem, which states that positive-definite kernels can be expressed as a dot product in a high-dimensional space. This theorem is the basis of the kernel trick (applied by Aizerman), which allows linear algorithms to be easily converted into non-linear algorithms.
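The "kernel as a dot product in a high-dimensional space" idea can be shown numerically. The sketch below (example kernel and code chosen by me, not taken from the article) checks that the polynomial kernel k(x, y) = (x·y + 1)² on R² equals an ordinary dot product after an explicit six-dimensional feature map:

```python
import math

def kernel(x, y):
    # Polynomial kernel of degree 2 on R^2.
    dot = x[0] * y[0] + x[1] * y[1]
    return (dot + 1) ** 2

def feature_map(x):
    # phi: R^2 -> R^6 with phi(x).phi(y) == (x.y + 1)^2.
    r2 = math.sqrt(2.0)
    return (x[0] ** 2, x[1] ** 2, r2 * x[0] * x[1], r2 * x[0], r2 * x[1], 1.0)

x, y = (1.0, 2.0), (3.0, -1.0)
lhs = kernel(x, y)
rhs = sum(a * b for a, b in zip(feature_map(x), feature_map(y)))
print(lhs, rhs)  # both equal 4.0
```

This is why "linear algorithms become non-linear": an algorithm that only uses dot products can be run implicitly in the feature space by substituting the kernel, without ever computing the feature map.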
https://en.wikipedia.org/wiki/X-No-Archive
X-No-Archive, also known colloquially as xna, is a newsgroup message header field used to prevent a Usenet message from being archived in various servers. Origin The need for X-No-Archive began when DejaNews debuted in 1995. DejaNews was the first large-scale commercial attempt to archive the Usenet news feed, and several newsgroup participants were concerned about privacy rights and about the possibility that their messages could be re-posted through DejaNews in the future. DejaNews addressed these concerns by announcing that it would not archive Usenet messages containing the X-No-Archive header field. How it works X-No-Archive was designed to follow the standard message header protocols, RFC 1036 and RFC 977, used in existing newsgroups. In addition to the standard header fields used in all newsgroup messages (including Path:, From:, Subject:, and Date:), news reader software allows a user to add optional fields to a header. According to RFC 822, these additional fields are prefixed with the label X- so that they can be ignored by news servers and newsreaders. The phrase "No Archive" was coined as a way to state "Do not archive this message," and the X- prefix was added to complete the term X-No-Archive. The proper field to prevent a message from being archived is: X-No-Archive: Yes (abbreviated as "XNAY"). Some software systems also do not archive if the first line in the body of the message contains this text. This is useful for those users who cannot change the header of messages they send out. If the X-No-Archive field is set to "No", or the field is absent, a Usenet archive will not recognize a prohibition on archiving the message. Newsreader software programs When DejaNews was purchased by Google, Google continued to honor the X-No-Archive directive. Other newsgroup archiving services have also followed in DejaNews' footsteps, though the decision not to archive X-No-Archive messages has been entirely voluntary.
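The two checks described above (the header field, plus the first-body-line fallback) can be sketched as a small filter an archiver might apply. This is an illustrative sketch of my own, not the code of any actual archiving service:

```python
def should_archive(headers, body):
    # Honor "X-No-Archive: Yes" in the header fields.
    if headers.get("X-No-Archive", "").strip().lower() == "yes":
        return False
    # Some systems also honor the directive on the first body line,
    # for users who cannot modify their headers.
    first_line = body.splitlines()[0].strip().lower() if body else ""
    if first_line.startswith("x-no-archive: yes"):
        return False
    return True

msg = {"From": "user@example.com", "Subject": "test", "X-No-Archive": "Yes"}
print(should_archive(msg, "Hello, Usenet."))                    # False
print(should_archive({"Subject": "test"}, "Hi there."))         # True
# "No" (or an absent field) is not treated as a prohibition:
print(should_archive({"X-No-Archive": "No"}, "Hi there."))      # True
```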
https://en.wikipedia.org/wiki/Annatto
Annatto is an orange-red condiment and food coloring derived from the seeds of the achiote tree (Bixa orellana), native to tropical parts of the Americas. It is often used to impart a yellow or orange color to foods, but sometimes also for its flavor and aroma. Its scent is described as "slightly peppery with a hint of nutmeg" and flavor as "slightly nutty, sweet and peppery". The color of annatto comes from various carotenoid pigments, mainly bixin and norbixin, found in the reddish waxy coating of the seeds. The condiment is typically prepared by grinding the seeds to a powder or paste. Similar effects can be obtained by extracting some of the color and flavor principles from the seeds with hot water, oil, or lard, which are then added to the food. Annatto and its extracts are now widely used on an artisanal or industrial scale as a coloring agent in many processed food products, such as cheeses, dairy spreads, butter and margarine, custards, cakes and other baked goods, potatoes, snack foods, breakfast cereals, smoked fish, sausages, and more. In these uses, annatto is a natural alternative to synthetic food coloring compounds, but it has been linked to rare cases of food-related allergies. Annatto is of particular commercial value in the United States because the Food and Drug Administration considers colorants derived from it to be "exempt from certification". History The annatto tree B. orellana is believed to have originated in tropical regions from Mexico to Brazil. It was probably not initially used as a food additive, but for other purposes such as ritual and decorative body painting (still an important tradition in many Brazilian native tribes, such as the Wari'), sunscreen, and insect repellent, and for medical purposes. It was used for Mexican manuscript painting in the 16th century. Annatto has been traditionally used as both a coloring and flavoring agent in various cuisines from Latin America, the Caribbean, the Philippines, and other countries where it was introduced.
https://en.wikipedia.org/wiki/Local%20coordinates
Local coordinates are the coordinates used in a local coordinate system or a local coordinate space. Simple examples: Houses. In house construction, measurements are referred to an arbitrary control point that allows the work to be checked: sticks on the ground, a steel bar, nails, and so on. Addresses. House numbers locate a house on a street; the street is a local coordinate system within a larger system composed of city townships, states, countries, postal codes, etc. Local systems exist for convenience. In ancient times, all work was done relative to local references, as there was no conception of global systems. In practice, local systems are preferable for small works such as houses and buildings. For most applications, what is wanted is the position of an element relative to a building or location, or, even more locally, relative to a piece of furniture or a person. Ordinarily, one does not give a position in geographical coordinates, but rather says something like "I am 15 meters away from the entrance to the building"; locating things this way is quite common. It is possible to assign a latitude and longitude to every terrestrial location, but unless one has a highly precise GPS device or makes astronomical observations, this is impractical; it is much simpler to use a tape, a rope, or a chain. Global position information must then be transformed into a location. Position refers to a numeric or symbolic description within a spatial reference system, whereas location refers to information about surrounding objects and their interrelationships. (Topological space) Use In computer graphics and computer animation, local coordinate spaces are also useful for their ability to model independently transformable aspects of geometrical scene graphs. When modeling a car, for example, it is desirable to describe the center of each wheel with respect to the car's coordinate system, but then specify the shape of each wheel in a separate local space centered about these points.
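The car-and-wheel example amounts to composing frame transforms: a point defined in wheel-local space is mapped into the car's frame, and the result into the world frame. A minimal 2D sketch (frames, numbers, and function names are my own illustrative assumptions, not from the article):

```python
import math

def to_parent(local_pt, frame_origin, frame_angle):
    # Rotate a local point by the frame's angle, then translate
    # by the frame's origin, yielding coordinates in the parent space.
    x, y = local_pt
    c, s = math.cos(frame_angle), math.sin(frame_angle)
    return (frame_origin[0] + c * x - s * y,
            frame_origin[1] + s * x + c * y)

rim_point = (0.3, 0.0)                      # a point on the rim, wheel-local
wheel_in_car = ((1.5, -0.4), 0.0)           # wheel center in the car's frame
car_in_world = ((10.0, 5.0), math.pi / 2)   # car pose in world space

in_car = to_parent(rim_point, *wheel_in_car)
in_world = to_parent(in_car, *car_in_world)
print(in_world)  # approximately (10.4, 6.8)
```

Because the wheel's shape lives in its own local space, rotating the wheel or moving the car changes only the frame parameters, not the point data: this is the independent transformability the text describes.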
https://en.wikipedia.org/wiki/Small%20office/home%20office
Small office/home office (or single office/home office; sometimes shortened to SOHO) refers to the category of business or cottage industry that involves from 1 to 10 workers. In New Zealand, the Ministry of Business, Innovation and Employment (MBIE) defines a small office as 6–19 employees and a micro office as 1–5. History Before the 19th century, and the spread of the industrial revolution around the globe, nearly all offices were small offices and/or home offices, with only a few exceptions. Most businesses were small, and the paperwork that accompanied them was limited. The industrial revolution aggregated workers in factories, to mass-produce goods. In most circumstances, the white collar counterpart—office work—was aggregated as well in large buildings, usually in cities or densely populated suburban areas. Beginning in the mid-1980s, the advent of the personal computer and fax machine, plus breakthroughs in telecommunications, created opportunities for office workers to decentralize. Decentralization was also perceived as benefiting employers in terms of lower overheads and potentially greater productivity. Professions Many consultants and members of professions such as lawyers, real estate agents, and surveyors in small and medium-sized towns operate from home offices. Several ranges of products, such as the armoire desk, all-in-one printer, virtual assistants, home servers, and network-attached storage are designed specifically for the SOHO market. A number of books and magazines have been published and marketed specifically at this type of office. These range from general advice texts to specific guidebooks on such challenges as setting up a small PBX for the office telephones. Technology has also created a demand for larger businesses to employ individuals who work from home. Sometimes these people remain as independent businesspersons, and sometimes they become employees of a larger company.
https://en.wikipedia.org/wiki/Medical%20prescription
A prescription, often abbreviated ℞ or Rx, is a formal communication from a physician or other registered healthcare professional to a pharmacist, authorizing them to dispense a specific prescription drug for a specific patient. Historically, it was a physician's instruction to an apothecary listing the materials to be compounded into a treatment. The symbol ℞ (a capital letter R, crossed to indicate abbreviation) comes from the first word of a medieval prescription, Latin recipe ("take thou"), that gave the list of the materials to be compounded. Format and definition For a communication to be accepted as a legal medical prescription, it needs to be filed by a qualified dentist, advanced practice nurse, physician, or veterinarian, for whom the medication prescribed is within their scope of practice to prescribe. This is regardless of whether the prescription includes prescription drugs, controlled substances, or over-the-counter treatments. Prescriptions may be entered into an electronic medical record system and transmitted electronically to a pharmacy. Alternatively, a prescription may be handwritten on preprinted prescription forms that have been assembled into pads, or printed onto similar forms using a computer printer, or even on plain paper, according to the circumstances. In some cases, a prescription may be transmitted orally by telephone from the physician to the pharmacist. The content of a prescription includes the name and address of the prescribing provider and any other legal requirements, such as a registration number (e.g., a DEA Number in the United States). Unique to each prescription is the name of the patient. In the United Kingdom and Ireland, the patient's name and address must also be recorded. Each prescription is dated, and some jurisdictions may place a time limit on the prescription. In the past, prescriptions contained instructions for the pharmacist to use for compounding the pharmaceutical product, but most prescriptions now specify pharmaceutical products.
https://en.wikipedia.org/wiki/Siemens%20and%20Halske%20T52
The Siemens & Halske T52, also known as the Geheimschreiber ("secret teleprinter"), or Schlüsselfernschreibmaschine (SFM), was a World War II German cipher machine and teleprinter produced by the electrical engineering firm Siemens & Halske. The instrument and its traffic were codenamed Sturgeon by British cryptanalysts. While the Enigma machine was generally used by field units, the T52 was an online machine used by Luftwaffe and German Navy units, which could support the heavy machine, its teletypewriter, and the attendant fixed circuits. It fulfilled a similar role to the Lorenz cipher machines in the German Army. The British cryptanalysts of Bletchley Park codenamed the German teleprinter ciphers Fish, with individual cipher-systems being given further codenames: just as the T52 was called Sturgeon, the Lorenz machine was codenamed Tunny. Operation The teleprinters of the day emitted each character as five parallel bits on five lines, typically encoded in the Baudot code or something similar. The T52 had ten pinwheels, which were stepped in a complex nonlinear way, based in later models on their positions from various relays in the past, but in such a way that they could never stall. Each of the five plaintext bits was then XORed with the XOR sum of 3 taps from the pinwheels, and then cyclically adjacent pairs of plaintext bits were swapped or not, according to XOR sums of three (different) output bits. The numbers of pins on all the wheels were coprime, and the triplets of bits that controlled each XOR or swap were selectable through a plugboard. This produced a much more complex cipher than the Lorenz machine, and also means that the T52 is not just a pseudorandom-number-generator-and-XOR cipher. For example, if a cipher clerk erred and sent two different messages using exactly the same settings—a depth of two in Bletchley jargon—this could be detected statistically but was not immediately and trivially solvable as it would be with the Lorenz.
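The two per-character operations described above can be sketched in a few lines. This is a deliberately simplified illustration of my own (fixed tap values, no wheel stepping, not a faithful T52 model): each of the five Baudot bits is XORed with the XOR-sum of three taps, then cyclically adjacent bit pairs are conditionally swapped under control of further three-tap XOR-sums:

```python
def encipher_char(bits, xor_taps, swap_taps):
    # bits: five plaintext bits; xor_taps: five triples of tap bits;
    # swap_taps: five triples, each controlling the swap of the
    # cyclically adjacent pair (i, i+1 mod 5).
    out = [b ^ (t[0] ^ t[1] ^ t[2]) for b, t in zip(bits, xor_taps)]
    for i, t in enumerate(swap_taps):
        if t[0] ^ t[1] ^ t[2]:
            j = (i + 1) % 5
            out[i], out[j] = out[j], out[i]
    return out

plain = [1, 0, 1, 1, 0]
xor_taps = [(1, 0, 0), (0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 1, 1)]
swap_taps = [(0, 0, 0), (1, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)]
print(encipher_char(plain, xor_taps, swap_taps))  # [0, 1, 0, 1, 1]
```

The swap stage is what makes the T52 more than a keystream-XOR cipher: bit positions are permuted as well as inverted, which is why a depth of two was not trivially solvable as it was against the Lorenz machine.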
https://en.wikipedia.org/wiki/Mechatronics
Mechatronics engineering, also called mechatronics, is an interdisciplinary branch of engineering that focuses on the integration of mechanical engineering, electrical engineering, electronic engineering, and software engineering, and also includes a combination of robotics, computer science, telecommunications, systems, control, and product engineering. As technology advances over time, various subfields of engineering have succeeded in both adapting and multiplying. The intention of mechatronics is to produce a design solution that unifies each of these various subfields. Originally, the field of mechatronics was intended to be nothing more than a combination of mechanics and electronics, hence the name being a portmanteau of the words "mechanics" and "electronics"; however, as the complexity of technical systems continued to evolve, the definition has been broadened to include more technical areas. The word mechatronics originated in Japanese-English and was created by Tetsuro Mori, an engineer of Yaskawa Electric Corporation. The word mechatronics was registered as a trademark by the company in Japan with the registration number "46-32714" in 1971. The company later released the right to use the word to the public, and the word began being used globally. Currently, the word is translated into many languages and is considered an essential term for advanced automated industry. Many people treat mechatronics as a modern buzzword synonymous with automation, robotics, and electromechanical engineering. French standard NF E 01-010 gives the following definition: "approach aiming at the synergistic integration of mechanics, electronics, control theory, and computer science within product design and manufacturing, in order to improve and/or optimize its functionality".
https://en.wikipedia.org/wiki/AMD%20K6-2
The K6-2 is an x86 microprocessor introduced by AMD on May 28, 1998, and available in speeds ranging from 266 to 550 MHz. An enhancement of the original K6, the K6-2 introduced AMD's 3DNow! SIMD instruction set and an upgraded system-bus interface called Super Socket 7, which was backward compatible with older Socket 7 motherboards. It was manufactured using a 250-nanometer process, ran at 2.2 volts, and had 9.3 million transistors. History The K6-2 was designed as a competitor to Intel's flagship processor, the significantly more expensive Pentium II. Performance of the two chips was similar: the previous K6 tended to be faster for general-purpose computing, while the Intel part was faster in x87 floating-point applications. To battle the Pentium II's dominance in floating-point calculations, the K6-2 was the first CPU to introduce a floating-point SIMD instruction set (dubbed 3DNow! by AMD), which significantly boosted performance. However, programs needed to be specifically tailored for the new instructions, and despite beating Intel's SSE instruction set to market, 3DNow! achieved only limited popularity. Super Socket 7, which increased the processor bus from 66 MHz to 100 MHz, allowed the K6-2 to withstand the effects of ever-increasing CPU multipliers fairly gracefully, and in later life it remained surprisingly competitive. Nearly all K6-2s were designed to use 100 MHz Super Socket 7 mainboards, allowing the system bus to keep pace with the K6-2's clock frequency. The K6-2 was a very financially successful chip and enabled AMD to earn the revenue it would need to introduce the forthcoming Athlon. The introductory K6-2 300 was by far the best-selling variant. It rapidly established an excellent reputation in the marketplace and offered a favorable price/performance ratio versus Intel's Celeron 300A. While the K6-2 had mediocre floating-point performance compared to the Celeron, it offered faster system RAM access (courtesy of the Super 7 mainboard).
https://en.wikipedia.org/wiki/Safety%20valve
A safety valve is a valve that acts as a fail-safe. An example of a safety valve is a pressure relief valve (PRV), which automatically releases a substance from a boiler, pressure vessel, or other system when the pressure or temperature exceeds preset limits. Pilot-operated relief valves are a specialized type of pressure safety valve. A leak-tight, lower-cost option for single emergency use would be a rupture disk. Safety valves were first developed for use on steam boilers during the Industrial Revolution. Early boilers operating without them were prone to explosion unless carefully operated. Vacuum safety valves (or combined pressure/vacuum safety valves) are used to prevent a tank from collapsing while it is being emptied, or when cold rinse water is used after hot CIP (clean-in-place) or SIP (sterilization-in-place) procedures. When sizing a vacuum safety valve, the calculation method is not defined in any norm, particularly in the hot CIP / cold water scenario, but some manufacturers have developed sizing simulations. The term safety valve is also used metaphorically. Function and design The earliest and simplest safety valve was used on a 1679 steam digester and utilized a weight to retain the steam pressure (this design is still commonly used on pressure cookers); however, these were easily tampered with or accidentally released. On the Stockton and Darlington Railway, the safety valve tended to go off when the engine hit a bump in the track. A valve less sensitive to sudden accelerations used a spring to contain the steam pressure, but these (based on a Salter spring balance) could still be screwed down to increase the pressure beyond design limits. This dangerous practice was sometimes used to marginally increase the performance of a steam engine. In 1856, John Ramsbottom invented a tamper-proof spring safety valve that became universal on railways. The Ramsbottom valve consisted of two plug-type valves connected to each other by a spring-laden pivoting arm.
https://en.wikipedia.org/wiki/AMD%20K6-III
The K6-III (code name: "Sharptooth") was an x86 microprocessor line manufactured by AMD that launched on February 22, 1999. The launch consisted of both 400 and 450 MHz models and was based on the preceding K6-2 architecture. Its improved 256 KB on-chip L2 cache gave it significant improvements in system performance over its predecessor, the K6-2. The K6-III was the last processor officially released for desktop Socket 7 systems; however, later mobile K6-III+ and K6-2+ processors could be run unofficially in certain Socket 7 motherboards if an updated BIOS was made available for a given board. The Pentium III processor from Intel launched 6 days later. At its release, the fastest available desktop processor from Intel was the Pentium II 450 MHz, and in integer application benchmarks a 400 MHz K6-III was able to beat it as the fastest processor available for business applications. Just days later, on February 26, Intel released the Pentium III "Katmai" line at speeds of 500 MHz, slightly faster than the K6-III 450 MHz. However, Intel retained the lead in 3D gaming performance. Many popular first-person games at the time were specifically tuned to extract maximum performance from Intel's pipelined floating-point unit in drawing their 3D environments. Since the K6-III inherited the same floating-point unit as the K6-2 (low latency but not pipelined), unless a game was updated to use AMD's 3DNow! SIMD instructions, performance could still remain significantly lower than when run on an Intel processor. Architecture In conception, the design is simple: it was a K6-2 with an on-die 256 KiB L2 cache. In execution, however, the design was not simple, with 21.4 million transistors. The pipeline was short compared to that of the Pentium III, and thus the design did not scale well past 500 MHz. Nevertheless, the K6-III 400 sold well, and the AMD K6-III 450 was clearly the fastest x86 chip on the market at introduction, comfortably outperforming AMD's K6-2 and Intel's competing processors.
https://en.wikipedia.org/wiki/Smart%20Personal%20Objects%20Technology
The Smart Personal Objects Technology (SPOT) is a discontinued initiative by Microsoft to create intelligent and personal home appliances, consumer electronics, and other objects through new hardware capabilities and software features. Development of SPOT began as an incubation project initiated by the Microsoft Research division. SPOT was first announced by Bill Gates at the COMDEX computer exposition event in 2002, and additional details were revealed by Microsoft at the 2003 Consumer Electronics Show, where Gates demonstrated a set of prototype smart watches—the first type of device that would support the technology. SPOT did not use conventional forms of connectivity, such as 3G or Wi-Fi, but instead relied on FM broadcasting subcarrier transmission as a method of data distribution. While several types of electronics would eventually support the technology throughout its lifecycle, SPOT was considered a commercial failure. Reasons that have been cited for its failure include its subscription-based business model, support limited to North America, the emergence of more efficient and popular forms of data distribution, and mobile feature availability that surpassed the features that SPOT offered. History Development Development of SPOT began as an incubation project led by Microsoft engineer Bill Mitchell and initiated by the Microsoft Research division. Mitchell would enlist the help of Larry Karr, president of SCA Data Systems, to develop the project. Karr had previously worked in the 1980s to develop technology for Atari that would distribute games in a manner distinct from the company's competitors; Karr proposed FM broadcasting subcarrier transmission as a method of distribution, technology which would also be used by Microsoft's SPOT. Microsoft Research and SCA Data Systems would ultimately develop the DirectBand subcarrier technology for SPOT. National Semiconductor would aid in the development of device chipsets, which wo
https://en.wikipedia.org/wiki/Coincidence%20circuit
In physics and electrical engineering, a coincidence circuit or coincidence gate is an electronic device with one output and two (or more) inputs. The output activates only when signals arrive at all inputs within a narrow time window, so that they can be treated as simultaneous. Coincidence circuits are widely used in particle detectors and in other areas of science and technology. Walther Bothe shared the Nobel Prize for Physics in 1954 "...for his discovery of the method of coincidence and the discoveries subsequently made by it." Bruno Rossi invented the electronic coincidence circuit for implementing the coincidence method. History Bothe, 1924 In his Nobel Prize lecture, Bothe described how he had implemented the coincidence method in an experiment on Compton scattering in 1924. The experiment aimed to check whether Compton scattering produces a recoil electron simultaneously with the scattered gamma ray. Bothe used two point discharge counters connected to separate fibre electrometers and recorded the fibre deflections on a moving photographic film. On the film record he could discern coincident discharges with a time resolution of approximately 1 millisecond. Bothe and Kolhörster, 1929 In 1929, Walther Bothe and Werner Kolhörster published the description of a coincidence experiment with tubular discharge counters that Hans Geiger and Wilhelm Müller had invented in 1928. The Bothe–Kolhörster experiment showed penetrating charged particles in cosmic rays. They used the same mechanical-photographic method for recording simultaneous discharges which, in this experiment, signalled the passage of a charged cosmic ray particle through both counters and through a thick wall of lead and iron that surrounded the counters. Their paper, entitled "Das Wesen der Höhenstrahlung", was published in the Zeitschrift für Physik v.56, p.751 (1929). Rossi, 1930 Bruno Rossi, at the age of 24, was in his first job as assistant in the Physics Institute of the Universi
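The logic of a coincidence gate can be sketched in software: given timestamped pulses from two detectors, count a coincidence whenever a pair of pulses falls within the resolving-time window. This is an illustrative sketch, not taken from the article; the function name and the example window value are invented.

```python
def coincidences(pulses_a, pulses_b, window):
    """Return (t_a, t_b) pairs whose arrival times differ by at most
    `window`, i.e. pulses the circuit would treat as simultaneous."""
    return [(t, u)
            for t in pulses_a
            for u in pulses_b
            if abs(t - u) <= window]

# Two detectors firing; only the pulses near t = 1.0 coincide
# within a 1 ms resolving time.
hits = coincidences([0.0, 1.0], [0.4, 1.0004], window=0.001)
```

A real circuit performs this comparison in hardware with an AND gate and pulse shapers; shrinking the window reduces accidental (chance) coincidences at the cost of rejecting genuinely related pulses that arrive slightly apart.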
https://en.wikipedia.org/wiki/Digibox
The Digibox is a device marketed by Sky UK in the UK and Ireland to enable home users to receive digital satellite television broadcasts (satellite receiver) from the Astra satellites at 28.2° east. An Internet service was also available through the device, similar in some ways to the American MSN TV, before being discontinued in 2015. The first Digiboxes shipped to consumers in October 1998 when Sky Digital was launched, and the hardware reference design has been relatively unchanged since then. Compared to other satellite receivers, they are severely restricted. As of 2020, Sky Digiboxes have become largely outmoded, superseded by Sky's latest-generation Sky Q boxes and Sky Glass televisions; the previous-generation Sky+HD boxes are still in use, however. Base technical details The Digibox's internal hardware specifications are not publicly disclosed; however, some details are clearly visible on the system. All early boxes except the Pace Javelin feature dual SCART outputs, an RS-232 serial port, a dual-output RF modulator with passthrough, and RCA socketed audio outputs, as well as a 33.6 kbit/s modem and an LNB cable socket. A VideoGuard card slot, as well as a second smart-card reader, are fitted to the front (these are for the Sky viewing card and other interactive cards). All share an identical user interface and EPG, with the exception of Sky+ HD boxes, which have used the new Sky+ HD Guide since early 2009. The DRX595 dropped the RF modulator outputs. A PC-type interface was fitted internally to some early standard boxes but was never utilised by Sky. The latest HD boxes only have a single SCART socket but have an RCA/phono socket for composite video output. All Sky+ and HD boxes have an optical sound output. The serial port outputs data used for the Sky Gnome and Sky Talker. The Sky Gamepad sends data to the box via the serial port. Uniquely, the second RF port outputs a 9 V power signal which is used to power 'tvLINK' devices that can be attached to the RF cable
https://en.wikipedia.org/wiki/Heart%20rate
Heart rate (or pulse rate) is the frequency of the heartbeat measured by the number of contractions of the heart per minute (beats per minute, or bpm). The heart rate can vary according to the body's physical needs, including the need to absorb oxygen and excrete carbon dioxide, but is also modulated by numerous factors, including (but not limited to) genetics, physical fitness, stress or psychological status, diet, drugs, hormonal status, environment, and disease/illness, as well as the interaction between and among these factors. It is usually equal to or close to the pulse measured at any peripheral point. The American Heart Association states the normal resting adult human heart rate is 60–100 bpm. Tachycardia is a high heart rate, defined as above 100 bpm at rest. Bradycardia is a low heart rate, defined as below 60 bpm at rest. When a human sleeps, a heartbeat with rates around 40–50 bpm is common and is considered normal. When the heart is not beating in a regular pattern, this is referred to as an arrhythmia. Abnormalities of heart rate sometimes indicate disease. Physiology While heart rhythm is regulated entirely by the sinoatrial node under normal conditions, heart rate is regulated by sympathetic and parasympathetic input to the sinoatrial node. The accelerans nerve provides sympathetic input to the heart by releasing norepinephrine onto the cells of the sinoatrial node (SA node), and the vagus nerve provides parasympathetic input to the heart by releasing acetylcholine onto sinoatrial node cells. Therefore, stimulation of the accelerans nerve increases heart rate, while stimulation of the vagus nerve decreases it. Because blood is an effectively incompressible fluid, one of the physiological ways to deliver more blood to an organ is to increase heart rate. Normal resting heart rates range from 60 to 100 bpm. Bradycardia is defined as a resting heart rate below 60 bpm. However, heart rates from 50 to 60 bpm are common among healthy people and do not necessarily
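The resting-rate thresholds quoted above translate directly into code. This toy classifier uses only the cut-offs stated in the text (below 60 bpm, 60–100 bpm, above 100 bpm); it is an illustration, not a clinical tool.

```python
def classify_resting_rate(bpm):
    """Classify an adult resting heart rate by the textbook cut-offs."""
    if bpm < 60:
        return "bradycardia"   # may still be normal, e.g. in athletes
    if bpm > 100:
        return "tachycardia"
    return "normal"

# e.g. a typical resting rate of 72 bpm falls in the normal range
label = classify_resting_rate(72)
```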
https://en.wikipedia.org/wiki/Telerobotics
Telerobotics is the area of robotics concerned with the control of semi-autonomous robots from a distance, chiefly using television, wireless networks (like Wi-Fi, Bluetooth and the Deep Space Network) or tethered connections. It is a combination of two major subfields, teleoperation and telepresence. Teleoperation Teleoperation indicates operation of a machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academic and technical environments. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance. Teleoperation is the most common term, in both the research and technical communities, for operation at a distance. This is opposed to "telepresence", which refers to the subset of telerobotic systems configured with an immersive interface such that the operator feels present in the remote environment, projecting his or her presence through the remote robot. One of the first telepresence systems that enabled operators to feel present in a remote environment through all of the primary senses (sight, sound, and touch) was the Virtual Fixtures system developed at US Air Force Research Laboratories in the early 1990s. The system enabled operators to perform dexterous tasks (inserting pegs into holes) remotely such that the operator would feel as if he or she were inserting the pegs, when in fact it was a robot remotely performing the task. A telemanipulator (or teleoperator) is a device that is controlled remotely by a human operator. In simple cases the controlling operator's command actions correspond directly to actions in the device controlled, as for example in a radio-controlled model aircraft or a tethered deep submergence vehicle. Where communications delays make direct control impractical (such as a remote planetary rover), or it is desired to red
https://en.wikipedia.org/wiki/3B%20series%20computers
The 3B series computers are a line of minicomputers made between the late 1970s and 1993 by AT&T Computer Systems' Western Electric subsidiary, for use with the company's UNIX operating system. The line primarily consists of the models 3B20, 3B5, 3B15, 3B2, and 3B4000. The series is notable for controlling a series of electronic switching systems for telecommunication, for general computing purposes, and for serving as the historical software porting base for commercial UNIX. History The first 3B20D was installed in Fresno, California at Pacific Bell. Within two years, several hundred were in place throughout the Bell System. Some of the units came with "small, slow hard disks". The general-purpose family of 3B computer systems includes the 3B2, 3B5, 3B15, 3B20S, and 3B4000. They run the AT&T UNIX operating system and were named after the successful 3B20D High Availability processor. In 1984, after regulatory constraints were lifted, AT&T introduced the 3B20D, 3B20S, 3B5, and 3B2 to the general computer market, a move that some commentators saw as an attempt to compete with IBM. In Europe, the 3B computers were distributed by Italian firm Olivetti, in which AT&T had a minority shareholding. After AT&T bought NCR Corporation, effective January 1992, the computers were marketed through NCR sales channels. Having produced 70,000 units, the AT&T Oklahoma City plant stopped manufacturing 3B machines at the end of 1993, with the 3B20D being the last units manufactured. 3B high-availability processors The original series of 3B computers includes the models 3B20C, 3B20D, 3B21D, and 3B21E. These systems are 32-bit microprogrammed duplex (redundant) high-availability processor units running a real-time operating system. They were first produced in the late 1970s at the WECo factory in Lisle, Illinois, for telecommunications applications including the 4ESS and 5ESS systems. They use the Duplex Multi Environment Real Time (DMERT) operating system which was renamed UNIX-R
https://en.wikipedia.org/wiki/Multiset
In mathematics, a multiset (or bag, or mset) is a modification of the concept of a set that, unlike a set, allows multiple instances of each of its elements. The number of instances given for each element is called the multiplicity of that element in the multiset. As a consequence, an infinite number of multisets exist which contain only elements a and b, but vary in the multiplicities of their elements: The set {a, b} contains only elements a and b, each having multiplicity 1 when {a, b} is seen as a multiset. In the multiset {a, a, b}, the element a has multiplicity 2, and b has multiplicity 1. In the multiset {a, a, a, b, b, b}, a and b both have multiplicity 3. These objects are all different when viewed as multisets, although they are the same set, since they all consist of the same elements. As with sets, and in contrast to tuples, the order in which elements are listed does not matter in discriminating multisets, so {a, a, b} and {a, b, a} denote the same multiset. To distinguish between sets and multisets, a notation that incorporates square brackets is sometimes used: the multiset {a, a, b} can be denoted by [a, a, b]. The cardinality of a multiset is the sum of the multiplicities of all its elements. For example, in the multiset {a, a, b, b, b, c} the multiplicities of the members a, b, and c are respectively 2, 3, and 1, and therefore the cardinality of this multiset is 6. Nicolaas Govert de Bruijn coined the word multiset in the 1970s, according to Donald Knuth. However, the concept of multisets predates the coinage of the word multiset by many centuries. Knuth himself attributes the first study of multisets to the Indian mathematician Bhāskarāchārya, who described permutations of multisets around 1150. Other names have been proposed or used for this concept, including list, bunch, bag, heap, sample, weighted set, collection, and suite. History Wayne Blizard traced multisets back to the very origin of numbers, arguing that "in ancient times, the number n was often represented by a collection of n strokes, tally marks, or units." These and sim
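In code, a multiset is naturally modelled as a mapping from elements to multiplicities; Python's `collections.Counter` does exactly this and makes the definitions above concrete:

```python
from collections import Counter

# The multiset {a, a, b, b, b, c}: multiplicities 2, 3 and 1
m = Counter({'a': 2, 'b': 3, 'c': 1})

# The cardinality is the sum of the multiplicities: 2 + 3 + 1 = 6
cardinality = sum(m.values())

# Order of listing does not matter: {a, a, b} and {a, b, a}
# denote the same multiset.
same = Counter(['a', 'a', 'b']) == Counter(['a', 'b', 'a'])
```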
https://en.wikipedia.org/wiki/Quadratic%20irrational%20number
In mathematics, a quadratic irrational number (also known as a quadratic irrational or quadratic surd) is an irrational number that is the solution to some quadratic equation with rational coefficients which is irreducible over the rational numbers. Since fractions in the coefficients of a quadratic equation can be cleared by multiplying both sides by their least common denominator, a quadratic irrational is an irrational root of some quadratic equation with integer coefficients. The quadratic irrational numbers, a subset of the complex numbers, are algebraic numbers of degree 2, and can therefore be expressed as (a + b√c)/d for integers a, b, c, d; with b, c and d non-zero, and with c square-free. When c is positive, we get real quadratic irrational numbers, while a negative c gives complex quadratic irrational numbers which are not real numbers. This defines an injection from the quadratic irrationals to quadruples of integers, so their cardinality is at most countable; since on the other hand every square root of a prime number is a distinct quadratic irrational, and there are countably many prime numbers, they are at least countable; hence the quadratic irrationals are a countable set. Quadratic irrationals are used in field theory to construct field extensions of the field of rational numbers Q. Given the square-free integer c, the augmentation of Q by quadratic irrationals using √c produces a quadratic field Q(√c). For example, the inverses of elements of Q(√c) are of the same form as the above algebraic numbers. Quadratic irrationals have useful properties, especially in relation to continued fractions, where we have the result that all real quadratic irrationals, and only real quadratic irrationals, have periodic continued fraction forms. For example, √2 = [1; 2, 2, 2, …]. The periodic continued fractions can be placed in one-to-one correspondence with the rational numbers. The correspondence is explicitly provided by Minkowski's question mark function, and an explicit construction is given in that article. It is en
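The periodicity result can be observed numerically with the classical integer recurrence for the continued fraction of √n. This is a sketch (function name mine) and assumes n is a positive non-square integer:

```python
from math import isqrt

def sqrt_cf(n, terms=8):
    """First `terms` continued-fraction coefficients of sqrt(n),
    for non-square n, via the classical (m, d, a) recurrence."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    coeffs = [a0]
    while len(coeffs) < terms:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        coeffs.append(a)
    return coeffs

# sqrt(2) -> [1; 2, 2, 2, ...]: periodic, as the theorem predicts
expansion = sqrt_cf(2, terms=5)
```

Running it on other non-squares (e.g. `sqrt_cf(14)` giving [3; 1, 2, 1, 6, 1, 2, 1, …]) shows the same eventual periodicity, which by the theorem characterises real quadratic irrationals among all real numbers.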
https://en.wikipedia.org/wiki/Conway%20chained%20arrow%20notation
Conway chained arrow notation, created by mathematician John Horton Conway, is a means of expressing certain extremely large numbers. It is simply a finite sequence of positive integers separated by rightward arrows, e.g. 2→3→4→5→6. As with most combinatorial notations, the definition is recursive. In this case the notation eventually resolves to being the leftmost number raised to some (usually enormous) integer power.

Definition and overview

A "Conway chain" is defined as follows:
- Any positive integer is a chain of length 1.
- A chain of length n, followed by a right-arrow → and a positive integer, together form a chain of length n + 1.

Any chain represents an integer, according to the six rules below. Two chains are said to be equivalent if they represent the same integer.

Let p, q, r denote positive integers and let X denote the unchanged remainder of the chain. Then:
1. An empty chain (or a chain of length 0) is equal to 1.
2. The chain p represents the number p.
3. The chain p → q represents the number p^q.
4. The chain p → q → r represents the number p ↑^r q (see Knuth's up-arrow notation).
5. The chain X → 1 represents the same number as the chain X.
6. Else, the chain X → (p + 1) → (q + 1) represents the same number as the chain X → (X → p → (q + 1)) → q.

Properties
- A chain evaluates to a perfect power of its first number.
- Therefore, 1 → X is equal to 1.
- X → 1 is equivalent to X.
- 2 → 2 → X is equal to 4.
- X → 2 → 2 is equivalent to X → (X) (not to be confused with X → X).

Interpretation

One must be careful to treat an arrow chain as a whole. Arrow chains do not describe the iterated application of a binary operator. Whereas chains of other infixed symbols (e.g. 3 + 4 + 5 + 6 + 7) can often be considered in fragments (e.g. (3 + 4) + 5 + (6 + 7)) without a change of meaning (see associativity), or at least can be evaluated step by step in a prescribed order, e.g. 3^4^5^6^7 from right to left, that is not so with Conway's arrow chains. For example: 2→3→2 = 2↑↑3 = 16, whereas (2→3)→2 = 8→2 = 64 and 2→(3→2) = 2→9 = 512. The sixth definition rule is the core: A chain of 4 or more elements ending with 2 or higher becomes a chain of the same length with a (usually vastly) increased penult
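The six rules can be transcribed into a recursive evaluator. Only tiny chains are computable, since values explode almost immediately, but small cases make the rules concrete. This is an illustrative sketch; the property that a chain may be truncated after a penultimate 1 (X → 1 → Y = X) is used as a shortcut:

```python
def conway(chain):
    """Evaluate a Conway chain given as a list of positive integers.
    Feasible only for very small chains."""
    c = list(chain)
    if not c:                      # rule 1: empty chain
        return 1
    if len(c) == 1:                # rule 2: single number
        return c[0]
    if len(c) == 2:                # rule 3: p -> q = p ** q
        return c[0] ** c[1]
    if c[-1] == 1:                 # rule 5: drop a trailing 1
        return conway(c[:-1])
    if c[-2] == 1:                 # property: X -> 1 -> Y = X
        return conway(c[:-2])
    # rule 6: X -> (p+1) -> (q+1)  =  X -> (X -> p -> (q+1)) -> q
    X, p, q = c[:-2], c[-2], c[-1]
    return conway(X + [conway(X + [p - 1, q]), q - 1])

# 2 -> 3 -> 2 = 2^^3 = 16, while (2 -> 3) -> 2 would be 8^2 = 64
value = conway([2, 3, 2])
```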
https://en.wikipedia.org/wiki/Steinhaus%E2%80%93Moser%20notation
In mathematics, Steinhaus–Moser notation is a notation for expressing certain large numbers. It is an extension (devised by Leo Moser) of Hugo Steinhaus's polygon notation.

Definitions
- a number n in a triangle means n^n.
- a number n in a square is equivalent to "the number n inside n triangles, which are all nested."
- a number n in a pentagon is equivalent to "the number n inside n squares, which are all nested."
- etc.: n written in an (m + 1)-sided polygon is equivalent to "the number n inside n nested m-sided polygons".

In a series of nested polygons, they are associated inward. The number n inside two triangles is equivalent to n^n inside one triangle, which is equivalent to n^n raised to the power of n^n. Steinhaus defined only the triangle, the square, and the circle, which is equivalent to the pentagon defined above.

Special values
Steinhaus defined:
- mega is the number equivalent to 2 in a circle: ②
- megiston is the number equivalent to 10 in a circle: ⑩
- Moser's number is the number represented by "2 in a megagon". Megagon is here the name of a polygon with "mega" sides (not to be confused with the polygon with one million sides).

Alternative notations:
- use the functions square(x) and triangle(x)
- let M(n, m, p) be the number represented by the number n in m nested p-sided polygons; then the rules are:
  - M(n, 1, 3) = n^n
  - M(n, 1, p + 1) = M(n, n, p)
  - M(n, m + 1, p) = M(M(n, 1, p), m, p)
- and
  - mega = M(2, 1, 5)
  - megiston = M(10, 1, 5)
  - moser = M(2, 1, M(2, 1, 5))

Mega
A mega, ②, is already a very large number, since ② = square(square(2)) = square(triangle(triangle(2))) = square(triangle(2^2)) = square(triangle(4)) = square(4^4) = square(256) = triangle(triangle(triangle(...triangle(256)...))) [256 triangles] = triangle(triangle(triangle(...triangle(256^256)...))) [255 triangles] ~ triangle(triangle(triangle(...triangle(3.2317 × 10^616)...))) [255 triangles] ... Using the other notation: mega = M(2,1,5) = M(256,256,3) With the function f(x) = x^x we have mega = f^256(256) where the superscript denotes a functional power, not a numerical power. We have (note the convention that powers are evaluated from right to left): M(25
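The M(n, m, p) notation is directly executable for the few cases small enough to compute. This sketch encodes the three rules; anything approaching mega itself is hopelessly out of reach, but the first couple of steps of the expansion above can be checked:

```python
def M(n, m, p):
    """The number n drawn inside m nested p-sided polygons.
    Rules: M(n,1,3) = n**n;  M(n,1,p+1) = M(n,n,p);
           M(n,m+1,p) = M(M(n,1,p), m, p)."""
    if m == 1 and p == 3:           # triangle: n^n
        return n ** n
    if m == 1:                      # one p-gon = n nested (p-1)-gons
        return M(n, n, p - 1)
    return M(M(n, 1, p), m - 1, p)  # peel off one polygon

# triangle(2) = 2^2 = 4, and square(2) = M(2,1,4) = 256,
# matching the expansion of mega shown above.
triangle_2 = M(2, 1, 3)
square_2 = M(2, 1, 4)
```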
https://en.wikipedia.org/wiki/John%20Walker%20%28programmer%29
John Walker is a computer programmer, author and co-founder of the computer-aided design software company Autodesk. He has more recently been recognized for his writing on his website Fourmilab. Early projects In 1974/1975, Walker wrote the ANIMAL software, which self-replicated on UNIVAC 1100 machines. It is considered one of the first computer viruses. Walker also founded the hardware integration manufacturing company Marinchip. Among other things, Marinchip pioneered the translation of numerous computer language compilers to Intel platforms. Autodesk In 1982, John Walker and 12 other programmers pooled US$59,000 to start Autodesk (AutoCAD), and began working on several computer applications. The first completed was AutoCAD, a software application for computer-aided design (CAD) and drafting. AutoCAD had begun life as Interact, a CAD program written by programmer Michael Riddle in a proprietary language. Walker and Riddle rewrote the program, and established a profit-sharing agreement for any product derived from InteractCAD. Walker subsequently paid Riddle US$10 million for all the rights. The company went public in 1985. By mid-1986, the company had grown to 255 employees with annual sales of over $40 million. That year, Walker resigned as chairman and president of the company, continuing to work as a programmer. In 1989, Walker's book, The Autodesk File, was published. It describes his experiences at Autodesk, based around internal documents (particularly email) of the company. Walker moved to Switzerland in 1991. By 1994, when he resigned from the company, it was the sixth-largest personal computer software company in the world, primarily from the sales of AutoCAD. Walker owned about $45 million of stock in Autodesk at the time. Fourmilab He publishes on his personal domain, Fourmilab, designed to be a play on Fermilab and fourmi, French for "ant", one of his early interests. On his Web site, Walker publishes about his personal projects, including a hardware ra
https://en.wikipedia.org/wiki/33%20%28number%29
33 (thirty-three) is the natural number following 32 and preceding 34.

In mathematics

33 is:
- the 8th distinct semiprime, it being the 3rd of the form 3·q where q is a higher prime. It also has a semiprime aliquot sum of 15, within an aliquot sequence of four members (33, 15, 9, 4, 3, 1, 0) in the prime 3-aliquot tree.
- the largest positive integer that cannot be expressed as a sum of different triangular numbers. It is also the largest of twelve integers that are not the sum of five non-zero squares.
- the smallest odd repdigit that is not a prime number.
- the sum of the first four positive factorials.
- the sum of the divisor sums of the first six positive integers.
- expressible as the sum of three cubes.
- equal to the sum of the squares of the digits of its own square in bases 9, 16 and 31. For numbers greater than 1, this is a rare property to have in more than one base.
- the first member of the first cluster of three semiprimes 33, 34, 35; the next such cluster is 85, 86, 87. It is also the smallest integer such that it and the next two integers all have the same number of divisors.
- the first double-digit centered dodecahedral number.
- divisible by the number of prime numbers (11) below 33.
- a palindrome in both decimal and binary.

A positive definite quadratic integer matrix represents all odd numbers when it contains at least the set of seven integers: {1, 3, 5, 7, 11, 15, 33}.

In science
The atomic number of arsenic. 33 is, according to the Newton scale, the temperature at which water boils. A normal human spine has, on average, 33 vertebrae when the bones that form the coccyx are counted individually.

Astronomy
- Messier object M33, a magnitude 7.0 galaxy in the constellation Triangulum, also known as the Triangulum Galaxy.
- The New General Catalogue object NGC 33, a double star in the constellation Pisces.
- The smallest dwarf planet in the Solar System is Ceres, which is also the 33rd largest celestial body in the Solar System, comprising a
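Several of the arithmetic claims above are easy to check mechanically; a quick verification sketch:

```python
from math import factorial

def divisor_sum(k):
    """Sum of all divisors of k (sigma function)."""
    return sum(d for d in range(1, k + 1) if k % d == 0)

# Sum of the first four positive factorials: 1! + 2! + 3! + 4!
fact_sum = sum(factorial(i) for i in range(1, 5))

# Sum of the divisor sums of 1..6: 1 + 3 + 4 + 7 + 6 + 12
sigma_sum = sum(divisor_sum(k) for k in range(1, 7))

# 11 primes lie below 33, and 33 is divisible by 11
primes = [p for p in range(2, 33) if all(p % d for d in range(2, p))]

# Palindrome in decimal and in binary (33 = 100001 in base 2)
binary = bin(33)[2:]
```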
https://en.wikipedia.org/wiki/78%20%28number%29
78 (seventy-eight) is the natural number following 77 and preceding 79.

In mathematics

78 is:
- the 4th discrete tri-prime, also termed a sphenic number, and the 4th of the form 2·3·r.
- an abundant number with an aliquot sum of 90.
- a semiperfect number, as a multiple of a perfect number.
- the 12th triangular number.
- a palindromic number in bases 5 (303₅), 7 (141₇), 12 (66₁₂), 25 (33₂₅), and 38 (22₃₈).
- a Harshad number in bases 3, 4, 5, 6, 7, 13 and 14.
- an Erdős–Woods number, since it is possible to find sequences of 78 consecutive integers such that each inner member shares a factor with either the first or the last member.
- the dimension of the exceptional Lie group E6 and several related objects.
- the smallest number that can be expressed as the sum of four distinct nonzero squares in more than one way.

77 and 78 form a Ruth–Aaron pair.

In science
The atomic number of platinum.

In other fields
78 is also:
- In reference to gramophone records, 78 refers to those meant to be spun at 78 revolutions per minute. Compare: LP, and 45 rpm. 33 + 45 = 78.
- A typical tarot deck: the 21 trump cards, the Fool, and the 56 suit cards together make up 78 cards.
- The Rule of 78s is a method of yearly interest calculation.
- The number used by Martin Truex Jr. and Furniture Row Racing to win the 2017 Monster Energy NASCAR Cup Series championship and 2016 Coca-Cola 600. The team and driver Regan Smith also won the 2011 Showtime Southern 500 with 78. The number is now used by owner-driver B.J. McLeod for Live Fast Motorsports.
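A few of the listed properties of 78, verified mechanically (illustrative only):

```python
n = 78
proper_divisors = [d for d in range(1, n) if n % d == 0]

# Aliquot sum (sum of proper divisors) is 90, which exceeds 78,
# so 78 is abundant.
aliquot = sum(proper_divisors)

# 12th triangular number: 12 * 13 / 2
triangular_12 = 12 * 13 // 2

# Sphenic (tri-prime) of the form 2 * 3 * r with r = 13
sphenic = 2 * 3 * 13

# Palindromic in base 5: digits of 78 base 5 are 3, 0, 3
digits, k = [], n
while k:
    digits.append(k % 5)
    k //= 5
```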
https://en.wikipedia.org/wiki/45%20%28number%29
45 (forty-five) is the natural number following 44 and preceding 46.

In mathematics

Forty-five is the smallest odd number n that has more divisors than n + 1, and that has a larger sum of divisors than n + 1. It is the sixth positive integer with a square-prime prime factorization of the form p²·q, with p and q prime, and the first of the form 3²·q. 45 has an aliquot sum of 33 that is part of an aliquot sequence composed of five composite numbers (45, 33, 15, 9, 4, 3, 1, and 0), all of which are rooted in the 3-aliquot tree. This is the longest aliquot sequence for an odd number up to 45. Forty-five is the sum of all single-digit decimal digits: 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = 45. It is, equivalently, the ninth triangular number. Forty-five is also the fourth hexagonal number and the second hexadecagonal number, or 16-gonal number. It is also the second smallest triangular number (after 1 and 10) that can be written as the sum of two squares. Forty-five is the smallest positive number that can be expressed as the difference of two nonzero squares in more than two ways. Since the greatest prime factor of 45² + 1 = 2026 is 1,013, which is much more than 45 twice, 45 is a Størmer number. In decimal, 45 is a Kaprekar number and a Harshad number. Forty-five is a little Schröder number; the next such number is 197, which is the 45th prime number. Forty-five is conjectured from Ramsey number R(5, 5). Abstract algebra In the classification of finite simple groups, the Tits group is sometimes defined as a nonstrict group of Lie type or sporadic group, which yields a total of 45 classes of finite simple groups: two stem from cyclic and alternating groups, sixteen are families of groups of Lie type, twenty-six are strictly sporadic, and one is the exceptional case of the Tits group. Inside the largest sporadic group, the Friendly Giant, there exist at least 45 conjugacy classes of maximal subgroups, which includes the double cover of the second largest sporadic group. In science The atomic number of rhodium Astronomy Messier object
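Likewise for 45: the digit sum, aliquot sum, and Kaprekar and Harshad properties can all be confirmed in a few lines:

```python
n = 45

# Sum of all single-digit decimal digits, i.e. the 9th triangular number
digit_triangle = sum(range(1, 10))

# Aliquot sum: proper divisors 1, 3, 5, 9, 15 sum to 33
aliquot = sum(d for d in range(1, n) if n % d == 0)

# Kaprekar number in decimal: 45^2 = 2025 and 20 + 25 = 45
sq = str(n * n)
kaprekar_split = int(sq[:2]) + int(sq[2:])

# Harshad number in decimal: divisible by its digit sum 4 + 5 = 9
harshad = n % sum(int(c) for c in str(n)) == 0
```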
https://en.wikipedia.org/wiki/Diversitas
Diversitas (the Latin word for “diversity”) was an international research programme aimed at integrating biodiversity science for human well-being. In December 2014 its work was transferred to the programme called Future Earth, which was sponsored by the Science and Technology Alliance for Global Sustainability, comprising the International Council for Science (ICSU), the International Social Science Council (ISSC), the Belmont Forum of funding agencies, the United Nations Educational, Scientific, and Cultural Organization (UNESCO), the United Nations Environment Programme (UNEP), the United Nations University (UNU) and the World Meteorological Organization (WMO). Diversitas mission Biodiversity underpins the life-support system of our planet. Both natural and managed ecosystems deliver important ecological services such as the production of food and fibre, carbon storage, climate regulation and recreation opportunities. The programme was established to address the complex scientific questions posed by the loss of biodiversity and ecosystem services and to offer science-based solutions to this crisis. It was an international programme of biodiversity science with a dual mission: Promoting, facilitating and conducting integrative biodiversity science that links biological, ecological and social disciplines; and Providing a sound scientific basis for decision making to secure the planet’s variety of life, while contributing to human well-being and poverty eradication. The programme pursued its mission by: Fostering an integrated network of the world’s leading biodiversity scientists to address critical biodiversity issues; Producing new knowledge by catalysing exchanges between scientists across nations and disciplines; Synthesising new biodiversity knowledge to address the global science priorities; Ensuring an effective engagement of the biodiversity science community globally with policy and decision makers, especially with relevant internation
https://en.wikipedia.org/wiki/Text%20messaging
Text messaging, or texting, is the act of composing and sending electronic messages, typically consisting of alphabetic and numeric characters, between two or more users of mobile devices, desktops/laptops, or another type of compatible computer. Text messages may be sent over a cellular network, via satellite, or over an Internet connection. The term originally referred to messages sent using the Short Message Service (SMS). It has grown beyond alphanumeric text to include multimedia messages using the Multimedia Messaging Service (MMS) containing digital images, videos, and sound content, as well as ideograms known as emoji (happy faces, sad faces, and other icons), and instant messenger applications (the term is usually applied to messaging on mobile devices). Text messages are used for personal, family, business, and social purposes. Governmental and non-governmental organizations use text messaging for communication between colleagues. In the 2010s, the sending of short informal messages became an accepted part of many cultures, as happened earlier with emailing. This makes texting a quick and easy way to communicate with friends, family, and colleagues, including in contexts where a call would be impolite or inappropriate (e.g., calling very late at night or when one knows the other person is busy with family or work activities). Like e-mail and voicemail, and unlike calls (in which the caller hopes to speak directly with the recipient), texting does not require the caller and recipient to both be free at the same moment; this permits communication even between busy individuals. Text messages can also be used to interact with automated systems, for example, to order products or services from e-commerce websites or to participate in online contests. Advertisers and service providers use direct text marketing to send messages to mobile users about promotions, payment due dates, and other notifications instead of using postal mail, email, or voicemail. Terminol
https://en.wikipedia.org/wiki/Forward%20compatibility
Forward compatibility or upward compatibility is a design characteristic that allows a system to accept input intended for a later version of itself. The concept can be applied to entire systems, electrical interfaces, telecommunication signals, data communication protocols, file formats, and programming languages. A standard supports forward compatibility if a product that complies with earlier versions can "gracefully" process input designed for later versions of the standard, ignoring new parts which it does not understand. The objective of forward compatible technology is for old devices to recognise when data has been generated for new devices. Forward compatibility for the older system usually means backward compatibility for the new system, i.e. the ability to process data from the old system; the new system usually has full compatibility with the older one, by being able to both process and generate data in the format of the older system. Forward compatibility is not the same as extensibility. A forward compatible design can process at least some of the data from a future version of itself. An extensible design makes upgrading easy. An example of both design ideas can be found in web browsers. At any point in time, a current browser is forward compatible if it gracefully accepts a newer version of HTML; how easily the browser code can be upgraded to process the newer HTML determines how extensible the browser is. Examples Telecommunication standards The introduction of FM stereo transmission, or color television, allowed forward compatibility, since monophonic FM radio receivers and black-and-white TV sets could still receive a signal from a new transmitter. It also allowed backward compatibility since new receivers could receive monophonic or black-and-white signals generated by old transmitters. Video gaming The Game Boy is able to play certain games released for the Game Boy Color. These games utilize the same cartridge design as games for the orig
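The "gracefully process what you understand, ignore the rest" rule can be sketched in a few lines of Python. The field names and the version split below are invented purely for illustration; the point is only that a v1 reader does not fail when handed a v2 document.

```python
import json

# A v1 reader that knows only the fields "title" and "volume".
# Forward compatibility here means: given a document produced by a
# later version of the format, it processes the fields it understands
# and silently ignores the rest instead of rejecting the input.
KNOWN_FIELDS = {"title", "volume"}

def read_settings_v1(raw: str) -> dict:
    data = json.loads(raw)
    return {k: v for k, v in data.items() if k in KNOWN_FIELDS}

# A document produced by a hypothetical later version of the format,
# which added a "balance" field the v1 reader knows nothing about.
v2_document = '{"title": "News", "volume": 7, "balance": 0.5}'
print(read_settings_v1(v2_document))  # {'title': 'News', 'volume': 7}
```

Note that this only works because the format itself (here JSON) lets a reader skip unknown parts; a rigid fixed-layout format would not allow this kind of graceful degradation.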
https://en.wikipedia.org/wiki/-yllion
-yllion (pronounced ) is a proposal from Donald Knuth for the terminology and symbols of an alternate decimal superbase system. In it, he adapts the familiar English terms for large numbers to provide a systematic set of names for much larger numbers. In addition to providing an extended range, -yllion also dodges the long and short scale ambiguity of -illion. Knuth's digit grouping is exponential instead of linear; each division doubles the number of digits handled, whereas the familiar system only adds three or six more. His system is basically the same as one of the ancient and now-unused Chinese numeral systems, in which units stand for 104, 108, 1016, 1032, ..., 102n, and so on (with an exception that the -yllion proposal does not use a word for thousand which the original Chinese numeral system has). Today the corresponding Chinese characters are used for 104, 108, 1012, 1016, and so on. Details and examples In Knuth's -yllion proposal: 1 to 999 have their usual names. 1000 to 9999 are divided before the 2nd-last digit and named "foo hundred bar." (e.g. 1234 is "twelve hundred thirty-four"; 7623 is "seventy-six hundred twenty-three") 104 to 108 − 1 are divided before the 4th-last digit and named "foo myriad bar". Knuth also introduces at this level a grouping symbol (comma) for the numeral. So 382,1902 is "three hundred eighty-two myriad nineteen hundred two." 108 to 1016 − 1 are divided before the 8th-last digit and named "foo myllion bar", and a semicolon separates the digits. So 1,0002;0003,0004 is "one myriad two myllion, three myriad four." 1016 to 1032 − 1 are divided before the 16th-last digit and named "foo byllion bar", and a colon separates the digits. So 12:0003,0004;0506,7089 is "twelve byllion, three myriad four myllion, five hundred six myriad seventy hundred eighty-nine." etc. Each new number name is the square of the previous one — therefore, each new name covers twice as many digits. Knuth continues borrowing the traditional names changing
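Knuth's separator hierarchy (a comma every four digits, promoted to a semicolon at every eighth digit, a colon at every sixteenth, and so on) can be sketched in Python. The function name and the formulation are mine, not Knuth's; the separator at a boundary i digits from the right (i a multiple of 4) is chosen by how many times 2 divides i/4.

```python
def yllion_group(n: int) -> str:
    """Insert Knuth's -yllion separators into a decimal numeral.

    Boundaries fall every 4 digits from the right; the separator at a
    boundary i is picked by how many times 2 divides i/4:
    0 -> ',' (myriads), 1 -> ';' (myllions), 2 -> ':' (byllions).
    """
    seps = [",", ";", ":"]  # enough for numbers below 10**32
    out = []
    for pos, ch in enumerate(reversed(str(n))):
        if pos and pos % 4 == 0:
            q, level = pos // 4, 0
            while q % 2 == 0:       # count factors of two in i/4
                q //= 2
                level += 1
            out.append(seps[level])
        out.append(ch)
    return "".join(reversed(out))

print(yllion_group(3821902))             # 382,1902
print(yllion_group(120003000405067089))  # 12:0003,0004;0506,7089
```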
https://en.wikipedia.org/wiki/Ecosophy
Ecosophy or ecophilosophy (a portmanteau of ecological philosophy) is a philosophy of ecological harmony or equilibrium. The term was coined by the French post-structuralist philosopher and psychoanalyst Félix Guattari and the Norwegian father of deep ecology, Arne Næss. Félix Guattari Ecosophy also refers to a field of practice introduced by psychoanalyst, poststructuralist philosopher, and political activist Félix Guattari. In part Guattari's use of the term demarcates a necessity for the proponents of social liberation, whose struggles in the 20th century were dominated by the paradigm of social revolution, to embed their arguments within an ecological framework which understands the interconnections of social and environmental spheres. Guattari holds that traditional environmentalist perspectives obscure the complexity of the relationship between humans and their natural environment through their maintenance of the dualistic separation of human (cultural) and nonhuman (natural) systems; he envisions ecosophy as a new field with a monistic and pluralistic approach to such study. Ecology in the Guattarian sense, then, is a study of complex phenomena, including human subjectivity, the environment, and social relations, all of which are intimately interconnected. Despite this emphasis on interconnection, throughout his individual writings and more famous collaborations with Gilles Deleuze, Guattari has resisted calls for holism, preferring to emphasize heterogeneity and difference, synthesizing assemblages and multiplicities in order to trace rhizomatic structures rather than creating unified and holistic structures. Guattari's concept of the three interacting and interdependent ecologies of mind, society, and environment stems from the outline of the three ecologies presented in Steps to an Ecology of Mind, a collection of writings by cyberneticist Gregory Bateson. Næss's definition Næss defined ecosophy in the following way: While a professor at University o
https://en.wikipedia.org/wiki/Digital%20sum
There are a number of common mathematical meanings of the term digital sum: Values The digit sum - add the digits of the representation of a number in a given base. For example, considering 84001 in base 10 the digit sum would be 8 + 4 + 0 + 0 + 1 = 13. The digital root - repeatedly apply the digit sum operation to the representation of a number in a given base until the outcome is a single digit. For example, considering 84001 in base 10 the digital root would be 4 (8 + 4 + 0 + 0 + 1 = 13, 1 + 3 = 4). Operations Digit-wise addition without carrying, in which each place is summed independently and reduced modulo the base b, is also called the digital sum in base b. For example, 84001 + 56734 = (8 + 5 → 3)(4 + 6 → 0)(0 + 7 → 7)(0 + 3 → 3)(1 + 4 → 5) = 30735, since each place is taken modulo 10 and the carry is discarded.
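All three meanings are straightforward to implement; a minimal Python sketch (the function names are illustrative):

```python
def digit_sum(n: int, base: int = 10) -> int:
    """Sum of the digits of n written in the given base."""
    s = 0
    while n:
        s += n % base
        n //= base
    return s

def digital_root(n: int, base: int = 10) -> int:
    """Apply digit_sum repeatedly until a single digit remains."""
    while n >= base:
        n = digit_sum(n, base)
    return n

def carryless_sum(a: int, b: int, base: int = 10) -> int:
    """Digit-wise sum ignoring carries (each place reduced mod base)."""
    result, place = 0, 1
    while a or b:
        result += ((a + b) % base) * place   # last digits summed mod base
        a, b, place = a // base, b // base, place * base
    return result

print(digit_sum(84001))              # 13
print(digital_root(84001))           # 4
print(carryless_sum(84001, 56734))   # 30735
```

In base 2 the carryless sum is simply the bitwise XOR, which is why this operation also appears in combinatorial game theory as the Nim-sum.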
https://en.wikipedia.org/wiki/Ethylene%20oxide
Ethylene oxide is an organic compound with the formula . It is a cyclic ether and the simplest epoxide: a three-membered ring consisting of one oxygen atom and two carbon atoms. Ethylene oxide is a colorless and flammable gas with a faintly sweet odor. Because it is a strained ring, ethylene oxide easily participates in a number of addition reactions that result in ring-opening. Ethylene oxide is isomeric with acetaldehyde and with vinyl alcohol. Ethylene oxide is industrially produced by oxidation of ethylene in the presence of a silver catalyst. The reactivity that is responsible for many of ethylene oxide's hazards also makes it useful. Although too dangerous for direct household use and generally unfamiliar to consumers, ethylene oxide is used for making many consumer products as well as non-consumer chemicals and intermediates. These products include detergents, thickeners, solvents, plastics, and various organic chemicals such as ethylene glycol, ethanolamines, simple and complex glycols, polyglycol ethers, and other compounds. Although it is a vital raw material with diverse applications, including the manufacture of products like polysorbate 20 and polyethylene glycol (PEG) that are often more effective and less toxic than alternative materials, ethylene oxide itself is a very hazardous substance. At room temperature it is a very flammable, carcinogenic, mutagenic, irritating, and anaesthetic gas. Ethylene oxide is a surface disinfectant that is widely used in hospitals and the medical equipment industry to replace steam in the sterilization of heat-sensitive tools and equipment, such as disposable plastic syringes. It is so flammable and extremely explosive that it is used as a main component of thermobaric weapons; therefore, it is commonly handled and shipped as a refrigerated liquid to control its hazardous nature. History Ethylene oxide was first reported in 1859 by the French chemist Charles-Adolphe Wurtz, who prepared it by treating 2-chloroethano
https://en.wikipedia.org/wiki/Rayleigh%20quotient
In mathematics, the Rayleigh quotient () for a given complex Hermitian matrix $M$ and nonzero vector $x$ is defined as $R(M,x) = \frac{x^* M x}{x^* x}$. For real matrices and vectors, the condition of being Hermitian reduces to that of being symmetric, and the conjugate transpose $x^*$ to the usual transpose $x'$. Note that $R(M, cx) = R(M, x)$ for any non-zero scalar $c$. Recall that a Hermitian (or real symmetric) matrix is diagonalizable with only real eigenvalues. It can be shown that, for a given matrix, the Rayleigh quotient reaches its minimum value $\lambda_{\min}$ (the smallest eigenvalue of $M$) when $x$ is $v_{\min}$ (the corresponding eigenvector). Similarly, $R(M, v_{\max}) = \lambda_{\max}$ and $\max_x R(M, x) = \lambda_{\max}$. The Rayleigh quotient is used in the min-max theorem to get exact values of all eigenvalues. It is also used in eigenvalue algorithms (such as Rayleigh quotient iteration) to obtain an eigenvalue approximation from an eigenvector approximation. The range of the Rayleigh quotient (for any matrix, not necessarily Hermitian) is called a numerical range and contains its spectrum. When the matrix is Hermitian, the numerical radius is equal to the spectral norm. Still in functional analysis, $\lambda_{\max}$ is known as the spectral radius. In the context of $C^*$-algebras or algebraic quantum mechanics, the function that to $M$ associates the Rayleigh–Ritz quotient $R(M,x)$ for a fixed $x$, with $M$ varying through the algebra, would be referred to as a vector state of the algebra. In quantum mechanics, the Rayleigh quotient gives the expectation value of the observable corresponding to the operator $M$ for a system whose state is given by $x$. If we fix the complex matrix $M$, then the resulting Rayleigh quotient map (considered as a function of $x$) completely determines $M$ via the polarization identity; indeed, this remains true even if we allow $M$ to be non-Hermitian. However, if we restrict the field of scalars to the real numbers, then the Rayleigh quotient only determines the symmetric part of $M$. Bounds for Hermitian M As stated in the introduction, for any vector $x$, one has $\lambda_{\min} \le R(M,x) \le \lambda_{\max}$, where $\lambda_{\min}$ and $\lambda_{\max}$ are respectively the smallest and largest eigenvalues of $M$. Thi
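A small numerical sketch of the properties above, using NumPy (the function name is illustrative): scale invariance, the eigenvalue bounds, and equality at the eigenvectors.

```python
import numpy as np

def rayleigh_quotient(M, x):
    """R(M, x) = x* M x / (x* x) for a nonzero vector x."""
    x = np.asarray(x, dtype=complex)
    return (x.conj() @ M @ x) / (x.conj() @ x)

# A small real symmetric (hence Hermitian) matrix with eigenvalues 1 and 3.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(M)  # ascending order: [1.0, 3.0]

# At an eigenvector, the quotient equals the corresponding eigenvalue.
assert np.isclose(rayleigh_quotient(M, eigvecs[:, 0]), eigvals[0])
assert np.isclose(rayleigh_quotient(M, eigvecs[:, 1]), eigvals[1])

# At any nonzero vector, the quotient lies between lambda_min and
# lambda_max, and it is invariant under scaling of x.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    r = rayleigh_quotient(M, x).real
    assert eigvals[0] - 1e-12 <= r <= eigvals[1] + 1e-12
    assert np.isclose(r, rayleigh_quotient(M, 5 * x).real)
```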
https://en.wikipedia.org/wiki/Sine-Gordon%20equation
The sine-Gordon equation is a nonlinear hyperbolic partial differential equation for a function $\varphi$ dependent on two variables typically denoted $x$ and $t$, involving the wave operator and the sine of $\varphi$. It was originally introduced in the course of study of surfaces of constant negative curvature as the Gauss–Codazzi equation for surfaces of constant Gaussian curvature −1 in 3-dimensional space. The equation was rediscovered by Frenkel and Kontorova in their study of crystal dislocations known as the Frenkel–Kontorova model. This equation attracted a lot of attention in the 1970s due to the presence of soliton solutions, and is an example of an integrable PDE. Among well-known integrable PDEs, the sine-Gordon equation is the only relativistic system due to its Lorentz invariance. Origin of the equation in differential geometry There are two equivalent forms of the sine-Gordon equation. In the (real) space-time coordinates, denoted $(x, t)$, the equation reads $\varphi_{tt} - \varphi_{xx} + \sin\varphi = 0$, where partial derivatives are denoted by subscripts. Passing to the light-cone coordinates $(u, v)$, akin to asymptotic coordinates, where $u = \tfrac{x+t}{2}$ and $v = \tfrac{x-t}{2}$, the equation takes the form $\varphi_{uv} = \sin\varphi$. This is the original form of the sine-Gordon equation, as it was considered in the 19th century in the course of investigation of surfaces of constant Gaussian curvature K = −1, also called pseudospherical surfaces. There is a distinguished coordinate system for such a surface in which the coordinate mesh u = constant, v = constant is given by the asymptotic lines parameterized with respect to the arc length. The first fundamental form of the surface in these coordinates has a special form $ds^2 = du^2 + 2\cos\varphi \, du \, dv + dv^2$, where $\varphi$ expresses the angle between the asymptotic lines, and for the second fundamental form, $L = N = 0$. Then the Gauss–Codazzi equation expressing a compatibility condition between the first and second fundamental forms results in the sine-Gordon equation. This analysis shows that any pseudospherical surface gives rise to a solution of the sine-Gordon equation, although with some
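One classic soliton solution is the static kink $\varphi(x) = 4\arctan(e^{x})$, for which the time-independent equation reduces to $\varphi_{xx} = \sin\varphi$. A short NumPy check (my own sketch, not from the article) confirms this numerically with central finite differences:

```python
import numpy as np

# Static kink: phi(x) = 4*arctan(exp(x)). For a time-independent
# solution, phi_tt - phi_xx + sin(phi) = 0 reduces to phi_xx = sin(phi).
x = np.linspace(-5, 5, 2001)
h = x[1] - x[0]
phi = 4 * np.arctan(np.exp(x))

# Second derivative via the central difference stencil (O(h^2) accurate).
phi_xx = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
residual = np.max(np.abs(phi_xx - np.sin(phi[1:-1])))
assert residual < 1e-4  # residual is of order h^2, i.e. ~1e-5 here
```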
https://en.wikipedia.org/wiki/Detonation
Detonation () is a type of combustion involving a supersonic exothermic front accelerating through a medium that eventually drives a shock front propagating directly in front of it. Detonations propagate supersonically through shock waves with speeds on the order of 1 km/s and differ from deflagrations, which have subsonic flame speeds on the order of 1 m/s. In a detonation the flame front travels through the fuel-oxidizer mixture faster than sound, whereas in a deflagration it travels slower than sound; detonations are correspondingly far more destructive than deflagrations. Detonations occur in both conventional solid and liquid explosives, as well as in reactive gases. TNT, dynamite, and C4 are examples of high explosives that detonate. The velocity of detonation in solid and liquid explosives is much higher than that in gaseous ones, which allows the wave system to be observed with greater detail (higher resolution). A very wide variety of fuels may occur as gases (e.g. hydrogen), droplet fogs, or dust suspensions. In addition to dioxygen, oxidants can include halogen compounds, ozone, hydrogen peroxide, and oxides of nitrogen. Gaseous detonations are often associated with a mixture of fuel and oxidant in a composition somewhat below conventional flammability ratios. They happen most often in confined systems, but they sometimes occur in large vapor clouds. Other materials, such as acetylene, ozone, and hydrogen peroxide, are detonable in the absence of an oxidant (or reductant); in these cases the energy released results from the rearrangement of the molecular constituents of the material. Detonation was discovered in 1881 by two pairs of French scientists: Marcellin Berthelot and Paul Marie Eugène Vieille, and Ernest-François Mallard and Henri Louis Le Chatelier. The mathematical predictions of propagation
https://en.wikipedia.org/wiki/Glossary%20of%20ecology
This glossary of ecology is a list of definitions of terms and concepts in ecology and related fields. For more specific definitions from other glossaries related to ecology, see Glossary of biology, Glossary of evolutionary biology, and Glossary of environmental science. See also Outline of ecology History of ecology
https://en.wikipedia.org/wiki/TACACS
Terminal Access Controller Access-Control System (TACACS, ) refers to a family of related protocols handling remote authentication and related services for network access control through a centralized server. The original TACACS protocol, which dates back to 1984, was used for communicating with an authentication server, common in older UNIX networks including but not limited to the ARPANET, MILNET and BBNNET. It spawned related protocols: Extended TACACS (XTACACS) is a proprietary extension to TACACS introduced by Cisco Systems in 1990 without backwards compatibility to the original protocol. TACACS and XTACACS both allow a remote access server to communicate with an authentication server in order to determine if the user has access to the network. TACACS Plus (TACACS+) is a protocol developed by Cisco and released as an open standard beginning in 1993. Although derived from TACACS, TACACS+ is a separate protocol that handles authentication, authorization, and accounting (AAA) services. TACACS+ has largely replaced its predecessors. History TACACS was originally developed in 1984 by BBN, later known as BBN Technologies, for administration of ARPANET and MILNET, which ran unclassified network traffic for DARPA at the time and would later evolve into the U.S. Department of Defense's NIPRNet. Originally designed as a means to automate authentication – allowing someone who was already logged into one host in the network to connect to another on the same network without needing to re-authenticate – it was first formally described by BBN's Brian Anderson in "TAC Access Control System Protocols" (BBN Tech Memo CC-0045), published in December 1984 as IETF RFC 927 with a minor change to avoid double TELNET logins. Cisco Systems began supporting TACACS in its networking products in the late 1980s, eventually adding several extensions to the protocol. In 1990, Cisco's extensions on top of TACACS became a proprietary protocol called Extended TACACS (XTACACS). Although TACACS and XTACACS are
https://en.wikipedia.org/wiki/Disdrometer
A disdrometer is an instrument used to measure the drop size distribution and velocity of falling hydrometeors. Some disdrometers can distinguish between rain, graupel, and hail. The uses for disdrometers are numerous. They can be used for traffic control, scientific examination, airport observation systems, and hydrology. The latest disdrometers employ microwave or laser technologies. 2D video disdrometers can be used to analyze individual raindrops and snowflakes. See also Rain gauge Snow gauge References Measuring instruments Meteorological instrumentation and equipment Hydrology instrumentation
https://en.wikipedia.org/wiki/Two%27s%20complement
Two's complement is the most common method of representing signed (positive, negative, and zero) integers on computers, and more generally, fixed point binary values. Two's complement uses the binary digit with the greatest place value as the sign to indicate whether the binary number is positive or negative. When the most significant bit is 1, the number is negative; when the most significant bit is 0, the number is positive. Unlike the one's complement scheme, the two's complement scheme has only one representation for zero. Furthermore, the same arithmetic implementations can be used on signed as well as unsigned integers; they differ only in the integer overflow situations. Procedure Two's complement is achieved by: Step 1: starting with the equivalent positive number. Step 2: inverting (or flipping) all bits, changing every 0 to 1, and every 1 to 0. Step 3: adding 1 to the entire inverted number, ignoring any overflow; accounting for overflow would produce the wrong value for the result. For example, to calculate the decimal number −6 in binary: Step 1: +6 in decimal is 0110 in binary; the leftmost bit (here 0) is the sign bit (without a sign bit, 110 alone, read as a 3-bit two's-complement number, would be −2). Step 2: flip all bits in 0110, giving 1001. Step 3: add the place value 1 to the flipped number 1001, giving 1010. To verify that 1010 indeed has a value of −6, add the place values together, but subtract the sign value from the final calculation. Because the most significant value is the sign value, it must be subtracted to produce the correct result: 1010 = −(1×2^3) + (0×2^2) + (1×2^1) + (0×2^0) = −8 + 0 + 2 + 0 = −6. Theory Two's complement is an example of a radix complement. The "two" in the name refers to the term which, expanded fully in an N-bit system, is actually 2^N (the only case in which exactly "two" would be produced by this term is N = 1, i.e. a 1-bit system, but such a system has no capacity for both a sign and a zero),
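The three-step procedure and the sign-bit interpretation can both be expressed compactly. A Python sketch (function names are illustrative); masking with `(1 << bits) - 1` yields the same bit pattern as invert-and-add-one:

```python
def twos_complement(value: int, bits: int) -> int:
    """Bit pattern (as an unsigned int) encoding `value` in `bits` bits."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise OverflowError(f"{value} does not fit in {bits} bits")
    # Masking to `bits` bits is equivalent to invert-all-bits-then-add-one
    # for negative values, and is a no-op for non-negative values.
    return value & ((1 << bits) - 1)

def from_twos_complement(pattern: int, bits: int) -> int:
    """Interpret a `bits`-wide bit pattern as a signed integer."""
    if pattern & (1 << (bits - 1)):   # sign bit set: the MSB contributes
        return pattern - (1 << bits)  # its place value negatively
    return pattern

print(format(twos_complement(-6, 4), "04b"))  # 1010
print(from_twos_complement(0b1010, 4))        # -6
```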
https://en.wikipedia.org/wiki/Pioneer%20species
Pioneer species are hardy species that are the first to colonize barren environments or previously biodiverse steady-state ecosystems that have been disrupted, such as by wildfire. Pioneer flora Some lichens grow on rocks without soil, and so may be among the first life forms to appear, breaking down the rocks into soil for plants. Since some uninhabited land may have thin, poor quality soils with few nutrients, pioneer species are often hardy plants with adaptations such as long roots, root nodes containing nitrogen-fixing bacteria, and leaves that employ transpiration. Pioneer species are usually photosynthetic, since in the early stages of succession light is often the only energy source available, with no other organisms present to feed on. Pioneer plants also tend to be wind-pollinated rather than insect-pollinated, as insects are unlikely to be present in the usually barren conditions in which pioneer species grow. Many pioneer species also tend to reproduce asexually, since in extreme or barren conditions asexual reproduction increases reproductive success without the energy cost of sexual reproduction. Pioneer species will eventually die, create plant litter, and break down as "leaf mold" after some time, making new soil for secondary succession (see below), and releasing nutrients for small fish and aquatic plants in adjacent bodies of water. Some examples of pioneering plant species: Barren sand - lyme grass (Leymus arenarius), sea couch grass (Agropyron pungens), Marram grass (Ammophila breviligulata) Salt water - green algae, marine eel grass (Zostera spp.), pickleweed (Salicornia virginica), and cordgrasses (the hybrid Spartina × townsendii and Spartina anglica). Clear water - algae, mosses, freshwater eel grass (Vallisneria americana). Solidified lava flows (volcanic rock) - in Hawaii: swordfern (Polysti
https://en.wikipedia.org/wiki/Regional%20lockout
A regional lockout (or region coding) is a class of digital rights management preventing the use of a certain product or service, such as multimedia or a hardware device, outside a certain region or territory. A regional lockout may be enforced through physical means, through technological means such as detecting the user's IP address or using an identifying code, or through unintentional means introduced by devices only supporting certain regional technologies (such as video formats, i.e., NTSC and PAL). A regional lockout may be enforced for several reasons, such as to stagger the release of a certain product, to avoid losing sales to the product's foreign publisher, to maximize the product's impact in a certain region through localization, to hinder grey market imports by enforcing price discrimination, or to prevent users from accessing certain content in their territory because of legal reasons (either due to censorship laws, or because a distributor does not have the rights to certain intellectual property outside their specified region). Multimedia Disc regions The DVD, Blu-ray Disc, and UMD media formats all support the use of region coding; DVDs use eight region codes (Region 7 is reserved for future use; Region 8 is used for "international venues", such as airplanes and cruise ships), and Blu-ray Discs use three region codes corresponding to different areas of the world. Most Blu-rays, however, are region-free. Ultra HD Blu-ray discs are also region-free. On computers, the DVD region can usually be changed five times. Windows uses three region counters: its own one, the one of the DVD drive, and the one of the player software (occasionally, the player software has no region counter of its own, but uses that of Windows). After the fifth region change, the system is locked to that region. In modern DVD drives (type RPC-2), the region lock is saved to its hardware, so that even reinstalling Windows or using the drive with a different computer will not u
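The idea of region coding can be illustrated with a simple bitmask. This is a sketch of the concept only, not the literal on-disc DVD or Blu-ray encoding; the function names are invented for illustration.

```python
# Each of the eight DVD regions gets one bit. A disc carries the set of
# regions it may play in; a player checks its own region against that
# set before playing.
def region_mask(*regions: int) -> int:
    """Bitmask for a set of DVD regions numbered 1-8."""
    mask = 0
    for r in regions:
        mask |= 1 << (r - 1)
    return mask

def can_play(disc_mask: int, player_region: int) -> bool:
    return bool(disc_mask & (1 << (player_region - 1)))

disc = region_mask(1, 4)         # a disc authored for regions 1 and 4
print(can_play(disc, 1))         # True
print(can_play(disc, 2))         # False

region_free = region_mask(*range(1, 9))  # all bits set: plays anywhere
print(can_play(region_free, 2))  # True
```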
https://en.wikipedia.org/wiki/Avaya
Avaya LLC, often shortened to Avaya (), is an American multinational technology company headquartered in Morristown, New Jersey, that provides cloud communications and workstream collaboration services. The company's platform includes unified communications and contact center services. In 2019, the company provided services to 220,000 customer locations in 190 countries. History In 1995, Lucent Technologies was spun off from AT&T, and Lucent subsequently spun off units of its own in an attempt to restructure its struggling operations. Avaya was then spun off from Lucent as its own company in 2000 (Lucent merged with Alcatel SA in 2006, becoming Alcatel-Lucent, which was purchased in turn by Nokia in 2016). It remained a public company from 2000 to 2007. In October 2007, Avaya was acquired by two private-equity firms, TPG Capital and Silver Lake Partners, for $8.2 billion. On January 19, 2017, Avaya filed for Chapter 11 bankruptcy. On December 15, 2017, it once again became a public company, trading under the NYSE stock ticker AVYA. On February 14, 2023, Avaya once again entered into financial restructuring via the Chapter 11 bankruptcy protection process. On May 1, 2023, Avaya completed its financial restructuring, emerging from bankruptcy as a private company. Management
President & CEO - Alan Masarek
Interim Chief Financial Officer - Becky Roof
Executive Vice President - Shefali Shah
Senior Vice President, Engineering - Todd Zerbe
Acquisitions and partnerships Since 2001, Avaya has sold and acquired several companies. Through Nortel's bankruptcy proceedings, assets related to their Enterprise Voice and Data business units were auctioned. Avaya placed a $900 million bid, and was announced as the winner of the assets on September 14, 2009. In 1985, Performance Engineering Corporation (later PEC Solutions) was formed to offer technology services to government customers. On June 6, 2005, Nortel acquired PEC Solutions to form Nortel PEC Solutions. On January 18
https://en.wikipedia.org/wiki/Boilerplate%20text
Boilerplate text, or simply boilerplate, is any written text (copy) that can be reused in new contexts or applications without significant changes to the original. The term is used about statements, contracts, and computer code, and is often used in the media pejoratively to refer to cliché or unoriginal writing. Etymology "Boiler plate" originally referred to the rolled steel used to make boilers to heat water. Metal printing plates (type metal) used in hot metal typesetting of prepared text such as advertisements or syndicated columns were distributed to small, local newspapers, and became known as 'boilerplates' by analogy. One large supplier to newspapers of this kind of boilerplate was the Western Newspaper Union, which supplied "ready-to-print stories [which] contained national or international news" to papers with smaller geographic footprints, which could include advertisements pre-printed next to the conventional content. Boilerplate language In contract law, the term "boilerplate language" or "boilerplate clause" describes the parts of a contract that are considered standard. A standard form contract or boilerplate contract is a contract between two parties, where the terms and conditions of the contract are set by one of the parties, and the other party has little or no ability to negotiate more favorable terms and is thus placed in a "take it or leave it" position. Boilerplate language may also exist in pre-created form letters. The person sending the form letter then usually only needs to add his or her name at the end of the pre-written greeting and body. Boilerplate code In computer programming, boilerplate is the sections of code that have to be included in many places with little or no alteration. Such boilerplate code is particularly salient when the programmer must include a lot of code for minimal functionality. A related phenomenon, bookkeeping code, is code that is not part of the business logic, but is interleaved with it to keep data
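In Python, for example, hand-written `__init__`, `__repr__`, and `__eq__` methods are classic boilerplate code, and the standard-library `dataclasses` module exists precisely to generate them. The class names below are invented for illustration:

```python
from dataclasses import dataclass

# Hand-written boilerplate: these three methods say nothing about the
# problem domain; they exist only to satisfy the language.
class PointVerbose:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return f"PointVerbose(x={self.x}, y={self.y})"

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

# The @dataclass decorator generates equivalent __init__, __repr__,
# and __eq__ methods, eliminating the boilerplate.
@dataclass
class Point:
    x: int
    y: int

print(Point(1, 2) == Point(1, 2))  # True
print(Point(1, 2))                 # Point(x=1, y=2)
```

This illustrates the general pattern mentioned above: when boilerplate becomes salient enough, languages and libraries grow metaprogramming mechanisms to generate it.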
https://en.wikipedia.org/wiki/Datapoint%202200
The Datapoint 2200 was a mass-produced programmable terminal usable as a computer, designed by Computer Terminal Corporation (CTC) founders Phil Ray and Gus Roche and announced by CTC in June 1970 (with units shipping in 1971). It was initially presented by CTC as a versatile and cost-efficient terminal for connecting to a wide variety of mainframes by loading various terminal emulations from tape rather than being hardwired as most contemporary terminals, including their earlier Datapoint 3300. However, Dave Gust, a CTC salesman, realized that the 2200 could meet Pillsbury Foods's need for a small computer in the field, after which the 2200 was marketed as a stand-alone computer. Its industrial designer John "Jack" Frassanito has later claimed that Ray and Roche always intended the Datapoint 2200 to be a full-blown personal computer, but that they chose to keep quiet about this so as not to concern investors and others. Also significant is the fact that the terminal's multi-chip CPU (processor)'s instruction set became the basis of the Intel 8008 instruction set, which inspired the Intel 8080 instruction set and the x86 instruction set used in the processors for the original IBM PC and its descendants. Technical description The Datapoint 2200 had a built-in full-travel keyboard, a built-in 12-line, 80-column green screen monitor, and two 47 character-per-inch cassette tape drives each with 130 KB capacity. Its size, , and shape—a box with protruding keyboard—approximated that of an IBM Selectric typewriter. Initially, a Diablo 2.5 MB 2315-type removable cartridge hard disk drive was available, along with modems, several types of serial interface, parallel interface, printers and a punched card reader. Later, an 8-inch floppy disk drive was also made available, along with other, larger hard disk drives. An industry-compatible 7/9-track (user selectable) magnetic tape drive was available by 1975. In late 1977, Datapoint introduced ARCNET local area networking. The
https://en.wikipedia.org/wiki/C-reactive%20protein
C-reactive protein (CRP) is an annular (ring-shaped) pentameric protein found in blood plasma, whose circulating concentrations rise in response to inflammation. It is an acute-phase protein of hepatic origin that increases following interleukin-6 secretion by macrophages and T cells. Its physiological role is to bind to lysophosphatidylcholine expressed on the surface of dead or dying cells (and some types of bacteria) in order to activate the complement system via C1q. CRP is synthesized by the liver in response to factors released by macrophages and fat cells (adipocytes). It is a member of the pentraxin family of proteins. It is not related to C-peptide (insulin) or protein C (blood coagulation). C-reactive protein was the first pattern recognition receptor (PRR) to be identified. History and etymology CRP was discovered by Tillett and Francis in 1930; it was initially thought that it might be a pathogenic secretion, since it was elevated in a variety of illnesses, including cancer. The later discovery that it is synthesized in the liver demonstrated that it is a native protein. Initially, CRP was measured using the quellung reaction, which gave a positive or a negative result. More precise methods nowadays use dynamic light scattering after reaction with CRP-specific antibodies. CRP was so named because it was first identified as a substance in the serum of patients with acute inflammation that reacted with the cell wall polysaccharide (C-polysaccharide) of pneumococcus. Genetics and structure It is a member of the small pentraxins family (also known as short pentraxins). The polypeptide encoded by this gene has 224 amino acids. The full-length polypeptide is not present in the body in significant quantities due to the signal peptide, which is removed by signal peptidase before translation is completed. The complete protein, composed of five monomers, has a total mass of approximately 120,000 Da. In serum, it assembles into a stable pentameric structure with a
https://en.wikipedia.org/wiki/ETA%20Systems
ETA Systems was a supercomputer company spun off from Control Data Corporation (CDC) in the early 1980s in order to regain a footing in the supercomputer business. They successfully delivered the ETA-10, but lost money continually while doing so. CDC management eventually gave up and folded the company. Historical development Seymour Cray left CDC in the early 1970s when they refused to continue funding of his CDC 8600 project. Instead they continued with the CDC STAR-100 while Cray went off to build the Cray-1. Cray's machine was much faster than the STAR, and soon CDC found itself pushed out of the supercomputing market. William Norris was convinced the only way to regain a foothold would be to spin off a division that would be free from management prodding. In order to regain some of the small-team flexibility that seemed essential to progress in the field, ETA was created in 1983 with the mandate to build a 10 GFLOPS machine by 1986. In April 1989 CDC decided to shut down the ETA operation and keep a bare-bones continuation effort alive at CDC. At shutdown, 7 liquid-cooled and 27 air-cooled machines had been sold. At this point ETA had the best price/performance ratio of any supercomputer on the market, and its initial software problems appeared to be finally sorted out. Nevertheless, shortly thereafter CDC exited the supercomputer market entirely, giving away remaining ETA machines free to high schools through the SuperQuest computer science competition. Products ETA had only one product, the ETA-10. It was a derivative of the CDC Cyber 205 supercomputer, and deliberately kept compatibility with it. Like the Cyber 205, the ETA-10 did not use vector registers as in the Cray machines, but instead used pipelined memory operations to a high-bandwidth main memory. The basic layout was a shared-memory multiprocessor with up to 8 CPUs, each capable of 4 double-precision or 8 single-precision operations per clock cycle, and up to 18 I/O processors. The main reason
https://en.wikipedia.org/wiki/ETA10
The ETA10 is a vector supercomputer designed, manufactured, and marketed by ETA Systems, a spin-off division of Control Data Corporation (CDC). The ETA10 was an evolution of the CDC Cyber 205, which can trace its origins back to the CDC STAR-100, one of the first vector supercomputers to be developed. CDC announced it was creating ETA Systems, and a successor to the Cyber 205, on 18 April 1983 at the Frontiers of Supercomputing conference, held at the Los Alamos National Laboratory. It was then referred to tentatively as the Cyber 2XX, and later as the GF-10, before it was named the ETA10. Prototypes were operational in mid-1986, and the first delivery was made in December 1986. The supercomputer was formally announced in April 1987 at an event held at its first customer installation, the Florida State University, Tallahassee's Scientific Computational Research Institute. On 17 April 1989, CDC abruptly closed ETA Systems due to ongoing financial losses, and discontinued production of the ETA10. Many of its users, such as Florida State University, negotiated Cray hardware in exchange. Historical development CDC had a strong history of creating powerful supercomputers, starting with the CDC 6600. One of the most famous computer architects to emerge from CDC was Seymour Cray. After a disagreement with CDC management regarding the development of the CDC 8600, he went on to form his own supercomputer company, Cray Research. Meanwhile, work continued at CDC in developing a high-end supercomputer, the CDC STAR-100—led by another famous architect, Neil Lincoln. Cray Research's Cray-1 vector supercomputer was successful, beating CDC's STAR-100. CDC responded with derivatives of the STAR, the Cyber 203 and 205. The Cyber 205 was moderately successful against the Cray-1's successor, the Cray X-MP. It became apparent to CDC's top management that it needed to decrease the development time for the next generation computer—thus a new approach was considered for the follow-on to
https://en.wikipedia.org/wiki/Overhead%20projector
An overhead projector (often abbreviated to OHP), like a film or slide projector, uses light to project an enlarged image on a screen, allowing the view of a small document or picture to be shared with a large audience. In the overhead projector, the source of the image is a page-sized sheet of transparent plastic film (also known as "foils" or "transparencies") with the image to be projected either printed or hand-written/drawn. These are placed on the glass platen of the projector, which has a light source below it and a projecting mirror and lens assembly above it (hence, "overhead"). They were widely used in education and business before the advent of video projectors. Optical system An overhead projector works on the same principle as a slide projector, in which a focusing lens projects light from an illuminated slide onto a projection screen where a real image is formed. However, some differences are necessitated by the much larger size of the transparencies used (generally the size of a printed page), and the requirement that the transparency be placed face up (and readable to the presenter). For the latter purpose, the projector includes a mirror just before or after the focusing lens to fold the optical system toward the horizontal. That mirror also accomplishes a reversal of the image so that the image projected onto the screen corresponds to that of the slide as seen by the presenter looking down at it, rather than a mirror image thereof. Therefore, the transparency is placed face up (toward the mirror and focusing lens), in contrast with a 35mm slide projector or film projector (which lack such a mirror), where the slide's image is non-reversed on the side opposite the focusing lens. A related invention for enlarging transparent images is the solar camera. The opaque projector, or episcope, is a device which displays opaque materials by shining a bright lamp onto the object from above. The episcope must be distinguished from the diascope, which is
https://en.wikipedia.org/wiki/Magnetic%20declination
Magnetic declination, or magnetic variation, is the angle on the horizontal plane between magnetic north (the direction the north end of a magnetized compass needle points, corresponding to the direction of the Earth's magnetic field lines) and true north (the direction along a meridian towards the geographic North Pole). This angle varies depending on position on the Earth's surface and changes over time. Somewhat more formally, Bowditch defines variation as "the angle between the magnetic and geographic meridians at any place, expressed in degrees and minutes east or west to indicate the direction of magnetic north from true north. The angle between magnetic and grid meridians is called grid magnetic angle, grid variation, or grivation." By convention, declination is positive when magnetic north is east of true north, and negative when it is to the west. Isogonic lines are lines on the Earth's surface along which the declination has the same constant value, and lines along which the declination is zero are called agonic lines. The lowercase Greek letter δ (delta) is frequently used as the symbol for magnetic declination. The term magnetic deviation is sometimes used loosely to mean the same as magnetic declination, but more correctly it refers to the error in a compass reading induced by nearby metallic objects, such as iron on board a ship or aircraft. Magnetic declination should not be confused with magnetic inclination, also known as magnetic dip, which is the angle that the Earth's magnetic field lines make with the downward side of the horizontal plane. Declination change over time and location Magnetic declination varies both from place to place and with the passage of time. As a traveller cruises the east coast of the United States, for example, the declination varies from 16 degrees west in Maine, to 6 in Florida, to 0 degrees in Louisiana, to 4 degrees east in Texas. The declination at London, UK was one degree west (2014), reducing to zero as of e
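The sign convention described above (declination positive when magnetic north is east of true north) can be applied mechanically when converting compass bearings. The following is a minimal sketch, not from the source; the function names are illustrative:

```python
def magnetic_to_true(magnetic_bearing_deg, declination_deg):
    """Convert a magnetic compass bearing to a true bearing.

    declination_deg is positive when magnetic north lies east of
    true north, negative when it lies to the west.
    """
    return (magnetic_bearing_deg + declination_deg) % 360

def true_to_magnetic(true_bearing_deg, declination_deg):
    """Inverse conversion: true bearing back to a compass bearing."""
    return (true_bearing_deg - declination_deg) % 360

# With 16 degrees west declination (-16), as quoted for Maine, a compass
# reading of 100 degrees corresponds to a true bearing of 84 degrees.
print(magnetic_to_true(100, -16))  # 84
```

The modulo keeps bearings in the conventional 0–360 degree range when a correction crosses north.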
https://en.wikipedia.org/wiki/Pipex
Pipex () was the United Kingdom's first commercial Internet service provider (ISP). It was formed in 1990 and helped to develop the ISP market in the UK. In 1992 it began operating a 64k transatlantic leased line and built a connection to the UK government's JANET network. One of its first customers was Demon Internet, which popularised dial-up modem-based internet access in the UK. It was also one of the key players in the development of the London Internet Exchange through a meeting with BT in 1994. The company went through a number of mergers and acquisitions and by 2007 had dropped to be the sixth largest ISP in the UK. The Pipex name was used by a number of companies within the group, which were gradually renamed following the sale of its home broadband business to Tiscali UK in 2007. In 2009, the former Pipex Wireless business, rebranded as Freedom4, bought the former Pipex Business, known as Vialtus. Freedom4 also purchased Daisy Group through a reverse takeover, and the three companies were brought together as Daisy. History Formation The company was formed as the first commercial ISP in the UK by Unipalm in 1990 as The Public I.P. Exchange Ltd (PIPEX), founded by Peter Dawe. In mid-1992, it began operating a 64k transatlantic leased line to UUNET and another to JANET. One of its first customers was Demon Internet, shortly followed by the BBC. In November 1994, Keith Mitchell, then chief technical officer of PIPEX, initiated a meeting with BT to discuss the creation of a London-based Internet exchange. Pipex donated a Cisco Catalyst 1200 network switch which formed the basis of the London Internet Exchange (LINX). Unipalm Pipex was sold to UUNET in November 1995 for £150 million and became UUNet/Pipex. The brand became known as WorldCom Pipex after UUNET merged with MFS, which was later acquired by WorldCom; WorldCom then merged with MCI to form MCI WorldCom, was later renamed back to MCI, and was eventually taken over by Verizon Communications. Pipex retained contrac
https://en.wikipedia.org/wiki/Avionics%20software
Avionics software is embedded software with legally mandated safety and reliability concerns used in avionics. The main difference between avionic software and conventional embedded software is that the development process is required by law and is optimized for safety. It is claimed that the process described below is only slightly slower and more costly (perhaps 15 percent) than the normal ad hoc processes used for commercial software. Since most software fails because of mistakes, eliminating the mistakes at the earliest possible step is also a relatively inexpensive and reliable way to produce software. In some projects however, mistakes in the specifications may not be detected until deployment. At that point, they can be very expensive to fix. The basic idea of any software development model is that each step of the design process has outputs called "deliverables." If the deliverables are tested for correctness and fixed, then normal human mistakes can not easily grow into dangerous or expensive problems. Most manufacturers follow the waterfall model to coordinate the design product, but almost all explicitly permit earlier work to be revised. The result is more often closer to a spiral model. For an overview of embedded software see embedded system and software development models. The rest of this article assumes familiarity with that information, and discusses differences between commercial embedded systems and commercial development models. General overview Since most avionics manufacturers see software as a way to add value without adding weight, the importance of embedded software in avionic systems is increasing. Most modern commercial aircraft with auto-pilots use flight computers and so called flight management systems (FMS) that can fly the aircraft without the pilot's active intervention during certain phases of flight. Also under development or in production are unmanned vehicles: missiles and drones which can take off, cruise and land wi
https://en.wikipedia.org/wiki/Quartic%20function
In algebra, a quartic function is a function of the form where a is nonzero, which is defined by a polynomial of degree four, called a quartic polynomial. A quartic equation, or equation of the fourth degree, is an equation that equates a quartic polynomial to zero, of the form where . The derivative of a quartic function is a cubic function. Sometimes the term biquadratic is used instead of quartic, but, usually, biquadratic function refers to a quadratic function of a square (or, equivalently, to the function defined by a quartic polynomial without terms of odd degree), having the form Since a quartic function is defined by a polynomial of even degree, it has the same infinite limit when the argument goes to positive or negative infinity. If a is positive, then the function increases to positive infinity at both ends; and thus the function has a global minimum. Likewise, if a is negative, it decreases to negative infinity and has a global maximum. In both cases it may or may not have another local maximum and another local minimum. The degree four (quartic case) is the highest degree such that every polynomial equation can be solved by radicals, according to the Abel–Ruffini theorem. History Lodovico Ferrari is credited with the discovery of the solution to the quartic in 1540, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna. The Soviet historian I. Y. Depman (ru) claimed that even earlier, in 1486, Spanish mathematician Valmes was burned at the stake for claiming to have solved the quartic equation. Inquisitor General Tomás de Torquemada allegedly told Valmes that it was the will of God that such a solution be inaccessible to human understanding. However, Petr Beckmann, who popularized this story of Depman in the West
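The biquadratic case mentioned above, a quartic with no odd-degree terms, reduces to a quadratic via the substitution y = x². A minimal sketch follows; the function name and example polynomial are illustrative, not from the source:

```python
import cmath

def solve_biquadratic(a, b, c):
    """Roots of a*x**4 + b*x**2 + c = 0 via the substitution y = x**2,
    which reduces the quartic to the quadratic a*y**2 + b*y + c = 0."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    roots = []
    for y in ((-b + disc) / (2 * a), (-b - disc) / (2 * a)):
        s = cmath.sqrt(y)  # each quadratic root y yields x = +/- sqrt(y)
        roots.extend([s, -s])
    return roots

# x**4 - 5*x**2 + 4 = (x**2 - 1)(x**2 - 4): roots are +/-1 and +/-2
print(sorted(r.real for r in solve_biquadratic(1, -5, 4)))  # [-2.0, -1.0, 1.0, 2.0]
```

Using cmath keeps the sketch valid when the intermediate quadratic has complex roots.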
https://en.wikipedia.org/wiki/W.%20W.%20Rouse%20Ball
Walter William Rouse Ball (14 August 1850 – 4 April 1925), known as W. W. Rouse Ball, was a British mathematician, lawyer, and fellow at Trinity College, Cambridge, from 1878 to 1905. He was also a keen amateur magician, and the founding president of the Cambridge Pentacle Club in 1919, one of the world's oldest magic societies. Life Born 14 August 1850 in Hampstead, London, Ball was the son and heir of Walter Frederick Ball, of 3, St John's Park Villas, South Hampstead, London. Educated at University College School, he entered Trinity College, Cambridge, in 1870, became a scholar and first Smith's Prizeman, and gained his BA in 1874 as second Wrangler. He became a Fellow of Trinity in 1875, and remained one for the rest of his life. He died on 4 April 1925 in Elmside, Cambridge, and is buried at the Parish of the Ascension Burial Ground in Cambridge. He is commemorated in the naming of the small pavilion, now used as changing rooms and toilets, on Jesus Green in Cambridge. Books A History of the Study of Mathematics at Cambridge; Cambridge University Press, 1889 (1st ed. 1888 and later editions; reissued by the publisher, 2009). Mathematical Recreations and Essays (1st ed. 1892; later editions with H. S. M. Coxeter; Dover 1960 republication of fourth edition; the 11th edition, revised by Coxeter, was reviewed by J. S. Frame in Bull. Amer. Math. Soc. 46 (1940), 211–213). A History of the First Trinity Boat Club (1908). Macmillan and Co., Limited, 1918 (1st ed. 1918). String Figures; Cambridge, W. Heffer & Sons (1st ed. 1920, 2nd ed. 1921, 3rd ed. 1929; reprinted with supplements as Fun with String Figures by Dover Publications, 1971). See also Martin Gardner – another author of recreational mathematics Rouse Ball Professor of English La
https://en.wikipedia.org/wiki/John%20McCarthy%20%28computer%20scientist%29
John McCarthy (September 4, 1927 – October 24, 2011) was an American computer scientist and cognitive scientist. He was one of the founders of the discipline of artificial intelligence. He co-authored the document that coined the term "artificial intelligence" (AI), developed the programming language family Lisp, significantly influenced the design of the language ALGOL, popularized time-sharing, and invented garbage collection. McCarthy spent most of his career at Stanford University. He received many accolades and honors, such as the 1971 Turing Award for his contributions to the topic of AI, the United States National Medal of Science, and the Kyoto Prize. Early life and education John McCarthy was born in Boston, Massachusetts, on September 4, 1927, to an Irish immigrant father and a Lithuanian Jewish immigrant mother, John Patrick and Ida (Glatt) McCarthy. The family was obliged to relocate frequently during the Great Depression, until McCarthy's father found work as an organizer for the Amalgamated Clothing Workers in Los Angeles, California. His father came from Cromane, a small fishing village in County Kerry, Ireland. His mother died in 1957. Both parents were active members of the Communist Party during the 1930s, and they encouraged learning and critical thinking. Before he attended high school, McCarthy became interested in science by reading a translation of a Russian popular science book for children called 100,000 Whys. He was fluent in Russian and made friends with Russian scientists during multiple trips to the Soviet Union, but he distanced himself after visits to the Soviet Bloc, which led to his becoming a conservative Republican. McCarthy showed an early aptitude for mathematics; during his teens he taught himself college mathematics by studying the textbooks used at the nearby California Institute of Technology (Caltech). He graduated from Belmont High School two years early and was accepted into Caltech in 1944.
https://en.wikipedia.org/wiki/Amir%20Pnueli
Amir Pnueli (; April 22, 1941 – November 2, 2009) was an Israeli computer scientist and the 1996 Turing Award recipient. Biography Pnueli was born in Nahalal, in the British Mandate of Palestine (now in Israel), and received a bachelor's degree in mathematics from the Technion in Haifa and a Ph.D. in applied mathematics from the Weizmann Institute of Science (1967). His thesis was on the topic of "Calculation of Tides in the Ocean". He switched to computer science during a stint as a post-doctoral fellow at Stanford University. His works in computer science focused on temporal logic and model checking, particularly regarding fairness properties of concurrent systems. He returned to Israel as a researcher; he was the founder and first chair of the computer science department at Tel Aviv University. He became a professor of computer science at the Weizmann Institute in 1981. From 1999 until his death, Pnueli also held a position at the Computer Science Department of New York University, New York, U.S. He also served as an associate professor at the University of Pennsylvania and the Joseph Fourier University. Pnueli also founded two startup technology companies during his career. He had three children and, at his death, had four grandchildren. Pnueli died on November 2, 2009, of a brain hemorrhage. Awards and honours In 1996, Pnueli received the Turing Award for seminal work introducing temporal logic into computing science and for outstanding contributions to program and systems verification. On May 30, 1997, Pnueli received an honorary doctorate from the Faculty of Science and Technology at Uppsala University, Sweden. In 1999, he was inducted as a Foreign Associate of the U.S. National Academy of Engineering. In 2000, he was awarded the Israel Prize for computer science. In 2007, he was inducted as a Fellow of the Association for Computing Machinery. The Weizmann Institute of Science presents a memorial lecture series in his honour. See also List of I
https://en.wikipedia.org/wiki/50%20%28number%29
50 (fifty) is the natural number following 49 and preceding 51. In mathematics Fifty is the smallest number that is the sum of two non-zero square numbers in two distinct ways: 50 = 1² + 7² = 5² + 5² (see image). It is also the sum of three squares, 50 = 3² + 4² + 5², and the sum of four squares, 50 = 6² + 3² + 2² + 1². It is a Harshad number. 50 is a Stirling number of the first kind: and also a Narayana number: There is no solution to the equation φ(x) = 50, making 50 a nontotient. Nor is there a solution to the equation x − φ(x) = 50, making 50 a noncototient. Fifty is the sum of the number of faces that make up the Platonic solids (4 + 6 + 8 + 12 + 20 = 50). In science The atomic number of tin The fifth magic number in nuclear physics In religion In Kabbalah, there are 50 Gates of Wisdom (or Understanding) and 50 Gates of Impurity The traditional number of years in a jubilee period. The Christian Feast of Pentecost takes place on the 50th day of the Easter Season The Jewish Pentecost takes place 50 days after the Passover feast (the holiday of Shavuoth). In Hindu tantric tradition, the number 50 holds significance as the 50 Rudras in the Malinīvijayottara correlate with the 50 phonemes of Sanskrit, as well as with the 50 severed heads worn by the goddess Kali. The mantra Aham ("I am"), as laid out in the Vijñāna Bhairava, represents the first अ(a) and last ह(ha) phonemes of the Sanskrit alphabet and is believed to represent ultimate reality, in accordance with its non-dual philosophy. In sports In cricket one day internationals, each side may bat for 50 overs. In other fields Fifty is: There are 50 states in the United States of America. The TV show Hawaii Five-O and its reimagined version, Hawaii Five-0, are so called because Hawaii was the last (50th) state to officially join the United States. 5-O (Five-Oh) - Slang for police officers and/or a warning that police are approaching. Derived from the television show Hawaii Five-O A cal
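The claim that 50 is the smallest number expressible as a sum of two non-zero squares in two distinct ways can be checked by brute force. This sketch is illustrative, not from the source:

```python
import math

def two_square_representations(n):
    """Distinct ways to write n as a**2 + b**2 with 1 <= a <= b."""
    reps = set()
    for a in range(1, math.isqrt(n // 2) + 1):
        b2 = n - a * a
        b = math.isqrt(b2)
        if b * b == b2:
            reps.add((a, b))
    return reps

# 50 = 1^2 + 7^2 = 5^2 + 5^2, and no smaller number has two such forms
assert two_square_representations(50) == {(1, 7), (5, 5)}
assert all(len(two_square_representations(n)) < 2 for n in range(1, 50))
print("verified")
```

Restricting a ≤ b (via the `isqrt(n // 2)` bound) counts each unordered representation once.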
https://en.wikipedia.org/wiki/Arthur%20Prior
Arthur Norman Prior (4 December 1914 – 6 October 1969), usually cited as A. N. Prior, was a New Zealand–born logician and philosopher. Prior (1957) founded tense logic, now also known as temporal logic, and made important contributions to intensional logic, particularly in Prior (1971). Biography Prior was born in Masterton, New Zealand, on 4 December 1914, the only child of Australian-born parents: Norman Henry Prior (1882–1967) and his wife, born Elizabeth Munton Rothesay Teague (1889–1914). His mother died less than three weeks after his birth and he was cared for by his father's sister. His father, a medical practitioner in general practice, remarried in 1920, after war service at Gallipoli and in France, where he was awarded the Military Cross. There were three more children: Elaine, the epidemiologist Ian Prior, and Owen. Arthur Prior grew up in a prominent Methodist household. His two Wesleyan grandfathers, the Reverends Samuel Fowler Prior and Hugh Henwood Teague, were sent from England to South Australia as missionaries in 1875. The Prior family first moved to New Zealand in 1893. As the son of a doctor, Prior at first considered becoming a biologist, but ended up focusing on theology and philosophy, graduating from the University of Otago in 1935 with a B.A. in philosophy. While studying for his B.A., Prior attended the seminary at Dunedin's Knox Theological Hall but decided against entering the Presbyterian ministry. John Findlay, Professor of Philosophy at Otago, first opened up the study of logic for Prior. In 1936, Prior married Clare Hunter, a freelance journalist, and they spent several years in Europe, during which they tried to earn a living as writers. Daunted by the prospect of an invasion of Britain, he and Clare returned to New Zealand in 1940. At this point in his life he was a devout Presbyterian, though he became an atheist later in life. After divorce from his first wife, he remarried in 1943 to Mary Wilkinson, with whom he would have two chi
https://en.wikipedia.org/wiki/Tag%20system
In the theory of computation, a tag system is a deterministic model of computation published by Emil Leon Post in 1943 as a simple form of a Post canonical system. A tag system may also be viewed as an abstract machine, called a Post tag machine (not to be confused with Post–Turing machines)—briefly, a finite-state machine whose only tape is a FIFO queue of unbounded length, such that in each transition the machine reads the symbol at the head of the queue, deletes a constant number of symbols from the head, and appends to the tail a symbol-string that depends solely on the first symbol read in this transition. Because all of the indicated operations are performed in a single transition, a tag machine strictly has only one state. Definitions A tag system is a triplet (m, A, P), where m is a positive integer, called the deletion number. A is a finite alphabet of symbols, one of which can be a special halting symbol. All finite (possibly empty) strings on A are called words. P is a set of production rules, assigning a word P(x) (called a production) to each symbol x in A. The production (say P()) assigned to the halting symbol is seen below to play no role in computations, but for convenience is taken to be P() = . A halting word is a word that either begins with the halting symbol or whose length is less than m. A transformation t (called the tag operation) is defined on the set of non-halting words, such that if x denotes the leftmost symbol of a word S, then t(S) is the result of deleting the leftmost m symbols of S and appending the word P(x) on the right. Thus, the system processes the m-symbol head into a tail of variable length, but the generated tail depends solely on the first symbol of the head. A computation by a tag system is a finite sequence of words produced by iterating the transformation t, starting with an initially given word and halting when a halting word is produced. (By this definition, a computation is not considered to exist unles
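The tag operation described above, deleting m symbols from the head of the queue and appending the production of the first symbol read, is straightforward to simulate. The sketch below runs a toy 2-tag system invented for illustration, not one from the source:

```python
def run_tag_system(m, productions, word, halt_symbol="H", max_steps=10_000):
    """Iterate the tag operation until a halting word appears.

    A word halts when it begins with the halting symbol or its length
    is less than the deletion number m. Returns the sequence of words.
    """
    history = [word]
    for _ in range(max_steps):
        if len(word) < m or word[0] == halt_symbol:
            return history
        # delete m symbols from the head, append P(first symbol) at the tail
        word = word[m:] + productions[word[0]]
        history.append(word)
    raise RuntimeError("no halting word produced within max_steps")

# A toy 2-tag system: a -> ab, b -> H (the halting symbol)
print(run_tag_system(2, {"a": "ab", "b": "H"}, "aaa"))
# ['aaa', 'aab', 'bab', 'bH', 'H']
```

Note how the appended tail depends only on the first symbol of the deleted head, exactly as in the definition.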
https://en.wikipedia.org/wiki/Vestigiality
Vestigiality is the retention, during the process of evolution, of genetically determined structures or attributes that have lost some or all of the ancestral function in a given species. Assessment of vestigiality must generally rely on comparison with homologous features in related species. The emergence of vestigiality occurs by normal evolutionary processes, typically by loss of function of a feature that is no longer subject to positive selection pressures when it loses its value in a changing environment. The feature may be selected against more urgently when its function becomes definitively harmful, but if the lack of the feature provides no advantage, and its presence provides no disadvantage, the feature may not be phased out by natural selection and may persist across species. Examples of vestigial structures (also called degenerate, atrophied, or rudimentary organs) are the loss of functional wings in island-dwelling birds; the human vomeronasal organ; and the hindlimbs of the snake and whale. Overview Vestigial features may take various forms; for example, they may be patterns of behavior, anatomical structures, or biochemical processes. Like most other physical features, however functional, vestigial features in a given species may successively appear, develop, and persist or disappear at various stages within the life cycle of the organism, ranging from early embryonic development to late adulthood. Vestigiality, biologically speaking, refers to organisms retaining organs that have seemingly lost their original function; such organs are among the most familiar examples of evolution. In addition, the term vestigiality is useful in referring to many genetically determined features, either morphological, behavioral, or physiological; in any such context, however, it need not follow that a vestigial feature must be completely useless. A classic example at the level of gross anatomy is the human vermiform appendix, vestigial in the sense of retaining no significa
https://en.wikipedia.org/wiki/Antinuclear%20antibody
Antinuclear antibodies (ANAs, also known as antinuclear factor or ANF) are autoantibodies that bind to contents of the cell nucleus. In normal individuals, the immune system produces antibodies to foreign proteins (antigens) but not to human proteins (autoantigens). In some cases, antibodies to human antigens are produced. There are many subtypes of ANAs such as anti-Ro antibodies, anti-La antibodies, anti-Sm antibodies, anti-nRNP antibodies, anti-Scl-70 antibodies, anti-dsDNA antibodies, anti-histone antibodies, antibodies to nuclear pore complexes, anti-centromere antibodies and anti-sp100 antibodies. Each of these antibody subtypes binds to different proteins or protein complexes within the nucleus. They are found in many disorders including autoimmunity, cancer and infection, with different prevalences of antibodies depending on the condition. This allows the use of ANAs in the diagnosis of some autoimmune disorders, including systemic lupus erythematosus, Sjögren syndrome, scleroderma, mixed connective tissue disease, polymyositis, dermatomyositis, autoimmune hepatitis and drug-induced lupus. The ANA test detects the autoantibodies present in an individual's blood serum. The common tests used for detecting and quantifying ANAs are indirect immunofluorescence and enzyme-linked immunosorbent assay (ELISA). In immunofluorescence, the level of autoantibodies is reported as a titre. This is the highest dilution of the serum at which autoantibodies are still detectable. Positive autoantibody titres at a dilution equal to or greater than 1:160 are usually considered as clinically significant. Positive titres of less than 1:160 are present in up to 20% of the healthy population, especially the elderly. Although positive titres of 1:160 or higher are strongly associated with autoimmune disorders, they are also found in 5% of healthy individuals. Autoantibody screening is useful in the diagnosis of autoimmune disorders and monitoring levels helps to predict the progre
https://en.wikipedia.org/wiki/Payload%20fraction
In aerospace engineering, payload fraction is a common term used to characterize the efficiency of a particular design. Payload fraction is calculated by dividing the weight of the payload by the takeoff weight of the aircraft. Fuel represents a considerable amount of the overall takeoff weight, and for shorter trips it is quite common to load less fuel in order to carry a lighter load. For this reason the useful load fraction calculates a similar number, but based on the combined weight of the payload and fuel together. Propeller-driven airliners had useful load fractions on the order of 25–35%. Modern jet airliners have considerably higher useful load fractions, on the order of 45–55%. For spacecraft the payload fraction is often less than 1%, while the useful load fraction is perhaps 90%. In this case the useful load fraction is not a useful term, because spacecraft typically cannot reach orbit without a full fuel load. For this reason the related term propellant mass fraction is used instead; however, if the propellant mass fraction is large, the payload fraction can only be small. Examples Note: the above table may incorrectly include the mass of the empty upper stage or stages. See also Tsiolkovsky rocket equation References Astrodynamics Aerospace engineering
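The two fractions defined above can be expressed directly. The figures in the example are hypothetical, chosen only to land in the 45–55% useful-load range quoted for modern jet airliners:

```python
def payload_fraction(payload_weight, takeoff_weight):
    """Payload weight divided by takeoff weight."""
    return payload_weight / takeoff_weight

def useful_load_fraction(payload_weight, fuel_weight, takeoff_weight):
    """(Payload + fuel) weight divided by takeoff weight."""
    return (payload_weight + fuel_weight) / takeoff_weight

# Hypothetical airliner: 20 t payload, 70 t fuel, 180 t takeoff weight
print(round(payload_fraction(20, 180), 3))          # 0.111
print(round(useful_load_fraction(20, 70, 180), 2))  # 0.5
```

The same weight units must be used throughout, since both quantities are dimensionless ratios.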
https://en.wikipedia.org/wiki/WAITS
WAITS was a heavily modified variant of Digital Equipment Corporation's Monitor operating system (later renamed to, and better known as, "TOPS-10") for the PDP-6 and PDP-10 mainframe computers, used at the Stanford Artificial Intelligence Laboratory (SAIL) from the mid-1960s up until 1991; the mainframe computer it ran on also went by the name of "SAIL". Overview There was never an "official" expansion of WAITS, but a common variant was "West-coast Alternative to ITS"; another variant was "Worst Acronym Invented for a Timesharing System". The name was endorsed by the SAIL community in a public vote choosing among alternatives. Two of the other contenders were SAINTS ("Stanford AI New Timesharing System") and SINNERS ("Stanford Incompatible Non-New Extensively Rewritten System"), proposed by the systems programmers. Though WAITS was less visible than ITS, there was frequent exchange of people and ideas between the two communities, and innovations pioneered at WAITS exerted enormous indirect influence. WAITS alumni at Xerox PARC and elsewhere also played major roles in the developments that led to the Xerox Star, the Macintosh, and the SUN workstation (later sold by Sun Microsystems). The early screen modes of Emacs, for example, were directly inspired by WAITS' "E" editor – one of a family of editors that were the first to do real-time editing, in which the editing commands were invisible and where one typed text at the point of insertion/overwriting. The modern style of multi-region windowing is said to have originated there. The system also featured an unusual level of support for what is now called multimedia computing, allowing analog audio and video signals (including TV and radio) to be switched to programming terminals. This switching capability for terminal video even allowed users in separate offices to view and type on the same virtual terminal, or a single user to instantly switch among multiple full virtual terminals. Also invented there were "bucky
https://en.wikipedia.org/wiki/Heyting%20algebra
In mathematics, a Heyting algebra (also known as pseudo-Boolean algebra) is a bounded lattice (with join and meet operations written ∨ and ∧ and with least element 0 and greatest element 1) equipped with a binary operation a → b of implication such that (c ∧ a) ≤ b is equivalent to c ≤ (a → b). From a logical standpoint, A → B is by this definition the weakest proposition for which modus ponens, the inference rule A → B, A ⊢ B, is sound. Like Boolean algebras, Heyting algebras form a variety axiomatizable with finitely many equations. Heyting algebras were introduced by Arend Heyting to formalize intuitionistic logic. As lattices, Heyting algebras are distributive. Every Boolean algebra is a Heyting algebra when a → b is defined as ¬a ∨ b, as is every complete distributive lattice satisfying a one-sided infinite distributive law when a → b is taken to be the supremum of the set of all c for which c ∧ a ≤ b. In the finite case, every nonempty distributive lattice, in particular every nonempty finite chain, is automatically complete and completely distributive, and hence a Heyting algebra. It follows from the definition that 1 ≤ 0 → a, corresponding to the intuition that any proposition a is implied by a contradiction 0. Although the negation operation ¬a is not part of the definition, it is definable as a → 0. The intuitive content of ¬a is the proposition that to assume a would lead to a contradiction. The definition implies that a ∧ ¬a = 0. It can further be shown that a ≤ ¬¬a, although the converse, ¬¬a ≤ a, is not true in general, that is, double negation elimination does not hold in general in a Heyting algebra. Heyting algebras generalize Boolean algebras in the sense that Boolean algebras are precisely the Heyting algebras satisfying a ∨ ¬a = 1 (excluded middle), equivalently ¬¬a = a. Those elements of a Heyting algebra H of the form ¬a comprise a Boolean lattice, but in general this is not a subalgebra of H (see below). Heyting algebras serve as the algebraic mo
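In a finite distributive lattice, the implication a → b described above (the largest c with c ∧ a ≤ b) can be computed by brute force. A minimal sketch in Python, using the three-element chain 0 < 1/2 < 1 as an illustrative Heyting algebra that is not Boolean (all names here are our own):

```python
from fractions import Fraction

# Three-element chain 0 < 1/2 < 1: a Heyting algebra that is not Boolean.
H = [Fraction(0), Fraction(1, 2), Fraction(1)]

def meet(a, b):
    # In a chain, the meet (∧) is simply the minimum.
    return min(a, b)

def implies(a, b):
    # a → b is the largest c with c ∧ a <= b; it exists in any finite
    # distributive lattice, as noted in the text.
    return max(c for c in H if meet(c, a) <= b)

def neg(a):
    # ¬a is defined as a → 0.
    return implies(a, Fraction(0))

half = Fraction(1, 2)
print(neg(half))              # ¬(1/2) = 0
print(neg(neg(half)))         # ¬¬(1/2) = 1: double negation elimination fails
print(max(half, neg(half)))   # (1/2) ∨ ¬(1/2) = 1/2 ≠ 1: excluded middle fails
```

This exhibits concretely the two failures mentioned in the text: ¬¬a ≤ a and a ∨ ¬a = 1 both fail at a = 1/2 in this chain.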
https://en.wikipedia.org/wiki/CODASYL
CODASYL, the Conference/Committee on Data Systems Languages, was a consortium formed in 1959 to guide the development of a standard programming language that could be used on many computers. This effort led to the development of the programming language COBOL, the CODASYL Data Model, and other technical standards. CODASYL's members were individuals from industry and government involved in data processing activity. Its larger goal was to promote more effective data systems analysis, design, and implementation. The organization published specifications for various languages over the years, handing these over to official standards bodies (ISO, ANSI, or their predecessors) for formal standardization. History CODASYL is remembered almost entirely for two activities: its work on the development of the COBOL language and its activities in standardizing database interfaces. It also worked on a wide range of other topics, including end-user form interfaces and operating system control languages, but these projects had little lasting impact. The remainder of this section is concerned with CODASYL's database activities. In 1965 CODASYL formed a List Processing Task Force. This group was chartered to develop COBOL language extensions for processing collections of records; the name arose because Charles Bachman's IDS system (which was the main technical input to the project) managed relationships between records using chains of pointers. In 1967 the group renamed itself the Data Base Task Group (DBTG), and its first report in January 1968 was entitled COBOL extensions to handle data bases. In October 1969 the DBTG published its first language specifications for the network database model which became generally known as the CODASYL Data Model. This specification in fact defined several separate languages: a data definition language (DDL) to define the schema of the database, another DDL to create one or more subschemas defining application views of the database; and a d
https://en.wikipedia.org/wiki/Wigner%27s%20classification
In mathematics and theoretical physics, Wigner's classification is a classification of the nonnegative energy irreducible unitary representations of the Poincaré group which have either finite or zero mass eigenvalues. (These unitary representations are infinite-dimensional; the group is not semisimple and it does not satisfy Weyl's theorem on complete reducibility.) It was introduced by Eugene Wigner to classify particles and fields in physics—see the article particle physics and representation theory. It relies on the stabilizer subgroups of that group, dubbed the Wigner little groups of various mass states. The Casimir invariants of the Poincaré group are (Einstein notation) C₁ = PμPμ, where P is the 4-momentum operator, and C₂ = WμWμ, where W is the Pauli–Lubanski pseudovector. The eigenvalues of these operators serve to label the representations. The first is associated with mass-squared and the second with helicity or spin. The physically relevant representations may thus be classified according to whether m² > 0, whether m² = 0 but the four-momentum is nonzero, or whether the four-momentum vanishes entirely. Wigner found that massless particles are fundamentally different from massive particles. For the first case (m² > 0): note that the eigenspace (see generalized eigenspaces of unbounded operators) associated with a fixed four-momentum P = (m, 0, 0, 0) is a representation of SO(3). In the ray interpretation, one can go over to Spin(3) instead. So, massive states are classified by an irreducible Spin(3) unitary representation that characterizes their spin, and a positive mass, m. For the second case (m² = 0, nonzero four-momentum): look at the stabilizer of a fixed null momentum such as P = (k, 0, 0, k). This is the double cover of SE(2) (see projective representation). We have two cases, one where irreps are described by an integral multiple of 1/2 called the helicity, and the other called the "continuous spin" representation. For the third case (vanishing four-momentum): the only finite-dimensional unitary solution is the trivial representation called the vacuum. Massive scalar fields As an example, let us visualize the irreducible unitary representation with m > 0 and spin zero. It corresponds to the space of ma
https://en.wikipedia.org/wiki/Code%2039
Code 39 (also known as Alpha39, Code 3 of 9, Code 3/9, Type 39, USS Code 39, or USD-3) is a variable length, discrete barcode symbology defined in ISO/IEC 16388:2007. The Code 39 specification defines 43 characters, consisting of uppercase letters (A through Z), numeric digits (0 through 9) and a number of special characters (-, ., $, /, +, %, and space). An additional character (denoted '*') is used for both start and stop delimiters. Each character is composed of nine elements: five bars and four spaces. Three of the nine elements in each character are wide (binary value 1), and six elements are narrow (binary value 0). The width ratio between narrow and wide is not critical, and may be chosen between 1:2 and 1:3. The barcode itself does not contain a check digit (in contrast to—for instance—Code 128), but it can be considered self-checking on the grounds that a single erroneously interpreted bar cannot generate another valid character. Possibly the most serious drawback of Code 39 is its low data density: It requires more space to encode data in Code 39 than, for example, in Code 128. This means that very small goods cannot be labeled with a Code 39 based barcode. However, Code 39 is still used by some postal services (although the Universal Postal Union recommends using Code 128 in all cases), and can be decoded with virtually any barcode reader. One advantage of Code 39 is that since there is no need to generate a check digit, it can easily be integrated into an existing printing system by adding a barcode font to the system or printer and then printing the raw data in that font. Code 39 was developed by Dr. David Allais and Ray Stevens of Intermec in 1974. Their original design included two wide bars and one wide space in each character, resulting in 40 possible characters. Setting aside one of these characters as a start and stop pattern left 39 characters, which was the origin of the name Code 39. Four punctuation characters were later added, using no wi
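The character-count arithmetic above can be checked directly: with 2 of the 5 bars and 1 of the 4 spaces wide, the original design yields C(5,2) × C(4,1) patterns, and allowing any 3 wide elements out of 9 gives the larger space. A quick verification sketch in Python (for illustration only, not an encoder):

```python
from math import comb

# Original Code 39 design: 5 bars and 4 spaces per character;
# exactly 2 of the 5 bars and 1 of the 4 spaces are wide.
original = comb(5, 2) * comb(4, 1)
print(original)  # 40 patterns; reserving one as the start/stop pattern leaves 39

# For comparison: any 3 wide elements out of the 9 total
any_three_wide = comb(9, 3)
print(any_three_wide)  # 84
```

This reproduces the 40 patterns cited in the history section, and hence the name: setting one aside for start/stop leaves 39 data characters.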
https://en.wikipedia.org/wiki/Barcode%20reader
A barcode reader or barcode scanner is an optical scanner that can read printed barcodes and decode the data they contain, transmitting that data to a computer. Like a flatbed scanner, it consists of a light source, a lens and a light sensor for translating optical impulses into electrical signals. Additionally, nearly all barcode readers contain decoder circuitry that can analyse the barcode's image data provided by the sensor and send the barcode's content to the scanner's output port. Types of barcode scanners Technology Barcode readers can be differentiated by technologies as follows: Pen-type readers Pen-type readers consist of a light source and photodiode that are placed next to each other in the tip of a pen. To read a barcode, the person holding the pen must move the tip of it across the bars at a relatively uniform speed. The photodiode measures the intensity of the light reflected back from the light source as the tip crosses each bar and space in the printed code. The photodiode generates a waveform that is used to measure the widths of the bars and spaces in the barcode. Dark bars in the barcode absorb light and white spaces reflect light so that the voltage waveform generated by the photodiode is a representation of the bar and space pattern in the barcode. This waveform is decoded by the scanner in a manner similar to the way Morse code dots and dashes are decoded. Laser scanners Laser scanners direct the laser beam back and forth across the barcode. As with the pen-type reader, a photo-diode is used to measure the intensity of the light reflected back from the barcode. In both pen readers and laser scanners, the light emitted by the reader is rapidly varied in brightness with a data pattern and the photo-diode receive circuitry is designed to detect only signals with the same modulated pattern. CCD readers (also known as LED scanners) Charge-coupled device (CCD) readers use an array of hundreds of tiny light sensors lined up in a row in the head of the r
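The width-measurement step described for pen-type readers can be sketched as code: threshold the photodiode voltage into dark/light runs, then classify each run as narrow or wide relative to the narrowest run. A simplified illustration — the function names, threshold, and 2× wide/narrow ratio are our own assumptions, not part of any standard:

```python
def runs(samples, threshold=0.5):
    """Collapse a sampled photodiode waveform into (is_dark, width) runs."""
    out = []
    for v in samples:
        dark = v < threshold  # dark bars reflect little light back
        if out and out[-1][0] == dark:
            out[-1] = (dark, out[-1][1] + 1)
        else:
            out.append((dark, 1))
    return out

def classify(run_list):
    """Label each run narrow ('n') or wide ('w') relative to the narrowest run."""
    unit = min(width for _, width in run_list)
    return ['w' if width >= 2 * unit else 'n' for _, width in run_list]

# A crude waveform: one wide bar, one narrow space, one narrow bar
wave = [0.1] * 6 + [0.9] * 2 + [0.1] * 2
print(classify(runs(wave)))  # ['w', 'n', 'n']
```

A real decoder would then match the narrow/wide sequence against a symbology's character table; this sketch only covers the run-length stage the text describes.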
https://en.wikipedia.org/wiki/Str%C3%A4hle%20construction
Strähle's construction is a geometric method for determining the lengths for a series of vibrating strings with uniform diameters and tensions to sound pitches in a specific rational tempered musical tuning. It was first published in the 1743 Proceedings of the Royal Swedish Academy of Sciences by Swedish master organ maker Daniel Stråhle (1700–1746). The Academy's secretary Jacob Faggot appended a miscalculated set of pitches to the article, and these figures were reproduced by Friedrich Wilhelm Marpurg in Versuch über die musikalische Temperatur in 1776. Several German textbooks published about 1800 reported that the mistake was first identified by Christlieb Benedikt Funk in 1779, but the construction itself appears to have received little notice until the middle of the twentieth century when tuning theorist J. Murray Barbour presented it as a good method for approximating equal temperament and similar exponentials of small roots, and generalized its underlying mathematical principles. It has become known as a device for building fretted musical instruments through articles by mathematicians Ian Stewart and Isaac Jacob Schoenberg, and is praised by them as a unique and remarkably elegant solution developed by an unschooled craftsman. The name "Strähle" used in recent English language works appears to be due to a transcription error in Marpurg's text, where the old-fashioned diacritic raised "e" was substituted for the raised ring. Background Daniel P. Stråhle was active as an organ builder in central Sweden in the second quarter of the eighteenth century. He had worked as a journeyman for the important Stockholm organ builder Johan Niclas Cahman and, in 1741, four years after Cahman's death, Stråhle was granted his privilege for organ making. According to the system in force in Sweden at the time a privilege, a granted monopoly which was held by only a few of the most established makers of each type of musical instruments, gave him the legal right to build and
https://en.wikipedia.org/wiki/Trisodium%20phosphate
Trisodium phosphate (TSP) is the inorganic compound with the chemical formula . It is a white, granular or crystalline solid, highly soluble in water, producing an alkaline solution. TSP is used as a cleaning agent, builder, lubricant, food additive, stain remover, and degreaser. The item of commerce is often partially hydrated and may range from anhydrous to the dodecahydrate . Most often found in white powder form, it can also be called trisodium orthophosphate or simply sodium phosphate. Production Trisodium phosphate is produced by neutralization of phosphoric acid using sodium carbonate, which produces disodium hydrogen phosphate. The disodium hydrogen phosphate is reacted with sodium hydroxide to form trisodium phosphate and water. Uses Cleaning Trisodium phosphate was at one time extensively used in formulations for a variety of consumer-grade soaps and detergents, and the most common use for trisodium phosphate has been in cleaning agents. The pH of a 1% solution is 12 (i.e., very basic), and the solution is sufficiently alkaline to saponify grease and oils. In combination with surfactants, TSP is an excellent agent for cleaning everything from laundry to concrete driveways. This versatility and low manufacturing price made TSP the basis for a plethora of cleaning products sold in the mid-20th century. TSP is still sold and used as a cleaning agent, but since the late 1960s, its use has diminished in the United States and many other parts of the world because, like many phosphate-based cleaners, it is known to cause extensive eutrophication of lakes and rivers once it enters a water system. TSP is commonly used after cleaning a surface with mineral spirits to remove hydrocarbon residues and may be used with household chlorine bleach in the same solution without hazardous reactions. This mixture is particularly effective for removing mildew, but is less effective at removing mold. Although it is still the active ingredient in some toilet bowl-clea
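The two-step neutralization described in the Production section can be summarized by balanced equations; a sketch of the stoichiometry consistent with that description (industrial conditions vary):

```latex
\begin{align*}
\mathrm{H_3PO_4} + \mathrm{Na_2CO_3} &\longrightarrow \mathrm{Na_2HPO_4} + \mathrm{CO_2}\uparrow + \mathrm{H_2O} \\
\mathrm{Na_2HPO_4} + \mathrm{NaOH} &\longrightarrow \mathrm{Na_3PO_4} + \mathrm{H_2O}
\end{align*}
```

The first step neutralizes phosphoric acid only as far as disodium hydrogen phosphate, because carbonate is too weak a base to remove the third proton; the stronger base sodium hydroxide completes the neutralization in the second step.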
https://en.wikipedia.org/wiki/List%20of%20chordate%20orders
This article contains a list of all of the classes and orders that are located in the Phylum Chordata.
Subphylum Cephalochordata
  Class Leptocardii: Lancelets
    Order Amphioxiformes
      Family Pikaiidae †
        Genus Pikaia †
Olfactores (unranked)
  Subphylum Tunicata
    Class Ascidiacea: Ascideans and sessile tunicates
      Order Enterogona
      Order Pleurogona
      Order Aspiraculata
    Class Thaliacea: Pelagic tunicates
      Order Doliolida
      Order Pyrosomida
      Order Salpida: salps
    Class Appendicularia: Solitary, free-swimming tunicates
      Order Copelata
  Subphylum Vertebrates
    Infraphylum Cyclostomata, Superclass Agnatha: Paraphyletic jawless vertebrates
      Class Myxini: Hagfish
        Order Myxiniformes
          Family Myxinidae
      Class Hyperoartia: Lampreys and their † kin
        Order Petromyzontiformes
    Infraphylum Gnathostomata: Jawed vertebrates
      Class Placodermi †
        Order Acanthothoraci
        Order Arthrodira
        Order Antiarchi
        Order Brindabellaspida
        Order Petalichthyida
        Order Phyllolepida
        Order Ptyctodontida
        Order Rhenanida
        Order Pseudopetalichthyida (The placement of this order is debated.)
        Order Stensioellida (The placement of this monotypic order is debated.)
      Class Chondrichthyes: Cartilaginous fish
        Subclass Elasmobranchii
          Superorder Batoidea
            Order Rajiformes: rays and skates
            Order Rhinopristiformes: sawfishes
            Order Torpediniformes: electric rays
            Order Myliobatiformes: (sting)rays
          Superorder Selachimorpha (sharks)
            Order Heterodontiformes: bullhead sharks
            Order Orectolobiformes: carpet sharks
            Order Carcharhiniformes: ground sharks
            Order Lamniformes: mackerel sharks
            Order Hexanchiformes: frilled and cow sharks
            Order Squaliformes: dogfish sharks
            Order Squatiniformes: angel sharks
            Order Pristiophoriformes: saw sharks
        Subclass Holocephali
          Order Chimaeriformes: chimaeras
      Class Acanthodii †
        Order Climatiiformes
        Order Ischnacanthiformes
        Order Acanthodiformes
      Superclass Osteichthyes: Bony fish
        Class Actinopterygii: Ray-finned fish
          Order Asarotiformes †
          Order Discordichthyif
https://en.wikipedia.org/wiki/Network%20Computer
The Network Computer (or NC) was a diskless desktop computer device made by Oracle Corporation from about 1996 to 2000. The devices were designed and manufactured by an alliance, which included Sun Microsystems (acquired by Oracle in 2010), IBM, and others. The devices were designed with minimum specifications, based on the Network Computer Reference Profile. The brand was also employed as a marketing term to try to popularize this design of computer within enterprise and among consumers. The NC brand was mainly intended to inspire a range of desktop computers from various suppliers that, by virtue of their diskless design and use of inexpensive components and software, were cheaper and easier to manage than standard fat client desktops. However, due to the commoditization of standard desktop components, and due to the increasing availability and popularity of various software options for using full desktops as diskless nodes, thin clients, and hybrid clients, the Network Computer brand never achieved the popularity hoped for by Oracle and was eventually mothballed. The term "network computer" is now used for any diskless desktop computer or a thin client. History The failure of the NC to impact on the scale predicted by Larry Ellison may have been caused by a number of factors. Firstly, prices of PCs quickly fell below $1000, making the competition very hard. Secondly, the software available for NCs was neither mature nor open. Thirdly, the idea could simply have been ahead of its time, as at the NC's launch in 1996, the typical home Internet connection was only a 28.8 kbit/s modem dialup. This was simply insufficient for the delivery of executable content. The World Wide Web itself was not considered mainstream until its breakout year, 1998. Prior to this, very few Internet service providers advertised in mainstream press (at least outside of the US), and knowledge of the Internet was limited. This could have held back uptake of what would be seen as a very
https://en.wikipedia.org/wiki/Runt%20pulse
In digital circuits, a runt pulse is a narrow pulse that, due to non-zero rise and fall times of the signal, does not reach a valid high or low level. A runt pulse may occur when switching between asynchronous clocks; or as the result of a race condition in which a signal takes two separate paths through a circuit, which may have different delays, and is then recombined to form a glitch; or when the output of a flip-flop becomes metastable. Example Some oscilloscopes provide a method for triggering on runt pulses. The oscilloscope triggers when the signal crosses one of two voltage thresholds, but not both. References Digital electronics
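The two-threshold trigger described above can be sketched in code: a positive-going pulse is flagged as a runt when its peak leaves the low band (crossing the lower threshold) but never reaches the upper threshold. The function name, thresholds, and sample format below are all illustrative assumptions:

```python
def is_runt(pulse, v_low=0.8, v_high=2.0):
    """True if a positive-going pulse crosses v_low but never reaches v_high.

    `pulse` is a list of voltage samples for one pulse, starting and
    ending at a valid low level. Crossing only one of the two thresholds
    is exactly the condition a runt-trigger oscilloscope looks for.
    """
    peak = max(pulse)
    return peak > v_low and peak < v_high

print(is_runt([0.0, 0.5, 1.2, 1.4, 0.6, 0.0]))  # True: peaked at 1.4 V, short of 2.0 V
print(is_runt([0.0, 1.0, 2.5, 3.1, 1.0, 0.0]))  # False: reached a valid high level
```

A real trigger circuit works on the live signal rather than a buffered pulse, and also handles negative-going runts symmetrically, but the one-threshold-but-not-both condition is the same.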
https://en.wikipedia.org/wiki/UXF
In computing, UML eXchange Format (UXF) is an XML-based model interchange format for the Unified Modeling Language (UML), a standard software modeling language. UXF is a structured format, described in 1998, intended to encode, publish, access and exchange UML models. More recent alternatives include XML Metadata Interchange and OMG's Diagram Definition standard. Known uses UMLet is an application that uses UXF as its native file format. References Unified Modeling Language
https://en.wikipedia.org/wiki/Glitch
A glitch is a short-lived fault in a system, such as a transient fault that corrects itself, making it difficult to troubleshoot. The term is particularly common in the computing and electronics industries, in circuit bending, as well as among players of video games. More generally, all types of systems including human organizations and nature experience glitches. A glitch, which is slight and often temporary, differs from a more serious bug, which is a genuine functionality-breaking problem. Alex Pieschel, writing for Arcade Review, said: "'bug' is often cast as the weightier and more blameworthy pejorative, while 'glitch' suggests something more mysterious and unknowable inflicted by surprise inputs or stuff outside the realm of code." The word itself is sometimes humorously described as being short for "gremlins lurking in the computer hardware." Etymology Some reference books, including Random House's American Slang, claim that the term comes from the German word glitschen ("to slip") and the Yiddish word glitsh ("slippery place"). Either way, it is a relatively new term. It was first widely defined for the American people by Bennett Cerf on the June 20, 1965, episode of What's My Line as "a kink ... when anything goes wrong down there [Cape Kennedy], they say there's been a slight glitch." The astronaut John Glenn explained the term in his section of the book Into Orbit, writing that Another term we adopted to describe some of our problems was "glitch." Literally, a glitch is a spike or change in voltage in an electrical circuit which takes place when the circuit suddenly has a new load put on it. You have probably noticed a dimming of lights in your home when you turn a switch or start the dryer or the television set. Normally, these changes in voltage are protected by fuses. A glitch, however, is such a minute change in voltage that no fuse could protect against it. John Daily further defined the word on the July 4, 1965, episode of What's My Line, saying that it's a term used by the Air Force at Cape
https://en.wikipedia.org/wiki/Multinational%20Character%20Set
The Multinational Character Set (DMCS or MCS) is a character encoding created in 1983 by Digital Equipment Corporation (DEC) for use in the popular VT220 terminal. It was an 8-bit extension of ASCII that added accented characters, currency symbols, and other character glyphs missing from 7-bit ASCII. It is only one of the code pages implemented for the VT220 National Replacement Character Set (NRCS). MCS is registered as IBM code page/CCSID 1100 (Multinational Emulation) since 1992. Depending on associated sorting Oracle calls it WE8DEC, N8DEC, DK8DEC, S8DEC, or SF8DEC. Such "extended ASCII" sets were common (the National Replacement Character Set provided sets for more than a dozen European languages), but MCS has the distinction of being the ancestor of ECMA-94 in 1985 and ISO 8859-1 in 1987. The code chart of MCS with ECMA-94, ISO 8859-1 and the first 256 code points of Unicode have many more similarities than differences. In addition to unused code points, differences from ISO 8859-1 are: Character set See also Lotus International Character Set (LICS), a very similar character set BraSCII, a very similar character set 8-bit DEC Greek (Code page 1287) 8-bit DEC Turkish (Code page 1288) 8-bit DEC Hebrew 8-bit DEC Cyrillic (KOI-8 Cyrillic) 8-bit DEC Special Graphics (VT100 Line Drawing) (DEC-SPECIAL) 8-bit DEC Technical Character Set (DEC-TECHNICAL) DEC Kanji (JIS X 0208) References Character sets Digital Equipment Corporation Computer-related introductions in 1983
https://en.wikipedia.org/wiki/Polyphyly
A polyphyletic group is an assemblage that includes organisms with mixed evolutionary origin but does not include their most recent common ancestor. The term is often applied to groups that share similar features known as homoplasies, which are explained as a result of convergent evolution. The arrangement of the members of a polyphyletic group is called a polyphyly. It is contrasted with monophyly and paraphyly. For example, the biological characteristic of warm-bloodedness evolved separately in the ancestors of mammals and the ancestors of birds; "warm-blooded animals" is therefore a polyphyletic grouping. Other examples of polyphyletic groups are algae, C4 photosynthetic plants, and edentates. Many taxonomists aim to avoid homoplasies in grouping taxa together, with a goal to identify and eliminate groups that are found to be polyphyletic. This is often the stimulus for major revisions of the classification schemes. Researchers concerned more with ecology than with systematics may take polyphyletic groups as legitimate subject matter; the similarities in activity within the fungus group Alternaria, for example, can lead researchers to regard the group as a valid genus while acknowledging its polyphyly. In recent research, the concepts of monophyly, paraphyly, and polyphyly have been used in deducing key genes for barcoding of diverse groups of species. Etymology The term polyphyly, or polyphyletic, derives from the two Ancient Greek words πολύς (polús), meaning "many, a lot of", and φῦλον (phûlon), meaning "genus, species", and refers to the fact that a polyphyletic group includes organisms (e.g., genera, species) arising from multiple ancestral sources. Conversely, the term monophyly, or monophyletic, builds on the ancient Greek prefix μόνος (mónos), meaning "alone, only, unique", and refers to the fact that a monophyletic group includes organisms consisting of all the descendants of a unique common ancestor. By comparison, the term paraphyly, or paraphyletic, uses the ancient Greek p
https://en.wikipedia.org/wiki/NCR%20CRAM
CRAM, or Card Random-Access Memory, model 353-1, was a data storage device invented by NCR, which first appeared on their model NCR-315 mainframe computer in 1962. It was also available for NCR's third-generation NCR Century series as the NCR/653-100. A CRAM cartridge contained 256 3 × 14 inch cards with a PET film magnetic recording surface; each "deck" of cards could contain up to 5.5 MB of alphanumeric characters. The cards were suspended from eight d-section rods, which were selectively rotated to release a specific card, each card having a unique pattern of notches at one end. The selected card was dropped and wrapped around a rotating drum to be read or written. Later versions of the CRAM, the 353-2 and 353-3, used decks of 512 cards, thus doubling the storage capacity of each unit. Each card contained seven tracks of 1550 slabs (12 bits each). Normally each track was initialized with a four-slab header containing the cartridge number (two slabs), the card number, and the track number. Cards were dropped by changing the card rods to a binary configuration and releasing the two outside "release" rods. Air was blown over the top of the cards to keep them separated and to increase the dropping speed. Once on the rotating drum, a series of positive and negative air pressure chambers pulled the card across a magnetic read-write head. After one or more passes over the head, where data was written to or read from the card, a release gate allowed the card to be "thrown" along a raceway over the card deck and onto a "loader" mechanism. The loader used a group of electromagnetic solenoids to slam the card back onto the control rods. The unit was mechanically imposing, with two large electric motors that drove four large vacuum/blowers. It was possible to have up to five cards in motion at any point in time: one dropping, one on the drum, two in the return transport, and one being loaded back onto the deck. If the card didn't succeed in
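The quoted 5.5 MB per cartridge is consistent with the track geometry above if each 12-bit slab holds two 6-bit alphanumeric characters (the NCR-315's word format); a quick check, treating that packing as an assumption:

```python
cards = 256          # cards per model 353-1 cartridge
tracks = 7           # tracks per card
slabs = 1550         # 12-bit slabs per track
chars_per_slab = 2   # assumption: two 6-bit characters per 12-bit slab

capacity = cards * tracks * slabs * chars_per_slab
print(capacity)  # 5,555,200 characters, i.e. roughly 5.5 MB per cartridge
```

This also makes the doubling claim concrete: the 512-card decks of the 353-2 and 353-3 would hold about 11.1 million characters per cartridge.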
https://en.wikipedia.org/wiki/Development%20hell
Development hell, also known as development purgatory or development limbo, is media and software industry jargon for a project, concept, or idea that remains in a stage of early development for a long time because of legal, technical, or artistic challenges. A work may move between many sets of artistic leadership, crews, scripts, game engines, or studios. Many projects which end up in development hell never progress into production, and are gradually abandoned by the involved parties. Projects in development hell generally have ambitious goals, which may or may not be underestimated in the design phase, and are delayed in an attempt to meet those goals to a high degree. Production hell refers to when a film has entered production but remains in that state for a long time without progressing to post-production. The term can also apply generally to any project that has languished unexpectedly in its planning or construction phases, rather than being completed in a realistic amount of time, or otherwise having diverted from its original timely expected date of completion. Overview Film Film industry companies often buy the film rights to many popular novels, video games, and comic books, but it may take years for such properties to be successfully brought to the screen, and often with considerable changes to the plot, characters, and general tone. This pre-production process can last for months or years. More often than not, a project trapped in this state for a prolonged period of time will be abandoned by all interested parties or canceled outright. As Hollywood starts ten times as many projects as are released, many scripts will end up in this limbo state. Less than two percent of all books which are optioned actually make it to the big screen. David Hughes, the author of a book titled Tales From Development Hell, states that once producers, directors, and actors are attached to the project, they may request script rewrites, which delays production. Developm
https://en.wikipedia.org/wiki/TOPS
Total Operations Processing System (TOPS) is a computer system for managing railway locomotives and rolling stock, known for many years of use in the United Kingdom. TOPS was originally developed between the Southern Pacific Railroad (SP), Stanford University and IBM as a replacement for paper-based systems for managing rail logistics. A jointly-owned consultancy company, TOPS On-Line Inc., was established in 1960 with the goal of implementing TOPS, as well as selling it to third parties. Development was protracted, requiring around 660 man-years of effort to produce a releasable build. During mid-1968, the first phase of the system was introduced on the SP, and quickly proved its advantages over the traditional methods practiced prior to its availability. In addition to SP, TOPS was widely adopted throughout North America and beyond. While it was at one point in widespread use across many of the United States railroads, the system has been perhaps most prominently used in the United Kingdom. During 1971, the country's nationalised rail operation, British Rail (BR), opted to procure and integrate TOPS into its operations. The acquisition of an existing system rather than develop an indigenous programme was reasoned to be both cheaper and quicker to implement; it was noted, however, that TOPS was not capable of performing all desired functions. Since its implementation during the mid 1970s, both BR and its successors have continued to operate the system. SP itself has developed a newer system called the Terminal Information Processing System (TIPS), which replaced TOPS entirely during 1980. Early development During the 1950s and 1960s, it was increasingly recognised that the adoption of computer-based management systems could provide substantial benefits in various operations, particularly those involving logistics. Consequently, by the 1960s, various railways in various countries, including Japan, Canada, and the United States had begun to develop and introduce
https://en.wikipedia.org/wiki/Cluster%20headache
Cluster headache (CH) is a neurological disorder characterized by recurrent severe headaches on one side of the head, typically around the eye(s). There is often accompanying eye watering, nasal congestion, or swelling around the eye on the affected side. These symptoms typically last 15 minutes to 3 hours. Attacks often occur in clusters which typically last for weeks or months and occasionally more than a year. The cause is unknown. Risk factors include a history of exposure to tobacco smoke and a family history of the condition. Exposures which may trigger attacks include alcohol, nitroglycerin, and histamine. They are a primary headache disorder of the trigeminal autonomic cephalalgias type. Diagnosis is based on symptoms. Recommended management includes lifestyle adaptations such as avoiding potential triggers. Treatments for acute attacks include oxygen or a fast-acting triptan. Measures recommended to decrease the frequency of attacks include steroid injections, civamide, or verapamil. Nerve stimulation or surgery may occasionally be used if other measures are not effective. The condition affects about 0.1% of the general population at some point in their life and 0.05% in any given year. The condition usually first occurs between 20 and 40 years of age. Men are affected about four times more often than women. Cluster headaches are named for the occurrence of groups of headache attacks (clusters). They have also been referred to as "suicide headaches". Signs and symptoms Cluster headaches are recurring bouts of severe unilateral headache attacks. The duration of a typical CH attack ranges from about 15 to 180 minutes. About 75% of untreated attacks last less than 60 minutes. However, women may have longer and more severe CH. The onset of an attack is rapid and typically without an aura. Preliminary sensations of pain in the general area of attack, referred to as "shadows", may signal an imminent CH, or these symptoms may linger after an attack has passed
https://en.wikipedia.org/wiki/Lower%20limit%20topology
In mathematics, the lower limit topology or right half-open interval topology is a topology defined on , the set of real numbers; it is different from the standard topology on (generated by the open intervals) and has a number of interesting properties. It is the topology generated by the basis of all half-open intervals [a,b), where a and b are real numbers. The resulting topological space is called the Sorgenfrey line after Robert Sorgenfrey or the arrow and is sometimes written . Like the Cantor set and the long line, the Sorgenfrey line often serves as a useful counterexample to many otherwise plausible-sounding conjectures in general topology. The product of with itself is also a useful counterexample, known as the Sorgenfrey plane. In complete analogy, one can also define the upper limit topology, or left half-open interval topology. Properties The lower limit topology is finer (has more open sets) than the standard topology on the real numbers (which is generated by the open intervals). The reason is that every open interval can be written as a (countably infinite) union of half-open intervals. For any real and , the interval is clopen in (i.e., both open and closed). Furthermore, for all real , the sets and are also clopen. This shows that the Sorgenfrey line is totally disconnected. Any compact subset of must be an at most countable set. To see this, consider a non-empty compact subset . Fix an , and consider the following open cover of : Since is compact, this cover has a finite subcover, and hence there exists a real number such that the interval contains no point of apart from . This is true for all . Now choose a rational number . Since the intervals , parametrized by , are pairwise disjoint, the function is injective, and so is at most countable. One can also observe that a subset is compact if and only if it is bounded from below and is well-ordered when endowed with the order "" (which in particular implies that it is bounded fr
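The finer-than claim above rests on writing each open interval (a, b) as a countable union of basis sets [a + (b − a)/n, b). A small Python sketch (the helper name witness_n is ours, not standard) computes, in exact rational arithmetic, the least index n at which a given point enters that union:

```python
import math
from fractions import Fraction

def witness_n(x, a, b):
    """Least n >= 1 with x in [a + (b - a)/n, b), witnessing that the open
    interval (a, b) equals the countable union of the half-open basis sets
    [a + (b - a)/n, b) for n = 1, 2, 3, ..."""
    assert a < x < b
    # x >= a + (b - a)/n  iff  n >= (b - a)/(x - a)
    return math.ceil((b - a) / (x - a))

a, b = Fraction(0), Fraction(1)
x = Fraction(1, 100)
n = witness_n(x, a, b)
print(n)                          # 100
print(a + (b - a) / n <= x < b)   # True: x lies in the n-th basis set
```

Every point of (a, b) gets such a witness, so (a, b) is open in the lower limit topology, which is therefore at least as fine as the standard one.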
https://en.wikipedia.org/wiki/Stovepipe%20system
In engineering and computing, "stovepipe system" is a pejorative term for a system that has the potential to share data or functionality with other systems but which does not do so. The term evokes the image of stovepipes rising above buildings, each functioning individually. A simple example of a stovepipe system is one that implements its own user IDs and passwords, instead of relying on a common user ID and password shared with other systems. A stovepipe system is generally considered an example of an anti-pattern, particularly found in legacy systems. This is because of the lack of code reuse and the resulting software brittleness, since potentially general functions are exercised only on limited input. However, in certain cases stovepipe systems are considered appropriate, due to benefits from vertical integration and avoiding dependency hell. For example, the Microsoft Excel team has avoided dependencies and even maintained its own C compiler, which helped it to ship on time, have high-quality code, and generate small, cross-platform code. See also Not invented here Reinventing the wheel Stovepipe (organisation) References Anti-patterns Software maintenance
https://en.wikipedia.org/wiki/Equaliser%20%28mathematics%29
In mathematics, an equaliser is a set of arguments where two or more functions have equal values. An equaliser is the solution set of an equation. In certain contexts, a difference kernel is the equaliser of exactly two functions. Definitions Let X and Y be sets. Let f and g be functions, both from X to Y. Then the equaliser of f and g is the set of elements x of X such that f(x) equals g(x) in Y. Symbolically: The equaliser may be denoted Eq(f, g) or a variation on that theme (such as with lowercase letters "eq"). In informal contexts, the notation {f = g} is common. The definition above used two functions f and g, but there is no need to restrict to only two functions, or even to only finitely many functions. In general, if F is a set of functions from X to Y, then the equaliser of the members of F is the set of elements x of X such that, given any two members f and g of F, f(x) equals g(x) in Y. Symbolically: This equaliser may be written as Eq(f, g, h, ...) if F is the set {f, g, h, ...}. In the latter case, one may also find {f = g = h = ···} in informal contexts. As a degenerate case of the general definition, let F be a singleton {f}. Since f(x) always equals itself, the equaliser must be the entire domain X. As an even more degenerate case, let F be the empty set. Then the equaliser is again the entire domain X, since the universal quantification in the definition is vacuously true. Difference kernels A binary equaliser (that is, an equaliser of just two functions) is also called a difference kernel. This may also be denoted DiffKer(f, g), Ker(f, g), or Ker(f − g). The last notation shows where this terminology comes from, and why it is most common in the context of abstract algebra: The difference kernel of f and g is simply the kernel of the difference f − g. Furthermore, the kernel of a single function f can be reconstructed as the difference kernel Eq(f, 0), where 0 is the constant function with value zero. Of course, all of this presumes an
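The set-theoretic definition translates directly into code. The following Python sketch (the function name equaliser and the use of a finite domain are our illustrative choices) handles an arbitrary family of functions, including the degenerate singleton and empty cases described above:

```python
def equaliser(domain, *funcs):
    """Eq(F) = {x in domain : f(x) == g(x) for all f, g in F}.
    For a singleton or empty family the condition is vacuous,
    so the equaliser is the whole domain."""
    return {x for x in domain
            if all(f(x) == g(x) for f in funcs for g in funcs)}

X = range(-3, 4)
f = lambda x: x * x
g = lambda x: x + 2
print(sorted(equaliser(X, f, g)))   # [-1, 2] -- the solutions of x^2 = x + 2
print(equaliser(X, f) == set(X))    # True -- singleton family {f}
print(equaliser(X) == set(X))       # True -- empty family, vacuously
```

The first call also illustrates the "solution set of an equation" reading: Eq(f, g) is exactly {x : x² = x + 2}.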
https://en.wikipedia.org/wiki/Genetic%20testing
Genetic testing, also known as DNA testing, is used to identify changes in DNA sequence or chromosome structure. Genetic testing can also include measuring the results of genetic changes, such as RNA analysis as an output of gene expression, or through biochemical analysis to measure specific protein output. In a medical setting, genetic testing can be used to diagnose or rule out suspected genetic disorders, predict risks for specific conditions, or gain information that can be used to customize medical treatments based on an individual's genetic makeup. Genetic testing can also be used to determine biological relatives, such as a child's biological parentage (genetic mother and father) through DNA paternity testing, or be used to broadly predict an individual's ancestry. Genetic testing of plants and animals can be used for similar reasons as in humans (e.g. to assess relatedness/ancestry or predict/diagnose genetic disorders), to gain information used for selective breeding, or for efforts to boost genetic diversity in endangered populations. The variety of genetic tests has expanded throughout the years. Early forms of genetic testing which began in the 1950s involved counting the number of chromosomes per cell. Deviations from the expected number of chromosomes (46 in humans) could lead to a diagnosis of certain genetic conditions such as trisomy 21 (Down syndrome) or monosomy X (Turner syndrome). In the 1970s, a method to stain specific regions of chromosomes, called chromosome banding, was developed that allowed more detailed analysis of chromosome structure and diagnosis of genetic disorders that involved large structural rearrangements. In addition to analyzing whole chromosomes (cytogenetics), genetic testing has expanded to include the fields of molecular genetics and genomics which can identify changes at the level of individual genes, parts of genes, or even single nucleotide "letters" of DNA sequence. According to the National Institutes of Health,
https://en.wikipedia.org/wiki/Distributive%20lattice
In mathematics, a distributive lattice is a lattice in which the operations of join and meet distribute over each other. The prototypical examples of such structures are collections of sets for which the lattice operations can be given by set union and intersection. Indeed, these lattices of sets describe the scenery completely: every distributive lattice is—up to isomorphism—given as such a lattice of sets. Definition As in the case of arbitrary lattices, one can choose to consider a distributive lattice L either as a structure of order theory or of universal algebra. Both views and their mutual correspondence are discussed in the article on lattices. In the present situation, the algebraic description appears to be more convenient. A lattice (L,∨,∧) is distributive if the following additional identity holds for all x, y, and z in L: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z). Viewing lattices as partially ordered sets, this says that the meet operation preserves non-empty finite joins. It is a basic fact of lattice theory that the above condition is equivalent to its dual: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)   for all x, y, and z in L. In every lattice, if one defines the order relation p≤q as usual to mean p∧q=p, then the inequality x ∧ (y ∨ z) ≥ (x ∧ y) ∨ (x ∧ z) and its dual x ∨ (y ∧ z) ≤ (x ∨ y) ∧ (x ∨ z) are always true. A lattice is distributive if one of the converse inequalities holds, too. More information on the relationship of this condition to other distributivity conditions of order theory can be found in the article Distributivity (order theory). Morphisms A morphism of distributive lattices is just a lattice homomorphism as given in the article on lattices, i.e. a function that is compatible with the two lattice operations. Because such a morphism of lattices preserves the lattice structure, it will consequently also preserve the distributivity (and thus be a morphism of distributive lattices). Examples Distributive lattices are ubiquitous but also
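The defining identity is easy to check by brute force on small lattices. The Python sketch below (function names are ours) confirms distributivity for the power-set lattice on three elements, encoded as bitmasks with union as join and intersection as meet, and shows that the "diamond" lattice M3 fails it:

```python
def distributes(elements, join, meet):
    """Check x meet (y join z) == (x meet y) join (x meet z) for all triples."""
    return all(
        meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
        for x in elements for y in elements for z in elements
    )

# Power set of {0, 1, 2} as bitmasks 0..7: join = union, meet = intersection.
print(distributes(range(8), lambda a, b: a | b, lambda a, b: a & b))  # True

# M3: bottom "0", top "1", three pairwise-incomparable atoms a, b, c.
def m3_join(x, y):
    if x == y or y == "0": return x
    if x == "0": return y
    return "1"

def m3_meet(x, y):
    if x == y or y == "1": return x
    if x == "1": return y
    return "0"

print(distributes(list("0abc1"), m3_join, m3_meet))  # False, e.g. x=a, y=b, z=c
```

In M3, a ∧ (b ∨ c) = a ∧ 1 = a, while (a ∧ b) ∨ (a ∧ c) = 0 ∨ 0 = 0, which is the classic failure of distributivity.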
https://en.wikipedia.org/wiki/Dual%20%28category%20theory%29
In category theory, a branch of mathematics, duality is a correspondence between the properties of a category C and the dual properties of the opposite category Cop. Given a statement regarding the category C, by interchanging the source and target of each morphism as well as interchanging the order of composing two morphisms, a corresponding dual statement is obtained regarding the opposite category Cop. Duality, as such, is the assertion that truth is invariant under this operation on statements. In other words, if a statement is true about C, then its dual statement is true about Cop. Also, if a statement is false about C, then its dual has to be false about Cop. Given a concrete category C, it is often the case that the opposite category Cop per se is abstract. Cop need not be a category that arises from mathematical practice. In this case, another category D is also termed to be in duality with C if D and Cop are equivalent as categories. In the case when C and its opposite Cop are equivalent, such a category is self-dual. Formal definition We define the elementary language of category theory as the two-sorted first order language with objects and morphisms as distinct sorts, together with the relations of an object being the source or target of a morphism and a symbol for composing two morphisms. Let σ be any statement in this language. We form the dual σop as follows: Interchange each occurrence of "source" in σ with "target". Interchange the order of composing morphisms. That is, replace each occurrence of with Informally, these conditions state that the dual of a statement is formed by reversing arrows and compositions. Duality is the observation that σ is true for some category C if and only if σop is true for Cop. Examples A morphism is a monomorphism if implies . Performing the dual operation, we get the statement that implies For a morphism , this is precisely what it means for f to be an epimorphism. In short, the property of being a
https://en.wikipedia.org/wiki/Hodge%20star%20operator
In mathematics, the Hodge star operator or Hodge star is a linear map defined on the exterior algebra of a finite-dimensional oriented vector space endowed with a nondegenerate symmetric bilinear form. Applying the operator to an element of the algebra produces the Hodge dual of the element. This map was introduced by W. V. D. Hodge. For example, in an oriented 3-dimensional Euclidean space, an oriented plane can be represented by the exterior product of two basis vectors, and its Hodge dual is the normal vector given by their cross product; conversely, any vector is dual to the oriented plane perpendicular to it, endowed with a suitable bivector. Generalizing this to an -dimensional vector space, the Hodge star is a one-to-one mapping of -vectors to -vectors; the dimensions of these spaces are the binomial coefficients . The naturalness of the star operator means it can play a role in differential geometry, when applied to the cotangent bundle of a pseudo-Riemannian manifold, and hence to differential -forms. This allows the definition of the codifferential as the Hodge adjoint of the exterior derivative, leading to the Laplace–de Rham operator. This generalizes the case of 3-dimensional Euclidean space, in which divergence of a vector field may be realized as the codifferential opposite to the gradient operator, and the Laplace operator on a function is the divergence of its gradient. An important application is the Hodge decomposition of differential forms on a closed Riemannian manifold. Formal definition for k-vectors Let be an -dimensional oriented vector space with a nondegenerate symmetric bilinear form , referred to here as an inner product. This induces an inner product on k-vectors for , by defining it on decomposable -vectors and to equal the Gram determinant extended to through linearity. The unit -vector is defined in terms of an oriented orthonormal basis of as: (Note: In the general pseudo-Riemannian case, orthonormality means: .) Th
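The 3-dimensional example above can be made concrete numerically. In the sketch below (an illustration with our own naming, storing the 2-vector u ∧ v as an antisymmetric matrix), the Hodge dual of u ∧ v agrees with the cross product u × v, as the text states:

```python
import numpy as np

def wedge(u, v):
    # Exterior product of two vectors in R^3, with components
    # (u ^ v)^{ij} = u^i v^j - u^j v^i, stored as an antisymmetric matrix.
    return np.outer(u, v) - np.outer(v, u)

def hodge_star(B):
    # Hodge dual of a 2-vector in oriented Euclidean R^3:
    # *(e1^e2) = e3, *(e2^e3) = e1, *(e3^e1) = e2,
    # i.e. w_k = (1/2) eps_{kij} B^{ij}.
    return np.array([B[1, 2], B[2, 0], B[0, 1]])

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
print(hodge_star(wedge(e1, e2)))   # [0. 0. 1.] -- that is, e3

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
print(np.allclose(hodge_star(wedge(u, v)), np.cross(u, v)))  # True
```

The mapping is one-to-one between 2-vectors and 1-vectors here because the binomial coefficients C(3, 2) and C(3, 1) are both 3.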
https://en.wikipedia.org/wiki/Lie%20algebroid
In mathematics, a Lie algebroid is a vector bundle together with a Lie bracket on its space of sections and a vector bundle morphism , satisfying a Leibniz rule. A Lie algebroid can thus be thought of as a "many-object generalisation" of a Lie algebra. Lie algebroids play the same role in the theory of Lie groupoids that Lie algebras play in the theory of Lie groups: reducing global problems to infinitesimal ones. Indeed, any Lie groupoid gives rise to a Lie algebroid, which is the vertical bundle of the source map restricted to the units. However, unlike Lie algebras, not every Lie algebroid arises from a Lie groupoid. Lie algebroids were introduced in 1967 by Jean Pradines. Definition and basic concepts A Lie algebroid is a triple consisting of a vector bundle over a manifold a Lie bracket on its space of sections a morphism of vector bundles , called the anchor, where is the tangent bundle of such that the anchor and the bracket satisfy the following Leibniz rule: where and is the derivative of along the vector field . One often writes when the bracket and the anchor are clear from the context; some authors denote Lie algebroids by , suggesting a "limit" of Lie groupoids when the arrows denoting source and target become "infinitesimally close". First properties It follows from the definition that for every , the kernel is a Lie algebra, called the isotropy Lie algebra at the kernel is a (not necessarily locally trivial) bundle of Lie algebras, called the isotropy Lie algebra bundle the image is a singular distribution which is integrable, i.e. it admits maximal immersed submanifolds , called the orbits, satisfying for every . Equivalently, orbits can be explicitly described as the sets of points which are joined by A-paths, i.e. pairs of paths in and in such that and the anchor map descends to a map between sections which is a Lie algebra morphism, i.e. for all . The property that induces a Lie algebra morphism
https://en.wikipedia.org/wiki/Lie%20groupoid
In mathematics, a Lie groupoid is a groupoid where the set of objects and the set of morphisms are both manifolds, all the category operations (source and target, composition, identity-assigning map and inversion) are smooth, and the source and target operations are submersions. A Lie groupoid can thus be thought of as a "many-object generalization" of a Lie group, just as a groupoid is a many-object generalization of a group. Accordingly, while Lie groups provide a natural model for (classical) continuous symmetries, Lie groupoids are often used as a model for (and arise from) generalised, point-dependent symmetries. Extending the correspondence between Lie groups and Lie algebras, Lie groupoids are the global counterparts of Lie algebroids. Lie groupoids were introduced by Charles Ehresmann under the name differentiable groupoids. Definition and basic concepts A Lie groupoid consists of two smooth manifolds and two surjective submersions (called, respectively, source and target projections) a map (called multiplication or composition map), where we use the notation a map (called unit map or object inclusion map), where we use the notation a map (called inversion), where we use the notation such that the composition satisfies and for every for which the composition is defined the composition is associative, i.e. for every for which the composition is defined works as an identity, i.e. for every and and for every works as an inverse, i.e. and for every . Using the language of category theory, a Lie groupoid can be more compactly defined as a groupoid (i.e. a small category where all the morphisms are invertible) such that the sets of objects and of morphisms are manifolds, the maps , , , and are smooth and and are submersions. A Lie groupoid is therefore not simply a groupoid object in the category of smooth manifolds: one has to require the additional property that and are submersions. Lie groupoids are often denoted by
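A minimal finite intuition for the structure maps (ignoring all smoothness) is the pair groupoid over a set M, where an arrow from x to y is just the pair (y, x). The Python sketch below, with our own naming conventions, spells out source, target, unit, inversion and composition and checks the groupoid identities:

```python
# Pair groupoid over the set {p, q, r}: an arrow x -> y is the pair (y, x).
source = lambda g: g[1]
target = lambda g: g[0]
unit   = lambda x: (x, x)            # identity arrow at the object x
inv    = lambda g: (g[1], g[0])      # reverse the arrow

def compose(g, h):
    """g after h; defined only when source(g) == target(h)."""
    assert source(g) == target(h), "composition undefined"
    return (target(g), source(h))

g = ('q', 'p')                       # an arrow p -> q
h = ('r', 'q')                       # an arrow q -> r
print(compose(h, g))                 # ('r', 'p'): the composite arrow p -> r
print(compose(g, inv(g)) == unit(target(g)))   # True: g . g^-1 = 1_q
```

With M a manifold, the pair groupoid M × M (with these same structure maps) is in fact one of the standard examples of a Lie groupoid.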
https://en.wikipedia.org/wiki/Quotient%20%28universal%20algebra%29
In mathematics, a quotient algebra is the result of partitioning the elements of an algebraic structure using a congruence relation. Quotient algebras are also called factor algebras. Here, the congruence relation must be an equivalence relation that is additionally compatible with all the operations of the algebra, in the formal sense described below. Its equivalence classes partition the elements of the given algebraic structure. The quotient algebra has these classes as its elements, and the compatibility conditions are used to give the classes an algebraic structure. The idea of the quotient algebra unifies in one common notion the quotient rings of ring theory, the quotient groups of group theory, the quotient spaces of linear algebra and the quotient modules of representation theory. Compatible relation Let A be the set of the elements of an algebra , and let E be an equivalence relation on the set A. The relation E is said to be compatible with (or have the substitution property with respect to) an n-ary operation f, if for implies for any with . An equivalence relation compatible with all the operations of an algebra is called a congruence with respect to this algebra. Quotient algebras and homomorphisms Any equivalence relation E on a set A partitions this set into equivalence classes. The set of these equivalence classes is usually called the quotient set, and denoted A/E. For an algebra , it is straightforward to define the operations induced on the elements of A/E if E is a congruence. Specifically, for any operation of arity in (where the superscript simply denotes that it is an operation in , and the subscript enumerates the functions in and their arities) define as , where denotes the equivalence class of generated by E ("x modulo E"). For an algebra , given a congruence E on , the algebra is called the quotient algebra (or factor algebra) of modulo E. There is a natural homomorphism from
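As a concrete sketch of the substitution property (the function names and the toy algebra are our choices, not from the article): take the algebra (Z12, +) and the relation a ~ b iff a ≡ b (mod 4). A brute-force check confirms it is a congruence, while reduction mod 5 is not:

```python
def compatible(elements, equiv, op):
    """Substitution property for a binary operation op:
    a1 ~ b1 and a2 ~ b2 must imply op(a1, a2) ~ op(b1, b2)."""
    return all(
        equiv(op(a1, a2), op(b1, b2))
        for a1 in elements for b1 in elements if equiv(a1, b1)
        for a2 in elements for b2 in elements if equiv(a2, b2)
    )

Z12 = range(12)
add = lambda a, b: (a + b) % 12      # the algebra (Z_12, +)
mod4 = lambda a, b: a % 4 == b % 4
mod5 = lambda a, b: a % 5 == b % 5

print(compatible(Z12, mod4, add))    # True: a congruence on (Z_12, +)
print(compatible(Z12, mod5, add))    # False: 0 ~ 10 but 0+0 = 0 is not ~ 10+10 = 8

# The quotient algebra Z12/~ has the four mod-4 classes as its elements,
# and the induced addition makes it a copy of (Z_4, +).
classes = {frozenset(x for x in Z12 if mod4(x, a)) for a in Z12}
print(len(classes))                  # 4
```

The mod-4 case works because 4 divides 12, so (a + b) % 12 and a + b always land in the same mod-4 class; mod 5 lacks this compatibility.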
https://en.wikipedia.org/wiki/Subcategory
In mathematics, specifically category theory, a subcategory of a category C is a category S whose objects are objects in C and whose morphisms are morphisms in C with the same identities and composition of morphisms. Intuitively, a subcategory of C is a category obtained from C by "removing" some of its objects and arrows. Formal definition Let C be a category. A subcategory S of C is given by a subcollection of objects of C, denoted ob(S), a subcollection of morphisms of C, denoted hom(S), such that for every X in ob(S), the identity morphism idX is in hom(S), for every morphism f : X → Y in hom(S), both the source X and the target Y are in ob(S), for every pair of morphisms f and g in hom(S) the composite f o g is in hom(S) whenever it is defined. These conditions ensure that S is a category in its own right: its collection of objects is ob(S), its collection of morphisms is hom(S), and its identities and composition are as in C. There is an obvious faithful functor I : S → C, called the inclusion functor, which takes objects and morphisms to themselves. Let S be a subcategory of a category C. We say that S is a full subcategory of C if for each pair of objects X and Y of S, homS(X, Y) = homC(X, Y). A full subcategory is one that includes all morphisms in C between objects of S. For any collection of objects A in C, there is a unique full subcategory of C whose objects are those in A. Examples The category of finite sets forms a full subcategory of the category of sets. The category whose objects are sets and whose morphisms are bijections forms a non-full subcategory of the category of sets. The category of abelian groups forms a full subcategory of the category of groups. The category of rings (whose morphisms are unit-preserving ring homomorphisms) forms a non-full subcategory of the category of rngs. For a field K, the category of K-vector spaces forms a full subcategory of the category of (left or right) K-modules. Embeddings Given a subcategory S of C, the inclusion fun
https://en.wikipedia.org/wiki/Green%27s%20function
In mathematics, a Green's function is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions. This means that if is the linear differential operator, then the Green's function is the solution of the equation , where is Dirac's delta function; the solution of the initial-value problem is the convolution (). Through the superposition principle, given a linear ordinary differential equation (ODE), , one can first solve , for each , and realizing that, since the source is a sum of delta functions, the solution is a sum of Green's functions as well, by linearity of . Green's functions are named after the British mathematician George Green, who first developed the concept in the 1820s. In the modern study of linear partial differential equations, Green's functions are studied largely from the point of view of fundamental solutions instead. Under many-body theory, the term is also used in physics, specifically in quantum field theory, aerodynamics, aeroacoustics, electrodynamics, seismology and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition. In quantum field theory, Green's functions take the roles of propagators. Definition and uses A Green's function, , of a linear differential operator acting on distributions over a subset of the Euclidean space , at a point , is any solution of where is the Dirac delta function. This property of a Green's function can be exploited to solve differential equations of the form If the kernel of is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green's function. Green's functions may be categorized, by the type of boundary conditions satisfied, by a Green's function number. Also, Green's functions in general are
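As an illustrative numerical sketch (the function names and the choice of operator are ours): for L = d²/dx² on (0, 1) with homogeneous Dirichlet conditions, the Green's function is G(x, s) = x(s − 1) for x ≤ s and s(x − 1) for x ≥ s, and integrating it against a source term recovers the exact solution, as the definition promises:

```python
import numpy as np

def greens_dirichlet(x, s):
    """Green's function of L = d^2/dx^2 on (0, 1) with u(0) = u(1) = 0:
    G(x, s) = x (s - 1) for x <= s, and s (x - 1) for x >= s."""
    return np.where(x <= s, x * (s - 1.0), s * (x - 1.0))

# Solve u'' = f for f(s) = -pi^2 sin(pi s); the exact solution is u = sin(pi x).
N = 4000
s = (np.arange(N) + 0.5) / N          # midpoint quadrature nodes on (0, 1)
f = -np.pi**2 * np.sin(np.pi * s)

def u(x):
    # u(x) = integral_0^1 G(x, s) f(s) ds, midpoint rule
    return np.sum(greens_dirichlet(x, s) * f) / N

print(abs(u(0.5) - 1.0) < 1e-4)                     # True: matches sin(pi/2)
print(abs(u(0.25) - np.sin(np.pi * 0.25)) < 1e-4)   # True
```

Note that G itself is independent of f: once computed for the operator and boundary conditions, the same kernel solves Lu = f for any source by this one integral.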
https://en.wikipedia.org/wiki/Rank-into-rank
In set theory, a branch of mathematics, a rank-into-rank embedding is a large cardinal property defined by one of the following four axioms given in order of increasing consistency strength. (A set of rank < λ is one of the elements of the set Vλ of the von Neumann hierarchy.) Axiom I3: There is a nontrivial elementary embedding of Vλ into itself. Axiom I2: There is a nontrivial elementary embedding of V into a transitive class M that includes Vλ where λ is the first fixed point above the critical point. Axiom I1: There is a nontrivial elementary embedding of Vλ+1 into itself. Axiom I0: There is a nontrivial elementary embedding of L(Vλ+1) into itself with critical point below λ. These are essentially the strongest known large cardinal axioms not known to be inconsistent in ZFC; the axiom for Reinhardt cardinals is stronger, but is not consistent with the axiom of choice. If j is the elementary embedding mentioned in one of these axioms and κ is its critical point, then λ is the limit of as n goes to ω. More generally, if the axiom of choice holds, it is provable that if there is a nontrivial elementary embedding of Vα into itself then α is either a limit ordinal of cofinality ω or the successor of such an ordinal. The axioms I0, I1, I2, and I3 were at first suspected to be inconsistent (in ZFC) as it was thought possible that Kunen's inconsistency theorem that Reinhardt cardinals are inconsistent with the axiom of choice could be extended to them, but this has not yet happened and they are now usually believed to be consistent. Every I0 cardinal κ (speaking here of the critical point of j) is an I1 cardinal. Every I1 cardinal κ (sometimes called ω-huge cardinals) is an I2 cardinal and has a stationary set of I2 cardinals below it. Every I2 cardinal κ is an I3 cardinal and has a stationary set of I3 cardinals below it. Every I3 cardinal κ has another I3 cardinal above it and is an n-huge cardinal for every n<ω. Axiom I1 implies that Vλ+1 (equivalently,
https://en.wikipedia.org/wiki/SSE2
SSE2 (Streaming SIMD Extensions 2) is one of the Intel SIMD (Single Instruction, Multiple Data) processor supplementary instruction sets introduced by Intel with the initial version of the Pentium 4 in 2000. It extends the earlier SSE instruction set, and is intended to fully replace MMX. Intel extended SSE2 to create SSE3 in 2004. SSE2 added 144 new instructions to SSE, which has 70 instructions. Competing chip-maker AMD added support for SSE2 with the introduction of their Opteron and Athlon 64 ranges of AMD64 64-bit CPUs in 2003. Features Most of the SSE2 instructions implement the integer vector operations also found in MMX. Instead of the MMX registers they use the XMM registers, which are wider and allow for significant performance improvements in specialized applications. Another advantage of replacing MMX with SSE2 is avoiding the mode-switching penalty incurred when issuing x87 instructions, which MMX suffers from because it shares register space with the x87 FPU. SSE2 also complements the floating-point vector operations of the SSE instruction set by adding support for the double precision data type. Other SSE2 extensions include a set of cache control instructions intended primarily to minimize cache pollution when processing infinite streams of information, and a sophisticated complement of numeric format conversion instructions. AMD's implementation of SSE2 on the AMD64 (x86-64) platform includes an additional eight registers, doubling the total number to 16 (XMM0 through XMM15). These additional registers are only visible when running in 64-bit mode. Intel adopted these additional registers as part of their support for x86-64 architecture (or in Intel's parlance, "Intel 64") in 2004. Differences between x87 FPU and SSE2 FPU (x87) instructions provide higher precision by calculating intermediate results with 80 bits of precision, by default, to minimise roundoff error in numerically unstable algorithms (see IEEE 754 design rationale and references therein