https://en.wikipedia.org/wiki/Patch%20%28computing%29
A patch is a set of changes to a computer program or its supporting data designed to update, fix, or improve it. This includes fixing security vulnerabilities and other bugs, with such patches usually being called bugfixes or bug fixes. Patches are often written to improve the functionality, usability, or performance of a program. The majority of patches are provided by software vendors for operating system and application updates. Patches may be installed either under programmed control or by a human programmer using an editing tool or a debugger. They may be applied to program files on a storage device, or in computer memory. Patches may be permanent (until patched again) or temporary. Patching makes possible the modification of compiled and machine language object programs when the source code is unavailable. This demands a thorough understanding of the inner workings of the object code by the person creating the patch, which is difficult without close study of the source code. Someone unfamiliar with the program being patched may install a patch using a patch utility created by another person. Even when the source code is available, patching makes possible the installation of small changes to the object program without the need to recompile or reassemble. For minor changes to software, it is often easier and more economical to distribute patches to users rather than redistributing a newly recompiled or reassembled program. Although meant to fix problems, poorly designed patches can sometimes introduce new problems (see software regressions). In some special cases updates may knowingly break the functionality or disable a device, for instance, by removing components for which the update provider is no longer licensed. Patch management is a part of lifecycle management, and is the process of using a strategy and plan of what patches should be applied to which systems at a specified time. Types Binary patches Patches for proprietary softwar
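Source patches are usually distributed in unified-diff form. As a minimal sketch (the file name and contents are hypothetical), Python's standard `difflib` can produce such a diff between an original and a fixed version:

```python
import difflib

# Original and patched versions of a small source file (hypothetical contents).
original = ["def greet():\n", "    print('Hello')\n"]
fixed    = ["def greet():\n", "    print('Hello, world')\n"]

# A unified diff is the usual textual form in which source patches are distributed.
patch = difflib.unified_diff(original, fixed, fromfile="greet.py", tofile="greet.py")
print("".join(patch))
```

Running this prints hunk headers (`@@ -1,2 +1,2 @@`) plus `-`/`+` lines marking the removed and added text, which a patch utility then replays against the target file.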
https://en.wikipedia.org/wiki/Low-pressure%20area
In meteorology, a low-pressure area, low area or low is a region where the atmospheric pressure is lower than that of surrounding locations. Low-pressure areas are commonly associated with inclement weather (such as cloudy, windy, with possible rain or storms), while high-pressure areas are associated with lighter winds and clear skies. Winds circle anti-clockwise around lows in the northern hemisphere, and clockwise in the southern hemisphere, due to opposing Coriolis forces. Low-pressure systems form under areas of wind divergence that occur in the upper levels of the atmosphere (aloft). The formation process of a low-pressure area is known as cyclogenesis. In meteorology, atmospheric divergence aloft occurs in two kinds of places: The first is in the area on the east side of upper troughs, which form half of a Rossby wave within the Westerlies (a trough with large wavelength that extends through the troposphere). A second is an area where wind divergence aloft occurs ahead of embedded shortwave troughs, which are of smaller wavelength. Diverging winds aloft, ahead of these troughs, cause atmospheric lift within the troposphere below as air flows upwards away from the surface, which lowers surface pressures as this upward motion partially counteracts the force of gravity packing the air close to the ground. Thermal lows form due to localized heating caused by greater solar incidence over deserts and other land masses. Since localized areas of warm air are less dense than their surroundings, this warmer air rises, which lowers atmospheric pressure near that portion of the Earth's surface. Large-scale thermal lows over continents help drive monsoon circulations. Low-pressure areas can also form due to organized thunderstorm activity over warm water. When this occurs over the tropics in concert with the Intertropical Convergence Zone, it is known as a monsoon trough. Monsoon troughs reach their northerly extent in August and their southerly extent in February. Wh
https://en.wikipedia.org/wiki/Reality%20Check%20%28American%20TV%20series%29
Reality Check was a 1995 television show starring Ryan Seacrest as Jack Craft, a 19-year-old inventor who gets stuck in his computer mainframe project on June 8, 1995. The two Bonner siblings (Samantha and Nicholas) reactivate the computer on September 17, 1995, attempting to get Jack Craft out of the mainframe, while also encountering additional members of the project. The show was broadcast under syndication with each episode running for 15 minutes including commercials. It was produced in association with S & S Productions and ran for fourteen episodes before ending. Characters Abigail Gustafson - Samantha Bonner John Aaron Bennett - Nicholas Bonner Ryan Seacrest - Jack Craft Tom Greer - Will Maria Cabini - Isis Blake Heron - Bud McNeight Yasmine Seyfi - Yasmine Shanna Marsha Crenshaw - DEV the computer, and additional voices Mike Dyche - Glitch and voices Episodes "Note Of A Different Color" - Samantha composes an Earth Day song with the help of animated computer program Mr. Re. "The Great Escape" "The Ole Ballgame" - Nicholas learns about swinging strategies with the help of Jack and guest star Terry Pendleton. ? - This episode travelled through time for what had happened in the 1960s, 1970s (featuring Jack Craft), 1980s (featuring Isis), and 1990s. External links 1995 American television series debuts 1990s American children's television series Mainframe computers
https://en.wikipedia.org/wiki/Logistics%20automation
Logistics automation is the application of computer software or automated machinery to improve the efficiency of logistics operations. Typically this refers to operations within a warehouse or distribution center, with broader tasks undertaken by supply chain engineering systems and enterprise resource planning systems. Logistics automation systems can powerfully complement the facilities provided by these higher level computer systems. The focus on an individual node within a wider logistics network allows systems to be highly tailored to the requirements of that node. Components Logistics automation systems comprise a variety of hardware and software components: Fixed machinery Automated storage and retrieval systems, including: Cranes serve a rack of locations, allowing many levels of stock to be stacked vertically, and allowing for higher storage densities and better space utilization than alternatives. In systems produced by Amazon Robotics, automated guided vehicles move items to a human picker. Conveyors: Containers can enter automated conveyors in one area of the warehouse and, either through hard-coded rules or data input, be moved to a selected destination. Vertical carousels based on the paternoster lift system or using space optimization, similar to vending machines, but on a larger scale. Sortation systems: similar to conveyors but typically with higher capacity and able to divert containers more quickly. Typically used to distribute high volumes of small cartons to a large set of locations. Industrial robots: four- to six-axis industrial robots, e.g. palleting robots, are used for palleting, depalleting, packaging, commissioning and order picking. Typically all of these will automatically identify and track containers using barcodes or, increasingly, RFID tags. Motion check weighers may be used to reject cases or individual products that are under or over their specified weight. They are often used in kitting conveyor lines to ensure a
https://en.wikipedia.org/wiki/Oscillation%20%28mathematics%29
In mathematics, the oscillation of a function or a sequence is a number that quantifies how much that sequence or function varies between its extreme values as it approaches infinity or a point. As is the case with limits, there are several definitions that put the intuitive concept into a form suitable for a mathematical treatment: oscillation of a sequence of real numbers, oscillation of a real-valued function at a point, and oscillation of a function on an interval (or open set). Definitions Oscillation of a sequence Let (a_n) be a sequence of real numbers. The oscillation of that sequence is defined as the difference (possibly infinite) between the limit superior and limit inferior of (a_n): ω(a_n) = lim sup a_n − lim inf a_n. The oscillation is zero if and only if the sequence converges. It is undefined if lim sup a_n and lim inf a_n are both equal to +∞ or both equal to −∞, that is, if the sequence tends to +∞ or −∞. Oscillation of a function on an open set Let f be a real-valued function of a real variable. The oscillation of f on an interval I in its domain is the difference between the supremum and infimum of f: ω_f(I) = sup_{x∈I} f(x) − inf_{x∈I} f(x). More generally, if f is a function on a topological space (such as a metric space), then the oscillation of f on an open set U is ω_f(U) = sup_{x∈U} f(x) − inf_{x∈U} f(x). Oscillation of a function at a point The oscillation of a function f of a real variable at a point x_0 is defined as the limit as ε → 0 of the oscillation of f on an ε-neighborhood of x_0: ω_f(x_0) = lim_{ε→0} ω_f((x_0 − ε, x_0 + ε)). This is the same as the difference between the limit superior and limit inferior of the function at x_0, provided the point x_0 is not excluded from the limits. More generally, if f is a real-valued function on a metric space, then the oscillation is ω_f(x_0) = lim_{ε→0} ω_f(B_ε(x_0)). Examples f(x) = 1/x has oscillation ∞ at x = 0, and oscillation 0 at other finite x and at −∞ and +∞. f(x) = sin(1/x) (the topologist's sine curve) has oscillation 2 at x = 0, and 0 elsewhere. f(x) = sin x has oscillation 0 at every finite x, and 2 at −∞ and +∞. The sequence (−1)^n, i.e. 1, -1, 1, -1, 1, -1..., has oscillation 2. In the last example the sequence is periodic, and any sequence that is periodic without being constant
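The sequence definition (oscillation = lim sup minus lim inf) can be illustrated numerically. The tail-based approximation below is an assumption made for finite data, not part of the formal definition:

```python
def oscillation(seq, tail=100):
    """Approximate lim sup - lim inf of a sequence by inspecting a long tail."""
    t = seq[-tail:]
    return max(t) - min(t)

alternating = [(-1) ** n for n in range(1000)]    # 1, -1, 1, -1, ...
converging  = [1 / (n + 1) for n in range(1000)]  # tends to 0

print(oscillation(alternating))  # 2
print(oscillation(converging) < 0.01)  # True: converging, so oscillation ~ 0
```

The alternating sequence keeps oscillation 2 no matter how far out the tail is taken, matching the periodic example above, while the convergent sequence's tail spread shrinks toward zero.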
https://en.wikipedia.org/wiki/Disk%20array
A disk array is a disk storage system which contains multiple disk drives. It is differentiated from a disk enclosure, in that an array has cache memory and advanced functionality, like RAID, deduplication, encryption and virtualization. Components of a disk array include: Disk array controllers Cache in form of both volatile random-access memory and non-volatile flash memory. Disk enclosures for both magnetic rotational hard disk drives and electronic solid-state drives. Power supplies Typically a disk array provides increased availability, resiliency, and maintainability by using additional redundant components (controllers, power supplies, fans, etc.), often up to the point where all single points of failure (SPOFs) are eliminated from the design. Additionally, disk array components are often hot-swappable. Traditionally disk arrays were divided into categories: Network attached storage (NAS) arrays Storage area network (SAN) arrays: Modular SAN arrays Monolithic SAN arrays Utility Storage Arrays Storage virtualization Primary vendors of storage systems include Coraid, Inc., DataDirect Networks, Dell EMC, Fujitsu, Hewlett Packard Enterprise, Hitachi Data Systems, Huawei, IBM, Infortrend, NetApp, Oracle Corporation, Panasas, Pure Storage and other companies that often act as OEM for the above vendors and do not themselves market the storage components they manufacture. References Computer data storage Fault-tolerant computer systems RAID
https://en.wikipedia.org/wiki/Disk%20array%20controller
A disk array controller is a device that manages the physical disk drives and presents them to the computer as logical units. It almost always implements hardware RAID, thus it is sometimes referred to as RAID controller. It also often provides additional disk cache. Disk array controller is often improperly shortened to disk controller. The two should not be confused as they provide very different functionality. Front-end and back-end side A disk array controller provides front-end interfaces and back-end interfaces. The back-end interface communicates with the controlled disks. Hence, its protocol is usually ATA (a.k.a. PATA), SATA, SCSI, FC or SAS. The front-end interface communicates with a computer's host adapter (HBA, Host Bus Adapter) and uses: one of ATA, SATA, SCSI, FC; these are popular protocols used by disks, so by using one of them a controller may transparently emulate a disk for a computer. somewhat less popular dedicated protocols for specific solutions: FICON/ESCON, iSCSI, HyperSCSI, ATA over Ethernet or InfiniBand. A single controller may use different protocols for back-end and for front-end communication. Many enterprise controllers use FC on front-end and SATA on back-end. Enterprise controllers In a modern enterprise architecture disk array controllers (sometimes also called storage processors, or SPs) are parts of physically independent enclosures, such as disk arrays placed in a storage area network (SAN) or network-attached storage (NAS) servers. Those external disk arrays are usually purchased as an integrated subsystem of RAID controllers, disk drives, power supplies, and management software. It is up to controllers to provide advanced functionality (various vendors name these differently): Automatic failover to another controller (transparent to computers transmitting data) Long-running operations performed without downtime Forming a new RAID set Reconstructing degraded RAID set (after a disk failure) Adding a disk to onl
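One piece of the "advanced functionality" above, reconstructing a degraded RAID set, rests on the XOR parity relationship used by RAID levels 4 and 5. A toy sketch (four-byte blocks, not a real controller implementation):

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Parity block = byte-wise XOR of all data blocks (as in RAID 4/5)."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# If the disk holding d1 fails, its contents can be rebuilt
# from the surviving data blocks plus the parity block.
recovered = xor_blocks(d0, d2, parity)
print(recovered == d1)  # True
```

Because XOR is its own inverse, any single missing block equals the XOR of all the others, which is exactly what a controller computes, stripe by stripe, during reconstruction.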
https://en.wikipedia.org/wiki/IEEE%201164
The IEEE 1164 standard (Multivalue Logic System for VHDL Model Interoperability) is a technical standard published by the IEEE in 1993. It describes the definitions of logic values to be used in electronic design automation, for the VHDL hardware description language. It was sponsored by the Design Automation Standards Committee of the Institute of Electrical and Electronics Engineers (IEEE). The standardization effort was based on the donation of the Synopsys MVL-9 type declaration. The primary data type (standard unresolved logic) consists of nine character literals (see table on the right). This system promoted a useful set of logic values that typical CMOS logic designs could implement in the vast majority of modeling situations, including: 'Z' literal to make tri-state buffer logic easy 'H' and 'L' weak drives to permit wired-AND and wired-OR logic. 'U' for default value for all object declarations so that during simulations uninitialized values are easily detectable and thus easily corrected if necessary. In VHDL, the hardware designer makes the declarations visible via the following library and use statements: library IEEE; use IEEE.std_logic_1164.all; Using values in simulation Use of 'U' Many hardware description language (HDL) simulation tools, such as Verilog and VHDL, support an unknown value like that shown above during simulation of digital electronics. The unknown value may be the result of a design error, which the designer can correct before synthesis into an actual circuit. The unknown also represents uninitialised memory values and circuit inputs before the simulation has asserted what the real input value should be. HDL synthesis tools usually produce circuits that operate only on binary logic. Use of '-' When designing a digital circuit, some conditions may be outside the scope of the purpose that the circuit will perform. Thus, the designer does not care what happens under those conditions. In addition, the situation occurs that in
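When two drivers contend for one signal, VHDL applies the standard's resolution function to the nine values. The Python model below (Python is used for illustration since only a VHDL fragment appears above) covers an illustrative subset of the rules, 'U', 'X', '0', '1', 'Z', 'L', 'H', and is a sketch, not the full IEEE 1164 resolution table:

```python
# Subset of the IEEE 1164 nine-valued logic: 'U' uninitialised, 'X' unknown,
# '0'/'1' strong drives, 'Z' high impedance, 'L'/'H' weak drives.
def resolve(a: str, b: str) -> str:
    """Resolve two drivers on one signal (illustrative subset of std_logic)."""
    if "U" in (a, b):
        return "U"      # an uninitialised driver dominates everything
    if a == "Z":
        return b        # a tri-stated driver yields to the other driver
    if b == "Z":
        return a
    if a == b:
        return a
    weak = {"L": "0", "H": "1"}
    if a in weak and b not in weak:
        return b        # a strong drive overrides a weak drive
    if b in weak and a not in weak:
        return a
    return "X"          # conflicting strong drives resolve to unknown

print(resolve("1", "Z"))  # 1 -- this is why 'Z' makes tri-state buses easy
print(resolve("0", "1"))  # X
print(resolve("H", "0"))  # 0 -- weak pull-up loses, enabling wired-AND logic
```

The 'Z'-yields and weak-loses rules are what make the tri-state and wired-AND/wired-OR idioms mentioned above work.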
https://en.wikipedia.org/wiki/Race%20condition
A race condition or race hazard is the condition of an electronics, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events. It becomes a bug when one or more of the possible behaviors is undesirable. The term race condition was already in use by 1954, for example in David A. Huffman's doctoral thesis "The synthesis of sequential switching circuits". Race conditions can occur especially in logic circuits, multithreaded, or distributed software programs. In electronics A typical example of a race condition may occur when a logic gate combines signals that have traveled along different paths from the same source. The inputs to the gate can change at slightly different times in response to a change in the source signal. The output may, for a brief period, change to an unwanted state before settling back to the designed state. Certain systems can tolerate such glitches but if this output functions as a clock signal for further systems that contain memory, for example, the system can rapidly depart from its designed behaviour (in effect, the temporary glitch becomes a permanent glitch). Consider, for example, a two-input AND gate fed with the following logic: A logic signal on one input and its negation, (the ¬ is a boolean negation), on another input in theory never output a true value: . If, however, changes in the value of take longer to propagate to the second input than the first when changes from false to true then a brief period will ensue during which both inputs are true, and so the gate's output will also be true. A practical example of a race condition can occur when logic circuitry is used to detect certain outputs of a counter. If all the bits of the counter do not change exactly simultaneously, there will be intermediate patterns that can trigger false matches. Critical and non-critical forms A critical race condition occurs when the order in which internal variables
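The software analogue of the timing hazards described above is an unsynchronized read-modify-write. The sketch below (example code, not from the article) shows the standard fix, serializing the update with a lock, which makes the final count deterministic regardless of thread scheduling:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    """Increment the shared counter n times, holding a lock for each update."""
    global counter
    for _ in range(n):
        with lock:          # serialise the read-modify-write sequence
            counter += 1    # read counter, add 1, write it back

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- with the lock, no updates are lost
```

Without the lock, two threads can both read the same old value before either writes, losing an increment; which behavior occurs depends purely on the timing of thread interleaving, i.e., a race condition.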
https://en.wikipedia.org/wiki/Phoenix%20%28computer%29
Phoenix (February 1973 – September 30, 1995) was an IBM mainframe computer at Cambridge University's Computer Laboratory. "Phoenix/MVS" was also the name of the computer's operating system, written in-house by Computer Laboratory members. Its DNS hostname was . Hardware The Phoenix system was an IBM 370/165. It was made available for test purposes to 20 selected users, via consoles in the public console room, in February 1973. The following month, the Computing Service petitioned the Computer Board for an extra mebibyte of store, to double the amount of storage that the machine had. The petition was accepted and the extra store was delivered in September 1973. Communications The IBM-supplied Telecommunications Access Method (TCAM) and communications controller were replaced in 1975 by a system, called Parrot, that was created locally by the staff of the Computer Laboratory, comprising their own software and a PDP-11 complex. Their goal in doing so was to provide a better user interface than was available with a standard IBM system, alongside greater flexibility, reliability, and efficiency. They wanted to support 300 terminals. The initial system, supplied in 1972, comprised the PDP-11 emulating an IBM 2703 transmission control unit, which TCAM communicated with just as though it were a 2703. The PDP-11 was used instead of a bank of 2703s because for a projected 300 terminals a bank of 2703s was not scalable, too expensive, and inadequate for the Computing Service's needs, since it required paper tape readers and card punches as well. Even this solution proved to be unsatisfactory, and in 1975 TCAM was replaced by Parrot, with 200 terminals connected to the PDP-11, of which 80 could be simultaneously active. For full technical details of Parrot, see the technical report by Hazel and Stoneley. Software The staff were motivated to write their own system software for the IBM installation as a result of their dissatisfaction with IBM's own interactive com
https://en.wikipedia.org/wiki/Apple%20Developer
Apple Developer (formerly Apple Developer Connection) is Apple Inc.'s website for software development tools, application programming interfaces (APIs), and technical resources. It contains resources to help software developers write software for the macOS, iOS, iPadOS, watchOS, and tvOS platforms. The applications are created in Xcode, or sometimes using other supported 3rd party programs. The apps can then be submitted to App Store Connect (formerly iTunes Connect), another one of Apple's websites, for approval by the internal review team. Once approved, they can be distributed publicly via the respective app stores, i.e. App Store (iOS) for iOS and iPadOS apps, iMessage app store for Messages apps and Sticker pack apps, App Store (tvOS) for Apple TV apps, watchOS app store for Apple Watch apps with watchOS 6 and later, and via App Store (iOS) for earlier versions of watchOS. macOS apps are a notable exception to this, as they can be distributed similarly via Apple's Mac App Store or independently on the World Wide Web. Programs Mac The Mac developer program is a way for developers of Apple's macOS operating system to distribute their apps through the Mac App Store. It costs US$99/year. Unlike iOS, developers are not required to sign up for the program in order to distribute their applications. Mac applications can freely be distributed via the developer's website and/or any other method of distribution excluding the Mac App Store. The Mac Developer Program also provides developers with resources to help them distribute their Mac applications. Software leaks There have been several leaks of secret Apple software through the prerelease program, most notably the Mac OS X 10.4 Tiger leaks, in which Apple sued three men who allegedly obtained advance copies of Mac OS X 10.4 prerelease builds from the site and leaked it to BitTorrent. Attempted hacks On July 18, 2013, an intruder attempted to access sensitive personal information on Apple's developer servers. The inf
https://en.wikipedia.org/wiki/Microcystin
Microcystins—or cyanoginosins—are a class of toxins produced by certain freshwater cyanobacteria, commonly known as blue-green algae. Over 250 different microcystins have been discovered so far, of which microcystin-LR is the most common. Chemically they are cyclic heptapeptides produced through nonribosomal peptide synthases. Cyanobacteria can produce microcystins in large quantities during algal blooms which then pose a major threat to drinking and irrigation water supplies, and the environment at large. Characteristics Microcystins—or cyanoginosins—are a class of toxins produced by certain freshwater cyanobacteria; primarily Microcystis aeruginosa but also other Microcystis, as well as members of the Planktothrix, Anabaena, Oscillatoria and Nostoc genera. Over 250 different microcystins have been discovered so far, of which microcystin-LR is the most common. Chemically they are cyclic heptapeptides produced through nonribosomal peptide synthases. Microcystin-LR (i.e. X = leucine, Z = arginine) is the most toxic form of over 80 known toxic variants, and is also the most studied by chemists, pharmacologists, biologists, and ecologists. Microcystin-containing 'blooms' are a problem worldwide, including China, Brazil, Australia, South Africa, the United States and much of Europe. Hartebeespoort Dam in South Africa is one of the most contaminated sites in Africa, and possibly in the world. Chemistry Microcystins have a common structural framework of cyclo(D-Ala1-X2-D-Masp3-Z4-Adda5-D-γ-Glu6-Mdha7), where X and Z are variable amino acids; the systematic name "microcystin-XZ" (MC-XZ in short) is then assigned based on the one-letter codes (if available; longer codes otherwise) of the amino acids. If the molecule shows any other modification, the differences are noted in square brackets before "MC". Of these, several are uncommon non-proteinogenic amino acids: D-Masp is D-erythro-β-methyl-isoaspartic acid, a derivative of aspartic acid in β-amino acid form; Adda is (all-S,all-E)-
https://en.wikipedia.org/wiki/MirOS%20BSD
MirOS BSD (originally called MirBSD) is a free and open source operating system which started as a fork of OpenBSD 3.1 in August 2002. It was intended to maintain the security of OpenBSD with better support for European localisation. Since then it has also incorporated code from other free BSD descendants, including NetBSD, MicroBSD and FreeBSD. Code from MirOS BSD was also incorporated into ekkoBSD, and when ekkoBSD ceased to exist, artwork, code and developers ended up working on MirOS BSD for a while. Unlike the three major BSD distributions, MirOS BSD supports only the x86 and SPARC architectures. One of the project's goals was to be able to port the MirOS userland to run on the Linux kernel, hence the deprecation of the MirBSD name in favour of MirOS. History MirOS BSD originated as OpenBSD-current-mirabilos, an OpenBSD patchkit, but soon grew on its own after some differences in opinion between the OpenBSD project leader Theo de Raadt and Thorsten Glaser. Despite the forking, MirOS BSD was synchronised with the ongoing development of OpenBSD, thus inheriting most of its good security history, as well as NetBSD and other BSD flavours. One goal was to provide a faster integration cycle for new features and software than OpenBSD. According to the developers, "controversial decisions are often made differently from OpenBSD; for instance, there won't be any support for SMP in MirOS". There will also be a more tolerant software inclusion policy, and "the end result is, hopefully, a more refined BSD experience". Another goal of MirOS BSD was to create a more "modular" base BSD system, similar to Debian. While MirOS Linux (linux kernel + BSD userland) was discussed by the developers sometime in 2004, it has not materialised. Features Development snapshots are live and installation CD for x86 and SPARC architectures on one media, via the DuaLive technology. Latest snapshots have been extended to further boot a grml (a Linux-based rescue system, x86 only) via
https://en.wikipedia.org/wiki/Inulin
Inulins are a group of naturally occurring polysaccharides produced by many types of plants, industrially most often extracted from chicory. The inulins belong to a class of dietary fibers known as fructans. Inulin is used by some plants as a means of storing energy and is typically found in roots or rhizomes. Most plants that synthesize and store inulin do not store other forms of carbohydrate such as starch. In the United States in 2018, the Food and Drug Administration approved inulin as a dietary fiber ingredient used to improve the nutritional value of manufactured food products. Using inulin to measure kidney function is the "gold standard" for comparison with other means of estimating glomerular filtration rate. Origin and history Inulin is a natural storage carbohydrate present in more than 36,000 species of plants, including agave, wheat, onion, bananas, garlic, asparagus, Jerusalem artichoke, and chicory. For these plants, inulin is used as an energy reserve and for regulating cold resistance. Because it is soluble in water, it is osmotically active. Certain plants can change the osmotic potential of their cells by changing the degree of polymerization of inulin molecules by hydrolysis. By changing osmotic potential without changing the total amount of carbohydrate, plants can withstand cold and drought during winter periods. Inulin was discovered in 1804 by German scientist Valentin Rose. He found "a peculiar substance" from Inula helenium roots by boiling-water extraction. In the 1920s, J. Irvine used chemical methods such as methylation to study the molecular structure of inulin, and he designed the isolation method for this new anhydrofructose. During studies of renal tubules in the 1930s, researchers searched for a substance that could serve as a biomarker that is not reabsorbed or secreted after introduction into tubules. A. N. Richards introduced inulin because of its high molecular weight and its resistance to enzymes. Inulin is used to determin
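Inulin is the "gold standard" GFR marker because it is freely filtered at the glomerulus but neither reabsorbed nor secreted, so its renal clearance equals the glomerular filtration rate. The clearance formula C = (U x V) / P can be sketched with hypothetical measurement values (the numbers below are illustrative, not reference data):

```python
def inulin_clearance(urine_conc, urine_flow, plasma_conc):
    """Renal clearance C = (U * V) / P; for inulin this equals GFR."""
    return urine_conc * urine_flow / plasma_conc

# Hypothetical measurements: U in mg/mL, V in mL/min, P in mg/mL.
gfr = inulin_clearance(urine_conc=30.0, urine_flow=1.0, plasma_conc=0.25)
print(gfr)  # 120.0 (mL/min)
```

Substances that are reabsorbed (clearance under-reads GFR) or secreted (clearance over-reads GFR) break this equality, which is why inulin's inertness in the tubule made it the comparison standard.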
https://en.wikipedia.org/wiki/Lichtenberg%20figure
A Lichtenberg figure (German Lichtenberg-Figuren), or Lichtenberg dust figure, is a branching electric discharge that sometimes appears on the surface or in the interior of insulating materials. Lichtenberg figures are often associated with the progressive deterioration of high voltage components and equipment. The study of planar Lichtenberg figures along insulating surfaces and 3D electrical trees within insulating materials often provides engineers with valuable insights for improving the long-term reliability of high-voltage equipment. Lichtenberg figures are now known to occur on or within solids, liquids, and gases during electrical breakdown. Lichtenberg figures are natural phenomena which exhibit fractal properties. History Lichtenberg figures are named after the German physicist Georg Christoph Lichtenberg, who originally discovered and studied them. When they were first discovered, it was thought that their characteristic shapes might help to reveal the nature of positive and negative electric "fluids". In 1777, Lichtenberg built a large electrophorus to generate high voltage static electricity through induction. After discharging a high voltage point to the surface of an insulator, he recorded the resulting radial patterns by sprinkling various powdered materials onto the surface. By then pressing blank sheets of paper onto these patterns, Lichtenberg was able to transfer and record these images, thereby discovering the basic principle of modern xerography. This discovery was also the forerunner of the modern day science of plasma physics. Although Lichtenberg only studied two-dimensional (2D) figures, modern high voltage researchers study 2D and 3D figures (electrical trees) on, and within, insulating materials. Formation Two-dimensional (2D) Lichtenberg figures can be produced by placing a sharp-pointed needle perpendicular to the surface of a non-conducting plate, such as of resin, ebonite, or glass. The point is positioned very near or contact
https://en.wikipedia.org/wiki/Noise%20reduction
Noise reduction is the process of removing noise from a signal. Noise reduction techniques exist for audio and images. Noise reduction algorithms may distort the signal to some degree. Noise rejection is the ability of a circuit to isolate an undesired signal component from the desired signal component, as with common-mode rejection ratio. All signal processing devices, both analog and digital, have traits that make them susceptible to noise. Noise can be random with an even frequency distribution (white noise), or frequency-dependent noise introduced by a device's mechanism or signal processing algorithms. In electronic systems, a major type of noise is hiss created by random electron motion due to thermal agitation. These agitated electrons rapidly add and subtract from the output signal and thus create detectable noise. In the case of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains in the film determines the film's sensitivity, more sensitive film having larger-sized grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the medium is to noise. To compensate for this, larger areas of film or magnetic tape may be used to lower the noise to an acceptable level. In general Noise reduction algorithms tend to alter signals to a greater or lesser degree. The local signal-and-noise orthogonalization algorithm can be used to avoid changes to the signals. In seismic exploration Boosting signals in seismic data is especially crucial for seismic imaging, inversion, and interpretation, thereby greatly improving the success rate in oil & gas exploration. The useful signal that is smeared in the ambient random noise is often neglected and thus may cause fake discontinuity of seismic events and artifacts in the final migrated image. Enhancing the useful signal while preserving edg
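As a minimal illustration of the idea (a simple moving-average filter, not one of the specialised algorithms the article mentions), smoothing trades a little signal fidelity for a reduction in random noise:

```python
def moving_average(signal, window=3):
    """Smooth a signal by averaging each sample with its neighbours."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))  # mean over the local window
    return out

noisy = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0]   # hypothetical samples around 1.0
print(moving_average(noisy))
```

The smoothed output varies less than the input, illustrating the trade-off noted above: noise reduction algorithms inevitably distort the signal to some degree (here, sharp transients would be blurred along with the noise).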
https://en.wikipedia.org/wiki/Synchronous%20Data%20Link%20Control
Synchronous Data Link Control (SDLC) is a computer communications protocol. It is the layer 2 protocol for IBM's Systems Network Architecture (SNA). SDLC supports multipoint links as well as error correction. It also runs under the assumption that an SNA header is present after the SDLC header. SDLC was mainly used by IBM mainframe and midrange systems; however, implementations exist on many platforms from many vendors. In the United States and Canada, SDLC can be found in traffic control cabinets. In 1975, IBM developed the first bit-oriented protocol, SDLC, from work done for IBM in the early 1970s. This de facto standard was adopted by ISO as High-Level Data Link Control (HDLC) in 1979 and by ANSI as Advanced Data Communication Control Procedures (ADCCP). The latter standards added features such as the Asynchronous Balanced Mode and frame sizes that did not need to be multiples of bit-octets, but also removed some of the procedures and messages (such as the TEST message). SDLC operates independently on each communications link, and can operate on point-to-point, multipoint, or loop facilities, on switched or dedicated, two-wire or four-wire circuits, and with full-duplex and half-duplex operation. A unique characteristic of SDLC is its ability to mix half-duplex secondary stations with full-duplex primary stations on four-wire circuits, thus reducing the cost of dedicated facilities. Intel used SDLC as a base protocol for BITBUS, still popular in Europe as a fieldbus, and included support in several controllers (i8044/i8344, i80152). The 8044 controller is still in production by third-party vendors. Other vendors putting hardware support for SDLC (and the slightly different HDLC) into communication controller chips of the 1980s included Zilog, Motorola, and National Semiconductor. As a result, a wide variety of equipment in the 1980s used it and it was very common in the mainframe-centric corporate networks which were the norm in the 1980s. The most common
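"Bit-oriented" means SDLC/HDLC delimit frames with the flag pattern 01111110 and keep the payload transparent by zero-bit stuffing: the transmitter inserts a 0 after any five consecutive 1s, so the flag can never appear inside a frame. An illustrative sketch of the transmitter side (bits modeled as a string for clarity):

```python
def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five '1's (SDLC/HDLC zero-bit stuffing)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")  # break the run so the payload can't mimic a flag
            run = 0
    return "".join(out)

print(bit_stuff("0111111101"))  # 01111101101 -- the run of seven 1s is broken
```

The receiver reverses the process, deleting any 0 that follows five 1s, so arbitrary payloads survive the trip while six 1s in a row remain reserved for frame delimiting.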
https://en.wikipedia.org/wiki/Thermoelectric%20materials
Thermoelectric materials show the thermoelectric effect in a strong or convenient form. The thermoelectric effect refers to phenomena by which either a temperature difference creates an electric potential or an electric current creates a temperature difference. These phenomena are known more specifically as the Seebeck effect (creating a voltage from a temperature difference), Peltier effect (driving heat flow with an electric current), and Thomson effect (reversible heating or cooling within a conductor when there is both an electric current and a temperature gradient). While all materials have a nonzero thermoelectric effect, in most materials it is too small to be useful. However, low-cost materials that have a sufficiently strong thermoelectric effect (and other required properties) are also considered for applications including power generation and refrigeration. The most commonly used thermoelectric material is based on bismuth telluride (Bi2Te3). Thermoelectric materials are used in thermoelectric systems for cooling or heating in niche applications, and are being studied as a way to regenerate electricity from waste heat. Research in the field is still driven by materials development, primarily in optimizing transport and thermoelectric properties. Thermoelectric figure of merit The usefulness of a material in thermoelectric systems is determined by the device efficiency. This is determined by the material's electrical conductivity (σ), thermal conductivity (κ), and Seebeck coefficient (S), which change with temperature (T). The maximum efficiency of the energy conversion process (for both power generation and cooling) at a given temperature point in the material is determined by the thermoelectric materials figure of merit zT, given by zT = σS²T/κ. Device efficiency The efficiency of a thermoelectric device for electricity generation is given by η, defined as the ratio of the electrical energy delivered to the load to the heat energy absorbed at the hot junction. The maximum efficiency of a thermoelectric device is typically described in terms of its device figure of merit wh
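For reference, the standard expressions from the thermoelectrics literature (not fully reproduced in the excerpt above) are the material figure of merit and the resulting maximum generator efficiency, where T_H and T_C are the hot- and cold-side temperatures and \bar{T} their mean:

```latex
zT = \frac{\sigma S^2 T}{\kappa},
\qquad
\eta_{\max} = \frac{T_H - T_C}{T_H}\,
\frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_C/T_H}
```

The first factor of η_max is the Carnot efficiency, so a thermoelectric generator always falls short of the Carnot limit; the second factor approaches 1 only as Z\bar{T} grows without bound.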
https://en.wikipedia.org/wiki/Apple%20Display%20Connector
The Apple Display Connector (ADC) is a proprietary modification of the DVI connector that combines analog and digital video signals, USB, and power all in one cable. It was used in later versions of the Apple Studio Display, including the final 17" CRT model, and most versions of the widescreen Apple Cinema Display, after which Apple adopted standard DVI connectors on later models. First implemented in the July 2000 Power Mac G4 and G4 Cube, ADC disappeared from displays in June 2004 when Apple introduced the aluminum-clad 20" (51 cm), 23" (58 cm), and 30" (76 cm) Apple Cinema Displays, which feature separate DVI, USB and FireWire connectors, and their own power supplies. An ADC port was still included with the Power Mac G5 until April 2005, when new models meant the only remaining Apple product with an ADC interface was the single-processor Power Mac G5 introduced in October 2004. This single-processor Power Mac G5 was discontinued soon after, in June 2005. Compatibility The Apple Display Connector is physically incompatible with a standard DVI connector. The Apple DVI to ADC Adapter, which cost US$149 at launch but was available for US$99 by 2002, takes USB and DVI connections from the computer, adds power from its own integrated power supply, and combines them into an ADC output, allowing ADC monitors to be used with DVI-based machines. On some models of the Power Mac G4, the ADC connector replaced the DVI connector. This change necessitated a passive ADC to DVI adapter to use a DVI monitor. The ADC carries up to 100 W of power, insufficient to run most 19-inch (48 cm) or bigger CRTs widely available during ADC's debut; nor can it run contemporary flat panels marketed for home entertainment (many of which support DVI or VGA connections) without an adapter. The power limit was an important factor in Apple's decision to abandon ADC when it launched the 30-inch (76 cm) Apple Cinema HD Display. On newer DVI-based displays lacking ADC, Apple still opted for a singl
https://en.wikipedia.org/wiki/Cybiko
The Cybiko is a handheld computer introduced in the United States by David Yang's company Cybiko Inc. as a retail test market in New York in April 2000, and rolled out nationwide in May 2000. It is designed for teens, featuring its own two-way radio text messaging system. It has over 430 "official" freeware games and applications. It features a rubber QWERTY keyboard. An MP3 player add-on with a SmartMedia card slot was made for the unit as well. The company stopped manufacturing the units after two product versions and a few years on the market. Cybikos can communicate with each other up to a maximum range of . Several Cybikos can chat with each other in a wireless chatroom. By the end of 2000, the Cybiko Classic had sold over 500,000 units. Models Cybiko Classic There are two models of the Classic Cybiko. Visually, the only difference is that the original version has a power switch on the side, while the updated version uses the "escape" key for power management. Internally, the differences between the two models are in the internal memory and the firmware location. The CPU is a Hitachi H8S/2241 clocked at 11.0592 MHz, and the Cybiko Classic also has an Atmel AT90S2313 co-processor, clocked at 4 MHz, to provide some support for RF communications. It has 512 KB of flash ROM and 256 KB of RAM installed. An add-on slot is located in the rear. The Cybiko Classics were sold in five colors: blue, purple, neon green, white, and black. The black version has a yellow keypad, instead of the white one found on other Cybikos. The add-on slot has the same pin arrangement as a PC Card, but it is not electrically compatible. Cybiko Xtreme The Cybiko Xtreme is the second-generation Cybiko handheld. It features various improvements over the original Cybiko, such as a faster processor, more RAM, more ROM, a new operating system, a new keyboard layout and case design, greater wireless range, a microphone, improved audio output, and smaller size. The CPU is
https://en.wikipedia.org/wiki/Species%20reintroduction
Species reintroduction is the deliberate release of a species into the wild, from captivity or other areas where the organism is capable of survival. The goal of species reintroduction is to establish a healthy, genetically diverse, self-sustaining population in an area where it has been extirpated, or to augment an existing population. Species that may be eligible for reintroduction are typically threatened or endangered in the wild. However, reintroduction of a species can also be for pest control; for example, wolves being reintroduced to a wild area to curb an overpopulation of deer. Because reintroduction may involve returning native species to localities where they had been extirpated, some prefer the term "reestablishment". Humans have been reintroducing species for food and pest control for thousands of years. However, the practice of reintroducing species for conservation is much younger, starting in the 20th century. Methods for sourcing individuals There are a variety of approaches to species reintroduction. The optimal strategy will depend on the biology of the organism. The first matter to address when beginning a species reintroduction is whether to source individuals in situ, from wild populations, or ex situ, from captivity in a zoo or botanic garden, for example. In situ sourcing In situ sourcing for restorations involves moving individuals from an existing wild population to a new site where the species was formerly extirpated. Ideally, populations should be sourced in situ when possible due to the numerous risks associated with reintroducing organisms from captive populations to the wild. To ensure that reintroduced populations have the best chance of surviving and reproducing, individuals should be sourced from populations that genetically and ecologically resemble the recipient population. Generally, sourcing from populations with similar environmental conditions to the reintroduction site will maximize the chance that reintroduced individuals are we
https://en.wikipedia.org/wiki/Helmholtz%27s%20theorems
In fluid mechanics, Helmholtz's theorems, named after Hermann von Helmholtz, describe the three-dimensional motion of fluid in the vicinity of vortex lines. These theorems apply to inviscid flows and to flows where the influence of viscous forces is small and can be ignored. Helmholtz's three theorems are as follows: Helmholtz's first theorem The strength of a vortex line is constant along its length. Helmholtz's second theorem A vortex line cannot end in a fluid; it must extend to the boundaries of the fluid or form a closed path. Helmholtz's third theorem A fluid element that is initially irrotational remains irrotational. Helmholtz's theorems apply to inviscid flows. In observations of vortices in real fluids, the strength of the vortices always decays gradually due to the dissipative effect of viscous forces. Alternative expressions of the three theorems are as follows: The strength of a vortex tube does not vary with time. Fluid elements lying on a vortex line at some instant continue to lie on that vortex line. More simply, vortex lines move with the fluid. Also, vortex lines and tubes must appear as closed loops, extend to infinity, or start/end at solid boundaries. Fluid elements initially free of vorticity remain free of vorticity. Helmholtz's theorems have application in understanding: Generation of lift on an airfoil Starting vortex Horseshoe vortex Wingtip vortices. Helmholtz's theorems are now generally proven with reference to Kelvin's circulation theorem. However, Helmholtz's theorems were published in 1858, nine years before the 1867 publication of Kelvin's theorem. Notes References M. J. Lighthill, An Informal Introduction to Theoretical Fluid Mechanics, Oxford University Press, 1986, P. G. Saffman, Vortex Dynamics, Cambridge University Press, 1995, G. K. Batchelor, An Introduction to Fluid Dynamics, Cambridge University Press (1967, reprinted in 2000). Kundu, P and Cohen, I, Fluid Mechanics, 2nd edition, Academic Press 2002. George B
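The "strength" in the first theorem is the circulation of the vortex tube. In standard notation (with u the velocity field, ω = ∇ × u the vorticity, C a closed curve encircling the tube, and A a cross-sectional surface bounded by C), the statement that the strength is constant along the tube can be written:

```latex
\Gamma \;=\; \oint_{C} \mathbf{u} \cdot \mathrm{d}\boldsymbol{\ell}
\;=\; \int_{A} \boldsymbol{\omega} \cdot \mathrm{d}\mathbf{A}
\;=\; \text{constant along the tube}
```

The equality of the line and surface integrals is Stokes' theorem; constancy along the tube follows because ∇ · ω = 0, so no vorticity flux leaves through the side walls of the tube.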
https://en.wikipedia.org/wiki/Gene%20duplication
Gene duplication (or chromosomal duplication or gene amplification) is a major mechanism through which new genetic material is generated during molecular evolution. It can be defined as any duplication of a region of DNA that contains a gene. Gene duplications can arise as products of several types of errors in DNA replication and repair machinery as well as through fortuitous capture by selfish genetic elements. Common sources of gene duplications include ectopic recombination, retrotransposition events, aneuploidy, polyploidy, and replication slippage. Mechanisms of duplication Ectopic recombination Duplications arise from an event termed unequal crossing-over that occurs during meiosis between misaligned homologous chromosomes. The chance of it happening is a function of the degree of sharing of repetitive elements between two chromosomes. The products of this recombination are a duplication at the site of the exchange and a reciprocal deletion. Ectopic recombination is typically mediated by sequence similarity at the duplicate breakpoints, which form direct repeats. Repetitive genetic elements such as transposable elements offer one source of repetitive DNA that can facilitate recombination, and they are often found at duplication breakpoints in plants and mammals. Replication slippage Replication slippage is an error in DNA replication that can produce duplications of short genetic sequences. During replication, DNA polymerase begins to copy the DNA. At some point during the replication process, the polymerase dissociates from the DNA and replication stalls. When the polymerase reattaches to the DNA strand, it aligns the replicating strand to an incorrect position and inadvertently copies the same section more than once. Replication slippage is also often facilitated by repetitive sequences, but requires only a few bases of similarity. Retrotransposition Retrotransposons, mainly L1, can occasionally act on cellular mRNA. Transcripts are reverse tr
https://en.wikipedia.org/wiki/Ens%C5%8D
In Zen, an ensō is a circle hand-drawn in one or two uninhibited brushstrokes to express a moment when the mind is free to let the body create. Description The ensō symbolizes absolute enlightenment, strength, elegance, the universe, and mu (the void). It is characterised by a minimalism born of Japanese aesthetics. Drawing ensō is a disciplined, creative practice of Japanese ink painting, sumi-e. The tools and mechanics of drawing the ensō are the same as those used in traditional Japanese calligraphy: one uses an ink brush to apply ink to washi (a thin Japanese paper). The circle may be open or closed. In the former case, the circle is incomplete, allowing for movement and development and the perfection of all things. Zen practitioners relate the idea to wabi-sabi, the beauty of imperfection. When the circle is closed, it represents perfection, akin to Plato's perfect form, the reason why the circle was used for centuries in the construction of cosmological models (see Ptolemy). Usually, a person draws the ensō in one fluid, expressive stroke. When drawn according to the sōsho (cursive) style of Japanese calligraphy, the brushstroke is especially swift. Once the ensō is drawn, one does not change it. It evidences the character of its creator and the context of its creation in a brief, continuous period. Drawing ensō is a spiritual practice that one might perform as often as once per day. This spiritual practice of drawing ensō or writing Japanese calligraphy for self-realization is called hitsuzendō (the way of brush and Zen). Ensō exemplifies the various dimensions of the Japanese wabi-sabi perspective and aesthetic: fukinsei (asymmetry, irregularity), kanso (simplicity), koko (basic; weathered), shizen (without pretense; natural), yugen (subtly profound grace), datsuzoku (freedom), and seijaku (tranquility). See also Wuji Abstract expressionism, a 20th-century American art movement Buddhism in Japan Dhyāna in Buddhism, a meditation practice in which the observer detaches from several qualities of the mind Ink wash painting, an East Asian style of b
https://en.wikipedia.org/wiki/Password%20cracking
In cryptanalysis and computer security, password cracking is the process of recovering passwords from data that has been stored in or transmitted by a computer system in scrambled form. A common approach (brute-force attack) is to repeatedly try guesses for the password and to check them against an available cryptographic hash of the password. Another type of approach is password spraying, which is often automated and occurs slowly over time in order to remain undetected, using a list of common passwords. The purpose of password cracking might be to help a user recover a forgotten password (since setting an entirely new password would involve system administration privileges), to gain unauthorized access to a system, or to act as a preventive measure whereby system administrators check for easily crackable passwords. On a file-by-file basis, password cracking is utilized to gain access to digital evidence to which a judge has allowed access, but for which the particular file's access is restricted. Time needed for password searches The time to crack a password is related to bit strength, which is a measure of the password's entropy, and to the details of how the password is stored. Most methods of password cracking require the computer to produce many candidate passwords, each of which is checked. One example is brute-force cracking, in which a computer tries every possible key or password until it succeeds. With multiple processors, this time can be reduced by partitioning the keyspace: for example, one processor searches forward from the first possible candidate while another searches backward from the last, with further processors each assigned a designated range of possible passwords. More common methods of password cracking, such as dictionary attacks, pattern checking, word list substitution, etc., attempt to reduce the number of trials required and will usually be attempted before brute force. Higher password bit strength exponentially increases the number of candidate passwords that must
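The exponential relationship between password length, alphabet size, and worst-case search time can be computed directly. The sketch below is illustrative only (the function names are mine), but the arithmetic is the standard keyspace/entropy calculation:

```python
import math

def keyspace(alphabet_size, length):
    """Number of candidate passwords for a given alphabet and length."""
    return alphabet_size ** length

def bit_strength(alphabet_size, length):
    """Entropy in bits of a uniformly random password over that alphabet."""
    return length * math.log2(alphabet_size)

def seconds_to_search(alphabet_size, length, guesses_per_second):
    """Worst-case time to enumerate the entire keyspace by brute force."""
    return keyspace(alphabet_size, length) / guesses_per_second
```

For example, 8 lowercase letters give 26**8 (about 2.1e11) candidates, roughly 37.6 bits of entropy; adding one character multiplies the keyspace, and therefore the search time, by 26. This is the exponential growth the last sentence above refers to.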
https://en.wikipedia.org/wiki/Pancake%20sorting
Pancake sorting is the mathematical problem of sorting a disordered stack of pancakes in order of size when a spatula can be inserted at any point in the stack and used to flip all pancakes above it. A pancake number is the minimum number of flips required for a given number of pancakes. In this form, the problem was first discussed by American geometer Jacob E. Goodman. A variant of the problem is concerned with burnt pancakes, where each pancake has a burnt side and all pancakes must, in addition, end up with the burnt side on the bottom. All sorting methods require pairs of elements to be compared. For the traditional sorting problem, the usual problem studied is to minimize the number of comparisons required to sort a list. The number of actual operations, such as swapping two elements, is then irrelevant. For pancake sorting problems, in contrast, the aim is to minimize the number of operations, where the only allowed operations are reversals of the elements of some prefix of the sequence. Now, the number of comparisons is irrelevant. The pancake problems The original pancake problem The minimum number of flips required to sort any stack of n pancakes has been shown to lie between 15n/14 and 18n/11 (approximately 1.07n and 1.64n), but the exact value is not known. The simplest pancake sorting algorithm performs at most 2n − 3 flips. In this algorithm, a kind of selection sort, we bring the largest pancake not yet sorted to the top with one flip; take it down to its final position with one more flip; and repeat this process for the remaining pancakes. In 1979, Bill Gates and Christos Papadimitriou gave a lower bound of 1.06n flips and an upper bound of (5n + 5)/3. The upper bound was improved, thirty years later, to 18n/11 by a team of researchers at the University of Texas at Dallas, led by Founders Professor Hal Sudborough. In 2011, Laurent Bulteau, Guillaume Fertin, and Irena Rusu proved that the problem of finding the shortest sequence of flips for a given stack of pancakes is NP-hard, th
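The selection-sort style algorithm described above can be sketched as follows (a minimal illustration; the names are mine). The stack is a list whose index 0 is the top; each pass flips the largest unsorted pancake to the top, then flips it down into its final place:

```python
def flip(stack, k):
    """Reverse the top k pancakes (a prefix reversal)."""
    return stack[:k][::-1] + stack[k:]

def pancake_sort(stack):
    """Sort by repeated prefix reversals; returns (sorted_stack, flip_count).
    Uses at most 2n - 3 flips for n pancakes."""
    stack = list(stack)
    flips = 0
    for size in range(len(stack), 1, -1):
        # locate the largest pancake among the top `size`
        i = stack.index(max(stack[:size]))
        if i == size - 1:
            continue            # already in its final position
        if i != 0:
            stack = flip(stack, i + 1)  # bring it to the top
            flips += 1
        stack = flip(stack, size)       # flip it down into place
        flips += 1
    return stack, flips
```

Each of the n − 1 passes uses at most two flips, and the final pass (size 2) needs at most one, giving the 2n − 3 bound stated above.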
https://en.wikipedia.org/wiki/2520%20%28number%29
2520 (two thousand five hundred twenty) is the natural number following 2519 and preceding 2521. In mathematics 2520 is: the smallest number divisible by all integers from one to ten, i.e., it is their least common multiple. half of 7! (5040), that is, 2520 = 5040/2. the product of five consecutive numbers, namely 3 × 4 × 5 × 6 × 7. a superior highly composite number. a colossally abundant number. the last highly composite number that is half of the next highly composite number. the last highly composite number that is a divisor of all following highly composite numbers. palindromic in undecimal (1991 in base eleven) and a repdigit in bases 55, 59, and 62. a Harshad number in all bases between binary and hexadecimal. the aliquot sum of 1080. part of the 53-aliquot tree. The complete aliquot sequence starting at 1080 is 1080, 2520, 6840, 16560, 41472, 82311, 27441, 12209, 451, 53, 1, 0. Factors The factors, also called divisors, of 2520 are: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 18, 20, 21, 24, 28, 30, 35, 36, 40, 42, 45, 56, 60, 63, 70, 72, 84, 90, 105, 120, 126, 140, 168, 180, 210, 252, 280, 315, 360, 420, 504, 630, 840, 1260, 2520. References Integers
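The least-common-multiple characterization is easy to check computationally. This short sketch (the helper names are mine) verifies that lcm(1, ..., 10) = 2520 and counts its divisors:

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    """Least common multiple of two integers."""
    return a * b // gcd(a, b)

def lcm_range(n):
    """Least common multiple of 1, 2, ..., n."""
    return reduce(lcm, range(1, n + 1), 1)
```

Since 2520 = 2^3 × 3^2 × 5 × 7, it has (3+1)(2+1)(1+1)(1+1) = 48 divisors, matching the 48 factors listed above.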
https://en.wikipedia.org/wiki/Von%20Neumann%20architecture
The von Neumann architecture—also known as the von Neumann model or Princeton architecture—is a computer architecture based on a 1945 description by John von Neumann, and by others, in the First Draft of a Report on the EDVAC. The document describes a design architecture for an electronic digital computer with these components: A processing unit with both an arithmetic logic unit and processor registers A control unit that includes an instruction register and a program counter Memory that stores data and instructions External mass storage Input and output mechanisms The term "von Neumann architecture" has evolved to refer to any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time (since they share a common bus). This is referred to as the von Neumann bottleneck, which often limits the performance of the corresponding system. The design of a von Neumann architecture machine is simpler than in a Harvard architecture machine—which is also a stored-program system, yet has one dedicated set of address and data buses for reading and writing to memory, and another set of address and data buses to fetch instructions. A stored-program computer uses the same underlying mechanism to encode both program instructions and data as opposed to designs which use a mechanism such as discrete plugboard wiring or fixed control circuitry for instruction implementation. Stored-program computers were an advancement over the manually reconfigured or fixed function computers of the 1940s, such as the Colossus and the ENIAC. These were programmed by setting switches and inserting patch cables to route data and control signals between various functional units. The vast majority of modern computers use the same hardware mechanism to encode and store both data and program instructions, but have caches between the CPU and memory, and, for the caches closest to the CPU, have separate caches for instructions and data, so that most instru
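A toy illustration of the shared instruction/data memory follows. The instruction set and encoding here are entirely hypothetical, invented for this sketch: the point is that every cycle goes to the single memory both for the instruction fetch and for the operand access, which is the von Neumann bottleneck in miniature:

```python
def run(memory):
    """Tiny stored-program machine. One list holds both the program
    (tuples like ('LOAD', addr)) and the data (bare integers)."""
    acc, pc = 0, 0          # accumulator and program counter
    while True:
        op = memory[pc]     # instruction fetch: reads the shared memory...
        pc += 1
        if op[0] == 'LOAD':
            acc = memory[op[1]]   # ...and the operand access reads it too
        elif op[0] == 'ADD':
            acc += memory[op[1]]
        elif op[0] == 'STORE':
            memory[op[1]] = acc
        elif op[0] == 'HALT':
            return memory
```

For example, with the program in cells 0..3 and data in cells 4..6, `run([('LOAD', 4), ('ADD', 5), ('STORE', 6), ('HALT',), 2, 3, 0])` leaves 5 in cell 6. In a Harvard machine, the fetch at `memory[pc]` and the operand access would go over separate buses to separate memories.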
https://en.wikipedia.org/wiki/Algebraically%20compact%20module
In mathematics, algebraically compact modules, also called pure-injective modules, are modules that have a certain "nice" property which allows the solution of infinite systems of equations in the module by finitary means. The solutions to these systems allow the extension of certain kinds of module homomorphisms. These algebraically compact modules are analogous to injective modules, where one can extend all module homomorphisms. All injective modules are algebraically compact, and the analogy between the two is made quite precise by a category embedding. Definitions Let R be a ring, and M a left R-module. Consider a system of infinitely many linear equations in unknowns x_j (for j in an index set J), with coefficients r_ij in R and right-hand sides m_i in M (for i in an index set I), where both I and J may be infinite and, for each i, the number of nonzero coefficients r_ij is finite. The goal is to decide whether such a system has a solution, that is, whether there exist elements x_j of M such that all the equations of the system are simultaneously satisfied. (It is not required that only finitely many of the x_j are non-zero.) The module M is algebraically compact if, for all such systems, if every subsystem formed by a finite number of the equations has a solution, then the whole system has a solution. (The solutions to the various subsystems may be different.) On the other hand, a module homomorphism f : M → K is a pure embedding if the induced homomorphism between the tensor products C ⊗ M → C ⊗ K is injective for every right R-module C. The module M is pure-injective if every pure embedding f : M → K splits (that is, there exists g : K → M with g ∘ f = id_M). It turns out that a module is algebraically compact if and only if it is pure-injective. Examples All modules with finitely many elements are algebraically compact. Every vector space is algebraically compact (since it is pure-injective). More generally, every injective module is algebraically compact, for the same reason. If R is an associative algebra with 1 over some field k, then every R-module with finite k-dimension is algebraically compact. This, together with the fact that all fin
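In display form (standard notation, stated here for clarity), the system in question is

```latex
\sum_{j \in J} r_{ij}\, x_j = m_i \qquad (i \in I),
\qquad r_{ij} \in R,\quad m_i \in M,
```

with only finitely many r_{ij} nonzero for each fixed i, so every individual equation is a finite sum. Algebraic compactness of M then says: if for every finite subset I_0 of I the subsystem indexed by I_0 has a solution in M, the full system indexed by I has a solution in M.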
https://en.wikipedia.org/wiki/Low-discrepancy%20sequence
In mathematics, a low-discrepancy sequence is a sequence with the property that for all values of N, its subsequence x1, ..., xN has a low discrepancy. Roughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary set B is close to proportional to the measure of B, as would happen on average (but not for particular samples) in the case of an equidistributed sequence. Specific definitions of discrepancy differ regarding the choice of B (hyperspheres, hypercubes, etc.) and how the discrepancy for every B is computed (usually normalized) and combined (usually by taking the worst value). Low-discrepancy sequences are also called quasirandom sequences, due to their common use as a replacement of uniformly distributed random numbers. The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom, but such sequences share some properties of random variables and in certain applications such as the quasi-Monte Carlo method their lower discrepancy is an important advantage. Applications Quasirandom numbers have an advantage over pure random numbers in that they cover the domain of interest quickly and evenly. Two useful applications are in finding the characteristic function of a probability density function, and in finding the derivative function of a deterministic function with a small amount of noise. Quasirandom numbers allow higher-order moments to be calculated to high accuracy very quickly. Applications that don't involve sorting would be in finding the mean, standard deviation, skewness and kurtosis of a statistical distribution, and in finding the integral and global maxima and minima of difficult deterministic functions. Quasirandom numbers can also be used for providing starting points for deterministic algorithms that only work locally, such as Newton–Raphson iteration. Quasirandom numbers can also be combined with search alg
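A classic concrete low-discrepancy sequence is the van der Corput sequence, whose multi-dimensional generalization is the Halton sequence. The sketch below follows the standard construction (the function names are mine): the n-th point is obtained by writing n in base b and reflecting its digits about the radix point:

```python
def van_der_corput(n, base=2):
    """n-th element of the van der Corput sequence in the given base:
    reflect the base-b digits of n about the radix point into [0, 1)."""
    q, denom = 0.0, 1
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

def halton(n, bases=(2, 3)):
    """n-th point of a Halton sequence: one van der Corput sequence
    per axis, using pairwise coprime bases."""
    return tuple(van_der_corput(n, b) for b in bases)
```

In base 2 the sequence begins 1/2, 1/4, 3/4, 1/8, 5/8, ...: each new point lands in the largest gap left by its predecessors, which is exactly the even-coverage property the text describes.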
https://en.wikipedia.org/wiki/Sphere%20eversion
In differential topology, sphere eversion is the process of turning a sphere inside out in a three-dimensional space (the word eversion means "turning inside out"). Remarkably, it is possible to smoothly and continuously turn a sphere inside out in this way (allowing self-intersections of the sphere's surface) without cutting or tearing it or creating any crease. This is surprising, both to non-mathematicians and to those who understand regular homotopy, and can be regarded as a veridical paradox; that is, something that, while being true, on first glance seems false. More precisely, let f : S2 → R3 be the standard embedding; then there is a regular homotopy of immersions ft such that f0 = f and f1 = −f. History An existence proof for crease-free sphere eversion was first created by Stephen Smale. It is difficult to visualize a particular example of such a turning, although some digital animations have been produced that make it somewhat easier. The first example was exhibited through the efforts of several mathematicians, including Arnold S. Shapiro and Bernard Morin, who was blind. On the other hand, it is much easier to prove that such a "turning" exists, and that is what Smale did. Smale's graduate adviser Raoul Bott at first told Smale that the result was obviously wrong. His reasoning was that the degree of the Gauss map must be preserved in such a "turning"; in particular, it follows that there is no such turning of S1 in R2. But the degrees of the Gauss map for the embeddings f and −f in R3 are both equal to 1, and do not have opposite sign as one might incorrectly guess. The degree of the Gauss map of all immersions of S2 in R3 is 1, so there is no obstacle. The term "veridical paradox" applies perhaps more appropriately at this level: until Smale's work, there was no documented attempt to argue for or against the eversion of S2, and later efforts are in hindsight, so there never was a historical paradox associated with sphere eversion, only an appreciation of the subtleties in vis
https://en.wikipedia.org/wiki/Threatened%20species
Threatened species are any species (including animals, plants and fungi) which are vulnerable to extinction in the near future. Species that are threatened are sometimes characterised by the population dynamics measure of critical depensation, a mathematical measure of biomass related to population growth rate. This quantitative metric is one method of evaluating the degree of endangerment. IUCN definition The International Union for Conservation of Nature (IUCN) is the foremost authority on threatened species, and treats threatened species not as a single category, but as a group of three categories, depending on the degree to which they are threatened: Vulnerable species Endangered species Critically endangered species Less-than-threatened categories are near threatened, least concern, and the no longer assigned category of conservation dependent. Species which have not been evaluated (NE), or do not have sufficient data (data deficient) also are not considered "threatened" by the IUCN. Although threatened and vulnerable may be used interchangeably when discussing IUCN categories, the term threatened is generally used to refer to the three categories (critically endangered, endangered and vulnerable), while vulnerable is used to refer to the least at risk of those three categories. They may be used interchangeably in most contexts however, as all vulnerable species are threatened species (vulnerable is a category of threatened species); and, as the more at-risk categories of threatened species (namely endangered and critically endangered) must, by definition, also qualify as vulnerable species, all threatened species may also be considered vulnerable. Threatened species are also referred to as a red-listed species, as they are listed in the IUCN Red List of Threatened Species. Subspecies, populations and stocks may also be classified as threatened. By country Australia Federal The Commonwealth of Australia (federal government) has legislation for categori
https://en.wikipedia.org/wiki/ChucK
ChucK is a concurrent, strongly timed audio programming language for real-time synthesis, composition, and performance, which runs on Linux, Mac OS X, Microsoft Windows, and iOS. It is designed to favor readability and flexibility for the programmer over other considerations such as raw performance. It natively supports deterministic concurrency and multiple, simultaneous, dynamic control rates. Another key feature is the ability to live code; adding, removing, and modifying code on the fly, while the program is running, without stopping or restarting. It has a highly precise timing/concurrency model, allowing for arbitrarily fine granularity. It offers composers and researchers a powerful and flexible programming tool for building and experimenting with complex audio synthesis programs, and real-time interactive control. ChucK was created and chiefly designed by Ge Wang as a graduate student working with Perry R. Cook. ChucK is distributed freely under the terms of the GNU General Public License on Mac OS X, Linux and Microsoft Windows. On iPhone and iPad, ChiP (ChucK for iPhone) is distributed under a limited, closed source license, and is not currently licensed to the public. However, the core team has stated that it would like to explore "ways to open ChiP by creating a beneficial environment for everyone". Language features The ChucK programming language is a loosely C-like object-oriented language, with strong static typing. ChucK is distinguished by the following characteristics: Direct support for real-time audio synthesis A powerful and simple concurrent programming model A unified timing mechanism for multi-rate event and control processing. A language syntax that encourages left-to-right syntax and semantics within program statements. Precision timing: a strongly timed sample-synchronous timing model. Programs are dynamically compiled to ChucK virtual machine bytecode. A runtime environment that supports on-the-fly programming. The ChucK Ope
https://en.wikipedia.org/wiki/Serial%20Storage%20Architecture
Serial Storage Architecture (SSA) was a serial transport protocol used to attach disk drives to server computers. History SSA was invented by Ian Judd of IBM in 1990. IBM produced a number of successful products based upon this standard before it was overtaken by the more widely adopted Fibre Channel protocol. SSA was promoted as an open standard by the SSA Industry Association, unlike its predecessor, the first-generation Serial Disk Subsystem. A number of vendors including IBM, Pathlight Technology and Vicom Systems produced products based on SSA. It was also adopted as an American National Standards Institute (ANSI) X3T10.1 standard. SSA devices are logically SCSI devices and conform to all of the SCSI command protocols. SSA provides data protection for critical applications by helping to ensure that a single cable failure will not prevent access to data. All the components in a typical SSA subsystem are connected by bi-directional cabling. Data sent from the adapter can travel in either direction around the loop to its destination. SSA detects interruptions in the loop and automatically reconfigures the system to help maintain connection while a link is restored. Up to 192 hot-swappable hard disk drives can be supported per system. Drives can be designated for use by an array in the event of hardware failure. Up to 32 separate RAID arrays can be supported per adapter, and arrays can be mirrored across servers to provide cost-effective protection for critical applications. Furthermore, arrays can be sited up to 25 metres apart, connected by thin, low-cost copper cables, allowing subsystems to be located in secure, convenient locations, far from the server itself. SSA was deployed in server RAID environments, where it was capable of providing up to 80 MB/s of data throughput, with sustained data rates as high as 60 MB/s in non-RAID mode and 35 MB/s in RAID mode. Link characteristics The copper cables used in SSA configurations are round b
https://en.wikipedia.org/wiki/Mozilla%20Sunbird
Mozilla Sunbird is a discontinued free and open-source, cross-platform calendar application that was developed by the Mozilla Foundation, Sun Microsystems and many volunteers. Mozilla Sunbird was described as "a cross platform standalone calendar application based on Mozilla's XUL user interface language". Announced in July 2003, Sunbird grew out of the Mozilla Calendar Project and was developed as a standalone version of the Lightning calendar and scheduling extension for the Mozilla Thunderbird and SeaMonkey mail clients. Development of Sunbird was ended with release 1.0 beta 1 to focus on development of Mozilla Lightning. The latest development version of Sunbird remains 1.0b1 from January 2010, and no later version has been announced. Unlike Lightning, Sunbird no longer receives updates to its time zone database. Sun contributions Sun Microsystems contributed significantly to the Lightning extension project to provide users with a free and open-source alternative to Microsoft Office by combining OpenOffice.org and Thunderbird/Lightning. Sun's key focus areas in addition to general bug fixing were calendar views, team/collaboration features and support for the Sun Java System Calendar Server. Since both projects share the same code base, any contribution to one is a direct contribution to the other. Trademark issues and Iceowl Although it is released under the MPL/GPL/LGPL tri-license, there are trademark restrictions in place on Mozilla Sunbird which prevent the distribution of modified versions with the Mozilla branding. As a result, the Debian project created Iceowl, a virtually identical version without the branding restrictions. Release history See also Lightning for Mozilla Thunderbird and SeaMonkey List of personal information managers References External links MozillaWiki The Sunbird development blog Sunbird Portable by PortableApps.com Linux sunbird installer Mozilla Free calendaring software Personal information mana
https://en.wikipedia.org/wiki/Extracellular%20fluid
In cell biology, extracellular fluid (ECF) denotes all body fluid outside the cells of any multicellular organism. Total body water in healthy adults is about 50–60% (range 45 to 75%) of total body weight; women and the obese typically have a lower percentage than lean men. Extracellular fluid makes up about one-third of body fluid; the remaining two-thirds is intracellular fluid within cells. The main component of the extracellular fluid is the interstitial fluid that surrounds cells. Extracellular fluid is the internal environment of all multicellular animals, and in those animals with a blood circulatory system, a proportion of this fluid is blood plasma. Plasma and interstitial fluid are the two components that make up at least 97% of the ECF. Lymph makes up a small percentage of the interstitial fluid. The remaining small portion of the ECF includes the transcellular fluid (about 2.5%). The ECF can also be seen as having two components – plasma and lymph as a delivery system, and interstitial fluid for water and solute exchange with the cells. The extracellular fluid, in particular the interstitial fluid, constitutes the body's internal environment that bathes all of the cells in the body. The ECF composition is therefore crucial for their normal functions, and is maintained by a number of homeostatic mechanisms involving negative feedback. Homeostasis regulates, among others, the pH, sodium, potassium, and calcium concentrations in the ECF. The volume of body fluid, blood glucose, oxygen, and carbon dioxide levels are also tightly homeostatically maintained. The volume of extracellular fluid in a young adult male of 70 kg (154 lbs) is 20% of body weight – about fourteen liters. Eleven liters are interstitial fluid and the remaining three liters are plasma. Components The main component of the extracellular fluid (ECF) is the interstitial fluid, or tissue fluid, which surrounds the cells in the body. The other major component of the ECF is the intravascula
https://en.wikipedia.org/wiki/Zooxanthellae
Zooxanthellae is a colloquial term for single-celled dinoflagellates that are able to live in symbiosis with diverse marine invertebrates including demosponges, corals, jellyfish, and nudibranchs. Most known zooxanthellae are in the genus Symbiodinium, but some are known from the genus Amphidinium, and other taxa, as yet unidentified, may have similar endosymbiont affinities. The true Zooxanthella K.brandt is a mutualist of the radiolarian Collozoum inerme (Joh.Müll., 1856) and systematically placed in Peridiniales. Another group of unicellular eukaryotes that partake in similar endosymbiotic relationships in both marine and freshwater habitats are green algae zoochlorellae. Zooxanthellae are photosynthetic organisms, which contain chlorophyll a and chlorophyll c, as well as the dinoflagellate pigments peridinin and diadinoxanthin. These provide the yellowish and brownish colours typical of many of the host species. During the day, they provide their host with the organic carbon products of photosynthesis, sometimes providing up to 90% of their host's energy needs for metabolism, growth and reproduction. In return, they receive nutrients, carbon dioxide, and an elevated position with access to sunshine. Morphology and classification Zooxanthellae can be grouped in the classes Bacillariophyceae, Cryptophyceae, Dinophyceae, and Rhodophyceae and in the genera Amphidinium, Gymnodinium, Aureodinium, Gyrodinium, Prorocentrum, Scrippsiella, Gloeodinium, and most commonly, Symbiodinium. Zooxanthellae of the genus Symbiodinium belong to a total of eight phylogenetic clades A–H, differentiated via their nuclear ribosomal DNA and chloroplast DNA. Zooxanthellae are autotrophs containing chloroplasts composed of thylakoids present in clusters of three. A pyrenoid protrudes from each chloroplast and is encased along with the chloroplast by a thick, starchy covering. Within the cell’s cytoplasm there also exist lipid vacuoles, calcium oxalate crystals, dictyosomes, and mitochondria
https://en.wikipedia.org/wiki/Md5sum
md5sum is a computer program that calculates and verifies 128-bit MD5 hashes, as described in RFC 1321. The MD5 hash functions as a compact digital fingerprint of a file. As with all such hashing algorithms, there is theoretically an unlimited number of files that will have any given MD5 hash. However, it is very unlikely that any two non-identical files in the real world will have the same MD5 hash, unless they have been specifically created to have the same hash. The underlying MD5 algorithm is no longer deemed secure. Thus, while md5sum is well-suited for identifying known files in situations that are not security related, it should not be relied on if there is a chance that files have been purposefully and maliciously tampered with. In the latter case, the use of a newer hashing tool such as sha256sum is recommended. md5sum is used to verify the integrity of files, as virtually any change to a file will cause its MD5 hash to change. Most commonly, md5sum is used to verify that a file has not changed as a result of a faulty file transfer, a disk error or non-malicious meddling. The program is included in most Unix-like operating systems or compatibility layers such as Cygwin. The original C code was written by Ulrich Drepper and extracted from a 2001 release of . Examples

All of the following files are assumed to be in the current directory.

Create MD5 hash file hash.md5:

$ md5sum filetohashA.txt filetohashB.txt filetohashC.txt > hash.md5

The file produced contains hash and filename pairs:

$ cat hash.md5
595f44fec1e92a71d3e9e77456ba80d1  filetohashA.txt
71f920fa275127a7b60fa4d4d41432a3  filetohashB.txt
43c191bf6d6c3f263a8cd0efd4a058ab  filetohashC.txt

Please note: after the hash value there must be a space followed by either a second space (for text mode) or an asterisk (for binary mode); otherwise, the following error will result: no properly formatted MD5 checksum lines found. Many programs don't distinguish between the two modes, but some utils do. The file must also be UNIX line end
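The digest that md5sum prints can also be computed programmatically; below is a minimal Python sketch using the standard hashlib module (the file name in the formatted line is hypothetical, and the digest shown is the RFC 1321 test vector for the string "abc"):

```python
import hashlib

def md5_hex(data: bytes) -> str:
    # 128-bit MD5 digest rendered as 32 lowercase hex characters
    return hashlib.md5(data).hexdigest()

# RFC 1321 test vector: MD5("abc")
print(md5_hex(b"abc"))  # 900150983cd24fb0d6963f7d28e17f72

# One line of a hash file in text mode: digest, space, second space, filename
# ("example.txt" is a hypothetical file name)
line = f"{md5_hex(b'abc')}  example.txt"
print(line)
```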
https://en.wikipedia.org/wiki/Stateless%20protocol
A stateless protocol is a communication protocol in which the receiver must not retain session state from previous requests. The sender transfers relevant session state to the receiver in such a way that every request can be understood in isolation, that is without reference to session state from previous requests retained by the receiver. In contrast, a stateful protocol is a communication protocol in which the receiver may retain session state from previous requests. In computer networks, examples of stateless protocols include the Internet Protocol (IP), which is the foundation for the Internet, and the Hypertext Transfer Protocol (HTTP), which is the foundation of the World Wide Web. Examples of stateful protocols include the Transmission Control Protocol (TCP) and the File Transfer Protocol (FTP). Stateless protocols improve the properties of visibility, reliability, and scalability. Visibility is improved because a monitoring system does not have to look beyond a single request in order to determine its full nature. Reliability is improved because it eases the task of recovering from partial failures. Scalability is improved because not having to store session state between requests allows the server to quickly free resources and further simplifies implementation. The disadvantage of stateless protocols is that they may decrease network performance by increasing the repetitive data sent in a series of requests, since that data cannot be left on the server and reused. Examples An HTTP server can understand each request in isolation. Contrast this with a traditional FTP server that conducts an interactive session with the user. During the session, a user is provided a means to be authenticated and set various variables (working directory, transfer mode), all stored on the server as part of the session state. Stacking of stateless and stateful protocol layers There can be complex interactions between stateful and stateless protocols among different proto
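The contrast can be sketched in code. The toy Python example below (an illustration, not any real protocol implementation) shows a stateless handler that understands each request in isolation because the request itself carries all relevant context, next to a stateful session object that, like the FTP server described above, retains variables between requests:

```python
# Stateless: every request carries its full context; the server keeps nothing.
def handle_stateless(request: dict) -> str:
    # e.g. {"user": "alice", "path": "/docs/a.txt", "mode": "binary"}
    return f"{request['user']} GET {request['path']} ({request['mode']})"

# Stateful: the server must remember per-session variables between requests,
# the way a traditional FTP server remembers working directory and transfer mode.
class StatefulSession:
    def __init__(self, user: str):
        self.user = user
        self.cwd = "/"
        self.mode = "ascii"

    def cd(self, path: str) -> None:
        self.cwd = path          # retained server-side session state

    def get(self, name: str) -> str:
        return f"{self.user} GET {self.cwd}/{name} ({self.mode})"

print(handle_stateless({"user": "alice", "path": "/docs/a.txt", "mode": "binary"}))
session = StatefulSession("alice")
session.cd("/docs")
print(session.get("a.txt"))
```

The stateless handler can be load-balanced across many servers with no coordination, which is the scalability benefit described above; the stateful session pins the client to whichever server holds its state.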
https://en.wikipedia.org/wiki/Princeton%20Sound%20Lab
The Princeton Sound Lab is a research laboratory in the Department of Computer Science at Princeton University, in collaboration with the Department of Music. The Sound Lab conducts research in a variety of areas in computer music, including physical modeling, audio analysis, audio synthesis, programming languages for audio and multimedia, interactive controller design, psychoacoustics, and real-time systems for composition and performance. External links Princeton University Audio engineering
https://en.wikipedia.org/wiki/Attribute%20grammar
An attribute grammar is a formal way to supplement a formal grammar with semantic information processing. Semantic information is stored in attributes associated with terminal and nonterminal symbols of the grammar. The values of attributes are the result of attribute evaluation rules associated with productions of the grammar. Attributes allow information to be transferred from anywhere in the abstract syntax tree to anywhere else, in a controlled and formal way. Each semantic function deals with attributes of symbols occurring only in one production rule: both the semantic function's parameters and its result are attributes of symbols from one particular rule. When a semantic function defines the value of an attribute of the symbol on the left-hand side of the rule, the attribute is called synthesized; otherwise it is called inherited. Thus, synthesized attributes serve to pass semantic information up the parse tree, while inherited attributes allow values to be passed from the parent nodes down and across the syntax tree. In simple applications, such as evaluation of arithmetic expressions, an attribute grammar may be used to describe the entire task to be performed besides parsing in a straightforward way; in complicated systems, for instance, when constructing a language translation tool, such as a compiler, it may be used to validate semantic checks associated with a grammar, representing the rules of a language not explicitly imparted by the syntax definition. It may also be used by parsers or compilers to translate the syntax tree directly into code for some specific machine, or into some intermediate language. History Attribute grammars were invented by Donald Knuth and Peter Wegner. While Donald Knuth is credited for the overall concept, Peter Wegner invented inherited attributes during a conversation with Knuth. Some embryonic ideas trace back to the work of Edgar T. "Ned" Irons, the author of IMP. Example The following is a simple context-free grammar which can describe
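For the arithmetic-expression case, a synthesized "value" attribute can be computed bottom-up over the parse tree: each production's semantic function derives the left-hand-side symbol's attribute from its children's attributes. A minimal Python sketch (the tuple-based tree encoding and rule names are invented for illustration):

```python
# Each production gets a semantic function that synthesizes the 'value'
# attribute of the left-hand-side symbol from the attributes of its children.
def value(node):
    kind = node[0]
    if kind == "num":                  # Num -> digits
        return node[1]
    if kind == "add":                  # Expr -> Expr + Term
        return value(node[1]) + value(node[2])
    if kind == "mul":                  # Term -> Term * Factor
        return value(node[1]) * value(node[2])
    raise ValueError(f"unknown production {kind!r}")

# Parse tree for (3 + 4) * 2
tree = ("mul", ("add", ("num", 3), ("num", 4)), ("num", 2))
print(value(tree))  # 14
```

Because every attribute here is synthesized, a single bottom-up pass suffices; inherited attributes would additionally thread information downward, for example a symbol table or an expected type.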
https://en.wikipedia.org/wiki/Deduction%20theorem
In mathematical logic, a deduction theorem is a metatheorem that justifies doing conditional proofs from a hypothesis in systems that do not explicitly axiomatize that hypothesis, i.e. to prove an implication A → B, it is sufficient to assume A as a hypothesis and then proceed to derive B. Deduction theorems exist for both propositional logic and first-order logic. The deduction theorem is an important tool in Hilbert-style deduction systems because it permits one to write more comprehensible and usually much shorter proofs than would be possible without it. In certain other formal proof systems the same convenience is provided by an explicit inference rule; for example natural deduction calls it implication introduction. In more detail, the propositional logic deduction theorem states that if a formula B is deducible from a set of assumptions Δ ∪ {A}, then the implication A → B is deducible from Δ; in symbols, Δ ∪ {A} ⊢ B implies Δ ⊢ A → B. In the special case where Δ is the empty set, the deduction theorem claim can be more compactly written as: A ⊢ B implies ⊢ A → B. The deduction theorem for predicate logic is similar, but comes with some extra constraints (that would for example be satisfied if A is a closed formula). In general a deduction theorem needs to take into account all logical details of the theory under consideration, so each logical system technically needs its own deduction theorem, although the differences are usually minor. The deduction theorem holds for all first-order theories with the usual deductive systems for first-order logic. However, there are first-order systems in which new inference rules are added for which the deduction theorem fails. Most notably, the deduction theorem fails to hold in Birkhoff–von Neumann quantum logic, because the linear subspaces of a Hilbert space form a non-distributive lattice.

Examples of deduction

"Prove" axiom 1: P→(Q→P)
1. P (hypothesis)
2. Q (hypothesis)
3. P (reiteration of 1)
4. Q→P (deduction from 2 to 3)
5. P→(Q→P) (deduction from 1 to 4)
https://en.wikipedia.org/wiki/Traffic%20analysis
Traffic analysis is the process of intercepting and examining messages in order to deduce information from patterns in communication. It can be performed even when the messages are encrypted. In general, the greater the number of messages observed, the more information can be inferred. Traffic analysis can be performed in the context of military intelligence, counter-intelligence, or pattern-of-life analysis, and is also a concern in computer security. Traffic analysis tasks may be supported by dedicated computer software programs. Advanced traffic analysis techniques may include various forms of social network analysis. Traffic analysis has historically been a vital technique in cryptanalysis, especially when the attempted crack depends on successfully seeding a known-plaintext attack; this often requires an inspired guess, based on how the specific operational context is likely to influence what an adversary communicates, and may be sufficient to establish a short crib. Breaking the anonymity of networks Traffic analysis methods can be used to break the anonymity of anonymous networks such as Tor. There are two methods of traffic-analysis attack, passive and active. In the passive method, the attacker extracts features from the traffic of a specific flow on one side of the network and looks for those features on the other side of the network. In the active method, the attacker alters the timings of the packets of a flow according to a specific pattern and looks for that pattern on the other side of the network; the attacker can therefore link the flows on one side of the network to those on the other side and break its anonymity. It has been shown that, even when timing noise is added to the packets, there are active traffic analysis methods robust against such noise. In military intelligence In a military context, traffic analysis is a basic part of signals intelligence, and can be a source of information about the intentions and ac
https://en.wikipedia.org/wiki/Index%20of%20wave%20articles
This is a list of wave topics. 0–9 21 cm line A Abbe prism Absorption spectroscopy Absorption spectrum Absorption wavemeter Acoustic wave Acoustic wave equation Acoustics Acousto-optic effect Acousto-optic modulator Acousto-optics Airy disc Airy wave theory Alfvén wave Alpha waves Amphidromic point Amplitude Amplitude modulation Animal echolocation Antarctic Circumpolar Wave Antiphase Aquamarine Power Arrayed waveguide grating Artificial wave Atmospheric diffraction Atmospheric wave Atmospheric waveguide Atom laser Atomic clock Atomic mirror Audience wave Autowave Averaged Lagrangian B Babinet's principle Backward wave oscillator Bandwidth-limited pulse beat Berry phase Bessel beam Beta wave Black hole Blazar Bloch's theorem Blueshift Boussinesq approximation (water waves) Bow wave Bragg diffraction Bragg's law Breaking wave Bremsstrahlung, Electromagnetic radiation Brillouin scattering Bullet bow shockwave Burgers' equation Business cycle C Capillary wave Carrier wave Cherenkov radiation Chirp Ernst Chladni Circular polarization Clapotis Closed waveguide Cnoidal wave Coherence (physics) Coherence length Coherence time Cold wave Collimated light Collimator Compton effect Comparison of analog and digital recording Computation of radiowave attenuation in the atmosphere Continuous phase modulation Continuous wave Convective heat transfer Coriolis frequency Coronal mass ejection Cosmic microwave background radiation Coulomb wave function Cutoff frequency Cutoff wavelength Cymatics D Damped wave Decollimation Delta wave Dielectric waveguide Diffraction Direction finding Dispersion (optics) Dispersion (water waves) Dispersion relation Dominant wavelength Doppler effect Doppler radar Douglas Sea Scale Draupner wave Droplet-shaped wave Duhamel's principle E E-skip Earthquake Echo (phenomenon) Echo sounding Echolocation (animal) Echolocation (human) Eddy (fluid dynamics) Edge wave Eikonal equation Ekman layer Ekman spiral Ekman transport El Niño–Southern Oscillation El
https://en.wikipedia.org/wiki/Infinitesimal%20rotation%20matrix
An infinitesimal rotation matrix or differential rotation matrix is a matrix representing an infinitely small rotation. While a rotation matrix is an orthogonal matrix representing an element of SO(n) (the special orthogonal group), the differential of a rotation is a skew-symmetric matrix in the tangent space so(n) (the special orthogonal Lie algebra), which is not itself a rotation matrix. An infinitesimal rotation matrix has the form A = I + dθ X, where I is the identity matrix, dθ is vanishingly small, and X is a skew-symmetric matrix in so(n). For example, taking X = Lx, a basis element of so(3) representing an infinitesimal three-dimensional rotation about the x-axis, gives A = I + dθ Lx. The computation rules for infinitesimal rotation matrices are as usual except that infinitesimals of second order are routinely dropped. With these rules, these matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. It turns out that the order in which infinitesimal rotations are applied is irrelevant. Discussion An infinitesimal rotation matrix is a skew-symmetric perturbation of the identity, A = I + dθ X with Xᵀ = −X. As any rotation matrix has a single real eigenvalue, which is equal to +1, the corresponding eigenvector defines the rotation axis. Its module defines an infinitesimal angular displacement. Associated quantities Associated to an infinitesimal rotation matrix A = I + dθ X is an infinitesimal rotation tensor dθ X; dividing it by the time difference dt yields the angular velocity tensor. Order of rotations These matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. To understand what this means, consider A = I + dθ X. First, test the orthogonality condition, AᵀA = I. The product AᵀA = I + dθ² XᵀX differs from an identity matrix by second order infinitesimals, discarded here. So, to first order, an infinitesimal rotation matrix is an orthogonal matrix. Next, examine the square of the matrix, A² = I + 2 dθ X + dθ² X². Again discarding second order effects, note that the angle si
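The claim that application order is irrelevant to first order can be checked numerically. A plain-Python sketch, with the standard so(3) generators for rotations about x and y and dθ = 1e-6, so that the second-order commutator term is on the order of 1e-12:

```python
# Check that infinitesimal rotations commute up to second-order infinitesimals.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def plus_scaled(I, X, s):
    # Computes I + s*X for 3x3 matrices
    return [[I[i][j] + s * X[i][j] for j in range(3)] for i in range(3)]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
Lx = [[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]]   # generator: x-rotation
Ly = [[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]   # generator: y-rotation

d = 1e-6
Ax = plus_scaled(I3, Lx, d)   # infinitesimal rotation about x
Ay = plus_scaled(I3, Ly, d)   # infinitesimal rotation about y

AB = matmul(Ax, Ay)
BA = matmul(Ay, Ax)
gap = max(abs(AB[i][j] - BA[i][j]) for i in range(3) for j in range(3))
print(gap)  # ~1e-12: the difference AB - BA is d**2 * [Lx, Ly], second order in d
```

The residual is exactly the commutator term d²[Lx, Ly], confirming that the non-commutativity of rotations survives only at second order, which the infinitesimal calculus discards.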
https://en.wikipedia.org/wiki/PicoBSD
PicoBSD is a discontinued single-floppy disk version of FreeBSD, one of the BSD operating system descendants. In its different variations, PicoBSD allows one to have secure dial-up Internet access, a small diskless router, or a dial-in server, all on one standard floppy disc. It runs on a minimum 386SX CPU with a small amount of RAM (no hard drive required). PicoBSD is freely available under the BSD license. The main developer was Andrzej Bialecki, and the latest version is 0.42. Dinesh Nair then backported the PicoBSD build scripts to FreeBSD 2.2.5, allowing the addition of a few more binaries in the dial-up flavor due to FreeBSD 2.2.5's smaller binary executable format. With the flexibility that FreeBSD gives, along with the full source code being available, one can build a small installation performing various tasks, including: Diskless workstation Portable dial-up access solution Custom demo-disk Embedded controller (flash or EEPROM) Firewall Communication server Replacement for commercial router Diskless home automation system And many others PicoBSD is now included in the FreeBSD source files, where it is used by embedded system developers to create their own system images. It can be used with recent versions of FreeBSD and is located in /usr/src/release/picobsd/. In FreeBSD 5, it has been superseded by the NanoBSD framework. References See also Comparison of BSD operating systems FreeBSD Lightweight Unix-like systems
https://en.wikipedia.org/wiki/Formal%20group%20law
In mathematics, a formal group law is (roughly speaking) a formal power series behaving as if it were the product of a Lie group. They were introduced by . The term formal group sometimes means the same as formal group law, and sometimes means one of several generalizations. Formal groups are intermediate between Lie groups (or algebraic groups) and Lie algebras. They are used in algebraic number theory and algebraic topology. Definitions A one-dimensional formal group law over a commutative ring R is a power series F(x,y) with coefficients in R, such that F(x,y) = x + y + terms of higher degree F(x, F(y,z)) = F(F(x,y), z) (associativity). The simplest example is the additive formal group law F(x, y) = x + y. The idea of the definition is that F should be something like the formal power series expansion of the product of a Lie group, where we choose coordinates so that the identity of the Lie group is the origin. More generally, an n-dimensional formal group law is a collection of n power series Fi(x1, x2, ..., xn, y1, y2, ..., yn) in 2n variables, such that F(x,y) = x + y + terms of higher degree F(x, F(y,z)) = F(F(x,y), z) where we write F for (F1, ..., Fn), x for (x1, ..., xn), and so on. The formal group law is called commutative if F(x,y) = F(y,x). If R is torsionfree, then one can embed R into a Q-algebra and use the exponential and logarithm to write any one-dimensional formal group law F as F(x,y) = exp(log(x) + log(y)), so F is necessarily commutative. More generally, we have: Theorem. Every one-dimensional formal group law over R is commutative if and only if R has no nonzero torsion nilpotents (i.e., no nonzero elements that are both torsion and nilpotent). There is no need for an axiom analogous to the existence of inverse elements for groups, as this turns out to follow automatically from the definition of a formal group law. In other words we can always find a (unique) power series G such that F(x,G(x)) = 0. A homomorphism from a formal gr
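Beyond the additive law F(x, y) = x + y, the standard next example is the multiplicative formal group law F(x, y) = x + y + xy, for which 1 + F(x, y) = (1 + x)(1 + y). The sketch below spot-checks the associativity axiom and the automatically-existing formal inverse, whose power series −x + x² − x³ + … sums to G(x) = −x/(1 + x); the check is pointwise on rational sample values, not a formal power-series computation:

```python
from fractions import Fraction
from itertools import product

# Multiplicative formal group law: F(x, y) = x + y + xy,
# so that 1 + F(x, y) = (1 + x)(1 + y).
def F(x, y):
    return x + y + x * y

# Associativity F(x, F(y, z)) == F(F(x, y), z) holds identically;
# spot-check it exactly on a grid of rational points.
pts = [Fraction(n, 7) for n in range(-3, 4)]
assert all(F(x, F(y, z)) == F(F(x, y), z) for x, y, z in product(pts, repeat=3))

# The formal inverse G with F(x, G(x)) = 0 exists automatically;
# here the series -x + x**2 - x**3 + ... closes to -x/(1 + x).
def G(x):
    return -x / (1 + x)

assert all(F(x, G(x)) == 0 for x in pts)
print("associativity and formal inverse verified on sample points")
```

Working with exact Fractions rather than floats makes the identities hold with equality, mirroring the fact that both sides agree coefficient-by-coefficient as power series.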
https://en.wikipedia.org/wiki/Intel%20i960
Intel's i960 (or 80960) was a RISC-based microprocessor design that became popular during the early 1990s as an embedded microcontroller. It became a best-selling CPU in that segment, along with the competing AMD 29000. In spite of its success, Intel stopped marketing the i960 in the late 1990s, as a result of a settlement with DEC whereby Intel received the rights to produce the StrongARM CPU. The processor continues to be used for a few military applications. Origin The i960 design was begun in response to the failure of Intel's iAPX 432 design of the early 1980s. The iAPX 432 was intended to directly support high-level languages that supported tagged, protected, garbage-collected memory—such as Ada and Lisp—in hardware. Because of its instruction-set complexity, its multi-chip implementation, and design flaws, the iAPX 432 was very slow in comparison to other processors of its time. In 1984, Intel and Siemens started a joint project, ultimately called BiiN, to create a high-end, fault-tolerant, object-oriented computer system programmed entirely in Ada. Many of the original i432 team members joined this project, although a new lead architect, Glenford Myers, was brought in from IBM. The intended market for the BiiN systems was high-reliability-computer users such as banks, industrial systems, and nuclear power plants. Intel's major contribution to the BiiN system was a new processor design, influenced by the protected-memory concepts from the i432. The new design was to include a number of features to improve performance and avoid problems that had led to the i432's downfall. The first 960 processors entered the final stages of design, known as taping-out, in October 1985 and were sent to manufacturing that month, with the first working chips arriving in late 1985 and early 1986. The BiiN effort eventually failed, due to market forces, and the 960 was left without a use. Myers attempted to save the design by extracting several subsets of the full capability
https://en.wikipedia.org/wiki/Register%20window
In computer engineering, register windows are a feature which dedicates registers to a subroutine by dynamically aliasing a subset of internal registers to fixed, programmer-visible registers. Register windows are implemented to improve the performance of a processor by reducing the number of stack operations required for function calls and returns. One of the most influential features of the Berkeley RISC design, they were later implemented in instruction set architectures such as AMD Am29000, Intel i960, Sun Microsystems SPARC, and Intel Itanium. General Operation Several sets of registers are provided for the different parts of the program. Registers are deliberately hidden from the programmer to force several subroutines to share processor resources. Rendering the registers invisible can be implemented efficiently; the CPU recognizes the movement from one part of the program to another during a procedure call. It is accomplished by one of a small number of instructions (prologue) and ends with one of a similarly small set (epilogue). In the Berkeley design, these calls would cause a new set of registers to be "swapped in" at that point, or marked as "dead" (or "reusable") when the call ends. Application in CPUs In the Berkeley RISC design, only eight registers out of a total of 64 are visible to the programs. The complete set of registers are known as the register file, and any particular set of eight as a window. The file allows up to eight procedure calls to have their own register sets. As long as the program does not call down chains longer than eight calls deep, the registers never have to be spilled, i.e. saved out to main memory or cache which is a slow process compared to register access. By comparison, the Sun Microsystems SPARC architecture provides simultaneous visibility into four sets of eight registers each. Three sets of eight registers each are "windowed". Eight registers (i0 through i7) form the input registers to the current procedure leve
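The Berkeley scheme can be sketched as a register file with a current-window pointer that the call prologue advances and the return epilogue rewinds. A toy Python model follows the numbers in the description above (64 physical registers, 8 visible per window); it deliberately ignores the overlapping in/out registers that SPARC uses for parameter passing, and the spill to memory required when the call chain exceeds eight levels:

```python
class WindowedRegisterFile:
    # Toy model: 64 physical registers, 8 visible per window,
    # so up to 8 nested calls before a spill to memory would be needed.
    def __init__(self, physical=64, window=8):
        self.regs = [0] * physical
        self.window = window
        self.depth = physical // window
        self.cwp = 0                    # current window pointer

    def _phys(self, r):
        # Map a programmer-visible register r0..r7 to a physical register.
        return self.cwp * self.window + r

    def read(self, r):
        return self.regs[self._phys(r)]

    def write(self, r, v):
        self.regs[self._phys(r)] = v

    def call(self):                     # prologue: swap in a fresh window
        self.cwp = (self.cwp + 1) % self.depth

    def ret(self):                      # epilogue: restore the caller's window
        self.cwp = (self.cwp - 1) % self.depth

rf = WindowedRegisterFile()
rf.write(0, 42)        # caller's r0
rf.call()
rf.write(0, 7)         # callee's r0 maps to a different physical register
rf.ret()
print(rf.read(0))      # 42: the caller's value survived with no memory traffic
```

The point of the design shows up in the final read: the caller's registers were never saved to the stack, which is exactly the save/restore traffic that register windows eliminate for shallow call chains.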
https://en.wikipedia.org/wiki/Absorbance
Absorbance is defined as "the logarithm of the ratio of incident to transmitted radiant power through a sample (excluding the effects on cell walls)". Alternatively, for samples which scatter light, absorbance may be defined as "the negative logarithm of one minus absorptance, as measured on a uniform sample". The term is used in many technical areas to quantify the results of an experimental measurement. While the term has its origin in quantifying the absorption of light, it is often entangled with quantification of light which is “lost” to a detector system through other mechanisms. What these uses of the term tend to have in common is that they refer to a logarithm of the ratio of a quantity of light incident on a sample or material to that which is detected after the light has interacted with the sample. The term absorption refers to the physical process of absorbing light, while absorbance does not always measure only absorption; it may measure attenuation (of transmitted radiant power) caused by absorption, as well as reflection, scattering, and other physical processes. Sometimes the term "attenuance" or "experimental absorbance" is used to emphasize that radiation is lost from the beam by processes other than absorption, with the term "internal absorbance" used to emphasize that the necessary corrections have been made to eliminate the effects of phenomena other than absorption. History and uses of the term absorbance Beer-Lambert law The roots of the term absorbance are in the Beer–Lambert law. As light moves through a medium, it will become dimmer as it is being "extinguished". Bouguer recognized that this extinction (now often called attenuation) was not linear with distance traveled through the medium, but related by what we now refer to as an exponential function. If I0 is the intensity of the light at the beginning of the travel and I is the intensity of the light detected after travel of a distance d, the fraction transmitted, T, is given by T = I/I0 = e^(−μd), where
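With the conventional symbols (I0 for the incident intensity, I for the detected intensity), the decadic absorbance is the base-10 logarithm of the ratio, A = −log10(I/I0). A small Python sketch:

```python
import math

def transmittance(I, I0):
    # Fraction of incident radiant power that reaches the detector.
    return I / I0

def absorbance(I, I0):
    # A = -log10(T): each unit of absorbance is a tenfold attenuation.
    return -math.log10(transmittance(I, I0))

# 1% of the incident light transmitted corresponds to an absorbance of 2.
print(absorbance(1.0, 100.0))  # 2.0
```

The logarithmic definition is what makes absorbance additive: stacking two identical samples doubles A while squaring the transmitted fraction.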
https://en.wikipedia.org/wiki/Glanders
Glanders is a contagious zoonotic infectious disease that occurs primarily in horses, mules, and donkeys. It can be contracted by other animals, such as dogs, cats, pigs, goats, and humans. It is caused by infection with the bacterium Burkholderia mallei. Glanders is endemic in Africa, Asia, the Middle East, and Central and South America. It has been eradicated from North America, Australia, and most of Europe through surveillance and destruction of affected animals, and import restrictions. It has not been reported in the United States since 1945, except in 2000, when an American lab researcher had an accidental exposure in the lab. It is a notifiable disease in the UK, although it has not been reported there since 1928. The term is from Middle English or Old French , both meaning glands. Other terms include , , and . Presentation Signs of glanders include the formation of nodular lesions in the lungs and ulceration of the mucous membranes in the upper respiratory tract. The acute form results in coughing, fever, and the release of an infectious nasal discharge, followed by septicaemia and death within days. In the chronic form, nasal and subcutaneous nodules develop, eventually ulcerating; death can occur within months, while survivors act as carriers. Cause and transmission Glanders is caused by infection with the Burkholderia mallei, usually by ingestion of contaminated feed or water. B. mallei is able to infect humans, so it is classed as a zoonotic agent. Transmission occurs by direct contact with infected animal's body fluid and tissues and entry is through skin abrasions, nasal and oral mucosal surfaces, or inhalation. Diagnosis The mallein test is a sensitive and specific clinical test for glanders. Mallein (ATCvet code: ), a protein fraction of the glanders organism (B. mallei), is injected intradermopalpebrally or given by eye drop. In infected animals, the eyelid swells markedly in 1 to 2 days. Historical cases and potential use in war Glander
https://en.wikipedia.org/wiki/List%20of%20web%20service%20specifications
There are a variety of specifications associated with web services. These specifications are in varying degrees of maturity and are maintained or supported by various standards bodies and entities. Together they build on the basic web services framework established by the first-generation standards WSDL, SOAP, and UDDI. Specifications may complement, overlap, and compete with each other. Web service specifications are occasionally referred to collectively as "WS-*", though there is not a single managed set of specifications that this consistently refers to, nor a recognized owning body across them all. Web service standards listings These sites contain documents and links about the different Web services standards identified on this page. IBM developerWorks: Standards and Web Services innoQ's WS-Standard Overview MSDN .NET Developer Centre: Web Service Specification Index Page OASIS Standards and Other Approved Work Open Grid Forum Final Document XML Cover Pages W3C's Web Services Activity XML specification XML (eXtensible Markup Language) XML Namespaces XML Schema XPath XQuery XML Information Set XInclude XML Pointer Messaging specification SOAP (formerly known as Simple Object Access Protocol) SOAP-over-UDP SOAP Message Transmission Optimization Mechanism WS-Notification WS-BaseNotification WS-Topics WS-BrokeredNotification WS-Addressing WS-Transfer WS-Eventing WS-Enumeration WS-MakeConnection Metadata exchange specification JSON-WSP WS-Policy WS-PolicyAssertions WS-PolicyAttachment WS-Discovery WS-Inspection WS-MetadataExchange Universal Description Discovery and Integration (UDDI) WSDL 2.0 Core WSDL 2.0 SOAP Binding Web Services Semantics (WSDL-S) WS-Resource Framework (WSRF) Security specification WS-Security XML Signature XML Encryption XML Key Management (XKMS) WS-SecureConversation WS-SecurityPolicy WS-Trust WS-Federation WS-Federation Active Requestor Profile WS-Federation Passive Requestor
https://en.wikipedia.org/wiki/Essence
Essence is a polysemic term, having various meanings and uses. It is used in philosophy and theology as a designation for the property or set of properties or attributes that make an entity or substance what it fundamentally is, and which it has by necessity, and without which it loses its identity. Essence is contrasted with accident: a property or attribute the entity or substance has contingently, without which the substance can still retain its identity. The concept originates rigorously with Aristotle (although it can also be found in Plato), who used the Greek expression to ti ên einai (τὸ τί ἦν εἶναι, literally meaning "the what it was to be" and corresponding to the scholastic term quiddity) or sometimes the shorter phrase to ti esti (τὸ τί ἐστι, literally meaning "the what it is" and corresponding to the scholastic term haecceity, "thisness") for the same idea. This phrase presented such difficulties for its Latin translators that they coined the word essentia (English "essence") to represent the whole expression. For Aristotle and his scholastic followers, the notion of essence is closely linked to that of definition (ὁρισμός horismos). In the history of Western philosophy, essence has often served as a vehicle for doctrines that tend to individuate different forms of existence as well as different identity conditions for objects and properties; in this logical meaning, the concept has given a strong theoretical and common-sense basis to the whole family of logical theories based on the "possible worlds" analogy set up by Leibniz and developed in the intensional logic from Carnap to Kripke, which was later challenged by "extensionalist" philosophers such as Quine. Etymology The English word essence comes from Latin essentia, via French essence. The original Latin word was created purposefully, by Ancient Roman philosophers, in order to provide an adequate Latin translation for the Greek term οὐσία (ousia). Stoic philosopher Seneca (d. 65 AD) attributed
https://en.wikipedia.org/wiki/Transmittance
In optical physics, transmittance of the surface of a material is its effectiveness in transmitting radiant energy. It is the fraction of incident electromagnetic power that is transmitted through a sample, in contrast to the transmission coefficient, which is the ratio of the transmitted to incident electric field. Internal transmittance refers to energy loss by absorption, whereas (total) transmittance is that due to absorption, scattering, reflection, etc. Mathematical definitions Hemispherical transmittance Hemispherical transmittance of a surface, denoted T, is defined as T = Φet / Φei, where Φet is the radiant flux transmitted by that surface; Φei is the radiant flux received by that surface. Spectral hemispherical transmittance Spectral hemispherical transmittance in frequency and spectral hemispherical transmittance in wavelength of a surface, denoted Tν and Tλ respectively, are defined as Tν = Φe,νt / Φe,νi and Tλ = Φe,λt / Φe,λi, where Φe,νt is the spectral radiant flux in frequency transmitted by that surface; Φe,νi is the spectral radiant flux in frequency received by that surface; Φe,λt is the spectral radiant flux in wavelength transmitted by that surface; Φe,λi is the spectral radiant flux in wavelength received by that surface. Directional transmittance Directional transmittance of a surface, denoted TΩ, is defined as TΩ = Le,Ωt / Le,Ωi, where Le,Ωt is the radiance transmitted by that surface; Le,Ωi is the radiance received by that surface. Spectral directional transmittance Spectral directional transmittance in frequency and spectral directional transmittance in wavelength of a surface, denoted Tν,Ω and Tλ,Ω respectively, are defined as Tν,Ω = Le,Ω,νt / Le,Ω,νi and Tλ,Ω = Le,Ω,λt / Le,Ω,λi, where Le,Ω,νt is the spectral radiance in frequency transmitted by that surface; Le,Ω,νi is the spectral radiance in frequency received by that surface; Le,Ω,λt is the spectral radiance in wavelength transmitted by that surface; Le,Ω,λi is the spectral radiance in wavelength received by that surface. Beer–Lambert law By definition, internal transmittance is related to optical depth and to absor
https://en.wikipedia.org/wiki/Nondisjunction
Nondisjunction is the failure of homologous chromosomes or sister chromatids to separate properly during cell division (mitosis/meiosis). There are three forms of nondisjunction: failure of a pair of homologous chromosomes to separate in meiosis I, failure of sister chromatids to separate during meiosis II, and failure of sister chromatids to separate during mitosis. Nondisjunction results in daughter cells with abnormal chromosome numbers (aneuploidy). Calvin Bridges and Thomas Hunt Morgan are credited with discovering nondisjunction in Drosophila melanogaster sex chromosomes in the spring of 1910, while working in the Zoological Laboratory of Columbia University. Types In general, nondisjunction can occur in any form of cell division that involves ordered distribution of chromosomal material. Higher animals have three distinct forms of such cell division: meiosis I and meiosis II are specialized forms occurring during the generation of gametes (eggs and sperm) for sexual reproduction, while mitosis is the form of cell division used by all other cells of the body. Meiosis II Ovulated eggs become arrested in metaphase II until fertilization triggers the second meiotic division. Similar to the segregation events of mitosis, the pairs of sister chromatids resulting from the separation of bivalents in meiosis I are further separated in anaphase of meiosis II. In oocytes, one sister chromatid is segregated into the second polar body, while the other stays inside the egg. During spermatogenesis, each meiotic division is symmetric such that each primary spermatocyte gives rise to 2 secondary spermatocytes after meiosis I, and eventually 4 spermatids after meiosis II. Meiosis II nondisjunction may also result in aneuploidy syndromes, but only to a much smaller extent than do segregation failures in meiosis I. Mitosis Division of somatic cells through mitosis is preceded by replication of the genetic material in S phase. As a result, each chromosome consists
https://en.wikipedia.org/wiki/Analyticity%20of%20holomorphic%20functions
In complex analysis, a complex-valued function f of a complex variable z is said to be holomorphic at a point a if it is differentiable at every point within some open disk centered at a, and is said to be analytic at a if in some open disk centered at a it can be expanded as a convergent power series (this implies that the radius of convergence is positive). One of the most important theorems of complex analysis is that holomorphic functions are analytic and vice versa. Among the corollaries of this theorem are the identity theorem that two holomorphic functions that agree at every point of an infinite set S with an accumulation point inside the intersection of their domains also agree everywhere in every connected open subset of their domains that contains the set S, and the fact that, since power series are infinitely differentiable, so are holomorphic functions (this is in contrast to the case of real differentiable functions), and the fact that the radius of convergence is always the distance from the center a to the nearest non-removable singularity; if there are no singularities (i.e., if f is an entire function), then the radius of convergence is infinite. Strictly speaking, this is not a corollary of the theorem but rather a by-product of the proof. Also, no bump function on the complex plane can be entire. In particular, on any connected open subset of the complex plane, there can be no bump function defined on that set which is holomorphic on the set. This has important ramifications for the study of complex manifolds, as it precludes the use of partitions of unity. In contrast, the partition of unity is a tool which can be used on any real manifold. Proof The argument, first given by Cauchy, hinges on Cauchy's integral formula and the power series expansion of the expression 1/(w − z). Let D be an open disk centered at a and suppose f is differentiable everywhere within an open neighborhood containing the closure of D. Let C be the positively oriented (i.e., count
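The expansion at the heart of the proof is the standard geometric-series expansion of the Cauchy kernel; the notation below (center a, circle C, point w on C, point z inside) follows the usual statement of the proof and is assumed here:

```latex
\frac{1}{w-z}
  = \frac{1}{(w-a)\left(1-\dfrac{z-a}{w-a}\right)}
  = \sum_{n=0}^{\infty}\frac{(z-a)^{n}}{(w-a)^{n+1}},
\qquad |z-a| < |w-a|.
```

Substituting this into Cauchy's integral formula and exchanging the sum and the integral (justified by uniform convergence on C) yields the power series

```latex
f(z) = \frac{1}{2\pi i}\oint_{C}\frac{f(w)}{w-z}\,dw
     = \sum_{n=0}^{\infty} c_n\,(z-a)^{n},
\qquad
c_n = \frac{1}{2\pi i}\oint_{C}\frac{f(w)}{(w-a)^{n+1}}\,dw,
```

which is exactly the convergent power series required for analyticity at a.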
https://en.wikipedia.org/wiki/Homotopy%20principle
In mathematics, the homotopy principle (or h-principle) is a very general way to solve partial differential equations (PDEs), and more generally partial differential relations (PDRs). The h-principle is good for underdetermined PDEs or PDRs, such as the immersion problem, isometric immersion problem, fluid dynamics, and other areas. The theory was started by Yakov Eliashberg, Mikhail Gromov and Anthony V. Phillips. It was based on earlier results that reduced partial differential relations to homotopy, particularly for immersions. The first evidence of the h-principle appeared in the Whitney–Graustein theorem. This was followed by the Nash–Kuiper isometric C1 embedding theorem and the Smale–Hirsch immersion theorem. Rough idea Assume we want to find a function ƒ on Rm which satisfies a partial differential equation of degree k, in co-ordinates (x1, …, xm). One can rewrite it as Ψ(x1, …, xm, Jkƒ) = 0, where Jkƒ stands for all partial derivatives of ƒ up to order k. Let us exchange every variable in Jkƒ for a new independent variable y1, …, yN. Then our original equation can be thought of as a system consisting of Ψ(x1, …, xm, y1, …, yN) = 0 together with some number of equations of the following type: each yj equals a partial derivative of ƒ or of another yi. A solution of the first equation alone is called a non-holonomic solution, and a solution of the full system, which is also a solution of our original PDE, is called a holonomic solution. In order to check whether a solution to our original equation exists, one can first check if there is a non-holonomic solution. Usually this is quite easy, and if there is no non-holonomic solution, then our original equation did not have any solutions. A PDE satisfies the h-principle if any non-holonomic solution can be deformed into a holonomic one in the class of non-holonomic solutions. Thus, in the presence of the h-principle, a differential topological problem reduces to an algebraic topological problem. More explicitly this means that apart from the topological obstruction there is no other obstruction to the existence of a holonomic solution. The topological problem of finding a non-holonomic solution is much e
https://en.wikipedia.org/wiki/ISPW
The IRCAM Signal Processing Workstation (ISPW) was a hardware digital audio workstation developed by IRCAM and the Ariel Corporation in the late 1980s. In French, the ISPW is referred to as the SIM (Station d'informatique musicale). Eric Lindemann was the principal designer of the ISPW hardware as well as manager of the overall hardware/software effort. It consisted of up to three customized DSP boards that could be plugged into the expansion bus on a NeXT Computer (a "cube"). The ISPW could then run a customized real-time audio processing server on the hardware boards controlled by a client application on the NeXT. Each ISPW card had two Intel i860 microprocessors (running at 80 MFLOPS). An additional card with eight channels of audio I/O was also available for multi-channel sound recording and playback. A three-board ISPW provided what was at the time unsurpassed signal processing and audio synthesis power on a single workstation. A single ISPW card cost approximately US$12,000 (not including the computer), which made it prohibitively expensive outside of research institutes and universities. The main server software developed by IRCAM for the ISPW was called FTS ("Faster Than Sound"). The main NeXT client application was a graphical program called Max, developed by Miller Puckette. A commercial version of Max (without the FTS server) was licensed by IRCAM to Opcode Systems (and, later, Cycling '74). Max/FTS eventually migrated to a software-only application for SGI and DEC Alpha computers. It is the direct predecessor to jMax. See also Pd. External links A brief history of MAX, from IRCAM Computer music Computer workstations NeXT Digital signal processors
https://en.wikipedia.org/wiki/Convergent%20Technologies%20Operating%20System
The Convergent Technologies Operating System, also known variously as CTOS, BTOS and STARSYS, is a discontinued modular, message-passing, multiprocess-based operating system. Overview CTOS had many innovative features for its time. System access was controlled with a user password and volume (disk) passwords. Knowing the password for a volume, for example, gave access to any file or directory on that volume (hard disk). Each volume and directory was referenced with delimiters identifying it, and could be followed by a file name, depending on the operation, i.e. {Network Node}[VolumeName]<DirectoryName>FileName. It was possible to custom-link the operating system to add or delete features. CTOS supported a transparent peer-to-peer network carried over serial RS-422 cables (daisy-chain topology) and in later versions carried over twisted pair (star topology) with RS-422 adapters using the CTOS Cluster Hub-R12 designed by Paul Jackson Ph.D. of SumNet Pty Limited in Australia. Each workgroup (called a "cluster") was connected to a server (called a "master"). The workstations, normally diskless, were booted over the cluster network from the master, and could optionally be locally booted from attached hard drives. Inter-process communication (IPC) is based primarily on a "request" and "respond" messaging foundation, which eased enterprise application integration among services in both internal and external environments. CTOS was thus well known for its message-based microkernel architecture. Applications are added as services to the main server. Each client consumes the services via its own mailbox, called an "exchange", and well-published message formats. The communication works on "request codes" that are owned by the service. The operating system maintains the exchanges, message queues, scheduling, control, message passing, etc., while the service manages the messages at its own exchange using "wait", "check", and "respond" macros. CTOS ran on In
https://en.wikipedia.org/wiki/Stale%20pointer%20bug
A stale pointer bug, otherwise known as an aliasing bug, is a class of subtle programming errors that can arise in code that does dynamic memory allocation, especially via the malloc function or equivalent. If several pointers address (are "aliases for") a given chunk of storage, it may happen that the storage is freed or reallocated (and thus moved) through one alias and then referenced through another, which may lead to subtle (and possibly intermittent) errors depending on the state and the allocation history of the malloc arena. This bug can be avoided by never creating aliases for allocated memory, by controlling the dynamic scope of references to the storage so that none can remain when it is freed, or by use of a garbage collector, in the form of an intelligent memory-allocation library or as provided by higher-level languages, such as Lisp. The term "aliasing bug" is nowadays associated with C programming, but it was already in use in a very similar sense in the ALGOL 60 and Fortran programming language communities in the 1960s. See also Dangling pointer Software bugs Software anomalies
https://en.wikipedia.org/wiki/Akamai%20Technologies
Akamai Technologies, Inc. is an American content delivery network (CDN), cybersecurity, and cloud service company, providing web and Internet security services. The company operates a network of servers worldwide and rents the capacity of the servers to customers wanting to increase the efficiency of their websites by using Akamai-owned servers located near the user. When a user navigates to the URL of an Akamai customer, their browser is directed by Akamai's domain name system to a proximal edge server that can serve the requested content. Akamai's mapping system assigns each user to a proximal edge server using sophisticated algorithms such as stable matching and consistent hashing, enabling more reliable and faster web downloads. Further, Akamai implements DDoS mitigation and other security services in its edge server platform. History The company was named after akamai, which means 'clever,' or more colloquially, 'cool' in Hawaiian, which co-founder Daniel Lewin had discovered in a Hawaiian-English dictionary after the suggestion of a colleague. Akamai Technologies entered the 1998 MIT $50K competition with a business proposition based on their research on consistent hashing, and were selected as one of the finalists. By August 1998, they had developed a working prototype, and with the help of Jonathan Seelig and Randall Kaplan, they began taking steps to incorporate the company. Akamai Technologies was incorporated on August 20, 1998. In late 1998 and early 1999, a group of business professionals and scientists joined the founding team. Most notably, Paul Sagan, former president of New Media for Time Inc., and George Conrades, former chairman and chief executive officer of BBN Corp. and senior vice president of US operations for IBM. Conrades became the chief executive officer of Akamai in April 1999. The company launched its commercial service in April 1999 and was listed on the NASDAQ Stock Market from October 29, 1999. On July 1, 2001, Akamai was added to the Russell 3
https://en.wikipedia.org/wiki/Wood%20preservation
Wood easily degrades without sufficient preservation. Apart from structural wood preservation measures, there are a number of different chemical preservatives and processes (also known as timber treatment, lumber treatment or pressure treatment) that can extend the life of wood, timber, and their associated products, including engineered wood. These generally increase the wood's durability and its resistance to destruction by insects or fungi. History As proposed by Richardson, treatment of wood has been practiced for almost as long as the use of wood itself. There are records of wood preservation reaching back to ancient Greece during Alexander the Great's rule, where bridge wood was soaked in olive oil. The Romans protected their ship hulls by brushing the wood with tar. During the Industrial Revolution, wood preservation became a cornerstone of the wood processing industry. Inventors and scientists such as Bethell, Boucherie, Burnett and Kyan made historic developments in wood preservation with new preservative solutions and processes. Commercial pressure treatment began in the latter half of the 19th century with the protection of railroad cross-ties using creosote. Treated wood was used primarily for industrial, agricultural, and utility applications (where it is still used) until the 1970s, when its use grew considerably (at least in the United States) as homeowners began building decks and backyard projects. Innovation in treated timber products continues to this day, with consumers becoming more interested in less toxic materials. Hazards Wood that has been industrially pressure-treated with approved preservative products poses a limited risk to the public and should be disposed of properly. On December 31, 2003, the U.S. wood treatment industry stopped treating residential lumber with arsenic and chromium (chromated copper arsenate, or CCA). This was a voluntary agreement with the United States Environmental Protection Agency. CCA was replaced by copper-based
https://en.wikipedia.org/wiki/Leased%20line
A leased line is a private telecommunications circuit between two or more locations provided according to a commercial contract. It is sometimes also known as a private circuit, and as a data line in the UK. Typically, leased lines are used by businesses to connect geographically distant offices. Unlike traditional telephone lines in the public switched telephone network (PSTN), leased lines are generally not switched circuits, and therefore do not have an associated telephone number. Each side of the line is permanently connected, always active and dedicated to the other. Leased lines can be used for telephone, Internet, or other data communication services. Some are ringdown services, and some connect to a private branch exchange (PBX) or network router. The primary factors affecting the recurring lease fees are the distance between end stations and the bandwidth of the circuit. Since the connection does not carry third-party communications, the carrier can assure a specified level of quality. An Internet leased line is a premium Internet connectivity product, normally delivered over fiber, which provides uncontended, symmetrical bandwidth with full-duplex traffic. It is also known as an Ethernet leased line, dedicated line, data circuit or private line. History Leased line services (or private line services) became digital in the 1970s with the conversion of the Bell backbone network from analog to digital circuits. This allowed AT&T to offer Dataphone Digital Services (later re-branded digital data services), which started the deployment of ISDN and T1 lines to customer premises. Leased lines were used to connect mainframe computers with terminals and remote sites, via IBM's Systems Network Architecture (created in 1974) or DEC's DECnet (created in 1975). With the extension of digital services in the 1980s, leased lines were used to connect customer premises to Frame Relay or ATM networks. Access data rates increased from the original T1 option wit
https://en.wikipedia.org/wiki/Engineering%20technologist
An engineering technologist is a professional trained in certain aspects of development and implementation of a respective area of technology. An education in engineering technology concentrates more on application and less on theory than does an engineering education. Engineering technologists often assist engineers, but after years of experience they can also become engineers. Like engineers, engineering technologists can work in areas including product design, fabrication, and testing. Engineering technologists sometimes rise to senior management positions in industry or become entrepreneurs. Engineering technologists are more likely than engineers to focus on post-development implementation, product manufacturing, or operation of technology. The American National Society of Professional Engineers (NSPE) makes the distinction that engineers are trained in conceptual skills, to "function as designers", while engineering technologists "apply others' designs". The mathematics, sciences, and other technical courses in engineering technology programs are taught with more application-based examples, whereas engineering coursework provides a more theoretical foundation in math and science. Moreover, engineering coursework tends to require higher-level mathematics, including calculus and calculus-based theoretical science courses, as well as more extensive knowledge of the natural sciences; this serves to prepare students for research, whether in graduate studies or industrial R&D. Engineering technology coursework, by contrast, focuses on algebra, trigonometry, applied calculus, and other courses that are more practical than theoretical in nature, and generally includes more labs involving hands-on application of the topics studied. In the United States, although some states require, without exception, a BS degree in engineering at schools with programs accredited by the Engineering Accreditation Commission (EAC) of the Accreditation Board for En
https://en.wikipedia.org/wiki/Zeroisation
In cryptography, zeroisation (also spelled zeroization) is the practice of erasing sensitive parameters (electronically stored data, cryptographic keys, and critical security parameters) from a cryptographic module to prevent their disclosure if the equipment is captured. This is generally accomplished by altering or deleting the contents to prevent recovery of the data. Mechanical When encryption was performed by mechanical devices, this would often mean changing all the machine's settings to some fixed, meaningless value, such as zero. On machines with letter settings rather than numerals, the letter 'O' was often used instead. Some machines had a button or lever for performing this process in a single step. Zeroisation would typically be performed at the end of an encryption session to prevent accidental disclosure of the keys, or immediately when there was a risk of capture by an adversary. Software In modern software-based cryptographic modules, zeroisation is made considerably more complex by issues such as virtual memory, compiler optimisations and use of flash memory. Also, zeroisation may need to be applied not only to the key, but also to plaintext and some intermediate values. A cryptographic software developer must have an intimate understanding of memory management in a machine, and be prepared to zeroise data whenever sensitive data might move outside the security boundary. Typically this will involve overwriting the data with zeroes, but in the case of some types of non-volatile storage the process is much more complex; see data remanence. As well as zeroising data due to memory management, software designers consider performing zeroisation: When an application changes mode (e.g. to a test mode) or user; When a computer process changes privileges; On termination (including abnormal termination); On any error condition which may indicate instability or tampering; Upon user request; Immediately, the last time the parameter is required; an
https://en.wikipedia.org/wiki/Tuned%20radio%20frequency%20receiver
A tuned radio frequency receiver (or TRF receiver) is a type of radio receiver that is composed of one or more tuned radio frequency (RF) amplifier stages followed by a detector (demodulator) circuit to extract the audio signal, and usually an audio frequency amplifier. This type of receiver was popular in the 1920s. Early examples could be tedious to operate, because when tuning in a station each stage had to be individually adjusted to the station's frequency, but later models had ganged tuning, with the tuning mechanisms of all stages linked together and operated by just one control knob. By the mid-1930s, it had been superseded by the superheterodyne receiver patented by Edwin Armstrong. Background The TRF receiver was patented in 1916 by Ernst Alexanderson. His concept was that each stage would amplify the desired signal while reducing the interfering ones. Multiple stages of RF amplification would make the radio more sensitive to weak stations, and the multiple tuned circuits would give it a narrower bandwidth and more selectivity than the single-stage receivers common at that time. All tuned stages of the radio must track and tune to the desired reception frequency. This is in contrast to the modern superheterodyne receiver, which must only tune the receiver's RF front end and local oscillator to the desired frequencies; all the following stages work at a fixed frequency and do not depend on the desired reception frequency. Antique TRF receivers can often be identified by their cabinets. They typically have a long, low appearance, with a flip-up lid for access to the vacuum tubes and tuned circuits. On their front panels there are typically two or three large dials, each controlling the tuning for one stage. Inside, along with several vacuum tubes, there will be a series of large coils. These are usually mounted with their axes at right angles to each other to reduce magnetic coupling between them. A problem with the TRF receiver built with triode vacuum tubes
https://en.wikipedia.org/wiki/555%20timer%20IC
The 555 timer IC is an integrated circuit (chip) used in a variety of timer, delay, pulse generation, and oscillator applications. Derivatives provide two (556) or four (558) timing circuits in one package. The design was first marketed in 1972 by Signetics and used bipolar junction transistors. Since then, numerous companies have made the original timers and later similar low-power CMOS timers. As of 2017, it was estimated that over a billion 555 timers are produced annually, and the design has been called "probably the most popular integrated circuit ever made". History The timer IC was designed in 1971 by Hans Camenzind under contract to Signetics. In 1968, he was hired by Signetics to develop a phase-locked loop (PLL) IC. He designed an oscillator for PLLs such that the frequency did not depend on the power supply voltage or temperature. Signetics subsequently laid off half of its employees due to the 1970 recession, and development on the PLL was thus frozen. Camenzind proposed the development of a universal circuit based on the oscillator for PLLs and asked that he develop it alone, borrowing equipment from Signetics instead of having his pay cut in half. Camenzind's idea was originally rejected, since other engineers argued the product could be built from existing parts sold by the company; however, the marketing manager approved the idea. The first design for the 555 was reviewed in the summer of 1971. After this design was tested and found to be without errors, Camenzind got the idea of using a direct resistance instead of a constant current source, finding that it worked satisfactorily. The design change decreased the required external pins from 9 to 8, so the IC could fit in an 8-pin package instead of a 14-pin package. This revised version passed a second design review, and the prototypes were completed in October 1971 as the NE555V (plastic DIP) and SE555T (metal TO-5). The 9-pin version had already been released by another company founded by
https://en.wikipedia.org/wiki/Secure%20attention%20key
A secure attention key (SAK) or secure attention sequence (SAS) is a special key or key combination to be pressed on a computer keyboard before a login screen which must, to the user, be completely trustworthy. The operating system kernel, which interacts directly with the hardware, is able to detect whether the secure attention key has been pressed. When this event is detected, the kernel starts the trusted login processing. The secure attention key is designed to make login spoofing impossible, as the kernel will suspend any program, including those masquerading as the computer's login process, before starting a trustable login operation. Examples Some examples are: Ctrl+Alt+Del for Windows NT. Alt+SysRq+K, the default sequence for Linux; not a true C2-compliant SAK. A dedicated key sequence for PLATO IV in the 1970s. See also Control-Alt-Delete Magic SysRq key Break key References Computer security procedures Computer access control
https://en.wikipedia.org/wiki/DLX
The DLX (pronounced "Deluxe") is a RISC processor architecture designed by John L. Hennessy and David A. Patterson, the principal designers of the Stanford MIPS and the Berkeley RISC designs respectively, the two benchmark examples of RISC design (the name RISC itself comes from the Berkeley design). The DLX is essentially a cleaned-up, modernized and simplified Stanford MIPS CPU. The DLX has a simple 32-bit load/store architecture, somewhat unlike the modern MIPS architecture CPU. As the DLX was intended primarily for teaching purposes, the DLX design is widely used in university-level computer architecture courses.

There are two known "softcore" hardware implementations: ASPIDA and VAMP. The ASPIDA project resulted in a core with several notable features: it is open source, supports Wishbone, has an asynchronous design, supports multiple ISAs, and is ASIC-proven. VAMP is a DLX variant that was mathematically verified as part of the Verisoft project. It was specified with PVS, implemented in Verilog, and runs on a Xilinx FPGA. A full stack from compiler to kernel to TCP/IP was built on it.

History

In the Stanford MIPS architecture, one of the methods used to gain performance was to force all instructions to complete in one clock cycle. This forced compilers to insert "no-ops" in cases where an instruction would definitely take longer than one clock cycle. Input and output activities (like memory accesses) specifically forced this behaviour, leading to artificial program bloat. In general, MIPS programs were forced to contain many wasteful NOP instructions, an unintended consequence of the design. The DLX architecture does not force single-clock-cycle execution, and is therefore immune to this problem. In the DLX design a more modern approach to handling long instructions was used: data forwarding and instruction reordering. In this case the longer instructions are "stalled" in their functional units, and then re-inserted into the instruction stream when they can complete. Ex
https://en.wikipedia.org/wiki/Glass%20cockpit
A glass cockpit is an aircraft cockpit that features electronic (digital) flight instrument displays, typically large LCD screens, rather than the traditional style of analog dials and gauges. While a traditional cockpit relies on numerous mechanical gauges (nicknamed "steam gauges") to display information, a glass cockpit uses several multi-function displays driven by flight management systems, that can be adjusted to display flight information as needed. This simplifies aircraft operation and navigation and allows pilots to focus only on the most pertinent information. They are also popular with airline companies as they usually eliminate the need for a flight engineer, saving costs. In recent years the technology has also become widely available in small aircraft. As aircraft displays have modernized, the sensors that feed them have modernized as well. Traditional gyroscopic flight instruments have been replaced by electronic attitude and heading reference systems (AHRS) and air data computers (ADCs), improving reliability and reducing cost and maintenance. GPS receivers are usually integrated into glass cockpits. Early glass cockpits, found in the McDonnell Douglas MD-80, Boeing 737 Classic, ATR 42, ATR 72 and in the Airbus A300-600 and A310, used electronic flight instrument systems (EFIS) to display attitude and navigational information only, with traditional mechanical gauges retained for airspeed, altitude, vertical speed, and engine performance. The Boeing 757 and 767-200/-300 introduced an electronic engine-indicating and crew-alerting system (EICAS) for monitoring engine performance while retaining mechanical gauges for airspeed, altitude and vertical speed. Later glass cockpits, found in the Boeing 737NG, 747-400, 767-400, 777, Airbus A320, later Airbuses, Ilyushin Il-96 and Tupolev Tu-204 have completely replaced the mechanical gauges and warning lights in previous generations of aircraft. While glass cockpit-equipped aircraft throughout the late 20t
https://en.wikipedia.org/wiki/Tetration
In mathematics, tetration (or hyper-4) is an operation based on iterated, or repeated, exponentiation. There is no standard notation for tetration, though Knuth's up-arrow notation a↑↑n and the left-exponent ^n a are common. Under the definition as repeated exponentiation, ^n a means a^(a^(⋯^a)), where n copies of a are iterated via exponentiation, right-to-left, i.e. the application of exponentiation n − 1 times. n is called the "height" of the function, while a is called the "base," analogous to exponentiation. ^n a would be read as "the nth tetration of a". It is the next hyperoperation after exponentiation, but before pentation. The word was coined by Reuben Louis Goodstein from tetra- (four) and iteration. Tetration is also defined recursively, as a↑↑n = a^(a↑↑(n − 1)) with a↑↑0 = 1, allowing for attempts to extend tetration to non-natural numbers such as real and complex numbers. The two inverses of tetration are called super-root and super-logarithm, analogous to the nth root and the logarithmic functions. None of the three functions are elementary. Tetration is used for the notation of very large numbers.

Introduction

The first four hyperoperations are shown here, with tetration being considered the fourth in the series. The unary operation succession, defined as a′ = a + 1, is considered to be the zeroth operation.

Addition: a + n is n copies of 1 added to a, combined by succession.
Multiplication: a × n is n copies of a, combined by addition.
Exponentiation: a^n is n copies of a, combined by multiplication.
Tetration: ^n a is n copies of a, combined by exponentiation, right-to-left.

Note that nested exponents are conventionally interpreted from the top down: a^(b^c) means a^(b^c) and not (a^b)^c. Succession, a′ = a + 1, is the most basic operation; while addition (a + n) is a primary operation, for addition of natural numbers it can be thought of as a chained succession of n successors of a; multiplication (a × n) is also a primary operation, though for natural numbers it can analogously be thought of as a chained addition involving n numbers of a. Exponentiation can be thought of as a chained multiplication involving n numbers of a, and tetra
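For natural heights, the definition as right-to-left repeated exponentiation translates directly into a few lines of Python (a sketch of the recursion above, using the convention that height 0 gives 1):

```python
def tetration(a, n):
    """n-th tetration of a: n copies of a combined by exponentiation,
    applied right-to-left, with the convention tetration(a, 0) == 1."""
    result = 1
    for _ in range(n):
        result = a ** result
    return result

# Right-to-left grouping matters: the 4th tetration of 2 is
# 2**(2**(2**2)) = 65536, whereas left-to-right grouping
# ((2**2)**2)**2 would give only 256.
```

Because each step exponentiates the previous result, values explode quickly: already the 5th tetration of 2 has nearly twenty thousand decimal digits.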
https://en.wikipedia.org/wiki/Polylogarithm
In mathematics, the polylogarithm (also known as Jonquière's function, for Alfred Jonquière) is a special function Li_s(z) of order s and argument z. Only for special values of s does the polylogarithm reduce to an elementary function such as the natural logarithm or a rational function. In quantum statistics, the polylogarithm function appears as the closed form of integrals of the Fermi–Dirac distribution and the Bose–Einstein distribution, and is also known as the Fermi–Dirac integral or the Bose–Einstein integral. In quantum electrodynamics, polylogarithms of positive integer order arise in the calculation of processes represented by higher-order Feynman diagrams. The polylogarithm function is equivalent to the Hurwitz zeta function — either function can be expressed in terms of the other — and both functions are special cases of the Lerch transcendent. Polylogarithms should not be confused with polylogarithmic functions, nor with the offset logarithmic integral Li(z), which has the same notation without the subscript.

The polylogarithm function is defined by a power series in z, which is also a Dirichlet series in s:

Li_s(z) = Σ_{k=1}^{∞} z^k / k^s = z + z^2/2^s + z^3/3^s + ⋯

This definition is valid for arbitrary complex order s and for all complex arguments z with |z| < 1; it can be extended to |z| ≥ 1 by the process of analytic continuation. (Here the denominator k^s is understood as exp(s ln k).) The special case s = 1 involves the ordinary natural logarithm, Li_1(z) = −ln(1 − z), while the special cases s = 2 and s = 3 are called the dilogarithm (also referred to as Spence's function) and trilogarithm respectively. The name of the function comes from the fact that it may also be defined as the repeated integral of itself:

Li_{s+1}(z) = ∫_0^z Li_s(t)/t dt,

thus the dilogarithm is an integral of a function involving the logarithm, and so on. For nonpositive integer orders s, the polylogarithm is a rational function.

Properties

In the case where the order s is an integer, it will be represented by Li_n(z) (or Li_{−n}(z) when negative). It is often convenient to define μ = ln(z), where ln(z) is the principal branch of the complex logarithm, so that −π < Im(μ) ≤ π. Also, all e
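Since the defining series converges for |z| < 1, a direct partial sum suffices to check small cases numerically, for example the reduction Li_1(z) = −ln(1 − z) (a sketch; the truncation length is arbitrary):

```python
import math

def polylog(s, z, terms=200):
    """Partial sum of the defining series Li_s(z) = sum_{k>=1} z**k / k**s.

    Valid for |z| < 1; for fixed nonnegative s the truncation error is
    on the order of |z|**terms, negligible here for z = 1/2.
    """
    return sum(z**k / k**s for k in range(1, terms + 1))

# Li_1(z) reduces to the ordinary natural logarithm: -ln(1 - z)
assert abs(polylog(1, 0.5) - math.log(2)) < 1e-12
# Li_0(z) is the rational function z / (1 - z)
assert abs(polylog(0, 0.5) - 1.0) < 1e-12
```

The dilogarithm case s = 2 can be checked the same way against the known closed form Li_2(1/2) = π²/12 − (ln 2)²/2.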
https://en.wikipedia.org/wiki/Nikolay%20Yakovlevich%20Sonin
Nikolay Yakovlevich Sonin (Russian: Никола́й Я́ковлевич Со́нин, February 22, 1849 – February 27, 1915) was a Russian mathematician. Biography He was born in Tula and attended Lomonosov University, studying mathematics and physics there from 1865 to 1869. His advisor was Nikolai Bugaev. He obtained a master's degree with a thesis submitted in 1871, then he taught at the University of Warsaw where he obtained a doctorate in 1874. He was appointed to a chair in the University of Warsaw in 1876. In 1894, Sonin moved to St. Petersburg, where he taught at the University for Women. Sonin worked on special functions, in particular cylindrical functions. For instance, the Sonine formula is a formula given by Sonin for the integral of the product of three Bessel functions. He is furthermore credited with the introduction of the associated Laguerre polynomials. He also contributed to the Euler–Maclaurin summation formula. Other topics Sonin studied include Bernoulli polynomials and approximate computation of definite integrals, continuing Chebyshev's work on numerical integration. Together with Andrey Markov, Sonin prepared a two volume edition of Chebyshev's works in French and Russian. He died in St. Petersburg. References External links 1849 births 1915 deaths Moscow State University alumni University of Warsaw alumni Academic staff of the University of Warsaw Mathematical analysts Mathematicians from the Russian Empire
https://en.wikipedia.org/wiki/Durability
Durability is the ability of a physical product to remain functional, without requiring excessive maintenance or repair, when faced with the challenges of normal operation over its design lifetime. There are several measures of durability in use, including years of life, hours of use, and number of operational cycles. In economics, goods with a long usable life are referred to as durable goods.

Requirements for product durability

Product durability is predicated on good repairability and regenerability, in conjunction with maintenance. Every durable product must be capable of adapting to technical, technological and design developments. This must be accompanied by a willingness on the part of consumers to forgo having the "very latest" version of a product. In the United Kingdom, durability as a characteristic relating to the quality of goods that can be demanded by consumers was not clearly established until a 1994 amendment of the Sale of Goods Act 1979 relating to the quality standards for supplied goods.

Product life spans and sustainable consumption

The lifespan of household goods is a significant factor in sustainable consumption. Longer product life spans can contribute to eco-efficiency and sufficiency, thus slowing consumption in order to progress towards a sustainable level of consumption. Cooper (2005) proposed a model to demonstrate the crucial role of product lifespans in sustainable production and consumption.

Types of durability

Durability can encompass several specific physical properties of designed products, including: Ageing (of polymers), Dust resistance, Resistance to fatigue, Fire resistance, Radiation hardening, Thermal resistance, Rot-proofing, Rustproofing, Toughness, Waterproofing.

See also

Availability, Consumables, Disposable product, Durable good, Interchangeable parts, Maintainability, Product life, Product stewardship, Throwaway society, Waste minimization

References

Broad-concept articles
Materials science
Waste minimisation
https://en.wikipedia.org/wiki/Scavenger
Scavengers are animals that consume dead organisms that have died from causes other than predation or have been killed by other predators. While scavenging generally refers to carnivores feeding on carrion, it is also a herbivorous feeding behavior. Scavengers play an important role in the ecosystem by consuming dead animal and plant material. Decomposers and detritivores complete this process, by consuming the remains left by scavengers. Scavengers aid in overcoming fluctuations of food resources in the environment. The process and rate of scavenging is affected by both biotic and abiotic factors, such as carcass size, habitat, temperature, and seasons. Etymology Scavenger is an alteration of scavager, from Middle English skawager meaning "customs collector", from skawage meaning "customs", from Old North French escauwage meaning "inspection", from schauwer meaning "to inspect", of Germanic origin; akin to Old English scēawian and German schauen meaning "to look at", and modern English "show" (with semantic drift). Types of scavengers (animals) Obligate scavenging (subsisting entirely or mainly on dead animals) is rare among vertebrates, due to the difficulty of finding enough carrion without expending too much energy. Well-known invertebrate scavengers of animal material include burying beetles and blowflies, which are obligate scavengers, and yellowjackets. Fly larvae are also common scavengers for organic materials at the bottom of freshwater bodies. For example, Tokunagayusurika akamusi is a species of midge fly whose larvae live as obligate scavengers at the bottom of lakes and whose adults almost never feed and only live up to a few weeks. Most scavenging animals are facultative scavengers that gain most of their food through other methods, especially predation. Many large carnivores that hunt regularly, such as hyenas and jackals, but also animals rarely thought of as scavengers, such as African lions, leopards, and wolves will scavenge if given the
https://en.wikipedia.org/wiki/Bingham%20plastic
In materials science, a Bingham plastic is a viscoplastic material that behaves as a rigid body at low stresses but flows as a viscous fluid at high stress. It is named after Eugene C. Bingham, who proposed its mathematical form. It is used as a common mathematical model of mud flow in drilling engineering, and in the handling of slurries. A common example is toothpaste, which will not be extruded until a certain pressure is applied to the tube; it is then pushed out as a relatively coherent plug.

Explanation

Figure 1 shows a graph of the behaviour of an ordinary viscous (or Newtonian) fluid in red, for example in a pipe. If the pressure at one end of a pipe is increased, this produces a stress on the fluid tending to make it move (called the shear stress), and the volumetric flow rate increases proportionally. However, for a Bingham plastic fluid (in blue), stress can be applied, but the fluid will not flow until a certain value, the yield stress, is reached. Beyond this point the flow rate increases steadily with increasing shear stress. This is roughly the way in which Bingham presented his observation, in an experimental study of paints. These properties allow a Bingham plastic to have a textured surface with peaks and ridges instead of a featureless surface like a Newtonian fluid.

Figure 2 shows the way in which the behaviour is normally presented currently. The graph shows shear stress on the vertical axis and shear rate on the horizontal one. (Volumetric flow rate depends on the size of the pipe; shear rate is a measure of how the velocity changes with distance. It is proportional to flow rate, but does not depend on pipe size.) As before, the Newtonian fluid flows and gives a shear rate for any finite value of shear stress. However, the Bingham plastic again does not exhibit any shear rate (no flow and thus no velocity) until a certain stress is achieved. For the Newtonian fluid the slope of this line is the viscosity, which is the only parameter needed to describe its flow
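The behaviour described above is captured by the Bingham constitutive relation: no shear rate below the yield stress, and a rate proportional to the excess stress above it. A minimal sketch (tau_y denotes the yield stress, mu_p the plastic viscosity):

```python
def bingham_shear_rate(tau, tau_yield, mu_p):
    """Shear rate of a Bingham plastic under shear stress tau.

    Below the yield stress the material behaves as a rigid body
    (shear rate 0); above it, the rate grows linearly with the
    excess stress: (tau - tau_yield) / mu_p.
    """
    if tau <= tau_yield:
        return 0.0
    return (tau - tau_yield) / mu_p

# A Newtonian fluid is the special case tau_yield = 0, for which the
# shear rate is simply tau / mu_p for any positive stress.
```

Plotting shear stress against shear rate for this model reproduces the blue curve of Figure 2: a flat segment up to the yield stress, then a straight line whose slope is the plastic viscosity.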
https://en.wikipedia.org/wiki/Frequency%20mixer
In electronics, a mixer, or frequency mixer, is an electrical circuit that creates new frequencies from two signals applied to it. In its most common application, two signals are applied to a mixer, and it produces new signals at the sum and difference of the original frequencies. Other frequency components may also be produced in a practical frequency mixer. Mixers are widely used to shift signals from one frequency range to another, a process known as heterodyning, for convenience in transmission or further signal processing. For example, a key component of a superheterodyne receiver is a mixer used to move received signals to a common intermediate frequency. Frequency mixers are also used to modulate a carrier signal in radio transmitters. Types The essential characteristic of a mixer is that it produces a component in its output which is the product of the two input signals. Both active and passive circuits can realize mixers. Passive mixers use one or more diodes and rely on their non-linear relation between voltage and current to provide the multiplying element. In a passive mixer, the desired output signal is always of lower power than the input signals. Active mixers use an amplifying device (such as a transistor or vacuum tube) that may increase the strength of the product signal. Active mixers improve isolation between the ports, but may have higher noise and more power consumption. An active mixer can be less tolerant of overload. Mixers may be built of discrete components, may be part of integrated circuits, or can be delivered as hybrid modules. Mixers may also be classified by their topology: An unbalanced mixer, in addition to producing a product signal, allows both input signals to pass through and appear as components in the output. A single balanced mixer is arranged with one of its inputs applied to a balanced (differential) circuit so that either the local oscillator (LO) or signal input (RF) is suppressed at the output, but not both.
https://en.wikipedia.org/wiki/Bypass%20ratio
The bypass ratio (BPR) of a turbofan engine is the ratio between the mass flow rate of the bypass stream to the mass flow rate entering the core. A 10:1 bypass ratio, for example, means that 10 kg of air passes through the bypass duct for every 1 kg of air passing through the core. Turbofan engines are usually described in terms of BPR, which together with engine pressure ratio, turbine inlet temperature and fan pressure ratio are important design parameters. In addition, BPR is quoted for turboprop and unducted fan installations because their high propulsive efficiency gives them the overall efficiency characteristics of very high bypass turbofans. This allows them to be shown together with turbofans on plots which show trends of reducing specific fuel consumption (SFC) with increasing BPR. BPR is also quoted for lift fan installations where the fan airflow is remote from the engine and doesn't physically touch the engine core. Bypass provides a lower fuel consumption for the same thrust, measured as thrust specific fuel consumption (grams/second fuel per unit of thrust in kN using SI units). Lower fuel consumption that comes with high bypass ratios applies to turboprops, using a propeller rather than a ducted fan. High bypass designs are the dominant type for commercial passenger aircraft and both civilian and military jet transports. Business jets use medium BPR engines. Combat aircraft use engines with low bypass ratios to compromise between fuel economy and the requirements of combat: high power-to-weight ratios, supersonic performance, and the ability to use afterburners. Principles If all the gas power from a gas turbine is converted to kinetic energy in a propelling nozzle, the aircraft is best suited to high supersonic speeds. If it is all transferred to a separate big mass of air with low kinetic energy, the aircraft is best suited to zero speed (hovering). For speeds in between, the gas power is shared between a separate airstream and the gas turbine
https://en.wikipedia.org/wiki/DirectPlay
DirectPlay is part of Microsoft's DirectX API. It is a network communication library intended for computer game development, although it can be used for other purposes. DirectPlay is a high-level software interface between applications and communication services that allows games to be connected over the Internet, a modem link, or a network. It features a set of tools that allow players to find game sessions and sites to manage the flow of information between hosts and players. It provides a way for applications to communicate with each other, regardless of the underlying online service or protocol. It also resolves many connectivity issues, such as Network Address Translation (NAT). Like the rest of DirectX, DirectPlay runs in COM and is accessed through component object model (COM) interfaces. By default, DirectPlay uses multi-threaded programming techniques and requires careful thought to avoid the usual threading issues. Since DirectX version 9, this issue can be alleviated at the expense of efficiency. Networking model Under the hood, DirectPlay is built on the User Datagram Protocol (UDP) to allow it speedy communication with other DirectPlay applications. It uses TCP and UDP ports 2300 to 2400 and 47624. DirectPlay sits on layers 4 and 5 of the OSI model. On layer 4, DirectPlay can handle the following tasks if requested by the application: Message ordering, which ensures that data arrives in the same order it was sent. Message reliability, which ensures that data is guaranteed to arrive. Message flow control, which ensures that data is only sent at the rate the receiver can receive it. On layer 5, DirectPlay always handles the following tasks: Connection initiation and termination. Interfaces The primary interfaces (methods of access) for DirectPlay are: IDirectPlay8Server, which allows access to server functionality IDirectPlay8Client, which allows access to client functionality IDirectPlay8Peer, which allows access to peer-to-peer functionality Seco
https://en.wikipedia.org/wiki/Closure%20operator
In mathematics, a closure operator on a set S is a function cl from the power set of S to itself that satisfies the following conditions for all sets X, Y ⊆ S:

X ⊆ cl(X)   (cl is extensive),
X ⊆ Y implies cl(X) ⊆ cl(Y)   (cl is increasing),
cl(cl(X)) = cl(X)   (cl is idempotent).

Closure operators are determined by their closed sets, i.e., by the sets of the form cl(X), since the closure cl(X) of a set X is the smallest closed set containing X. Such families of "closed sets" are sometimes called closure systems or "Moore families". A set together with a closure operator on it is sometimes called a closure space. Closure operators are also called "hull operators", which prevents confusion with the "closure operators" studied in topology.

History

E. H. Moore studied closure operators in his 1910 Introduction to a form of general analysis, whereas the concept of the closure of a subset originated in the work of Frigyes Riesz in connection with topological spaces. Though not formalized at the time, the idea of closure originated in the late 19th century with notable contributions by Ernst Schröder, Richard Dedekind and Georg Cantor.

Examples

The usual set closure from topology is a closure operator. Other examples include the linear span of a subset of a vector space, the convex hull or affine hull of a subset of a vector space, or the lower semicontinuous hull of a function f : X → ℝ ∪ {±∞}, where X is e.g. a normed space, defined implicitly by epi(cl f) = cl(epi f), where epi f is the epigraph of the function f.

The relative interior relint is not a closure operator: although it is idempotent, it is not increasing: if C is a cube in ℝ^n and F is one of its faces, then F ⊆ C, but relint(F) ⊄ relint(C), since relint(F) is a nonempty subset of F while relint(C) is disjoint from F.

In topology, the closure operators are topological closure operators, which must satisfy cl(X₁ ∪ ⋯ ∪ Xₙ) = cl(X₁) ∪ ⋯ ∪ cl(Xₙ) for all n ∈ ℕ. (Note that for n = 0 this gives cl(∅) = ∅.)

In algebra and logic, many closure operators are finitary closure operators, i.e. they satisfy cl(X) = ⋃ { cl(Y) : Y ⊆ X and Y finite }.

In the theory of partially ordered sets, which are important in theoretical computer science, closure operators have
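The three axioms can be checked exhaustively on a concrete finite example, such as the closure of a subset of Z_n under addition mod n (the submonoid it generates — one illustrative choice among many):

```python
from itertools import combinations

def closure_mod(X, n):
    """Smallest subset of {0, ..., n-1} containing X and closed under
    addition mod n. This map is a closure operator on the power set."""
    closed = set(X)
    changed = True
    while changed:
        changed = False
        for a in list(closed):
            for b in list(closed):
                s = (a + b) % n
                if s not in closed:
                    closed.add(s)
                    changed = True
    return frozenset(closed)

# Exhaustively verify the three closure-operator axioms over Z_6.
n = 6
subsets = [frozenset(c) for r in range(n + 1)
           for c in combinations(range(n), r)]
cl = {X: closure_mod(X, n) for X in subsets}
for X in subsets:
    assert X <= cl[X]                      # extensive
    assert cl[cl[X]] == cl[X]              # idempotent
    for Y in subsets:
        if X <= Y:
            assert cl[X] <= cl[Y]          # increasing
```

The closed sets here are exactly the submonoids of Z_6, illustrating how a closure operator is determined by its family of closed sets.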
https://en.wikipedia.org/wiki/GNU%20Multiple%20Precision%20Arithmetic%20Library
GNU Multiple Precision Arithmetic Library (GMP) is a free library for arbitrary-precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers. There are no practical limits to the precision except the ones implied by the available memory (operands may be of up to 2^32−1 bits on 32-bit machines and 2^37 bits on 64-bit machines). GMP has a rich set of functions, and the functions have a regular interface.

The basic interface is for C, but wrappers exist for other languages, including Ada, C++, C#, Julia, .NET, OCaml, Perl, PHP, Python, R, Ruby, and Rust. Prior to 2008, Kaffe, a Java virtual machine, used GMP to support Java built-in arbitrary-precision arithmetic. Shortly after, GMP support was added to GNU Classpath.

The main target applications of GMP are cryptography applications and research, Internet security applications, and computer algebra systems. GMP aims to be faster than any other bignum library for all operand sizes. Some important factors in doing this are:

Using full words as the basic arithmetic type.
Using different algorithms for different operand sizes; algorithms that are faster for very big numbers are usually slower for small numbers.
Highly optimized assembly language code for the most important inner loops, specialized for different processors.

The first GMP release was made in 1991. It is constantly developed and maintained. GMP is part of the GNU project (although its website being off gnu.org may cause confusion), and is distributed under the GNU Lesser General Public License (LGPL). GMP is used for integer arithmetic in many computer algebra systems such as Mathematica and Maple. It is also used in the Computational Geometry Algorithms Library (CGAL). GMP is needed to build the GNU Compiler Collection (GCC).

Examples

Here is an example of C code showing the use of the GMP library to multiply and print large numbers:

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t x, y, result;

    /* initialize the big integers from decimal strings */
    mpz_init_set_str(x, "7612058254738945", 10);
    mpz_init_set_str(y, "9263591128439081", 10);
    mpz_init(result);

    mpz_mul(result, x, y);
    gmp_printf("%Zd * %Zd = %Zd\n", x, y, result);

    /* free the space used by the big integers */
    mpz_clears(x, y, result, NULL);
    return 0;
}
https://en.wikipedia.org/wiki/Key%20distribution%20center
In cryptography, a key distribution center (KDC) is part of a cryptosystem intended to reduce the risks inherent in exchanging keys. KDCs often operate in systems within which some users may have permission to use certain services at some times and not at others.

Security overview

For instance, an administrator may have established a policy that only certain users may back up to tape. Many operating systems can control access to the tape facility via a "system service". If that system service further restricts the tape drive to operate only on behalf of users who can submit a service-granting ticket when they wish to use it, there remains only the task of distributing such tickets to the appropriately permitted users. If the ticket consists of (or includes) a key, one can then term the mechanism which distributes it a KDC. Usually, in such situations, the KDC itself also operates as a system service.

Operation

A typical operation with a KDC involves a request from a user to use some service. The KDC will use cryptographic techniques to authenticate requesting users as themselves. It will also check whether an individual user has the right to access the service requested. If the authenticated user meets all prescribed conditions, the KDC can issue a ticket permitting access.

KDCs mostly operate with symmetric encryption. In most (but not all) cases the KDC shares a key with each of the other parties. The KDC produces a ticket based on a server key. The client receives the ticket and submits it to the appropriate server. The server can verify the submitted ticket and grant access to the user submitting it. Security systems using KDCs include Kerberos. (Actually, Kerberos partitions KDC functionality between two different agents: the AS (Authentication Server) and the TGS (Ticket Granting Service).)

External links

Kerberos Authentication Protocol
Microsoft: Kerberos Key Distribution Center - TechNet
Microsoft: Key Distribution Center - MSDN

Key managem
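The ticket flow above can be sketched in miniature. This is a deliberately simplified toy: a real KDC such as Kerberos encrypts tickets and includes session keys, timestamps and lifetimes, whereas here an HMAC over the ticket contents stands in for the symmetric key shared between the KDC and the service (all names and the key below are invented for illustration):

```python
import hmac
import hashlib

# Key shared between the KDC and the tape-backup service (illustrative).
SERVER_KEY = b"shared-secret-between-kdc-and-tape-service"

def kdc_issue_ticket(user, service):
    """KDC side: after authenticating the user and checking policy,
    issue a ticket the service can verify with its shared key."""
    payload = "{}|{}".format(user, service).encode()
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return payload + b"|" + tag.encode()

def server_verify_ticket(ticket):
    """Service side: recompute the tag over the payload and compare in
    constant time; any tampering invalidates the ticket."""
    payload, _, tag = ticket.rpartition(b"|")
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected.encode(), tag)
```

The client never needs the server key: it merely relays the opaque ticket, which is what lets a single KDC mediate access between many mutually untrusting parties.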
https://en.wikipedia.org/wiki/Bleep%20censor
A bleep censor is the replacement of offensive language or classified information with a beep sound (usually a 1000 Hz tone), used in television and radio.

History

Bleeping has been used for many years as a means of censoring TV and radio programs to remove content not deemed suitable for "family", "daytime", "broadcasting", or "international" viewing, as well as sensitive classified information for security. The bleep censor is a software module, manually operated by a broadcast technician. A bleep is sometimes accompanied by a digital blur, pixelization or box over the speaker's mouth in cases where the removed speech may still be easily understood by lip reading.

On closed caption subtitling, bleeped words are usually represented by "[bleep]", sometimes the phrases "[expletive]", "[beep]", "[censored]", "[explicit]", occasionally hyphens (e.g. abbreviations of the word "fuck" like f—k or f---), and sometimes asterisks or other non-letter symbols (e.g. other abbreviations of "fuck" like ****, f***, f**k, f*ck, f#@k or f#@%), remaining faithful to the audio track. The words "cunt" and "shit" may also be censored in the same manner (e.g. c***, c**t, c*nt, c#@t or c#@% and s***, s**t, sh*t, s#@t or s#@%, respectively). The characters used to denote censorship in text (e.g. p%@k, %$^&, mother f%@$er, bulls%@t or c#@t) are called grawlixes. Where open captions are used (generally in instances where the speaker is not easily understood), a blank is used where the word is bleeped. Occasionally, bleeping is not reflected in the captions, allowing the unedited dialogue to be seen. Sometimes, a "black bar" can be seen for closed caption bleep.

Bleeping is normally only used in unscripted programs – documentaries, radio features, panel games etc. – since scripted drama and
https://en.wikipedia.org/wiki/L0phtCrack
L0phtCrack is a password auditing and recovery application originally produced by Mudge from L0pht Heavy Industries. It is used to test password strength and sometimes to recover lost Microsoft Windows passwords, by using dictionary, brute-force, hybrid attacks, and rainbow tables. The initial version was released in the Spring of 1997. The application was produced by @stake after the L0pht merged with @stake in 2000. @stake was then acquired by Symantec in 2004. Symantec later stopped selling this tool to new customers, citing US Government export regulations, and discontinued support in December 2006. In January 2009, L0phtCrack was acquired by the original authors Zatko, Wysopal, and Rioux from Symantec. L0phtCrack 6 was announced on 11 March 2009 at the SOURCE Boston Conference. L0phtCrack 6 contains support for 64-bit Windows platforms as well as upgraded rainbow tables support. L0phtCrack 7 was released on 30 August 2016, seven years after the previous release. L0phtCrack 7 supports GPU cracking, increasing performance up to 500 times that of previous versions. On April 21, 2020, Terahash announced it had acquired L0phtCrack. Details of the sale were not released. On July 1, 2021 L0pht Holdings, LLC repossessed L0phtCrack after Terahash defaulted on its instalment sale loan. The current owners announced that they were exploring open source options for L0phtcrack. Due to commercial libraries existing within the software this may take some time. On October 17, 2021 L0phtCrack version 7.2.0 was released open-source, with different portions of the software being published under different licenses. References External links L0phtCrack Website L0phtCrack repositories on GitLab Passwords 2015 Keynote 1: Chris Wysopal L0pht Password cracking software Formerly proprietary software Free security software
https://en.wikipedia.org/wiki/LINPACK
LINPACK is a software library for performing numerical linear algebra on digital computers. It was written in Fortran by Jack Dongarra, Jim Bunch, Cleve Moler, and Gilbert Stewart, and was intended for use on supercomputers in the 1970s and early 1980s. It has been largely superseded by LAPACK, which runs more efficiently on modern architectures. LINPACK makes use of the BLAS (Basic Linear Algebra Subprograms) libraries for performing basic vector and matrix operations. The LINPACK benchmarks appeared initially as part of the LINPACK user's manual. The parallel LINPACK benchmark implementation called HPL (High Performance Linpack) is used to benchmark and rank supercomputers for the TOP500 list. World's most powerful computer by year References Benchmarks (computing) Fortran libraries Numerical linear algebra Numerical software
https://en.wikipedia.org/wiki/Zeal%20%28web%29
Zeal was a volunteer-built web directory launched by Brian Goler and Kevin Berk in 1999, and then acquired by LookSmart in October 2000 for $20 million. Zeal combined the work of LookSmart's paid editors with that of volunteers who profiled websites and placed them in a hierarchy of subcategories. The resulting categories and profiles were downloaded at intervals by LookSmart and its partners, other search companies such as MSN, Lycos, and AltaVista, for use in their own systems with or without modification. Paid editors attended to commercial sites and oversaw the voluntary work on non-commercial sites. Volunteers worked under a defined set of Guidelines and were required to pass an introductory-level test on those Guidelines before submitting site profiles or edits. As points and experience were acquired, volunteers could elect to take a further exam which allowed them to "adopt" and create topic categories of special interest. They could then move up the organizational structure from Community Member to Zealot to Expert Zealot, acquiring additional tools and oversight responsibility at each level. Expert Zealots, who could move or delete some whole categories, monitored the day-to-day operations of the non-commercial portion of the directory and acted as mentors to new members. Active volunteers were found in many English-speaking countries (particularly North America, the United Kingdom, India, Australia, and New Zealand) and some other countries such as Spain, Switzerland, and Japan. By March 2003, Zeal had passed the 250,000 listings mark; eventually it passed the 400,000 mark due, in part, to the Zeal Charity Drive contest of October 2003, which saw over $25,000 distributed among prominent charities such as the WWF. After LookSmart's acquisition of Zeal, its internet traffic as measured by Alexa fluctuated considerably; after MSN withdrew from the related partnership, Zeal traffic declined from "usually better than 2000th" (mid-2003) to "about 5000th"
https://en.wikipedia.org/wiki/Fibonacci%20polynomials
In mathematics, the Fibonacci polynomials are a polynomial sequence which can be considered as a generalization of the Fibonacci numbers. The polynomials generated in a similar way from the Lucas numbers are called Lucas polynomials.

Definition

These Fibonacci polynomials are defined by a recurrence relation:

$F_0(x) = 0$, $F_1(x) = 1$, and $F_n(x) = x F_{n-1}(x) + F_{n-2}(x)$ for $n \ge 2$.

The Lucas polynomials use the same recurrence with different starting values:

$L_0(x) = 2$, $L_1(x) = x$, and $L_n(x) = x L_{n-1}(x) + L_{n-2}(x)$ for $n \ge 2$.

They can be defined for negative indices by

$F_{-n}(x) = (-1)^{n-1} F_n(x), \qquad L_{-n}(x) = (-1)^n L_n(x).$

The Fibonacci polynomials form a sequence of orthogonal polynomials.

Examples

The first few Fibonacci polynomials are:

$F_0(x) = 0$
$F_1(x) = 1$
$F_2(x) = x$
$F_3(x) = x^2 + 1$
$F_4(x) = x^3 + 2x$
$F_5(x) = x^4 + 3x^2 + 1$
$F_6(x) = x^5 + 4x^3 + 3x$

The first few Lucas polynomials are:

$L_0(x) = 2$
$L_1(x) = x$
$L_2(x) = x^2 + 2$
$L_3(x) = x^3 + 3x$
$L_4(x) = x^4 + 4x^2 + 2$
$L_5(x) = x^5 + 5x^3 + 5x$

Properties

The degree of $F_n$ is $n - 1$ and the degree of $L_n$ is $n$. The Fibonacci and Lucas numbers are recovered by evaluating the polynomials at $x = 1$; Pell numbers are recovered by evaluating $F_n$ at $x = 2$. The ordinary generating functions for the sequences are:

$\sum_{n=0}^{\infty} F_n(x) t^n = \frac{t}{1 - xt - t^2}, \qquad \sum_{n=0}^{\infty} L_n(x) t^n = \frac{2 - xt}{1 - xt - t^2}.$

The polynomials can be expressed in terms of Lucas sequences as

$F_n(x) = U_n(x, -1), \qquad L_n(x) = V_n(x, -1).$

They can also be expressed in terms of Chebyshev polynomials $T_n(x)$ and $U_n(x)$ as

$F_n(x) = i^{n-1} U_{n-1}\!\left(\tfrac{-ix}{2}\right), \qquad L_n(x) = 2 i^n T_n\!\left(\tfrac{-ix}{2}\right),$

where $i$ is the imaginary unit.

Identities

As particular cases of Lucas sequences, Fibonacci polynomials satisfy a number of identities, such as

$F_{m+n}(x) = F_{m+1}(x) F_n(x) + F_m(x) F_{n-1}(x)$
$F_{n+1}(x) F_{n-1}(x) - F_n(x)^2 = (-1)^n$
$F_{2n}(x) = F_n(x) L_n(x).$

Closed form expressions, similar to Binet's formula, are:

$F_n(x) = \frac{\alpha(x)^n - \beta(x)^n}{\alpha(x) - \beta(x)}, \qquad L_n(x) = \alpha(x)^n + \beta(x)^n,$

where

$\alpha(x) = \frac{x + \sqrt{x^2 + 4}}{2}, \qquad \beta(x) = \frac{x - \sqrt{x^2 + 4}}{2}$

are the solutions (in $t$) of

$t^2 - xt - 1 = 0.$

For Lucas polynomials with $n > 0$, we have

$L_n(x) = \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{n}{n-k} \binom{n-k}{k} x^{n-2k}.$

A relationship between the Fibonacci polynomials and the standard basis polynomials is given by

$x^n = F_{n+1}(x) + \sum_{k=1}^{\lfloor n/2 \rfloor} (-1)^k \left[ \binom{n}{k} - \binom{n}{k-1} \right] F_{n+1-2k}(x).$

For example,

$x^4 = F_5(x) - 3 F_3(x) + 2 F_1(x).$

Combinatorial interpretation

If $F(n,k)$ is the coefficient of $x^k$ in $F_n(x)$, namely

$F_n(x) = \sum_{k=0}^{n-1} F(n,k) x^k,$

then $F(n,k)$ is the number of ways an $n-1$ by 1 rectangle can be tiled with 2 by 1 dominoes and 1 by 1 squares so that exactly $k$ squares are used. Equivalently, $F(n,k)$ is the number of ways of writing $n-1$ as an ordered sum involving only 1 and 2, so that 1 is used exactly $k$ times. For example $F(6,3) = 4$ and 5 can be written in 4 ways, 1+1+1+2, 1+1+2+1, 1+2+1+1, 2+1+1+1, as a sum involving only 1 and 2 with 1 used 3 times. By counting the number of times 1 and 2 are both used in such a sum, it is evident tha
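The defining recurrence translates directly into code. A short illustrative sketch, with each polynomial represented as a list of coefficients in ascending powers of x (the representation is a choice made here, not part of the standard definition):

```python
def fibonacci_polynomials(count):
    """Return the first `count` Fibonacci polynomials as coefficient lists
    (ascending powers of x), built from the recurrence
    F_0 = 0, F_1 = 1, F_n(x) = x*F_{n-1}(x) + F_{n-2}(x)."""
    polys = [[0], [1]]
    while len(polys) < count:
        prev, prev2 = polys[-1], polys[-2]
        shifted = [0] + prev                      # multiply F_{n-1} by x
        padded = prev2 + [0] * (len(shifted) - len(prev2))
        polys.append([a + b for a, b in zip(shifted, padded)])
    return polys[:count]

polys = fibonacci_polynomials(7)
# F_5(x) = x^4 + 3x^2 + 1  ->  ascending coefficients [1, 0, 3, 0, 1]
```

Evaluating each polynomial at x = 1 (summing the coefficients) recovers the Fibonacci numbers, as stated in the Properties section.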
https://en.wikipedia.org/wiki/Loader%20%28computing%29
In computer systems a loader is the part of an operating system that is responsible for loading programs and libraries. It is one of the essential stages in the process of starting a program, as it places programs into memory and prepares them for execution. Loading a program involves either memory-mapping or copying the contents of the executable file containing the program instructions into memory, and then carrying out other preparatory tasks required to make the executable ready to run. Once loading is complete, the operating system starts the program by passing control to the loaded program code. All operating systems that support program loading have loaders, apart from highly specialized computer systems that only have a fixed set of specialized programs. Embedded systems typically do not have loaders, and instead, the code executes directly from ROM or similar. In order to load the operating system itself, as part of booting, a specialized boot loader is used. In many operating systems, the loader resides permanently in memory, though some operating systems that support virtual memory may allow the loader to be located in a region of memory that is pageable. In the case of operating systems that support virtual memory, the loader may not actually copy the contents of executable files into memory, but rather may simply declare to the virtual memory subsystem that there is a mapping between a region of memory allocated to contain the running program's code and the contents of the associated executable file. (See memory-mapped file.) The virtual memory subsystem is then made aware that pages within that region of memory need to be filled on demand if and when program execution actually hits those areas of unfilled memory. This may mean parts of a program's code are not actually copied into memory until they are actually used, and unused code may never be loaded into memory at all. Responsibilities In Unix, the loader is the handler for the system call execv
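The map-instead-of-copy strategy described above can be imitated from user space with Python's standard mmap module: the file's bytes are not read eagerly, but are faulted in by the virtual memory subsystem when touched. This is only an illustration of the concept, not an actual loader:

```python
import mmap
import os
import tempfile

# Create a scratch file standing in for an executable image.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x7fELF-like header" + b"\x00" * 4096)

# "Load" it by mapping rather than copying: pages are brought into
# memory on demand, the way a virtual-memory loader maps program text.
with open(path, "rb") as f:
    image = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    header = image[:4]      # touching the first page faults it in
    image.close()

os.remove(path)
```

A real loader additionally performs relocation, symbol resolution, and protection setup on the mapped pages, none of which is shown here.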
https://en.wikipedia.org/wiki/Loader%20%28equipment%29
A loader is a heavy equipment machine used in construction to move or load materials such as soil, rock, sand, demolition debris, etc. into or onto another type of machinery (such as a dump truck, conveyor belt, feed-hopper, or railroad car). There are many types of loader, which, depending on design and application, are variously called a bucket loader, end loader, front loader, front-end loader, payloader, high lift, scoop, shovel dozer, skid-steer, skip loader, tractor loader or wheel loader. Description A loader is a type of tractor, usually wheeled, sometimes on tracks, that has a front-mounted wide bucket connected to the end of two booms (arms) to scoop up loose material from the ground, such as dirt, sand or gravel, and move it from one place to another without pushing the material across the ground. A loader is commonly used to move a stockpiled material from ground level and deposit it into an awaiting dump truck or into an open trench excavation. The loader assembly may be a removable attachment or permanently mounted. Often the bucket can be replaced with other devices or tools—for example, many can mount forks to lift heavy pallets or shipping containers, and a hydraulically opening "clamshell" bucket allows a loader to act as a light dozer or scraper. The bucket can also be augmented with devices like a bale grappler for handling large bales of hay or straw. Large loaders, such as the Kawasaki 95ZV-2, John Deere 844K, ACR 700K Compact Wheel Loader, Caterpillar 950H, Volvo L120E, Case 921E, or Hitachi ZW310 usually have only a front bucket and are called front loaders, whereas small loader tractors are often also equipped with a small backhoe and are called backhoe loaders or loader backhoes or JCBs, after the company that first claimed to have invented them. Other companies like CASE in America and Whitlock in the UK had been manufacturing excavator loaders well before JCB. The largest loader in the world is LeTourneau L-2350. Currently these la
https://en.wikipedia.org/wiki/Audio%20analysis
Audio analysis refers to the extraction of information and meaning from audio signals for analysis, classification, storage, retrieval, synthesis, etc. The observation mediums and interpretation methods vary, as audio analysis can refer to the human ear and how people interpret the audible sound source, or it can refer to using technology such as an audio analyzer to evaluate other qualities of a sound source such as amplitude, distortion, and frequency response. Once an audio source's information has been observed, it can then be processed for logical, emotional, descriptive, or otherwise relevant interpretation by the user. Natural Analysis The most prevalent form of audio analysis is derived from the sense of hearing. A type of sensory perception that occurs in much of the planet's fauna, audio analysis is a fundamental process of many living beings. Sounds made by the surrounding environment or other living beings provide input to the hearing mechanism, from which the listener's brain can interpret the sound and decide how to respond. Examples of functions include speech, startle response, music listening, and more. An inherent ability of humans, hearing is fundamental in communication across the globe, and the process of assigning meaning and value to speech is a complex but necessary function of the human body. The study of the auditory system has centered largely on mathematics and the analysis of sinusoidal vibrations and sounds. The Fourier transform has been an essential tool in understanding how the human ear processes moving air into sound across the audible frequency range, about 20 to 20,000 Hz. The ear is able to take one complex waveform and resolve it into distinct frequency ranges thanks to structures of the inner ear that are tuned to specific frequency ranges. The initial sensory input is then analyzed further up in the neurological system, where the perception of sound takes place. The audito
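The Fourier analysis mentioned above can be sketched numerically: a discrete Fourier transform decomposes a sampled waveform into its frequency components, which is how an analyzer identifies the dominant frequency of a tone. A toy pure-Python example (a real analyzer would use an optimized FFT library):

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform, returning the magnitude of each
    frequency bin. O(N^2), which is fine for a toy example."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A pure tone occupying exactly bin 5 of a 64-sample window.
n, k0 = 64, 5
tone = [math.sin(2 * math.pi * k0 * t / n) for t in range(n)]
mags = dft_magnitudes(tone)
# Only the positive-frequency half of the spectrum is inspected;
# the upper half mirrors it for real-valued input.
dominant = max(range(n // 2), key=lambda k: mags[k])
```

Here the transform concentrates nearly all the energy in bin 5, mirroring how the ear resolves a complex waveform into frequency ranges.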
https://en.wikipedia.org/wiki/Vito%20Volterra
Vito Volterra (3 May 1860 – 11 October 1940) was an Italian mathematician and physicist, known for his contributions to mathematical biology and integral equations, and one of the founders of functional analysis. Biography Volterra was born in Ancona, then part of the Papal States, into a very poor Jewish family; his father was Abramo Volterra and his mother Angelica Almagià. Abramo Volterra died in 1862 when Vito was two years old. The family moved to Turin, and then to Florence, where he studied at the Dante Alighieri Technical School and the Galileo Galilei Technical Institute. Volterra showed early promise in mathematics before attending the University of Pisa, where he fell under the influence of Enrico Betti, and where he became professor of rational mechanics in 1883. He immediately started work developing his theory of functionals, which led to his interest and later contributions in integral and integro-differential equations. His work is summarised in his book Theory of Functionals and of Integral and Integro-Differential Equations (1930). In 1892, he became professor of mechanics at the University of Turin and then, in 1900, professor of mathematical physics at the University of Rome La Sapienza. Volterra had grown up during the final stages of the Risorgimento, when the Papal States were finally annexed by Italy, and, like his mentor Betti, he was an enthusiastic patriot, being named a senator of the Kingdom of Italy by King Victor Emmanuel III in 1905. In the same year, he began to develop the theory of dislocations in crystals that was later to become important in the understanding of the behaviour of ductile materials. On the outbreak of World War I, already well into his 50s, he joined the Italian Army and worked on the development of airships under Giulio Douhet. He originated the idea of using inert helium rather than flammable hydrogen and made use of his leadership abilities in organising its manufacture. After World War I, Volterra turned h
https://en.wikipedia.org/wiki/Swisscom
Swisscom AG is a major telecommunications provider in Switzerland. Its headquarters are located in Worblaufen near Bern. The Swiss government owns 51 percent of Swisscom AG. According to its own published data, Swisscom holds a market share of 56% for mobile, 50% for broadband and 37% for TV telecommunication in Switzerland. Its Italian subsidiary Fastweb holds a 16% share of private clients and a 29% share of corporate clients in Italian broadband, and is also active in the mobile market. The Swiss telegraph network was first set up in 1852, followed by telephones in 1877. The two networks were combined with the postal service in 1920 to form Postal Telegraph and Telephone (PTT). The Swiss telecommunications market was deregulated in 1997. Telecom PTT was spun off and rebranded Swisscom ahead of a partial privatisation in 1997. The present-day Swisscom owns the protected brand NATEL, which is used and known only in Switzerland. In 2001, 25% of Swisscom Mobile was sold to Vodafone. In 2007, Swisscom acquired a majority stake in Italy's second-biggest telecom company, Fastweb. History Pioneers (1852–1911) Switzerland's entry into the telecommunications era came in 1851, with the passage of legislation giving the Swiss government control over the development of a telegraph network throughout the country. The government's initial plans called for the creation of three primary telegraph lines, as well as a number of secondary networks. In order to build equipment for the system, the government established the Atelier Fédéral de Construction des Télégraphes (Federal Workshop for the Construction of Telegraphs). In July 1852, the first leg of the country's telegraph system—between St. Gallen and Zurich—was operational. By the end of that year, most of the country's main cities had been connected to the telegraph system. In 1855, the network was extended with the first underwater cable, connecting Winkel-Stansstad and Bauen-Flüelen. Night service was also launched that ye
https://en.wikipedia.org/wiki/Broadcast%20address
A broadcast address is a network address used to transmit to all devices connected to a multiple-access communications network. A message sent to a broadcast address may be received by all network-attached hosts. In contrast, a multicast address is used to address a specific group of devices, and a unicast address is used to address a single device. For network layer communications, a broadcast address may be a specific IP address. At the data link layer on Ethernet networks, it is a specific MAC address. IP networking In Internet Protocol version 4 (IPv4) networks, broadcast addresses are special values in the host-identification part of an IP address. The all-ones value was established as the standard broadcast address for networks that support broadcast. This method of using the all-ones address was first proposed by R. Gurwitz and R. Hinden in 1982. The later introduction of subnets and Classless Inter-Domain Routing changed this slightly, so that the all-ones host address of each subnet is that subnet's broadcast address. The broadcast address for any IPv4 host can be obtained by taking the bit complement (bitwise NOT) of the subnet mask and then performing a bitwise OR operation with the host's IP address. A shortcut to this process (for common masks using only 0 and 1 bit placements) is to simply take the host's IP address and set all bits in the host identifier portion of the address (any bit positions which hold a 0 in the subnet mask) to 1. For example, to calculate the broadcast address of the entire IPv4 subnet in the private address space 172.16.0.0/12, which has the subnet mask 255.240.0.0, the broadcast address is computed as 172.16.0.0 bitwise ORed with 0.15.255.255, giving 172.31.255.255. A special definition exists for the IP address 255.255.255.255. It is the broadcast address of the zero network 0.0.0.0/0, which in Internet Protocol standards stands for this network, i.e. the local network. Transmission to this address is limited by definition, in that it is never forwa
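The complement-and-OR calculation described above can be checked with Python's standard ipaddress module; the helper name below is illustrative:

```python
import ipaddress

def broadcast_of(ip, mask):
    """Broadcast address = host IP ORed with the bitwise NOT of the mask."""
    ip_int = int(ipaddress.IPv4Address(ip))
    mask_int = int(ipaddress.IPv4Address(mask))
    host_all_ones = (~mask_int) & 0xFFFFFFFF   # complement of the subnet mask
    return str(ipaddress.IPv4Address(ip_int | host_all_ones))

bcast = broadcast_of("192.168.1.17", "255.255.255.0")
# The stdlib computes the same thing directly from the prefix:
check = str(ipaddress.IPv4Network("192.168.1.0/24").broadcast_address)
```

For a /24 network both paths yield the all-ones host address 192.168.1.255, and the same helper reproduces the /12 example above.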
https://en.wikipedia.org/wiki/Tukwila%20%28processor%29
The Itanium 9300 series, code-named Tukwila, is the generation of Intel's Itanium processor family following Itanium 2 and Montecito. It was released on 8 February 2010. It utilizes both multiple processor cores (multi-core) and SMT techniques. The engineers working on this project were said to be from the DEC Alpha project, specifically those who worked on the Alpha 21464 (EV8), which was focused on SMT. Named for the city of Tukwila, Washington, Tukwila was previously code-named Tanglewood. The original name is also used by the Tanglewood music festival, and Intel renamed the project in late 2003. The processor has two to four cores per die and up to 24 MB of on-die L3 cache. They are the first batch of processors to contain more than 2 billion transistors on a single die. This total is made up as follows:

core logic — 430 million
system interface — 157 million
L3 cache — 1,420 million
I/O logic — 39 million
chip total — 2.046 billion

Die size is 21.5×32.5 mm or 698.75 mm². Xeon compatibility It was originally stated that Tukwila and its associated chipset would bring socket compatibility between Intel's Xeon and Itanium processors, by introducing a new interconnect called Intel QuickPath Interconnect (QuickPath, previously known as Common System Interface or CSI). This endeavor would help reduce product development costs for both Intel and its partners, by allowing for greater reuse of components and manufacturing processes. Tukwila is reported to have four "full" QuickPath links and two "half" links. Whitefield, the first Xeon processor intended to feature QuickPath, suffered significant project delays and was cancelled. The first Xeon MP processor to feature QuickPath is Beckton. The released Itanium 9300-series processors use a separate socket, LGA 1248, which is incompatible with Xeon processors and motherboards. Comparison table Successor The successor is code-named Poulson. It was initially slated for a Q4 2009 release and said to have o
https://en.wikipedia.org/wiki/Montecito%20%28processor%29
Montecito is the code-name of a major release of Intel's Itanium 2 Processor Family (IPF), which implements the Intel Itanium architecture on a dual-core processor. It was officially launched by Intel on July 18, 2006 as the "Dual-Core Intel Itanium 2 processor". According to Intel, Montecito doubles performance versus the previous, single-core Itanium 2 processor, and reduces power consumption by about 20%. It also adds multi-threading capabilities (two threads per core), a greatly expanded cache subsystem (12 MB per core), and silicon support for virtualization. Architectural features and attributes:

Two cores per die.
2-way coarse-grained multithreading per core (not simultaneous). The Montecito flavour of multithreading is dubbed temporal multithreading (TMT), also known as switch-on-event multithreading (SoEMT). The two separate threads do not run simultaneously; instead, the core switches threads on a high-latency event, such as an L3 cache miss, which would otherwise stall execution. By this technique, multi-threaded workloads, including database-like workloads, should improve by 15-35%.
A total of 4 threads per die.
Separate 16 KB instruction L1 and 16 KB data L1 caches per core.
Separate 1 MB instruction L2 and 256 KB data L2 caches per core, with an improved hierarchy.
12 MB L3 cache per core, 24 MB L3 per die.
1.72 billion transistors per die, made up as follows: core logic — 57M (28.5M per core); core caches — 106.5M; 24 MB L3 cache — 1,550M; bus logic & I/O — 6.7M.
Die size is 27.72 mm × 21.5 mm, or 596 mm².
90 nanometer design.
Lower power consumption and thermal dissipation than earlier flagship Itaniums, despite the high transistor count: 75-104 W. This is mainly achieved by applying different types of transistors: by default, slower low-leakage transistors were used, while high-speed, higher-leakage transistors were used where necessary.
Advanced compensation for errors in cache, for reliable operation under mission-critical workloads. This was code-named Pel
https://en.wikipedia.org/wiki/University%20of%20Michigan%20Executive%20System
The University of Michigan Executive System, or UMES, a batch operating system developed at the University of Michigan in 1958, was widely used at many universities. Based on the General Motors Executive System for the IBM 701, UMES was revised to work on the mainframe computers in use at the University of Michigan during this time (IBM 704, 709, and 7090) and to work better for the small student jobs that were expected to be the primary work load at the University. UMES was in use at the University of Michigan until 1967, when MTS was phased in to take advantage of the newer virtual memory time-sharing technology that became available on the IBM System/360 Model 67. Programming languages available FORTRAN MAD (programming language) See also Timeline of operating systems History of IBM mainframe operating systems FORTRAN Monitor System Bell Operating System (BESYS) or Bell Monitor (BELLMON) SHARE Operating System (SOS) IBM 7090/94 IBSYS Compatible Time-Sharing System (CTSS) Michigan Terminal System (MTS) Hardware: IBM 701, IBM 704, IBM 709, IBM 7090 External links University of Michigan Executive System for the IBM 7090 Computer, volumes 1 (General, Utilities, Internal Organization), 2 (Translators), and 3 (Subroutine Libraries), Computing Center, University of Michigan, September 1965, 1050 pp. The IBM 7094 and CTSS, Tom Van Vleck University of Michigan Executive System (UMES) subseries, Computing Center publications, 1965-1999, Bentley Historical Library, University of Michigan, Ann Arbor, Michigan "A Markovian model of the University of Michigan Executive System", James D. Foley, Communications of the ACM, 1967, No.6 Discontinued operating systems University of Michigan 1958 software IBM mainframe operating systems
https://en.wikipedia.org/wiki/Hairy%20ball%20theorem
The hairy ball theorem of algebraic topology (sometimes called the hedgehog theorem in Europe) states that there is no nonvanishing continuous tangent vector field on even-dimensional n-spheres. For the ordinary sphere, or 2‑sphere, if f is a continuous function that assigns a vector in R3 to every point p on a sphere such that f(p) is always tangent to the sphere at p, then there is at least one pole, a point where the field vanishes (a p such that f(p) = 0). The theorem was first proved by Henri Poincaré for the 2-sphere in 1885, and extended to higher even dimensions in 1912 by Luitzen Egbertus Jan Brouwer. The theorem has been expressed colloquially as "you can't comb a hairy ball flat without creating a cowlick" or "you can't comb the hair on a coconut". Counting zeros Every zero of a vector field has a (non-zero) "index", and it can be shown that the sum of all of the indices at all of the zeros must be two, because the Euler characteristic of the 2-sphere is two. Therefore, there must be at least one zero. This is a consequence of the Poincaré–Hopf theorem. In the case of the torus, the Euler characteristic is 0; and it is possible to "comb a hairy doughnut flat". In this regard, it follows that for any compact regular 2-dimensional manifold with non-zero Euler characteristic, any continuous tangent vector field has at least one zero. Application to computer graphics A common problem in computer graphics is to generate a non-zero vector in R3 that is orthogonal to a given non-zero vector. There is no single continuous function that can do this for all non-zero vector inputs. This is a corollary of the hairy ball theorem. To see this, consider the given vector as the radius of a sphere and note that finding a non-zero vector orthogonal to the given one is equivalent to finding a non-zero vector that is tangent to the surface of that sphere where it touches the radius. However, the hairy ball theorem says there exists no continuous function that can do this
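In practice this means every such routine must branch somewhere, since no single continuous formula covers all nonzero inputs. An illustrative sketch of the usual workaround (the function name is hypothetical):

```python
def any_orthogonal(v):
    """Return some nonzero vector orthogonal to the nonzero 3-vector v.
    The branch below is unavoidable in principle: by the hairy ball
    theorem no continuous choice works for every direction of v."""
    x, y, z = v
    # Cross v with whichever coordinate axis it is least aligned with.
    if abs(x) <= abs(y) and abs(x) <= abs(z):
        return (0.0, -z, y)        # proportional to v x e_x
    if abs(y) <= abs(z):
        return (z, 0.0, -x)        # proportional to v x e_y
    return (-y, x, 0.0)            # proportional to v x e_z

# Each result is orthogonal to its input, verified via dot products.
dots = [sum(a * b for a, b in zip(v, any_orthogonal(v)))
        for v in [(1, 0, 0), (0, 2, 0), (0, 0, 3), (1, 1, 1), (-2, 5, 0.5)]]
```

Each branch returns a valid orthogonal vector, but the selection between branches is discontinuous, exactly as the theorem requires.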
https://en.wikipedia.org/wiki/Keith%20number
In recreational mathematics, a Keith number or repfigit number (short for repetitive Fibonacci-like digit) is a natural number $n$ in a given number base $b$ with $k$ digits such that when a sequence is created such that the first $k$ terms are the digits of $n$ and each subsequent term is the sum of the previous $k$ terms, $n$ is part of the sequence. Keith numbers were introduced by Mike Keith in 1987. They are computationally very challenging to find, with only about 100 known. Definition Let $n$ be a natural number, let $k$ be the number of digits of $n$ in base $b$, and let $d_0, d_1, \ldots, d_{k-1}$ be the digits of $n$, most significant first. We define the sequence $S(i)$ by a linear recurrence relation: $S(i) = d_i$ for $0 \le i < k$, and

$S(i) = \sum_{j=1}^{k} S(i-j)$

for $i \ge k$. If there exists an $i$ such that $S(i) = n$, then $n$ is said to be a Keith number. For example, 88 is a Keith number in base 6, as its base-6 representation is 224 and the entire sequence runs 2, 2, 4, 8, 14, 26, 48, 88, so that $S(7) = 88 = n$. Finding Keith numbers Whether or not there are infinitely many Keith numbers in a particular base is currently a matter of speculation. Keith numbers are rare and hard to find. They can be found by exhaustive search, and no more efficient algorithm is known. According to Keith, in base 10, on average roughly one Keith number is expected between successive powers of 10. Known results seem to support this. Examples 14, 19, 28, 47, 61, 75, 197, 742, 1104, 1537, 2208, 2580, 3684, 4788, 7385, 7647, 7909, 31331, 34285, 34348, 55604, 62662, 86935, 93993, 120284, 129106, 147640, 156146, 174680, 183186, 298320, 355419, 694280, 925993, 1084051, 7913837, 11436171, 33445755, 44121607, 129572008, 251133297, ... Other bases In base 2, there exists a method to construct all Keith numbers. The Keith numbers in base 12, written in base 12, are 11, 15, 1Ɛ, 22, 2ᘔ, 31, 33, 44, 49, 55, 62, 66, 77, 88, 93, 99, ᘔᘔ, ƐƐ, 125, 215, 24ᘔ, 405, 42ᘔ, 654, 80ᘔ, 8ᘔ3, ᘔ59, 1022, 1662, 2044, 3066, 4088, 4ᘔ1ᘔ, 4ᘔƐ1, 50ᘔᘔ, 8538, Ɛ18Ɛ, 17256, 18671, 24ᘔ78, 4718Ɛ, 517Ɛᘔ, 157617, 1ᘔ265ᘔ, 5ᘔ4074, 5ᘔƐ140, 6Ɛ1449, 6Ɛ8515, ... where ᘔ represents 10 and Ɛ represents 11. Keith clusters A Keith cluster is a rel
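The exhaustive-search approach described above amounts to running the digit-seeded sequence until it reaches or passes n. A sketch (the function name is illustrative):

```python
def is_keith(n, base=10):
    """Check whether n is a Keith (repfigit) number in the given base."""
    if n < base:                 # single-digit numbers are excluded
        return False
    # Digits of n, most significant first.
    digits = []
    m = n
    while m:
        digits.append(m % base)
        m //= base
    digits.reverse()
    # Extend the Fibonacci-like sequence until it reaches or passes n.
    seq = digits[:]
    while seq[-1] < n:
        seq.append(sum(seq[-len(digits):]))
    return seq[-1] == n

keith_below_100 = [n for n in range(100) if is_keith(n)]
```

The search reproduces the first entries of the example list above, and the same routine confirms the base-6 example of 88 (digits 2, 2, 4), which is not a Keith number in base 10.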
https://en.wikipedia.org/wiki/Householder%20transformation
In linear algebra, a Householder transformation (also known as a Householder reflection or elementary reflector) is a linear transformation that describes a reflection about a plane or hyperplane containing the origin. The Householder transformation was used in a 1958 paper by Alston Scott Householder. Its analogue over general inner product spaces is the Householder operator.

Definition

Transformation: The reflection hyperplane can be defined by its normal vector, a unit vector $v$ (a vector with length $1$) that is orthogonal to the hyperplane. The reflection of a point $x$ about this hyperplane is the linear transformation:

$x \mapsto x - 2 \langle x, v \rangle v = x - 2 v (v^* x),$

where $v$ is given as a column unit vector with conjugate transpose $v^*$.

Householder matrix: The matrix constructed from this transformation can be expressed in terms of an outer product as:

$P = I - 2 v v^*.$

$P$ is known as the Householder matrix, where $I$ is the identity matrix.

Properties

The Householder matrix has the following properties:
it is Hermitian: $P = P^*$,
it is unitary: $P^{-1} = P^*$,
hence it is involutory: $P^2 = I$.

A Householder matrix has eigenvalues $\pm 1$. To see this, notice that if $u$ is orthogonal to the vector $v$ which was used to create the reflector, then $Pu = u$, i.e., $1$ is an eigenvalue of multiplicity $n - 1$, since there are $n - 1$ independent vectors orthogonal to $v$. Also, notice $Pv = -v$, and so $-1$ is an eigenvalue with multiplicity $1$. The determinant of a Householder reflector is $-1$, since the determinant of a matrix is the product of its eigenvalues, in this case one of which is $-1$ with the remainder being $1$ (as in the previous point).

Applications

Geometric optics: In geometric optics, specular reflection can be expressed in terms of the Householder matrix (see ).

Numerical linear algebra: Householder transformations are widely used in numerical linear algebra, for example, to annihilate the entries below the main diagonal of a matrix, to perform QR decompositions and in the first step of the QR algorithm. They are also widely used for transforming to a Hessenberg form. For symmetric or Hermitian m
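The annihilation use can be shown concretely: choosing v from a given real vector x so that the reflection sends x to a multiple of the first basis vector zeroes all of its remaining entries, which is the first step of a Householder QR factorization. A small real-valued sketch:

```python
import math

def householder_reflect(x):
    """Reflect the real vector x onto a multiple of e_1 using the
    Householder map P x = x - 2 v (v . x), without forming P."""
    norm_x = math.sqrt(sum(c * c for c in x))
    # v is proportional to x - alpha*e_1; the sign of alpha is chosen
    # opposite to x[0] to avoid cancellation when forming u.
    alpha = -norm_x if x[0] >= 0 else norm_x
    u = [x[0] - alpha] + list(x[1:])
    norm_u = math.sqrt(sum(c * c for c in u))
    v = [c / norm_u for c in u]
    vdotx = sum(a * b for a, b in zip(v, x))
    return [a - 2 * v_i * vdotx for a, v_i in zip(x, v)]

y = householder_reflect([3.0, 4.0])   # norm 5, so y should be (-5, 0)
```

Applied column by column to a matrix, this construction yields the triangular factor of a QR decomposition; libraries such as LAPACK implement exactly this idea in optimized form.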
https://en.wikipedia.org/wiki/Advanced%20Boolean%20Expression%20Language
The Advanced Boolean Expression Language (ABEL) is an obsolete hardware description language (HDL) and an associated set of design tools for programming programmable logic devices (PLDs). It was created in 1983 by Data I/O Corporation, in Redmond, Washington. ABEL includes both concurrent equation and truth table logic formats as well as a sequential state machine description format. A preprocessor with syntax loosely based on Digital Equipment Corporation's MACRO-11 assembly language is also included. In addition to being used for describing digital logic, ABEL may also be used to describe test vectors (patterns of inputs and expected outputs) that may be downloaded to a hardware PLD programmer along with the compiled and fuse-mapped PLD programming data. Other PLD design languages originating in the same era include CUPL and PALASM. Since the advent of larger field-programmable gate arrays (FPGAs), PLD-specific HDLs have fallen out of favor as standard HDLs such as Verilog and VHDL gained adoption. The ABEL concept and original compiler were created by Russell de Pina of Data I/O's Applied Research Group in 1981. The work was continued by the ABEL product development team (led by Dr. Kyu Y. Lee), which included Mary Bailey, Bjorn Benson, Walter Bright, Michael Holley, Charles Olivier, and David Pellerin. After a series of acquisitions, the ABEL toolchain and intellectual property were bought by Xilinx. Xilinx discontinued support for ABEL in its ISE Design Suite starting with version 11 (released in 2010). References External links University of Pennsylvania's ABEL primer, as recommended by Walter Bright. University of Southern Maine ABEL-HDL Primer, by J. Van der Spiegel Hardware description languages