Dynamic random-access memory (dynamic RAM or DRAM) is a type of random-access semiconductor memory that stores each bit of data in a memory cell, usually consisting of a tiny capacitor and a transistor, both typically based on metal–oxide–semiconductor (MOS) technology. While most DRAM memory cell designs use a capacitor and transistor, some only use two transistors. In the designs where a capacitor is used, the capacitor can either be charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. The electric charge on the capacitors gradually leaks away; without intervention the data on the capacitor would soon be lost. To prevent this, DRAM requires an external memory refresh circuit which periodically rewrites the data in the capacitors, restoring them to their original charge. This refresh process is the defining characteristic of dynamic random-access memory, in contrast to static random-access memory (SRAM), which does not require data to be refreshed. Unlike flash memory, DRAM is volatile memory (vs. non-volatile memory), since it loses its data quickly when power is removed. However, DRAM does exhibit limited data remanence.

DRAM typically takes the form of an integrated circuit chip, which can consist of dozens to billions of DRAM memory cells. DRAM chips are widely used in digital electronics where low-cost and high-capacity computer memory is required. One of the largest applications for DRAM is the main memory (colloquially called the RAM) in modern computers and graphics cards (where the main memory is called the graphics memory). It is also used in many portable devices and video game consoles. In contrast, SRAM, which is faster and more expensive than DRAM, is typically used where speed is of greater concern than cost and size, such as the cache memories in processors.

The need to refresh DRAM demands more complicated circuitry and timing than SRAM. This complexity is offset by the structural simplicity of DRAM memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities with a simultaneous reduction in cost per bit. Refreshing the data consumes power, so a variety of techniques are used to manage the overall power consumption. For this reason, DRAM usually needs to operate with a memory controller; the memory controller needs to know DRAM parameters, especially memory timings, to initialize DRAMs, and these may differ between DRAM manufacturers and part numbers.

DRAM had a 47% increase in price-per-bit in 2017, the largest jump in 30 years since the 45% jump in 1988, while in recent years the price has been going down.[3] In 2018, a "key characteristic of the DRAM market is that there are currently only three major suppliers — Micron Technology, SK Hynix and Samsung Electronics" that are "keeping a pretty tight rein on their capacity".[4] There is also Kioxia (previously Toshiba Memory Corporation after its 2017 spin-off), which doesn't manufacture DRAM. Other manufacturers make and sell DIMMs (but not the DRAM chips in them), such as Kingston Technology, and some manufacturers sell stacked DRAM (used e.g. in the fastest supercomputers on the exascale) separately, such as Viking Technology. Others sell it integrated into other products, such as Fujitsu into its CPUs, AMD in GPUs, and Nvidia, with HBM2 in some of their GPU chips.

The cryptanalytic machine code-named Aquarius used at Bletchley Park during World War II incorporated a hard-wired dynamic memory.
Paper tape was read and the characters on it "were remembered in a dynamic store. ... The store used a large bank of capacitors, which were either charged or not, a charged capacitor representing cross (1) and an uncharged capacitor dot (0). Since the charge gradually leaked away, a periodic pulse was applied to top up those still charged (hence the term 'dynamic')".[5]

In November 1965, Toshiba introduced a bipolar dynamic RAM for its electronic calculator Toscal BC-1411.[6][7][8] In 1966, Tomohisa Yoshimaru and Hiroshi Komikawa from Toshiba applied for a Japanese patent on a memory circuit composed of several transistors and a capacitor; in 1967 they applied for a patent in the US.[9]

The earliest forms of DRAM mentioned above used bipolar transistors. While bipolar DRAM offered improved performance over magnetic-core memory, it could not compete with the lower price of the then-dominant magnetic-core memory.[10] Capacitors had also been used for earlier memory schemes, such as the drum of the Atanasoff–Berry Computer, the Williams tube and the Selectron tube.

In 1966, Dr. Robert Dennard invented the modern DRAM architecture, in which there is a single MOS transistor per capacitor,[11] at the IBM Thomas J. Watson Research Center, while he was working on MOS memory and trying to create an alternative to SRAM, which required six MOS transistors for each bit of data. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of the single-transistor MOS DRAM memory cell.[12] He filed a patent in 1967, and was granted U.S. patent number 3,387,286 in 1968.[13] MOS memory offered higher performance, was cheaper, and consumed less power than magnetic-core memory.[14] The patent describes the invention: "Each cell is formed, in one embodiment, using a single field-effect transistor and a single capacitor."[15]

MOS DRAM chips were commercialized in 1969 by Advanced Memory Systems, Inc. of Sunnyvale, CA. This 1024-bit chip was sold to Honeywell, Raytheon, Wang Laboratories, and others. The same year, Honeywell asked Intel to make a DRAM using a three-transistor cell that they had developed. This became the Intel 1102 in early 1970.[16] However, the 1102 had many problems, prompting Intel to begin work on their own improved design, in secrecy to avoid conflict with Honeywell. This became the first commercially available DRAM, the Intel 1103, in October 1970, despite initial problems with low yield until the fifth revision of the masks. The 1103 was designed by Joel Karp and laid out by Pat Earhart. The masks were cut by Barbara Maness and Judy Garcia.[17][original research?] MOS memory overtook magnetic-core memory as the dominant memory technology in the early 1970s.[14]

The first DRAM with multiplexed row and column address lines was the Mostek MK4096 4 Kbit DRAM designed by Robert Proebsting and introduced in 1973. This addressing scheme uses the same address pins to receive the low half and the high half of the address of the memory cell being referenced, switching between the two halves on alternating bus cycles. This was a radical advance, effectively halving the number of address lines required, which enabled it to fit into packages with fewer pins, a cost advantage that grew with every jump in memory size. The MK4096 proved to be a very robust design for customer applications.
At the 16 Kbit density, the cost advantage increased; the 16 Kbit Mostek MK4116 DRAM,[18][19] introduced in 1976, achieved greater than 75% worldwide DRAM market share. However, as density increased to 64 Kbit in the early 1980s, Mostek and other US manufacturers were overtaken by Japanese DRAM manufacturers, which dominated the US and worldwide markets during the 1980s and 1990s.

Early in 1985, Gordon Moore decided to withdraw Intel from producing DRAM.[20] By 1986, many, but not all, United States chip makers had stopped making DRAMs.[21] Micron Technology and Texas Instruments continued to produce them commercially, and IBM produced them for internal use.

In 1985, when 64K DRAM memory chips were the most common memory chips used in computers, and when more than 60 percent of those chips were produced by Japanese companies, semiconductor makers in the United States accused Japanese companies of export dumping for the purpose of driving makers in the United States out of the commodity memory chip business. Prices for the 64K product plummeted to as low as 35 cents apiece from $3.50 within 18 months, with disastrous financial consequences for some U.S. firms. On 4 December 1985 the US Commerce Department's International Trade Administration ruled in favor of the complaint.[22][23][24][25]

Synchronous dynamic random-access memory (SDRAM) was developed by Samsung. The first commercial SDRAM chip was the Samsung KM48SL2000, which had a capacity of 16 Mb,[26] and was introduced in 1992.[27] The first commercial DDR SDRAM (double data rate SDRAM) memory chip was Samsung's 64 Mb DDR SDRAM chip, released in 1998.[28]

Later, in 2001, Japanese DRAM makers accused Korean DRAM manufacturers of dumping.[29][30][31][32] In 2002, US computer makers made claims of DRAM price fixing.

DRAM is usually arranged in a rectangular array of charge storage cells consisting of one capacitor and transistor per data bit. The figure to the right shows a simple example with a four-by-four cell matrix. Some DRAM matrices are many thousands of cells in height and width.[33][34]

The long horizontal lines connecting each row are known as word-lines. Each column of cells is composed of two bit-lines, each connected to every other storage cell in the column (the illustration to the right does not include this important detail). They are generally known as the + and − bit-lines. A sense amplifier is essentially a pair of cross-connected inverters between the bit-lines. The first inverter is connected with input from the + bit-line and output to the − bit-line. The second inverter's input is from the − bit-line with output to the + bit-line. This results in positive feedback which stabilizes after one bit-line is fully at its highest voltage and the other bit-line is at the lowest possible voltage.

To store data, a row is opened and a given column's sense amplifier is temporarily forced to the desired high or low-voltage state, thus causing the bit-line to charge or discharge the cell storage capacitor to the desired value. Due to the sense amplifier's positive feedback configuration, it will hold a bit-line at a stable voltage even after the forcing voltage is removed.
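The sense-amplifier behaviour just described (a tiny charge-sharing signal, then positive feedback driving the pair to the rails) can be made concrete with a small numerical model. The following C sketch is illustrative only: the supply voltage, capacitances, and per-step gain are assumptions chosen for readability, not figures from the text or any datasheet.

    #include <stdio.h>

    int main(void) {
        const double VDD    = 1.8;      /* supply voltage (assumed) */
        const double C_cell = 25e-15;   /* storage capacitor, 25 fF (assumed) */
        const double C_bit  = 250e-15;  /* bitline capacitance, ~10x the cell */

        /* Cell stores a one (VDD across it); bitline precharged to VDD/2.
           Charge sharing moves the bitline only slightly. */
        double v_bit = (C_bit * (VDD / 2) + C_cell * VDD) / (C_bit + C_cell);
        double v_ref = VDD / 2;         /* the paired reference bitline */
        printf("bitline after charge sharing: %.4f V (delta %.1f mV)\n",
               v_bit, (v_bit - v_ref) * 1000);

        /* Cross-coupled inverters: each step drives the two bitlines
           further apart, until the pair latches at the rails. */
        for (int i = 0; i < 40; i++) {
            double d = v_bit - v_ref;
            v_bit += 0.5 * d;           /* crude gain per step */
            v_ref -= 0.5 * d;
            if (v_bit > VDD) v_bit = VDD;
            if (v_ref < 0.0) v_ref = 0.0;
            if (v_bit == VDD && v_ref == 0.0) break;
        }
        printf("after sensing: + bitline %.2f V, - bitline %.2f V\n",
               v_bit, v_ref);
        return 0;
    }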
During a write to a particular cell, all the columns in a row are sensed simultaneously just as during reading, so although only a single column's storage-cell capacitor charge is changed, the entire row is refreshed (written back in), as illustrated in the figure to the right.[35]

Typically, manufacturers specify that each row must be refreshed every 64 ms or less, as defined by the JEDEC standard. Some systems refresh every row in a burst of activity involving all rows every 64 ms. Other systems refresh one row at a time, staggered throughout the 64 ms interval. For example, a system with 2^13 = 8,192 rows would require a staggered refresh rate of one row every 7.8 μs, which is 64 ms divided by 8,192 rows. A few real-time systems refresh a portion of memory at a time determined by an external timer function that governs the operation of the rest of a system, such as the vertical blanking interval that occurs every 10–20 ms in video equipment.

The row address of the row that will be refreshed next is maintained by external logic or a counter within the DRAM. A system that provides the row address (and the refresh command) does so to have greater control over when to refresh and which row to refresh. This is done to minimize conflicts with memory accesses, since such a system has knowledge of both the memory access patterns and the refresh requirements of the DRAM. When the row address is supplied by a counter within the DRAM, the system relinquishes control over which row is refreshed and only provides the refresh command. Some modern DRAMs are capable of self-refresh; no external logic is required to instruct the DRAM to refresh or to provide a row address. Under some conditions, most of the data in DRAM can be recovered even if the DRAM has not been refreshed for several minutes.[36]

Many parameters are required to fully describe the timing of DRAM operation. A data sheet published in 1998, for example, tabulates them for two timing grades of asynchronous DRAM.[37]

The generally quoted number is the /RAS low to valid data out time. This is the time to open a row, settle the sense amplifiers, and deliver the selected column data to the output. This is also the minimum /RAS low time, which includes the time for the amplified data to be delivered back to recharge the cells. The time to read additional bits from an open page is much less, defined by the /CAS to /CAS cycle time. The quoted number is the clearest way to compare the performance of different DRAM memories, as it represents the slowest case, regardless of the row length or page size. Bigger arrays necessarily result in larger bit-line capacitance and longer propagation delays, which cause this time to increase, since the sense amplifier settling time depends on both the capacitance and the propagation latency. This is countered in modern DRAM chips by instead integrating many more complete DRAM arrays within a single chip, to accommodate more capacity without becoming too slow.

When such a RAM is accessed by clocked logic, the times are generally rounded up to the nearest clock cycle. For example, when accessed by a 100 MHz state machine (i.e. a 10 ns clock), the 50 ns DRAM can perform the first read in five clock cycles, and additional reads within the same page every two clock cycles. This was generally described as "5-2-2-2" timing, as bursts of four reads within a page were common.

When describing synchronous memory, timing is described by clock cycle counts separated by hyphens.
These numbers represent tCL-tRCD-tRP-tRAS in multiples of the DRAM clock cycle time. Note that the clock rate is half of the data transfer rate when double data rate signaling is used. JEDEC standard PC3200 timing is 3-4-4-8[38] with a 200 MHz clock, while premium-priced high-performance PC3200 DDR DRAM DIMMs might be operated at 2-2-2-5 timing.[39]

Minimum random access time has improved from tRAC = 50 ns to tRCD + tCL = 22.5 ns, and even the premium 20 ns variety is only 2.5 times faster than the asynchronous DRAM. CAS latency has improved even less, from tCAC = 13 ns to 10 ns. However, the DDR3 memory does achieve 32 times higher bandwidth; due to internal pipelining and wide data paths, it can output two words every 1.25 ns (1600 Mword/s), while the EDO DRAM can output one word per tPC = 20 ns (50 Mword/s).

Each bit of data in a DRAM is stored as a positive or negative electrical charge in a capacitive structure. The structure providing the capacitance, as well as the transistors that control access to it, is collectively referred to as a DRAM cell. They are the fundamental building block in DRAM arrays. Multiple DRAM memory cell variants exist, but the most commonly used variant in modern DRAMs is the one-transistor, one-capacitor (1T1C) cell. The transistor is used to admit current into the capacitor during writes, and to discharge the capacitor during reads. The access transistor is designed to maximize drive strength and minimize transistor-transistor leakage (Kenner, p. 34).

The capacitor has two terminals, one of which is connected to its access transistor, and the other to either ground or VCC/2. In modern DRAMs, the latter case is more common, since it allows faster operation. In modern DRAMs, a voltage of +VCC/2 across the capacitor is required to store a logic one, and a voltage of −VCC/2 across the capacitor is required to store a logic zero. The resultant charge is Q = ±(VCC/2)·C, where Q is the charge in coulombs and C is the capacitance in farads.[40]

Reading or writing a logic one requires the word-line be driven to a voltage greater than the sum of VCC and the access transistor's threshold voltage (VTH). This voltage is called VCC pumped (VCCP). The time required to discharge a capacitor thus depends on what logic value is stored in the capacitor. A capacitor containing logic one begins to discharge when the voltage at the access transistor's gate terminal is above VCCP. If the capacitor contains a logic zero, it begins to discharge when the gate terminal voltage is above VTH.[41]

Up until the mid-1980s, the capacitors in DRAM cells were co-planar with the access transistor (they were constructed on the surface of the substrate), and thus were referred to as planar capacitors. The drive to increase both density and, to a lesser extent, performance required denser designs. This was strongly motivated by economics, a major consideration for DRAM devices, especially commodity DRAMs. The minimization of DRAM cell area can produce a denser device and lower the cost per bit of storage. Starting in the mid-1980s, the capacitor was moved above or below the silicon substrate in order to meet these objectives. DRAM cells featuring capacitors above the substrate are referred to as stacked or folded plate capacitors. Those with capacitors buried beneath the substrate surface are referred to as trench capacitors.
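Returning to the timing figures above, the arithmetic is easy to check in code. The following C sketch simply reproduces the numbers quoted in the text: the staggered per-row refresh interval, and the conversion of PC3200 cycle counts to nanoseconds at a 200 MHz command clock.

    #include <stdio.h>

    int main(void) {
        /* Staggered refresh: one row every 64 ms / 8,192 rows. */
        double refresh_us = 64000.0 / 8192.0;
        printf("per-row refresh interval: %.2f us\n", refresh_us);  /* 7.81 */

        /* Convert cycle counts to time at a 200 MHz clock (5 ns/cycle). */
        const double cycle_ns = 1000.0 / 200.0;
        int tCL = 3, tRCD = 4, tRP = 4, tRAS = 8;   /* PC3200 3-4-4-8 */
        printf("tCL  = %d cycles = %.1f ns\n", tCL,  tCL  * cycle_ns);
        printf("tRCD = %d cycles = %.1f ns\n", tRCD, tRCD * cycle_ns);
        printf("tRP  = %d cycles = %.1f ns\n", tRP,  tRP  * cycle_ns);
        printf("tRAS = %d cycles = %.1f ns\n", tRAS, tRAS * cycle_ns);

        /* Random access to a closed bank: precharge, activate, then CAS. */
        printf("tRP + tRCD + tCL = %.1f ns\n", (tRP + tRCD + tCL) * cycle_ns);
        return 0;
    }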
In the 2000s, manufacturers were sharply divided by the type of capacitor used in their DRAMs, and the relative cost and long-term scalability of both designs have been the subject of extensive debate. The majority of DRAMs from major manufacturers such as Hynix, Micron Technology, and Samsung Electronics use the stacked capacitor structure, whereas smaller manufacturers such as Nanya Technology use the trench capacitor structure (Jacob, pp. 355–357).

The capacitor in the stacked capacitor scheme is constructed above the surface of the substrate. The capacitor is constructed from an oxide-nitride-oxide (ONO) dielectric sandwiched between two layers of polysilicon plates (the top plate is shared by all DRAM cells in an IC), and its shape can be a rectangle, a cylinder, or some other more complex shape. There are two basic variations of the stacked capacitor, based on its location relative to the bitline: capacitor-under-bitline (CUB) and capacitor-over-bitline (COB). In the former, the capacitor is underneath the bitline, which is usually made of metal, and the bitline has a polysilicon contact that extends downwards to connect it to the access transistor's source terminal. In the latter, the capacitor is constructed above the bitline, which is almost always made of polysilicon, but is otherwise identical to the CUB variation. The advantage the COB variant possesses is the ease of fabricating the contact between the bitline and the access transistor's source, as it is physically close to the substrate surface. However, this requires the active area to be laid out at a 45-degree angle when viewed from above, which makes it difficult to ensure that the capacitor contact does not touch the bitline. CUB cells avoid this, but suffer from difficulties in inserting contacts in between bitlines, since the size of features this close to the surface is at or near the minimum feature size of the process technology (Kenner, pp. 33–42).

The trench capacitor is constructed by etching a deep hole into the silicon substrate. The substrate volume surrounding the hole is then heavily doped to produce a buried n+ plate with low resistance. A layer of oxide-nitride-oxide dielectric is grown or deposited, and finally the hole is filled by depositing doped polysilicon, which forms the top plate of the capacitor. The top of the capacitor is connected to the access transistor's drain terminal via a polysilicon strap (Kenner, pp. 42–44). A trench capacitor's depth-to-width ratio in DRAMs of the mid-2000s can exceed 50:1 (Jacob, p. 357).

Trench capacitors have numerous advantages. Since the capacitor is buried in the bulk of the substrate instead of lying on its surface, the area it occupies can be minimized to what is required to connect it to the access transistor's drain terminal without decreasing the capacitor's size, and thus capacitance (Jacob, pp. 356–357). Alternatively, the capacitance can be increased by etching a deeper hole without any increase to surface area (Kenner, p. 44). Another advantage of the trench capacitor is that its structure lies under the layers of metal interconnect, allowing them to be more easily made planar, which enables it to be integrated in a logic-optimized process technology, which has many levels of interconnect above the substrate. The fact that the capacitor is under the logic means that it is constructed before the transistors are. This allows high-temperature processes to fabricate the capacitors, which would otherwise degrade the logic transistors and their performance.
This makes trench capacitors suitable for constructing embedded DRAM (eDRAM) (Jacob, p. 357). Disadvantages of trench capacitors are difficulties in reliably constructing the capacitor's structures within deep holes and in connecting the capacitor to the access transistor's drain terminal (Kenner, p. 44).

First-generation DRAM ICs (those with capacities of 1 Kbit), such as the archetypical Intel 1103, used a three-transistor, one-capacitor (3T1C) DRAM cell with separate read and write circuitry. The write wordline drove a write transistor which connected the capacitor to the write bitline just as in the 1T1C cell, but there was a separate read wordline and read transistor which connected an amplifier transistor to the read bitline. By the second generation, the drive to reduce cost by fitting the same number of bits in a smaller area led to the almost universal adoption of the 1T1C DRAM cell, although a couple of devices with 4 and 16 Kbit capacities continued to use the 3T1C cell for performance reasons (Kenner, p. 6). These performance advantages included, most significantly, the ability to read the state stored by the capacitor without discharging it, avoiding the need to write back what was read out (non-destructive read). A second performance advantage relates to the 3T1C cell's separate transistors for reading and writing; the memory controller can exploit this feature to perform atomic read-modify-writes, where a value is read, modified, and then written back as a single, indivisible operation (Jacob, p. 459).

The one-transistor, zero-capacitor (1T, or 1T0C) DRAM cell has been a topic of research since the late 1990s. 1T DRAM is a different way of constructing the basic DRAM memory cell, distinct from the classic one-transistor/one-capacitor (1T/1C) DRAM cell, which is also sometimes referred to as 1T DRAM, particularly in comparison to the 3T and 4T DRAM which it replaced in the 1970s. In 1T DRAM cells, the bit of data is still stored in a capacitive region controlled by a transistor, but this capacitance is no longer provided by a separate capacitor. 1T DRAM is a "capacitorless" bit cell design that stores data using the parasitic body capacitance that is inherent to silicon-on-insulator (SOI) transistors. Considered a nuisance in logic design, this floating body effect can be used for data storage. This gives 1T DRAM cells the greatest density as well as allowing easier integration with high-performance logic circuits, since they are constructed with the same SOI process technologies.[42]

Refreshing of cells remains necessary, but unlike with 1T1C DRAM, reads in 1T DRAM are non-destructive; the stored charge causes a detectable shift in the threshold voltage of the transistor.[43] Performance-wise, access times are significantly better than capacitor-based DRAMs, but slightly worse than SRAM. There are several types of 1T DRAMs: the commercialized Z-RAM from Innovative Silicon, the TTRAM[44] from Renesas and the A-RAM from the UGR/CNRS consortium.

DRAM cells are laid out in a regular rectangular, grid-like pattern to facilitate their control and access via wordlines and bitlines. The physical layout of the DRAM cells in an array is typically designed so that two adjacent DRAM cells in a column share a single bitline contact to reduce their area. DRAM cell area is given as n F², where n is a number derived from the DRAM cell design, and F is the smallest feature size of a given process technology.
This scheme permits comparison of DRAM size over different process technology generations, as DRAM cell area scales at linear or near-linear rates with respect to feature size. The typical area for modern DRAM cells varies between 6–8 F².

The horizontal wire, the wordline, is connected to the gate terminal of every access transistor in its row. The vertical bitline is connected to the source terminal of the transistors in its column. The lengths of the wordlines and bitlines are limited. The wordline length is limited by the desired performance of the array, since the propagation time of the signal that must traverse the wordline is determined by the RC time constant. The bitline length is limited by its capacitance (which increases with length), which must be kept within a range for proper sensing (as DRAMs operate by sensing the charge of the capacitor released onto the bitline). Bitline length is also limited by the amount of operating current the DRAM can draw and by how power can be dissipated, since these two characteristics are largely determined by the charging and discharging of the bitline.

Sense amplifiers are required to read the state contained in the DRAM cells. When the access transistor is activated, the electrical charge in the capacitor is shared with the bitline. The bitline's capacitance is much greater than that of the capacitor (approximately ten times). Thus, the change in bitline voltage is minute. Sense amplifiers are required to resolve the voltage differential into the levels specified by the logic signaling system. Modern DRAMs use differential sense amplifiers, which are accompanied by requirements as to how the DRAM arrays are constructed. Differential sense amplifiers work by driving their outputs to opposing extremes based on the relative voltages on pairs of bitlines. The sense amplifiers function effectively and efficiently only if the capacitance and voltages of these bitline pairs are closely matched. Besides ensuring that the lengths of the bitlines and the number of DRAM cells attached to them are equal, two basic architectures for array design have emerged to provide for the requirements of the sense amplifiers: open and folded bitline arrays.

The first generation (1 Kbit) DRAM ICs, up until the 64 Kbit generation (and some 256 Kbit generation devices), had open bitline array architectures. In these architectures, the bitlines are divided into multiple segments, and the differential sense amplifiers are placed in between bitline segments. Because the sense amplifiers are placed between bitline segments, routing their outputs outside the array requires an additional layer of interconnect placed above those used to construct the wordlines and bitlines. The DRAM cells that are on the edges of the array do not have adjacent segments. Since the differential sense amplifiers require identical capacitance and bitline lengths from both segments, dummy bitline segments are provided. The advantage of the open bitline array is a smaller array area, although this advantage is slightly diminished by the dummy bitline segments. The disadvantage that caused the near disappearance of this architecture is the inherent vulnerability to noise, which affects the effectiveness of the differential sense amplifiers. Since each bitline segment does not have any spatial relationship to the other, it is likely that noise would affect only one of the two bitline segments.

The folded bitline array architecture routes bitlines in pairs throughout the array.
The close proximity of the paired bitlines provides superior common-mode noise rejection characteristics over open bitline arrays. The folded bitline array architecture began appearing in DRAM ICs during the mid-1980s, beginning with the 256 Kbit generation. This architecture is favored in modern DRAM ICs for its superior noise immunity.

This architecture is referred to as folded because it takes its basis from the open array architecture from the perspective of the circuit schematic. The folded array architecture appears to remove DRAM cells in alternate pairs (because two DRAM cells share a single bitline contact) from a column, then move the DRAM cells from an adjacent column into the voids. The location where the bitline twists occupies additional area. To minimize area overhead, engineers select the simplest and most area-minimal twisting scheme that is able to reduce noise under the specified limit. As process technology improves to reduce minimum feature sizes, the signal-to-noise problem worsens, since coupling between adjacent metal wires is inversely proportional to their pitch. The array folding and bitline twisting schemes that are used must increase in complexity in order to maintain sufficient noise reduction. Schemes that have desirable noise immunity characteristics for a minimal impact in area are the topic of current research (Kenner, p. 37). Advances in process technology could result in open bitline array architectures being favored if they can offer better long-term area efficiencies, since folded array architectures require increasingly complex folding schemes to match each advance in process technology. The relationship between process technology, array architecture, and area efficiency is an active area of research.

The first DRAM integrated circuits did not have any redundancy. An integrated circuit with a defective DRAM cell would be discarded. Beginning with the 64 Kbit generation, DRAM arrays have included spare rows and columns to improve yields. Spare rows and columns provide tolerance of minor fabrication defects which have caused a small number of rows or columns to be inoperable. The defective rows and columns are physically disconnected from the rest of the array by triggering a programmable fuse or by cutting the wire with a laser. The spare rows or columns are substituted in by remapping logic in the row and column decoders (Jacob, pp. 358–361).

Electrical or magnetic interference inside a computer system can cause a single bit of DRAM to spontaneously flip to the opposite state. The majority of one-off ("soft") errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic ray secondaries, which may change the contents of one or more memory cells or interfere with the circuitry used to read/write them. The problem can be mitigated by using redundant memory bits and additional circuitry that uses these bits to detect and correct soft errors. In most cases, the detection and correction are performed by the memory controller; sometimes, the required logic is transparently implemented within DRAM chips or modules, enabling ECC memory functionality for otherwise ECC-incapable systems.[46] The extra memory bits are used to record parity and to enable missing data to be reconstructed by an error-correcting code (ECC). Parity allows the detection of all single-bit errors (actually, any odd number of wrong bits).
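As an illustration of the parity scheme just described, the following C sketch computes an even-parity bit over a 64-bit word and detects a simulated single-bit flip. The word value and the flipped bit position are arbitrary choices for the example.

    #include <stdint.h>
    #include <stdio.h>

    /* Even parity over a 64-bit word: the stored parity bit makes the
       total number of ones even, so any odd number of flipped bits is
       detected (but not located, and even-count flips go unseen). */
    static unsigned parity64(uint64_t w) {
        w ^= w >> 32;
        w ^= w >> 16;
        w ^= w >> 8;
        w ^= w >> 4;
        w ^= w >> 2;
        w ^= w >> 1;
        return (unsigned)(w & 1);
    }

    int main(void) {
        uint64_t word = 0x0123456789ABCDEFull;
        unsigned stored = parity64(word);          /* computed on write */

        uint64_t corrupted = word ^ (1ull << 17);  /* one bit flips in storage */
        printf("parity check: %s\n",
               parity64(corrupted) == stored ? "ok" : "error detected");
        return 0;
    }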
The most common error-correcting code, a SECDED Hamming code, allows a single-bit error to be corrected and, in the usual configuration with an extra parity bit, double-bit errors to be detected.[47]

Recent studies give widely varying error rates, with over seven orders of magnitude difference, ranging from 10^−10 to 10^−17 error/bit·h, i.e. roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory.[48][49][50] The Schroeder et al. 2009 study reported a 32% chance that a given computer in their study would suffer from at least one correctable error per year, and provided evidence that most such errors are intermittent hard errors rather than soft errors, and that trace amounts of radioactive material that had gotten into the chip packaging were emitting alpha particles and corrupting the data.[51] A 2010 study at the University of Rochester also gave evidence that a substantial fraction of memory errors are intermittent hard errors.[52] Large-scale studies on non-ECC main memory in PCs and laptops suggest that undetected memory errors account for a substantial number of system failures: a 2011 study reported a 1-in-1700 chance per 1.5% of memory tested (extrapolating to an approximately 26% chance for total memory) that a computer would have a memory error every eight months.[53]

Although dynamic memory is only specified and guaranteed to retain its contents when supplied with power and refreshed every short period of time (often 64 ms), the memory cell capacitors often retain their values for significantly longer, particularly at low temperatures.[54] Under some conditions most of the data in DRAM can be recovered even if it has not been refreshed for several minutes.[55]

This property can be used to circumvent security and recover data stored in main memory that is assumed to be destroyed at power-down. The computer could be quickly rebooted and the contents of main memory read out; or a computer's memory modules could be removed, cooled to prolong data remanence, and then transferred to a different computer to be read out. Such an attack was demonstrated to circumvent popular disk encryption systems, such as the open source TrueCrypt, Microsoft's BitLocker Drive Encryption, and Apple's FileVault.[54] This type of attack against a computer is often called a cold boot attack.

Dynamic memory, by definition, requires periodic refresh. Furthermore, reading dynamic memory is a destructive operation, requiring a recharge of the storage cells in the row that has been read. If these processes are imperfect, a read operation can cause soft errors. In particular, there is a risk that some charge can leak between nearby cells, causing the refresh or read of one row to cause a disturbance error in an adjacent or even nearby row. The awareness of disturbance errors dates back to the first commercially available DRAM in the early 1970s (the Intel 1103). Despite the mitigation techniques employed by manufacturers, commercial researchers proved in a 2014 analysis that commercially available DDR3 DRAM chips manufactured in 2012 and 2013 are susceptible to disturbance errors.[56] The associated side effect that led to observed bit flips has been dubbed row hammer.

Dynamic RAM ICs can be packaged in molded epoxy cases, with an internal lead frame for interconnections between the silicon die and the package leads. The original IBM PC design used ICs, including those for DRAM, packaged in dual in-line packages (DIP), soldered directly to the main board or mounted in sockets.
As memory density skyrocketed, the DIP package was no longer practical. For convenience in handling, several dynamic RAM integrated circuits may be mounted on a single memory module, allowing installation of 16-bit, 32-bit or 64-bit wide memory in a single unit, without the requirement for the installer to insert multiple individual integrated circuits. Memory modules may include additional devices for parity checking or error correction. Over the evolution of desktop computers, several standardized types of memory module have been developed. Laptop computers, game consoles, and specialized devices may have their own formats of memory modules not interchangeable with standard desktop parts for packaging or proprietary reasons.

DRAM that is integrated into an integrated circuit designed in a logic-optimized process (such as an application-specific integrated circuit, microprocessor, or an entire system on a chip) is called embedded DRAM (eDRAM). Embedded DRAM requires DRAM cell designs that can be fabricated without preventing the fabrication of fast-switching transistors used in high-performance logic, and modification of the basic logic-optimized process technology to accommodate the process steps required to build DRAM cell structures.

Since the fundamental DRAM cell and array have maintained the same basic structure for many years, the types of DRAM are mainly distinguished by the many different interfaces for communicating with DRAM chips.

The original DRAM, now known by the retronym asynchronous DRAM, was the first type of DRAM in use. From its origins in the late 1960s, it was commonplace in computing up until around 1997, when it was mostly replaced by synchronous DRAM. In the present day, manufacture of asynchronous RAM is relatively rare.[57]

An asynchronous DRAM chip has power connections, some number of address inputs (typically 12), and a few (typically one or four) bidirectional data lines. There are three main active-low control signals: RAS (row address strobe), CAS (column address strobe), and WE (write enable).

This interface provides direct control of internal timing: when RAS is driven low, a CAS cycle must not be attempted until the sense amplifiers have sensed the memory state, and RAS must not be returned high until the storage cells have been refreshed. When RAS is driven high, it must be held high long enough for precharging to complete. Although the DRAM is asynchronous, the signals are typically generated by a clocked memory controller, which limits their timing to multiples of the controller's clock cycle. For completeness, there are two other control signals, not essential to DRAM operation, provided for the convenience of systems using DRAM: CS (chip select) and OE (output enable).

Classic asynchronous DRAM is refreshed by opening each row in turn. The refresh cycles are distributed across the entire refresh interval in such a way that all rows are refreshed within the required interval. To refresh one row of the memory array using RAS-only refresh (ROR), a row address is supplied and RAS is pulsed low; it is not necessary to perform any CAS cycles. An external counter is needed to iterate over the row addresses in turn.[58] In some designs, the CPU handled RAM refresh.
The Zilog Z80 is perhaps the best-known example, as it has an internal row counter R which supplies the address for a special refresh cycle generated after each instruction fetch.[59] In other systems, especially home computers, refresh was handled by the video circuitry as a side effect of its periodic scan of the frame buffer.[60]

For convenience, the counter was quickly incorporated into the DRAM chips themselves. If the CAS line is driven low before RAS (normally an illegal operation), then the DRAM ignores the address inputs and uses an internal counter to select the row to open.[58][61] This is known as CAS-before-RAS (CBR) refresh. This became the standard form of refresh for asynchronous DRAM, and is the only form generally used with SDRAM.

Given support of CAS-before-RAS refresh, it is possible to deassert RAS while holding CAS low to maintain data output. If RAS is then asserted again, this performs a CBR refresh cycle while the DRAM outputs remain valid. Because data output is not interrupted, this is known as hidden refresh.[61] Hidden refresh is no faster than a normal read followed by a normal refresh, but it does keep the data output valid during the refresh cycle.

Page mode DRAM is a minor modification to the first-generation DRAM IC interface which improves the performance of reads and writes to a row by avoiding the inefficiency of precharging and opening the same row repeatedly to access a different column. In page mode DRAM, after a row is opened by holding RAS low, the row can be kept open, and multiple reads or writes can be performed to any of the columns in the row. Each column access is initiated by presenting a column address and asserting CAS. For reads, after a delay (tCAC), valid data appears on the data-out pins, which are held at high-Z before the appearance of valid data. For writes, the write-enable signal and write data are presented along with the column address.[62]

Page mode DRAM was in turn later improved with a small modification which further reduced latency. DRAMs with this improvement are called fast page mode DRAMs (FPM DRAMs). In page mode DRAM, the chip does not capture the column address until CAS is asserted, so column access time (until data out is valid) begins when CAS is asserted. In FPM DRAM, the column address can be supplied while CAS is still deasserted, and the main column access time (tAA) begins as soon as the address is stable. The CAS signal is only needed to enable the output (the data-out pins are held at high-Z while CAS is deasserted), so the time from CAS assertion to data valid (tCAC) is greatly reduced.[63] Fast page mode DRAM was introduced in 1986 and was used with the Intel 80486.

Static column is a variant of fast page mode in which the column address does not need to be latched; rather, the address inputs may be changed with CAS held low, and the data output will be updated accordingly a few nanoseconds later.[63]

Nibble mode is another variant in which four sequential locations within the row can be accessed with four consecutive pulses of CAS. The difference from normal page mode is that the address inputs are not used for the second through fourth CAS edges, but are generated internally starting with the address supplied for the first CAS edge.[63] The predictable addresses let the chip prepare the data internally and respond very quickly to the subsequent CAS pulses.
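The page-mode access pattern described above (open a row once with RAS, then issue several CAS strobes against different columns) can be sketched as a toy simulation. The helper functions below are invented for illustration; they simply print the order in which a controller would drive the (active-low) pins, and do not model any real device.

    #include <stdio.h>

    static void ras(int level) { printf("  /RAS <- %d\n", level); }
    static void cas(int level) { printf("  /CAS <- %d\n", level); }
    static void addr(const char *what, unsigned a) { printf("  A    <- %s %u\n", what, a); }

    int main(void) {
        unsigned row = 42, cols[] = {7, 8, 9, 10};

        addr("row", row);
        ras(0);                       /* open the row; sense amplifiers settle */
        for (int i = 0; i < 4; i++) {
            addr("col", cols[i]);     /* FPM: address may change while /CAS is high */
            cas(0);                   /* strobe: data becomes valid after tCAC */
            printf("  ... data out for column %u\n", cols[i]);
            cas(1);
        }
        ras(1);                       /* close the row; precharge */
        return 0;
    }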
Extended data out DRAM (EDO DRAM) was invented and patented in the 1990s by Micron Technology, who then licensed the technology to many other memory manufacturers.[64] EDO RAM, sometimes referred to as hyper page mode enabled DRAM, is similar to fast page mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active. This allows a certain amount of overlap in operation (pipelining), allowing somewhat improved performance.[65] It is up to 30% faster than FPM DRAM,[66] which it began to replace in 1995 when Intel introduced the 430FX chipset with EDO DRAM support. Irrespective of the performance gains, FPM and EDO SIMMs can be used interchangeably in many (but not all) applications.[67][68]

To be precise, EDO DRAM begins data output on the falling edge of CAS, but does not disable the output when CAS rises again. Instead, it holds the current output valid (thus extending the data output time) even as the DRAM begins decoding a new column address, until either a new column's data is selected by another CAS falling edge, or the output is switched off by the rising edge of RAS (or, less commonly, a change in CS, OE, or WE). This ability to start a new access even before the system has received the preceding column's data made it possible to design memory controllers which could carry out a CAS access (in the currently open row) in one clock cycle, or at least within two clock cycles instead of the previously required three. EDO's capabilities were able to partially compensate for the performance lost due to the lack of an L2 cache in low-cost, commodity PCs. More expensive notebooks also often lacked an L2 cache due to size and power limitations, and benefited similarly. Even for systems with an L2 cache, the availability of EDO memory improved the average memory latency seen by applications over earlier FPM implementations. Single-cycle EDO DRAM became very popular on video cards toward the end of the 1990s. It was very low cost, yet nearly as efficient for performance as the far more costly VRAM.

An evolution of EDO DRAM, burst EDO DRAM (BEDO DRAM), could process four memory addresses in one burst, for a maximum of 5-1-1-1, saving an additional three clocks over optimally designed EDO memory. This was done by adding an address counter on the chip to keep track of the next address. BEDO also added a pipeline stage, allowing the page-access cycle to be divided into two parts. During a memory-read operation, the first part accessed the data from the memory array to the output stage (second latch). The second part drove the data bus from this latch at the appropriate logic level. Since the data is already in the output buffer, quicker access time is achieved (up to 50% for large blocks of data) than with traditional EDO. Although BEDO DRAM showed additional optimization over EDO, by the time it was available the market had made a significant investment in synchronous DRAM (SDRAM).[69] Even though BEDO RAM was superior to SDRAM in some ways, the latter technology quickly displaced BEDO.

Synchronous dynamic RAM (SDRAM) significantly revises the asynchronous memory interface, adding a clock (and a clock enable) line. All other signals are received on the rising edge of the clock. The RAS and CAS inputs no longer act as strobes but are instead, along with WE, part of a 3-bit command whose combinations encode operations such as no operation, activate, read, write, precharge, auto refresh, and load mode register. The OE line's function is extended to a per-byte DQM signal, which controls data input (writes) in addition to data output (reads).
This allows DRAM chips to be wider than 8 bits while still supporting byte-granularity writes.

Many timing parameters remain under the control of the DRAM controller. For example, a minimum time must elapse between a row being activated and a read or write command. One important parameter must be programmed into the SDRAM chip itself, namely the CAS latency. This is the number of clock cycles allowed for internal operations between a read command and the first data word appearing on the data bus. The Load mode register command is used to transfer this value to the SDRAM chip. Other configurable parameters include the length of read and write bursts, i.e. the number of words transferred per read or write command.

The most significant change, and the primary reason that SDRAM has supplanted asynchronous RAM, is the support for multiple internal banks inside the DRAM chip. Using a few bits of bank address that accompany each command, a second bank can be activated and begin reading data while a read from the first bank is in progress. By alternating banks, a single SDRAM device can keep the data bus continuously busy, in a way that asynchronous DRAM cannot.

Single data rate SDRAM (SDR SDRAM or SDR) is the original generation of SDRAM; it made a single transfer of data per clock cycle. Double data rate SDRAM (DDR SDRAM or DDR) was a later development of SDRAM, used in PC memory beginning in 2000. Subsequent versions are numbered sequentially (DDR2, DDR3, etc.). DDR SDRAM internally performs double-width accesses at the clock rate, and uses a double data rate interface to transfer one half on each clock edge. DDR2 and DDR3 increased this factor to 4× and 8×, respectively, delivering 4-word and 8-word bursts over 2 and 4 clock cycles, respectively. The internal access rate is mostly unchanged (200 million per second for DDR-400, DDR2-800 and DDR3-1600 memory), but each access transfers more data.

Direct RAMBUS DRAM (DRDRAM) was developed by Rambus. First supported on motherboards in 1999, it was intended to become an industry standard, but was outcompeted by DDR SDRAM, making it technically obsolete by 2003.

Reduced Latency DRAM (RLDRAM) is a high-performance double data rate (DDR) SDRAM that combines fast, random access with high bandwidth, mainly intended for networking and caching applications.

Graphics RAMs are asynchronous and synchronous DRAMs designed for graphics-related tasks such as texture memory and framebuffers, found on video cards.

Video DRAM (VRAM) is a dual-ported variant of DRAM that was once commonly used to store the frame buffer in some graphics adaptors.

Window DRAM (WRAM) is a variant of VRAM that was once used in graphics adaptors such as the Matrox Millennium and ATI 3D Rage Pro. WRAM was designed to perform better and cost less than VRAM. WRAM offered up to 25% greater bandwidth than VRAM and accelerated commonly used graphical operations such as text drawing and block fills.[70]

Multibank DRAM (MDRAM) is a type of specialized DRAM developed by MoSys. It is constructed from small memory banks of 256 kB, which are operated in an interleaved fashion, providing bandwidths suitable for graphics cards at a lower cost than memories such as SRAM. MDRAM also allows operations to two banks in a single clock cycle, permitting multiple concurrent accesses to occur if the accesses are independent. MDRAM was primarily used in graphics cards, such as those featuring the Tseng Labs ET6x00 chipsets.
Boards based upon this chipset often had the unusual capacity of 2.25 MB because of MDRAM's ability to be implemented more easily with such capacities. A graphics card with 2.25 MB of MDRAM had enough memory to provide 24-bit color at a resolution of 1024×768, a very popular setting at the time.

Synchronous graphics RAM (SGRAM) is a specialized form of SDRAM for graphics adaptors. It adds functions such as bit masking (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single colour). Unlike VRAM and WRAM, SGRAM is single-ported. However, it can open two memory pages at once, which simulates the dual-port nature of other video RAM technologies.

Graphics double data rate SDRAM (GDDR SDRAM) is a type of specialized DDR SDRAM designed to be used as the main memory of graphics processing units (GPUs). GDDR SDRAM is distinct from commodity types of DDR SDRAM such as DDR3, although they share some core technologies. Its primary characteristics are higher clock frequencies for both the DRAM core and I/O interface, which provides greater memory bandwidth for GPUs. As of 2020, there are seven successive generations of GDDR: GDDR2, GDDR3, GDDR4, GDDR5, GDDR5X, GDDR6 and GDDR6X.

Pseudostatic RAM (PSRAM or PSDRAM) is dynamic RAM with built-in refresh and address-control circuitry to make it behave similarly to static RAM (SRAM). It combines the high density of DRAM with the ease of use of true SRAM. PSRAM is used in the Apple iPhone and other embedded systems such as the XFlar Platform.[71]

Some DRAM components have a self-refresh mode. While this involves much of the same logic that is needed for pseudo-static operation, this mode is often equivalent to a standby mode. It is provided primarily to allow a system to suspend operation of its DRAM controller to save power without losing data stored in DRAM, rather than to allow operation without a separate DRAM controller, as is the case with the PSRAMs mentioned above.

An embedded variant of PSRAM was sold by MoSys under the name 1T-SRAM. It is a set of small DRAM banks with an SRAM cache in front, making it behave much like a true SRAM. It is used in Nintendo GameCube and Wii video game consoles.

Cypress Semiconductor's HyperRAM[72] is a type of PSRAM supporting a JEDEC-compliant 8-pin HyperBus[73] or Octal xSPI interface.
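As a check on the MDRAM example earlier in this section, the 2.25 MB figure follows directly from the framebuffer arithmetic for 24-bit color at 1024×768; a one-line C calculation:

    #include <stdio.h>

    int main(void) {
        unsigned long bytes = 1024UL * 768UL * 3UL;   /* 3 bytes per pixel */
        printf("%lu bytes = %.2f MB\n", bytes, bytes / (1024.0 * 1024.0));
        /* 2,359,296 bytes = 2.25 MB, matching the unusual MDRAM capacity. */
        return 0;
    }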
https://en.wikipedia.org/wiki/Dynamic_random-access_memory
Column address strobe latency, also called CAS latency or CL, is the delay in clock cycles between the READ command and the moment data is available.[1][2] In asynchronous DRAM, the interval is specified in nanoseconds (absolute time).[3] In synchronous DRAM, the interval is specified in clock cycles. Because the latency is dependent upon a number of clock ticks instead of absolute time, the actual time for an SDRAM module to respond to a CAS event might vary between uses of the same module if the clock rate differs.

Dynamic RAM is arranged in a rectangular array. Each row is selected by a horizontal word line. Sending a logical high signal along a given row enables the MOSFETs present in that row, connecting each storage capacitor to its corresponding vertical bit line. Each bit line is connected to a sense amplifier that amplifies the small voltage change produced by the storage capacitor. This amplified signal is then output from the DRAM chip as well as driven back up the bit line to refresh the row. When no word line is active, the array is idle and the bit lines are held in a precharged[4] state, with a voltage halfway between high and low. This indeterminate signal is deflected towards high or low by the storage capacitor when a row is made active.

To access memory, a row must first be selected and loaded into the sense amplifiers. This row is then active, and columns may be accessed for read or write. The CAS latency is the delay between the time at which the column address and the column address strobe signal are presented to the memory module and the time at which the corresponding data is made available by the memory module. The desired row must already be active; if it is not, additional time is required.

As an example, a typical 1 GiB SDRAM memory module might contain eight separate one-gibibit DRAM chips, each offering 128 MiB of storage space. Each chip is divided internally into eight banks of 2^27 = 128 Mibits, each of which composes a separate DRAM array. Each bank contains 2^14 = 16,384 rows of 2^13 = 8,192 bits each. One byte of memory (from each chip; 64 bits total from the whole DIMM) is accessed by supplying a 3-bit bank number, a 14-bit row address, and a 13-bit column address.[citation needed]

With asynchronous DRAM, memory was accessed by a memory controller on the memory bus based on a set timing rather than a clock, and was separate from the system bus.[3] Synchronous DRAM, however, has a CAS latency that is dependent upon the clock rate. Accordingly, the CAS latency of an SDRAM memory module is specified in clock ticks instead of absolute time.[citation needed]

Because memory modules have multiple internal banks, and data can be output from one during the access latency of another, the output pins can be kept 100% busy regardless of the CAS latency through pipelining; the maximum attainable bandwidth is determined solely by the clock speed. Unfortunately, this maximum bandwidth can only be attained if the address of the data to be read is known long enough in advance; if the address of the data being accessed is not predictable, pipeline stalls can occur, resulting in a loss of bandwidth. For a completely unknown memory access (i.e. random access), the relevant latency is the time to close any open row, plus the time to open the desired row, followed by the CAS latency to read data from it. Due to spatial locality, however, it is common to access several words in the same row. In this case, the CAS latency alone determines the elapsed time.
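The bank/row/column split in the example module above can be illustrated in code. The field ordering chosen below (bank, then row, then column, packed into a 30-bit per-chip bit address) is an assumption made for illustration; real controllers use a variety of address interleavings.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* 8 banks x 16,384 rows x 8,192 bits = 2^30 bits per chip. */
        uint32_t bit_addr = 123456789;              /* 0 .. 2^30 - 1 */

        uint32_t col  =  bit_addr        & 0x1FFF;  /* low 13 bits  */
        uint32_t row  = (bit_addr >> 13) & 0x3FFF;  /* next 14 bits */
        uint32_t bank = (bit_addr >> 27) & 0x7;     /* top 3 bits   */

        printf("bank %u, row %u, column %u\n", bank, row, col);
        return 0;
    }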
Because modern DRAM modules' CAS latencies are specified in clock ticks instead of time, when comparing latencies at different clock speeds, latencies must be translated into absolute times to make a fair comparison; a higher numerical CAS latency may still be less time if the clock is faster. Likewise, a memory module which is underclocked could have its CAS latency cycle count reduced to preserve the same CAS latency time.[citation needed]

Double data rate (DDR) RAM performs two transfers per clock cycle, and it is usually described by this transfer rate. Because the CAS latency is specified in clock cycles, not transfers (which occur on both the rising and falling edges of the clock), it is important to ensure it is the clock rate (half of the transfer rate) which is being used to compute CAS latency times.[citation needed]

Another complicating factor is the use of burst transfers. A modern microprocessor might have a cache line size of 64 bytes, requiring eight transfers from a 64-bit-wide (eight-byte) memory to fill. The CAS latency can only accurately measure the time to transfer the first word of memory; the time to transfer all eight words depends on the data transfer rate as well. Fortunately, the processor typically does not need to wait for all eight words; the burst is usually sent in critical word first order, and the first critical word can be used by the microprocessor immediately.

Data rates are typically given in million transfers, also known as megatransfers, per second (MT/s), while clock rates are given in MHz, million cycles per second.
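Translating a CAS latency into absolute time, as described above, is a one-line calculation once the transfer-rate/clock-rate distinction for DDR is applied. The module figures in this C sketch are illustrative examples, not a ranking of real products.

    #include <stdio.h>

    /* For DDR, the clock is half the transfer rate, so DDR3-1600
       (1600 MT/s) runs an 800 MHz clock. */
    static double cas_ns(unsigned cl_cycles, double transfer_mt_s) {
        double clock_mhz = transfer_mt_s / 2.0;  /* two transfers per cycle */
        return cl_cycles * 1000.0 / clock_mhz;
    }

    int main(void) {
        printf("DDR3-1333 CL9:  %.2f ns\n", cas_ns(9, 1333.0));   /* ~13.50 */
        printf("DDR3-1600 CL11: %.2f ns\n", cas_ns(11, 1600.0));  /* 13.75  */
        /* The numerically higher CL is nearly the same absolute time,
           because the clock is faster. */
        return 0;
    }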
https://en.wikipedia.org/wiki/CAS_latency
In computing, mass storage refers to the storage of large amounts of data in a persisting and machine-readable fashion. In general, the term mass in mass storage is used to mean large in relation to contemporaneous hard disk drives, but it has also been used to mean large relative to the size of primary memory, as for example with floppy disks on personal computers.

Devices and/or systems that have been described as mass storage include tape libraries, RAID systems, and a variety of computer drives such as hard disk drives (HDDs), magnetic tape drives, magneto-optical disc drives, optical disc drives, memory cards, and solid-state drives (SSDs). It also includes experimental forms like holographic memory. Mass storage includes devices with removable and non-removable media.[1][2] It does not include random access memory (RAM).

There are two broad classes of mass storage: local data in devices such as smartphones or computers, and enterprise servers and data centers for the cloud. For local storage, SSDs are on the way to replacing HDDs. Considering the mobile segment from phones to notebooks, the majority of systems today are based on NAND flash. As for enterprise and data centers, storage tiers have been established using a mix of SSD and HDD.[3]

The notion of "large" amounts of data is of course highly dependent on the time frame and the market segment, as storage device capacity has increased by many orders of magnitude since the beginnings of computer technology in the late 1940s and continues to grow; however, in any time frame, common mass storage devices have tended to be much larger and at the same time much slower than common realizations of contemporaneous primary storage technology.

Papers[4][5][6] at the 1966 Fall Joint Computer Conference[7] (FJCC) used the term mass storage for devices substantially larger than contemporaneous hard disk drives. Similarly, a 1972 analysis identified mass storage systems from Ampex (Terabit Memory) using video tape, Precision Industries (Unicon 690-212) using lasers and International Video (IVC-1000) using video tape,[8] and states "In the literature, the most common definition of mass storage capacity is a trillion bits."[9] The first IEEE conference on mass storage was held in 1974[10] and at that time identified mass storage as "capacity on the order of 10^12 bits" (about 125 gigabytes).[11] In the mid-1970s IBM used the term in the name of the IBM 3850 Mass Storage System, which provided virtual disks backed by helical-scan magnetic tape cartridges, slower than disk drives but with a capacity larger than was affordable with disks.[12] The term mass storage was used in the PC marketplace for devices, such as floppy disk drives, far smaller than devices that were not[a] considered mass storage in the mainframe marketplace.

Mass storage devices are characterized by:

Hard disk drives dominate storage media in terms of exabytes shipped and are projected to continue to do so for this decade.[13]

Solid-state drives (i.e. flash storage media) are the predominant storage media in personal computers. Flash memory (in particular, NAND flash) has an established and growing niche in high-performance enterprise computing installations. Flash memory has also long been popular as removable storage such as USB sticks, where it de facto makes up the market.
Flash dominates in cell phones.[14][15] Tape is predominantly used for archival storage.[16] Optical discs are almost exclusively used in the physical distribution of retail software, music, and movies because of the cost and manufacturing efficiency of the molding process used to produce DVDs and compact discs and the nearly universal presence of reader drives in personal computers and consumer appliances.[17] The design of computer architectures and operating systems is often dictated by the mass storage and bus technology of their time.[18] Mass storage devices used in desktop and most server computers typically have their data organized in a file system. The choice of file system is often important in maximizing the performance of the device: general-purpose file systems (such as NTFS and HFS, for example) tend to do poorly on slow-seeking optical storage such as compact discs. Some relational databases can also be deployed on mass storage devices without an intermediate file system or storage manager. Oracle and MySQL, for example, can store table data directly on raw block devices. On removable media, archive formats (such as tar archives on magnetic tape, which pack file data end-to-end) are sometimes used instead of file systems because they are more portable and simpler to stream. On embedded computers, it is common to memory map the contents of a mass storage device (usually ROM or flash memory) so that its contents can be traversed as in-memory data structures or executed directly by programs, as in the sketch below.
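As an illustration of the memory-mapping technique just mentioned, here is a minimal POSIX sketch in C++; the file name firmware.img is a hypothetical stand-in for a flash or ROM image, and real embedded systems often map a fixed physical address instead of a file.

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("firmware.img", O_RDONLY);   // hypothetical image file
    if (fd < 0) return 1;
    struct stat st;
    if (fstat(fd, &st) != 0) return 1;
    // Map the whole file read-only; the kernel pages data in on demand.
    void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) return 1;
    // Traverse the mapping as ordinary memory, e.g. dump the first bytes.
    const unsigned char* bytes = static_cast<const unsigned char*>(base);
    for (off_t i = 0; i < st.st_size && i < 16; ++i)
        std::printf("%02x ", bytes[i]);
    std::printf("\n");
    munmap(base, st.st_size);
    close(fd);
}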
https://en.wikipedia.org/wiki/Mass_storage
Memory cellmay refer to:
https://en.wikipedia.org/wiki/Memory_cell_(disambiguation)
In computer science, a memory leak is a type of resource leak that occurs when a computer program incorrectly manages memory allocations[1] such that memory which is no longer needed is not released. A memory leak may also happen when an object is stored in memory but cannot be accessed by the running code (i.e. unreachable memory).[2] A memory leak has symptoms similar to a number of other problems and generally can only be diagnosed by a programmer with access to the program's source code. A related concept is the "space leak", which is when a program consumes excessive memory but does eventually release it.[3] Because they can exhaust available system memory as an application runs, memory leaks are often the cause of or a contributing factor to software aging. If a program has a memory leak and its memory usage is steadily increasing, there will not usually be an immediate symptom. In modern operating systems, normal memory used by an application is released when the application terminates. This means that a memory leak in a program that only runs for a short time may not be noticed and is rarely serious, and slow leaks can also be covered over by program restarts. Every physical system has a finite amount of memory, and if the memory leak is not contained (for example, by restarting the leaking program) it will eventually cause problems for users.[4] Most modern consumer desktop operating systems have both main memory, which is physically housed in RAM microchips, and secondary storage such as a hard drive. Memory allocation is dynamic: each process gets as much memory as it requests. Active pages are transferred into main memory for fast access; inactive pages are pushed out to secondary storage to make room, as needed. When a single process starts consuming a large amount of memory, it usually occupies more and more of main memory, pushing other programs out to secondary storage and usually significantly slowing performance of the system. Even if the leaking program is terminated, it may take some time for other programs to swap back into main memory, and for performance to return to normal. The resulting slowness and excessive accessing of secondary storage is known as thrashing. If a program uses all available memory before being terminated (whether there is virtual memory or only main memory, such as on an embedded system), any attempt to allocate more memory will fail. This usually causes the program attempting to allocate the memory to terminate itself, or to generate a segmentation fault. Some programs are designed to recover from this situation (possibly by falling back on pre-reserved memory). The first program to experience the out-of-memory condition may or may not be the program that has the memory leak. Some multi-tasking operating systems have special mechanisms to deal with an out-of-memory condition, such as killing processes at random (which may affect "innocent" processes) or killing the largest process in memory (which presumably is the one causing the problem). Some operating systems have a per-process memory limit, to prevent any one program from hogging all of the memory on the system. The disadvantage of this arrangement is that the operating system sometimes must be re-configured to allow proper operation of programs that legitimately require large amounts of memory, such as those dealing with graphics, video, or scientific calculations. If the memory leak is in the kernel, the operating system itself will likely fail.
Computers without sophisticated memory management, such as embedded systems, may also completely fail from a persistent memory leak. Much more serious leaks arise in programs that run for long periods (such as servers), allocate memory repeatedly, or leak inside the operating system kernel, where the memory is never reclaimed. Memory leaks are a common error in programming, especially when using languages that have no built-in automatic garbage collection, such as C and C++. Typically, a memory leak occurs because dynamically allocated memory has become unreachable. The prevalence of memory leak bugs has led to the development of a number of debugging tools to detect unreachable memory. BoundsChecker, Deleaker, Memory Validator, IBM Rational Purify, Valgrind, Parasoft Insure++, Dr. Memory, and memwatch are some of the more popular memory debuggers for C and C++ programs. "Conservative" garbage collection capabilities can be added to any programming language that lacks them as a built-in feature, and libraries for doing this are available for C and C++ programs. A conservative collector finds and reclaims most, but not all, unreachable memory. Although the memory manager can recover unreachable memory, it cannot free memory that is still reachable and therefore potentially still useful. Modern memory managers therefore provide techniques for programmers to semantically mark memory with varying levels of usefulness, which correspond to varying levels of reachability. The memory manager does not free an object that is strongly reachable. An object is strongly reachable if it is reachable either directly by a strong reference or indirectly by a chain of strong references. (A strong reference is a reference that, unlike a weak reference, prevents an object from being garbage collected.) To prevent leaks of this kind, the developer is responsible for cleaning up references after use, typically by setting the reference to null once it is no longer needed and, if necessary, by deregistering any event listeners that maintain strong references to the object. In general, automatic memory management is more robust and convenient for developers, as they do not need to implement freeing routines, worry about the sequence in which cleanup is performed, or be concerned about whether or not an object is still referenced. It is easier for a programmer to know when a reference is no longer needed than to know when an object is no longer referenced. However, automatic memory management can impose a performance overhead, and it does not eliminate all of the programming errors that cause memory leaks.[citation needed] Publicly accessible systems such as web servers or routers are prone to denial-of-service attacks if an attacker discovers a sequence of operations which can trigger a leak. Such a sequence is known as an exploit. Resource acquisition is initialization (RAII) is an approach to the problem commonly taken in C++, D, and Ada. It involves associating scoped objects with the acquired resources, and automatically releasing the resources once the objects go out of scope. Unlike garbage collection, RAII has the advantage of knowing when objects exist and when they do not. Compare the C and C++ approaches, sketched below: the C version requires explicit deallocation; the array is dynamically allocated (from the heap in most C implementations) and continues to exist until explicitly freed. The C++ version requires no explicit deallocation; it will always occur automatically as soon as the object array goes out of scope, including if an exception is thrown. This avoids some of the overhead of garbage collection schemes.
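The C and C++ listings being compared do not survive in this copy; the following minimal sketch matches the description above (the function names are illustrative).

#include <cstdlib>
#include <vector>

void c_style(int n) {
    int* array = (int*)std::malloc(n * sizeof(int));  // dynamically allocated
    if (array == nullptr) return;
    array[0] = 42;
    std::free(array);  // forgetting this line leaks the allocation
}

void cpp_style(int n) {
    std::vector<int> array(n);  // RAII: the destructor frees the storage
    array[0] = 42;
}  // `array` goes out of scope here; memory is released automatically,
   // even if an exception had been thrown

int main() { c_style(10); cpp_style(10); }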
And because object destructors can free resources other than memory, RAII helps to prevent the leaking of input and output resources accessed through a handle, which mark-and-sweep garbage collection does not handle gracefully. These include open files, open windows, user notifications, objects in a graphics drawing library, thread synchronisation primitives such as critical sections, network connections, and connections to the Windows Registry or another database. However, using RAII correctly is not always easy and has its own pitfalls. For instance, if one is not careful, it is possible to create dangling pointers (or references) by returning data by reference, only to have that data be deleted when its containing object goes out of scope. D uses a combination of RAII and garbage collection, employing automatic destruction when it is clear that an object cannot be accessed outside its original scope, and garbage collection otherwise. More modern garbage collection schemes are often based on a notion of reachability – if you do not have a usable reference to the memory in question, it can be collected. Other garbage collection schemes can be based on reference counting, where an object is responsible for keeping track of how many references are pointing to it. If the number goes down to zero, the object is expected to release itself and allow its memory to be reclaimed. The flaw with this model is that it does not cope with cyclic references, and this is why nowadays most programmers are prepared to accept the burden of the more costly mark-and-sweep type of system. The canonical reference-counting memory leak is traditionally illustrated with Visual Basic code; that listing is not reproduced in this copy, but a C++ analogue is sketched at the end of this section. In practice, this trivial example would be spotted straight away and fixed. In most real examples, the cycle of references spans more than two objects and is more difficult to detect. A well-known example of this kind of leak came to prominence with the rise of AJAX programming techniques in web browsers, in the lapsed listener problem: JavaScript code which associated a DOM element with an event handler and failed to remove the reference before exiting would leak memory (AJAX web pages keep a given DOM alive for a lot longer than traditional web pages, so this leak was much more apparent). A "sawtooth" pattern of memory utilization may be an indicator of a memory leak within an application, particularly if the vertical drops coincide with reboots or restarts of that application. Care should be taken, though, because garbage collection points could also cause such a pattern and would show a healthy usage of the heap. Constantly increasing memory usage is not necessarily evidence of a memory leak. Some applications will store ever-increasing amounts of information in memory (e.g. as a cache). If the cache can grow so large as to cause problems, this may be a programming or design error, but it is not a memory leak, as the information remains nominally in use. In other cases, programs may require an unreasonably large amount of memory because the programmer has assumed memory is always sufficient for a particular task; for example, a graphics file processor might start by reading the entire contents of an image file and storing it all in memory, something that is not viable where a very large image exceeds available memory.
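Returning to the reference-counting leak described above: the Visual Basic listing does not survive here, so the following is a hedged C++ analogue using std::shared_ptr. Two objects hold strong references to each other, so neither count ever reaches zero.

#include <memory>

struct Node {
    std::shared_ptr<Node> other;  // a strong reference keeps `other` alive
};

void leak() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->other = b;
    b->other = a;  // cycle: both reference counts are now 2
}  // `a` and `b` go out of scope; counts drop to 1 and 1, never 0: leaked

int main() { leak(); }

Replacing one of the two links with std::weak_ptr would break the cycle, which is the standard remedy in reference-counted designs.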
Confirmation that excessive memory use is due to a memory leak requires access to the program code.[citation needed] The following example, originally written in pseudocode, is intended to show how a memory leak can come about, and its effects, without needing any programming knowledge. The program in this case is part of some very simple software designed to control an elevator, and the relevant part runs whenever anyone inside the elevator presses the button for a floor. The memory leak occurs if the floor number requested is the same floor that the elevator is on: the condition for releasing the memory would be skipped. Each time this case occurs, more memory is leaked. Cases like this would not usually have any immediate effects. People do not often press the button for the floor they are already on, and in any case, the elevator might have enough spare memory that this could happen hundreds or thousands of times. However, the elevator will eventually run out of memory. This could take months or years, so it might not be discovered despite thorough testing. The consequences would be unpleasant; at the very least, the elevator would stop responding to requests to move to another floor (such as when an attempt is made to call the elevator or when someone is inside and presses the floor buttons). If other parts of the program need memory (a part assigned to open and close the door, for example), then no one would be able to enter, and if someone happens to be inside, they will become trapped (assuming the doors cannot be opened manually). The memory leak lasts until the system is reset. For example, if the elevator's power were turned off, or in a power outage, the program would stop running. When power was turned on again, the program would restart and all the memory would be available again, but the slow process of memory leak would restart together with the program, eventually compromising the correct running of the system. The leak in this example can be corrected by bringing the "release" operation outside of the conditional, as in the sketch below, which also includes a C++ program that deliberately leaks memory by losing the pointer to the allocated memory.
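Neither the elevator pseudocode nor the C++ listing survives in this copy; the following sketch reconstructs both from the description above (the names and the malloc-based state record are illustrative).

#include <cstdlib>

void onFloorButton(int requestedFloor, int currentFloor) {
    int* state = (int*)std::malloc(sizeof(int));  // memory for the request
    if (state == nullptr) return;
    *state = requestedFloor;
    if (requestedFloor != currentFloor) {
        // ... move the elevator to the requested floor ...
        std::free(state);   // released only on this path
    }
    // Leak: when requestedFloor == currentFloor the release is skipped.
    // Fix: move std::free(state) after the conditional so it always runs.
}

int main() {
    onFloorButton(3, 3);  // leaks one allocation per same-floor press
    int* p = new int(5);
    p = new int(7);       // pointer to the first allocation is lost: a leak
    delete p;             // only the second allocation is ever freed
}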
https://en.wikipedia.org/wiki/Memory_leak
A page address register (PAR) contains the physical addresses of pages currently held in the main memory of a computer system. PARs are used in order to avoid excessive use of an address table in some operating systems. A PAR may check a page's number against all entries in the PAR simultaneously, allowing it to retrieve the page's physical address quickly. A PAR is used by a single process and is only used for pages which are frequently referenced (though these pages may change as the process's behaviour changes, in accordance with the principle of locality). An example of a computer which made use of PARs is the Atlas.
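A software model of the lookup described above might look like the following sketch; the register count and field names are illustrative, and real PARs perform the comparison in parallel hardware rather than in a loop.

#include <array>
#include <optional>

struct ParEntry {
    bool valid = false;
    unsigned pageNumber = 0;    // virtual page currently held in main memory
    unsigned frameAddress = 0;  // physical address of that page
};

std::array<ParEntry, 16> par;   // a small set of frequently referenced pages

std::optional<unsigned> lookup(unsigned pageNumber) {
    for (const ParEntry& e : par)              // hardware checks all at once
        if (e.valid && e.pageNumber == pageNumber)
            return e.frameAddress;             // hit: no page-table walk needed
    return std::nullopt;                       // miss: consult the full table
}

int main() {
    par[0] = {true, 3, 0x4000};
    return lookup(3).has_value() ? 0 : 1;
}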
https://en.wikipedia.org/wiki/Page_address_register
Stable storage is a classification of computer data storage technology that guarantees atomicity for any given write operation and allows software to be written that is robust against some hardware and power failures. To be considered atomic, upon reading back a just-written portion of the disk, the storage subsystem must return either the write data or the data that was on that portion of the disk before the write operation. Most computer disk drives are not considered stable storage because they do not guarantee atomic writes: an error could be returned upon a subsequent read of the portion of disk just written, in lieu of either the new or the prior data. Multiple techniques have been developed to achieve the atomic property from weakly atomic devices such as disks. Writing data to a disk in two places in a specific way is one technique (sketched below) and can be done by application software. Most often, though, stable storage functionality is achieved by mirroring data on separate disks via RAID technology (level 1 or greater). The RAID controller implements the disk-writing algorithms that enable separate disks to act as stable storage. The RAID technique is robust against a single disk failure in an array of disks, whereas the software technique of writing to separate areas of the same disk only protects against some kinds of internal disk media failures, such as bad sectors, in single-disk arrangements.
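A sketch of the two-copy software technique described above; the block layout, checksum, and recovery rule here are illustrative assumptions in the spirit of the text, not a particular system's format, and flush() stands in for a real write barrier such as fsync.

#include <cstdint>
#include <cstring>
#include <fstream>

struct Block {
    char data[508];
    std::uint32_t checksum;  // simple additive checksum over `data`
};

std::uint32_t sum(const char* d, std::size_t n) {
    std::uint32_t s = 0;
    for (std::size_t i = 0; i < n; ++i) s += static_cast<unsigned char>(d[i]);
    return s;
}

void stableWrite(std::fstream& disk, std::streampos copyA, std::streampos copyB,
                 const Block& b) {
    // Write copy A and force it out before touching copy B, so that a crash
    // can corrupt at most one of the two copies.
    disk.seekp(copyA); disk.write(reinterpret_cast<const char*>(&b), sizeof b);
    disk.flush();
    disk.seekp(copyB); disk.write(reinterpret_cast<const char*>(&b), sizeof b);
    disk.flush();
}
// Recovery rule: read both copies; if one fails its checksum, repair it from
// the other; if both pass but differ, copy A (written first) wins.

int main() {
    std::fstream disk("disk.img", std::ios::binary | std::ios::in |
                                  std::ios::out | std::ios::trunc);
    Block b{};
    std::memcpy(b.data, "hello", 5);
    b.checksum = sum(b.data, sizeof b.data);  // callers set the checksum
    stableWrite(disk, 0, sizeof(Block), b);
}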
https://en.wikipedia.org/wiki/Stable_storage
Static random-access memory (static RAM or SRAM) is a type of random-access memory (RAM) that uses latching circuitry (a flip-flop) to store each bit. SRAM is volatile memory; data is lost when power is removed. The static qualifier differentiates SRAM from dynamic random-access memory (DRAM): SRAM does not need to be periodically refreshed, while DRAM does. Semiconductor bipolar SRAM was invented in 1963 by Robert Norman at Fairchild Semiconductor.[1] Metal–oxide–semiconductor SRAM (MOS-SRAM) was invented in 1964 by John Schmidt at Fairchild Semiconductor. The first device was a 64-bit MOS p-channel SRAM.[2][3] SRAM has been the main driver behind every new CMOS-based fabrication process since the 1960s, when CMOS was invented.[4] In 1964, Arnold Farber and Eugene Schlig, working for IBM, created a hard-wired memory cell, using a transistor gate and tunnel diode latch. They replaced the latch with two transistors and two resistors, a configuration that became known as the Farber–Schlig cell. That year they submitted an invention disclosure, but it was initially rejected.[5][6] In 1965, Benjamin Agusta and his team at IBM created a 16-bit silicon memory chip based on the Farber–Schlig cell, with 84 transistors, 64 resistors, and 4 diodes. In April 1969, Intel introduced its first product, the Intel 3101, an SRAM memory chip intended to replace bulky magnetic-core memory modules; its capacity was 64 bits[a][7] and it was based on bipolar junction transistors.[8] It was designed using rubylith.[9] Though it can be characterized as volatile memory, SRAM exhibits data remanence.[10] SRAM offers a simple data access model and does not require a refresh circuit. Performance and reliability are good and power consumption is low when idle. Since SRAM requires more transistors per bit to implement, it is less dense and more expensive than DRAM and also has a higher power consumption during read or write access. The power consumption of SRAM varies widely depending on how frequently it is accessed.[11] Many categories of industrial and scientific subsystems, automotive electronics, and similar embedded systems contain SRAM which, in this context, may be referred to as embedded SRAM (ESRAM).[12] Some amount is also embedded in practically all modern appliances, toys, etc. that implement an electronic user interface. SRAM in its dual-ported form is sometimes used for real-time digital signal processing circuits.[13] SRAM is also used in personal computers, workstations, routers and peripheral equipment: CPU register files, internal CPU caches, internal GPU caches and external burst-mode SRAM caches, hard disk buffers, router buffers, etc. LCD screens and printers also normally employ SRAM to hold the image displayed (or to be printed). LCDs can have SRAM in their LCD controllers. SRAM was used for the main memory of many early personal computers such as the ZX80, TRS-80 Model 100, and VIC-20. Some early memory cards in the late 1980s to early 1990s used SRAM as a storage medium, which required a lithium battery to keep the contents of the SRAM.[14][15] SRAM may be integrated on chip for a variety of uses. Hobbyists, specifically home-built processor enthusiasts,[16] often prefer SRAM due to the ease of interfacing. It is much easier to work with than DRAM as there are no refresh cycles and the address and data buses are often directly accessible.[citation needed] In addition to buses and power connections, SRAM usually requires only three controls: Chip Enable (CE), Write Enable (WE) and Output Enable (OE), as the sketch below illustrates.
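As an illustration of how little control logic asynchronous SRAM needs, here is a hedged C++ sketch of read and write sequences using only those three active-low controls; the GPIO and bus helpers are hypothetical stubs, and real code must respect the part's timing parameters.

#include <cstdint>

// Hypothetical helpers, stubbed so the sketch is self-contained; on a real
// microcontroller these would touch GPIO registers.
static void gpioWrite(int /*pin*/, bool /*level*/) {}
static void busWriteAddress(std::uint16_t /*addr*/) {}
static void busWriteData(std::uint8_t /*d*/) {}
static std::uint8_t busReadData() { return 0; }

constexpr int CE = 0, WE = 1, OE = 2;  // active-low control pins

std::uint8_t sramRead(std::uint16_t addr) {
    busWriteAddress(addr);
    gpioWrite(CE, false);              // select the chip
    gpioWrite(OE, false);              // turn on its output drivers
    std::uint8_t d = busReadData();    // real code waits the access time first
    gpioWrite(OE, true);
    gpioWrite(CE, true);
    return d;
}

void sramWrite(std::uint16_t addr, std::uint8_t d) {
    busWriteAddress(addr);
    busWriteData(d);
    gpioWrite(CE, false);
    gpioWrite(WE, false);              // WE low begins the write
    gpioWrite(WE, true);               // rising edge of WE latches the data
    gpioWrite(CE, true);
}

int main() { sramWrite(0x0100, 0xAB); return sramRead(0x0100); }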
In synchronous SRAM, a Clock (CLK) input is also included.[17] Non-volatile SRAM (nvSRAM) has standard SRAM functionality, but saves the data when the power supply is lost, ensuring preservation of critical information. nvSRAMs are used in a wide range of situations – networking, aerospace, and medical, among many others[18] – where the preservation of data is critical and where batteries are impractical. Pseudostatic RAM (PSRAM) is DRAM combined with a self-refresh circuit.[19] It appears externally as slower SRAM, albeit with a density and cost advantage over true SRAM, and without the access complexity of DRAM. In the 1990s, asynchronous SRAM was employed for its fast access time. It was used as main memory for small cache-less embedded processors found in everything from industrial electronics and measurement systems to hard disks and networking equipment, among many other applications. Nowadays, synchronous SRAM (e.g. DDR SRAM) is employed instead, just as synchronous DRAM (DDR SDRAM) is used in preference to asynchronous DRAM. A synchronous memory interface is much faster, as access time can be significantly reduced by employing a pipeline architecture. Furthermore, as DRAM is much cheaper than SRAM, SRAM is often replaced by DRAM, especially when a large volume of data is required. SRAM is, however, much faster for random (not block/burst) access. Therefore, SRAM is mainly used for CPU caches, small on-chip memories, FIFOs, and other small buffers. A typical SRAM cell is made up of six MOSFETs and is often called a 6T SRAM cell. Each bit in the cell is stored on four transistors (M1, M2, M3, M4) that form two cross-coupled inverters. This storage cell has two stable states which are used to denote 0 and 1. Two additional access transistors serve to control the access to a storage cell during read and write operations. 6T SRAM is the most common kind of SRAM.[21] In addition to 6T SRAM, other kinds of SRAM use 4, 5, 7,[22] 8, 9,[21] or 10[23] (4T, 5T, 7T, 8T, 9T, 10T SRAM), or more, transistors per bit.[24][25][26] Four-transistor SRAM is quite common in stand-alone SRAM devices (as opposed to SRAM used for CPU caches), implemented in special processes with an extra layer of polysilicon, allowing for very high-resistance pull-up resistors.[27] The principal drawback of using 4T SRAM is increased static power due to the constant current flow through one of the pull-down transistors (M1 or M2). Additional transistors are sometimes used to implement more than one (read and/or write) port, which may be useful in certain types of video memory and register files implemented with multi-ported SRAM circuitry. Generally, the fewer transistors needed per cell, the smaller each cell can be. Since the cost of processing a silicon wafer is relatively fixed, using smaller cells and so packing more bits on one wafer reduces the cost per bit of memory. Memory cells that use fewer than four transistors are possible; however, such 3T[28][29] or 1T cells are DRAM, not SRAM (even the so-called 1T-SRAM). Access to the cell is enabled by the word line (WL in the figure), which controls the two access transistors M5 and M6 in the 6T SRAM cell (or M3 and M4 in the 4T cell), which, in turn, control whether the cell should be connected to the bit lines, BL and its complement. The bit lines are used to transfer data for both read and write operations. Although it is not strictly necessary to have two bit lines, both the signal and its inverse are typically provided in order to improve noise margins and speed.
During read accesses, the bit lines are actively driven high and low by the inverters in the SRAM cell. This improves SRAM bandwidth compared to DRAMs – in a DRAM, the bit line is connected to storage capacitors and charge sharing causes the bit line to swing upwards or downwards. The symmetric structure of SRAMs also allows for differential signaling, which makes small voltage swings more easily detectable. Another difference from DRAM that contributes to making SRAM faster is that commercial chips accept all address bits at a time. By comparison, commodity DRAMs have the address multiplexed in two halves, i.e. higher bits followed by lower bits, over the same package pins in order to keep their size and cost down. The size of an SRAM with m address lines and n data lines is 2^m words, or 2^m × n bits. The most common word size is 8 bits, meaning that a single byte can be read or written to each of the 2^m different words within the SRAM chip. Several common SRAM chips have 11 address lines (thus a capacity of 2^11 = 2,048 = 2k words) and an 8-bit word, so they are referred to as "2k × 8 SRAM"; this arithmetic is checked in the sketch at the end of this article. The dimensions of an SRAM cell on an IC are determined by the minimum feature size of the process used to make the IC. An SRAM cell has three states: standby (the circuit is idle), reading (the data has been requested), and writing (the contents are being updated). SRAM operating in read and write modes should have readability and write stability, respectively. The three different states work as follows. Standby: if the word line is not asserted, the access transistors M5 and M6 disconnect the cell from the bit lines, and the two cross-coupled inverters formed by M1–M4 will continue to reinforce each other as long as they are connected to the supply. Reading: in theory, reading only requires asserting the word line WL and reading the SRAM cell state through a single access transistor and bit line, e.g. M6 and BL. However, bit lines are relatively long and have large parasitic capacitance, so to speed up reading, a more complex process is used in practice. The read cycle is started by precharging both bit lines, BL and its complement, to a high (logic 1) voltage. Then asserting the word line WL enables both access transistors M5 and M6, which causes the voltage on one bit line to drop slightly. The two bit lines will then have a small voltage difference between them, and a sense amplifier senses which line has the higher voltage, thus determining whether a 1 or a 0 was stored. The higher the sensitivity of the sense amplifier, the faster the read operation. As the NMOS is more powerful, the pull-down is easier, so bit lines are traditionally precharged to a high voltage; many researchers are also trying to precharge at a slightly lower voltage to reduce power consumption.[30][31] Writing: the write cycle begins by applying the value to be written to the bit lines. To write a 0, a 0 is applied to the bit lines, i.e. setting the complement line to 1 and BL to 0. This is similar to applying a reset pulse to an SR latch, which causes the flip-flop to change state. A 1 is written by inverting the values of the bit lines. WL is then asserted and the value that is to be stored is latched in. This works because the bit line input drivers are designed to be much stronger than the relatively weak transistors in the cell itself, so they can easily override the previous state of the cross-coupled inverters. In practice, the access NMOS transistors M5 and M6 have to be stronger than either the bottom NMOS (M1, M3) or the top PMOS (M2, M4) transistors. This is easily obtained, as PMOS transistors are much weaker than NMOS transistors of the same size. Consequently, when one transistor pair (e.g.
M3 and M4) is only slightly overridden by the write process, the gate voltage of the opposite transistor pair (M1 and M2) is also changed. This means that the M1 and M2 transistors can be overridden more easily, and so on. Thus, the cross-coupled inverters magnify the writing process. RAM with an access time of 70 ns will output valid data within 70 ns from the time that the address lines are valid. Some SRAM cells have a page mode, where words of a page (256, 512, or 1024 words) can be read sequentially with a significantly shorter access time (typically approximately 30 ns). The page is selected by setting the upper address lines, and then words are sequentially read by stepping through the lower address lines. Over 30 years (from 1987 to 2017), with steadily decreasing transistor size (node size), the footprint-shrinking of the SRAM cell topology itself slowed down, making it harder to pack the cells more densely.[4] One of the reasons is that scaling down transistor size leads to SRAM reliability issues; careful cell design is necessary to achieve SRAM cells that do not suffer from stability problems, especially when they are being read.[32] With the introduction of the FinFET transistor implementation of SRAM cells, they started to suffer from increasing inefficiencies in cell sizes. Besides size issues, a significant challenge of modern SRAM cells is static current leakage. The current that flows from the positive supply (Vdd), through the cell, and to ground increases exponentially as the cell's temperature rises. This power drain occurs in both active and idle states, wasting energy without any useful work being done. Even though over the last 20 years the issue was partially addressed by the data retention voltage (DRV) technique, with reduction rates ranging from 5 to 10, the decrease in node size caused reduction rates to fall to about 2.[4] With these two issues it became more challenging to develop energy-efficient and dense SRAM memories, prompting the semiconductor industry to look for alternatives such as STT-MRAM and F-RAM.[4][33] In 2019, a French institute reported research on an IoT-oriented IC fabricated in a 28 nm process.[34] It was based on fully depleted silicon-on-insulator transistors (FD-SOI), had a two-ported SRAM memory rail for synchronous/asynchronous accesses, and used a selective virtual ground (SVGND). The study claimed to reach an ultra-low SVGND current in sleep and read modes by finely tuning its voltage.[34]
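As a quick check of the capacity arithmetic given earlier (m address lines and n data lines give 2^m words of n bits), here is a trivial C++ sketch for the "2k × 8" example.

#include <cstdio>

int main() {
    unsigned m = 11, n = 8;
    unsigned long words = 1UL << m;   // 2^11 = 2,048 words ("2k")
    unsigned long bits  = words * n;  // 16,384 bits = 2 KiB
    std::printf("%lu words x %u bits = %lu bits (%lu bytes)\n",
                words, n, bits, bits / 8);
}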
https://en.wikipedia.org/wiki/Static_random-access_memory
An open service interface definition (OSID) is a programmatic interface specification describing a service. These interfaces are specified by the Open Knowledge Initiative (OKI) to implement a service-oriented architecture (SOA) to achieve interoperability among applications across a varied base of underlying and changing technologies. To preserve the investment in software engineering, program logic is separated from underlying technologies through the use of software interfaces, each of which defines a contract between a service consumer and a service provider. This separation is the basis of any valid SOA. While some methods define the service interface boundary at a protocol or server level, OSIDs place the boundary at the application level to effectively insulate the consumer from protocols, server identities, and utility libraries that are in the domain of a service provider, resulting in software which is easier to develop, longer-lasting, and usable across a wider array of computing environments. OSIDs assist in software design and development by breaking up the problem space across service interface boundaries. Because network communication issues are addressed within a service provider and below the interface, there is no assumption that every service provider implements a remote communications protocol (though many do). OSIDs are also used for communication and coordination among the various components of complex software, which provides a means of organizing design and development activities for simplified project management. OSID providers (implementations) are often reused across a varied set of applications. Once software is made to understand the interface contract for a service, other compliant implementations may be used in its place. This achieves reusability at a high level (a service level) and also serves to easily scale software written for smaller, more dedicated purposes. An OSID provider implementation may be composed of an arbitrary number of other OSID providers. This layering technique is an obvious means of abstraction. When all the OSID providers implement the same service, this is called an adapter pattern. Adapter patterns are powerful techniques to federate, multiplex, or bridge different services contracting from the same interface without modification to the application.
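The adapter idea can be sketched generically; the following C++ fragment is illustrative only (OSIDs themselves are language-binding interface specifications, and RepositoryService here is a made-up stand-in, not an actual OSID).

#include <memory>
#include <string>
#include <vector>

struct RepositoryService {                       // the service contract
    virtual ~RepositoryService() = default;
    virtual std::vector<std::string> listAssets() = 0;
};

// Federating adapter: implements the same contract by delegating to any
// number of other providers, so applications need no changes.
struct FederatingRepository : RepositoryService {
    std::vector<std::unique_ptr<RepositoryService>> providers;
    std::vector<std::string> listAssets() override {
        std::vector<std::string> all;
        for (auto& p : providers) {
            auto some = p->listAssets();         // any compliant provider works
            all.insert(all.end(), some.begin(), some.end());
        }
        return all;
    }
};

int main() {
    FederatingRepository fed;                    // consumers see only the contract
    return static_cast<int>(fed.listAssets().size());
}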
https://en.wikipedia.org/wiki/Filing_Open_Service_Interface_Definition
The GNOME Core Applications (also known as Apps for GNOME) are a software suite of applications that are packaged as part of the standard free and open-source GNOME desktop environment. GNOME Core Applications have a consistent look and feel with the GNOME desktop, utilize the Adwaita design language, and tightly integrate with the GNOME desktop. GNOME Core Applications are developed and maintained through GNOME's official GitLab instance. A comprehensive list of these applications is available at apps.gnome.org. GNOME Circle is a collection of applications which have been built to extend the GNOME platform,[9] utilize GNOME technologies, and follow the GNOME human interface guidelines.[10] They are hosted, developed, and managed in GNOME's official development infrastructure, on gitlab.gnome.org. Developers who are using the GNOME platform can apply for inclusion in GNOME Circle. Benefits include promotional support and eligibility for project contributors to become GNOME Foundation members.[9] Circle applications are not part of the GNOME Core Applications.
https://en.wikipedia.org/wiki/GNOME_Core_Applications
The GNU Core Utilities or coreutils is a collection of GNU software that implements many standard, Unix-based shell commands. The utilities generally provide a POSIX-compliant interface when the POSIXLY_CORRECT environment variable is set, but otherwise offer a superset of the standard interface. For example, the utilities support long options and options after parameters: GNU ls accepts both "ls -l file" and "ls file -l", whereas strict POSIX option parsing stops at the first non-option argument. The environment variable also enables different functionality in BSD. Similar collections are available in the FOSS ecosystem, with a slightly different scope and focus (less functionality), or a different license: for example, BusyBox, which is licensed under GPL-2.0-only, and Toybox, which is licensed under 0BSD. The commands implemented by coreutils are listed below. Throughout this article, and as customary for Unix-based systems, the term file refers to all file system items, including regular files and special files such as directories. In 1990, David MacKenzie announced GNU fileutils.[3] In 1991, MacKenzie announced GNU shellutils and GNU textutils.[4][5] Moreover, Jim Meyering became the maintainer of the packages (known now as coreutils) and has remained so since.[6] In September 2002, the GNU coreutils were created by merging the earlier packages textutils, shellutils, and fileutils, along with some other miscellaneous utilities.[7] In July 2007, the license of the GNU coreutils was updated from GPL-2.0-or-later to GPL-3.0-or-later.[8]
https://en.wikipedia.org/wiki/GNU_Core_Utilities
A number of notable software packages were developed for, or are maintained by, the Free Software Foundation as part of the GNU Project. Summarising the situation in 2013, Richard Stallman identified nine aspects which generally apply to being a GNU package,[1] but he noted that exceptions and flexibility are possible when there are good reasons.[2] There is no official "base system" of the GNU operating system. GNU was designed to be a replacement for Unix operating systems of the 1980s and used the POSIX standards as a guide, but either definition would give a much larger "base system". The following list is instead a small set of GNU packages which seem closer to being "core" packages than being in any of the sections further down. Inclusions (such as plotutils) and exclusions (such as the C standard library) are of course debatable. The software listed below is generally useful to software developers and other computer programmers. The following libraries and software frameworks are often used in combination with the basic toolchain tools above to build software. (For libraries specifically designed to implement GUI desktops, see Graphical desktop.) The following packages provide compilers and interpreters for programming languages beyond those included in the GNU Compiler Collection. The software listed below is generally useful to users not specifically engaged in software development. The following packages provide GUI desktop environments, window managers, and associated graphics libraries.
https://en.wikipedia.org/wiki/List_of_GNU_packages
The KDE Gear is a set of applications and supporting libraries that are developed by the KDE community,[3] primarily used on Linux-based operating systems but mostly multiplatform, and released on a common release schedule. The bundle is composed of over 200 applications. Examples of prominent applications in the bundle include the file manager Dolphin, document viewer Okular, text editor Kate, archiving tool Ark, and terminal emulator Konsole.[4] Previously the KDE Applications Bundle was part of the KDE Software Compilation. Software that is not part of the official KDE Applications bundle can be found in the "Extragear" section. Such software is released on its own schedule and features its own version numbers. There are many standalone applications like Krita or Amarok that are mostly designed to be portable between operating systems and deployable independently of a particular workspace or desktop environment. Some brands consist of multiple applications, such as the Calligra Office Suite. There are several options for obtaining and installing KDE applications under Linux. Moreover, most of the KDE platform and applications have been ported to OpenBSD and NetBSD. The KDE SDK[5][6] is a collection of two dozen distinct applications and components, integrated both within the SDK and with other KDE applications (e.g. many work with Dolphin, the default file manager), that work with or are part of KDevelop,[7] and it is suitable for general-purpose software development in a range of languages. It provides the tooling used to engineer KDE, and is particularly rich in tools to support Qt and C++ development, as well as the more fashionable Rust, Python, etc. Various other packages are being built for testing on Android, although plans for some of the core parts of the SDK (e.g. Kate) have not been announced.[32] Other components listed alongside the bundle include KDebugSettings,[43] Dferry (a D-Bus library and tools),[45] and CuteHMI (open-source Human Machine Interface software written in C++ and QML), as well as a number of unmaintained applications.[67] The KDE Applications Bundle is released every four months and has bugfix releases in each intervening month. A date-based version scheme is used, composed of the year and month; a third digit is used for bugfix releases.[78] With the April 2021 release, the KDE Applications Bundle was renamed to KDE Gear.[3]
https://en.wikipedia.org/wiki/List_of_KDE_applications
The Unix philosophy, originated by Ken Thompson, is a set of cultural norms and philosophical approaches to minimalist, modular software development. It is based on the experience of leading developers of the Unix operating system. Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a "software tools" movement. Over time, the leading developers of Unix (and programs that ran on it) established a set of cultural norms for developing software; these norms became as important and influential as the technology of Unix itself, and have been termed the "Unix philosophy." The Unix philosophy emphasizes building simple, compact, clear, modular, and extensible code that can be easily maintained and repurposed by developers other than its creators. The Unix philosophy favors composability as opposed to monolithic design. The Unix philosophy was documented by Doug McIlroy[1] in the Bell System Technical Journal in 1978,[2] and was later summarized by Peter H. Salus in A Quarter-Century of Unix (1994).[1] In their Unix paper of 1974, Ritchie and Thompson quote a set of design considerations.[3] In their preface to the 1984 book The UNIX Programming Environment, Brian Kernighan and Rob Pike, both from Bell Labs, give a brief description of the Unix design and the Unix philosophy:[4] Even though the UNIX system introduces a number of innovative programs and techniques, no single program or idea makes it work well. Instead, what makes it effective is the approach to programming, a philosophy of using the computer. Although that philosophy can't be written down in a single sentence, at its heart is the idea that the power of a system comes more from the relationships among programs than from the programs themselves. Many UNIX programs do quite trivial things in isolation, but, combined with other programs, become general and useful tools. The authors further write that their goal for this book is "to communicate the UNIX programming philosophy."[4] In October 1984, Brian Kernighan and Rob Pike published a paper called Program Design in the UNIX Environment. In this paper, they criticize the accretion of program options and features found in some newer Unix systems such as 4.2BSD and System V, and explain the Unix philosophy of software tools, each performing one general function:[5] Much of the power of the UNIX operating system comes from a style of program design that makes programs easy to use and, more important, easy to combine with other programs. This style has been called the use of software tools, and depends more on how the programs fit into the programming environment and how they can be used with other programs than on how they are designed internally. [...] This style was based on the use of tools: using programs separately or in combination to get a job done, rather than doing it by hand, by monolithic self-sufficient subsystems, or by special-purpose, one-time programs. The authors contrast Unix tools such as cat with larger program suites used by other systems.[5] The design of cat is typical of most UNIX programs: it implements one simple but general function that can be used in many different applications (including many not envisioned by the original author). Other commands are used for other functions. For example, there are separate commands for file system tasks like renaming files, deleting them, or telling how big they are.
Other systems instead lump these into a single "file system" command with an internal structure and command language of its own. (The PIP file copy program[6] found on operating systems like CP/M or RSX-11 is an example.) That approach is not necessarily worse or better, but it is certainly against the UNIX philosophy. McIlroy, then head of the Bell Labs Computing Sciences Research Center and inventor of the Unix pipe,[7] summarized the Unix philosophy as follows:[1] This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface. Beyond these statements, he has also emphasized simplicity and minimalism in Unix programming:[1] The notion of "intricate and beautiful complexities" is almost an oxymoron. Unix programmers vie with each other for "simple and beautiful" honors — a point that's implicit in these rules, but is well worth making overt. Conversely, McIlroy has criticized modern Linux as having software bloat, remarking that "adoring admirers have fed Linux goodies to a disheartening state of obesity."[8] He contrasts this with the earlier approach taken at Bell Labs when developing and revising Research Unix:[9] Everything was small... and my heart sinks for Linux when I see the size of it. [...] The manual page, which really used to be a manual page, is now a small volume, with a thousand options... We used to sit around in the Unix Room saying, 'What can we throw out? Why is there this option?' It's often because there is some deficiency in the basic design — you didn't really hit the right design point. Instead of adding an option, think about what was forcing you to add that option. As stated by McIlroy, and generally accepted throughout the Unix community, Unix programs have always been expected to follow the concept of DOTADIW, or "Do One Thing And Do It Well." There are limited sources for the acronym DOTADIW on the Internet, but it is discussed at length during the development and packaging of new operating systems, especially in the Linux community. Patrick Volkerding, the project lead of Slackware Linux, invoked this design principle in a criticism of the systemd architecture, stating that "attempting to control services, sockets, devices, mounts, etc., all within one daemon flies in the face of the Unix concept of doing one thing and doing it well."[10] In his book The Art of Unix Programming, first published in 2003,[11] Eric S. Raymond (open-source advocate and programmer) summarizes the Unix philosophy as the KISS principle of "Keep it Simple, Stupid,"[12] and provides a series of design rules.[1] In 1994, Mike Gancarz, a member of Digital Equipment Corporation's Unix Engineering Group (UEG), published The UNIX Philosophy, based on his own Unix (Ultrix) port development at DEC in the 1980s and discussions with colleagues. He was also a member of the X Window System development team and the author of the Ultrix Window Manager (uwm). The book focuses on porting UNIX to different computers during the Unix wars of the 1980s and describes his philosophy that portability should be more important than the efficiency of using non-standard interfaces for hardware and graphics devices. He identifies nine basic "tenets" that he claims are important. Richard P. Gabriel suggests that a key advantage of Unix was that it embodied a design philosophy he termed "worse is better", in which simplicity of both the interface and the implementation are more important than any other attributes of the system—including correctness, consistency, and completeness. Gabriel argues that this design style has key evolutionary advantages, though he questions the quality of some results. For example, in the early days Unix used a monolithic kernel (which means that user processes carried out kernel system calls all on the user stack). If a signal was delivered to a process while it was blocked on long-term I/O in the kernel, the handling of the situation was unclear. The signal handler could not be executed when the process was in kernel mode, with sensitive kernel data on the stack. In a 1981 article entitled "The truth about Unix: The user interface is horrid"[13] published in Datamation, Don Norman criticized the design philosophy of Unix for its lack of concern for the user interface. Writing from his background in cognitive science and from the perspective of the then-current philosophy of cognitive engineering,[14] he focused on how end-users comprehend and form a personal cognitive model of systems—or, in the case of Unix, fail to understand, with the result that disastrous mistakes (such as losing an hour's worth of work) are all too easy. In the podcast On the Metal, game developer Jonathan Blow criticised the UNIX philosophy as being outdated.[15] He argued that tying together modular tools results in very inefficient programs. He says that the UNIX philosophy suffers from problems similar to those of microservices: without overall supervision, big architectures end up ineffective and inefficient.
https://en.wikipedia.org/wiki/Unix_philosophy
util-linux is a package of utilities distributed by the Linux Kernel Organization for use in a Linux operating system. A fork, util-linux-ng (with ng meaning "next generation"), was created when development stalled,[4] but as of January 2011 it has been renamed back to util-linux and is the official version of the package.[5] The package includes numerous utilities; some utilities formerly included were removed as of 1 July 2015.
https://en.wikipedia.org/wiki/Util-linux
A path (or file path, pathname, or similar) is a text string that uniquely specifies an item in a hierarchical file system. Generally, a path is composed of directory names, special directory specifiers, and optionally a filename, separated by delimiting text. The delimiter varies by operating system and in theory can be anything, but popular modern systems use the slash /, the backslash \, or the colon :. A path can be either relative or absolute. A relative path includes information that is relative to a particular directory, whereas an absolute path indicates a location relative to the system root directory and therefore does not depend on context the way a relative path does. Often, a relative path is relative to the working directory. For example, in the command ls f, f is a relative path to the file with that name in the working directory. Paths are used extensively in computer science to represent the directory/file relationships common in modern operating systems and are essential in the construction of uniform resource locators (URLs). Multics first introduced a hierarchical file system with directories (separated by ">") in the mid-1960s.[1] Around 1970, Unix introduced the slash character ("/") as its directory separator. Originally, MS-DOS did not support directories, but when the feature was added, using the Unix standard of the slash was not a good option, since many existing commands used the slash as the switch prefix, for example dir /w. (Unix, in contrast, uses the dash - as the switch prefix.) MS-DOS version 2.0 therefore used the backslash \ as the path delimiter, since it is similar to the slash but did not conflict with existing commands. This convention continued into Windows in its shell, Command Prompt. Eventually, PowerShell was introduced to Windows; it is slash-agnostic, allowing the use of either slash in a path.[2][3] (A table describing the syntax of paths in notable operating systems and shells, with per-system examples such as PowerShell drive, variable, and registry paths, OpenVMS file specifications, RISC OS paths, and Classic Mac OS paths, is not reproduced in this copy.) Japanese and Korean versions of Windows often display the '¥' character or the '₩' character instead of the directory separator; in such cases the code for a backslash is drawn as these characters. Very early versions of MS-DOS replaced the backslash with these glyphs on the display to make it possible to display them by programs that only understood 7-bit ASCII (other characters, such as the square brackets, were replaced as well; see ISO 646, Windows Codepage 932 (Japanese Shift JIS), and Codepage 949 (Korean)). Although even the first version of Windows supported the 8-bit ISO-8859-1 character set, which has the yen sign at U+00A5, and modern versions of Windows support Unicode, which has the won sign at U+20A9, much software will continue to display backslashes found in ASCII files this way to preserve backward compatibility.[8] macOS, as a derivative of UNIX, uses UNIX paths internally.
However, to preserve compatibility for software and familiarity for users, many portions of the GUI switch "/" typed by the user to ":" internally, and switch them back when displaying filenames (a ":" entered by the user is also changed into "/", but the inverse translation does not happen). Programming languages also use paths, e.g. when a file is opened. Most programming languages use the path representation of the underlying operating system. This direct access to the operating system paths can hinder the portability of programs. To support portable programs, Java uses File.separator to distinguish between / and \ separated paths. Seed7 takes a different approach to path representation: in Seed7, all paths use the Unix path convention, independent of the operating system, and under Windows a mapping takes place (e.g. the path /c/users is mapped to c:\users). A sketch of the analogous facility in C++ appears at the end of this article. The Microsoft universal naming convention (UNC), a.k.a. uniform naming convention, a.k.a. network path, specifies a syntax to describe the location of a network resource, such as a shared file, directory, or printer. A UNC path has the general form \\ComputerName\SharedFolder\Resource. Some Windows interfaces allow or require UNC syntax for WebDAV share access, rather than a URL. The UNC syntax is extended[9] with optional components to denote the use of SSL and a TCP/IP port number: a WebDAV URL of http[s]://HostName[:Port]/SharedFolder/Resource becomes \\HostName[@SSL][@Port]\SharedFolder\Resource. When viewed remotely, the "SharedFolder" may have a name different from what a program on the server sees when opening "\SharedFolder"; instead, the SharedFolder name consists of an arbitrary name assigned to the folder when defining its "sharing". Some Windows interfaces also accept the "long UNC" form \\?\UNC\ComputerName\SharedFolder\Resource. Windows uses several types of paths, including drive-letter paths, UNC paths, and "long" device paths prefixed with \\?\. In versions of Windows prior to Windows XP, only the APIs that accept "long" device paths could accept more than 260 characters. The shell in Windows XP and Windows Vista, explorer.exe, allows path names up to 248 characters long.[citation needed] Since UNCs start with two backslashes, and the backslash is also used for string escaping and in regular expressions, this can result in extreme cases of leaning toothpick syndrome: an escaped string for a regular expression matching a UNC begins with 8 backslashes – \\\\\\\\ – because the string and the regular expression both require escaping. This can be simplified by using raw strings, as in C#'s @"\\\\" or Python's r'\\\\', or regular expression literals, as in Perl's qr{\\\\}. Most Unix-like systems use a similar syntax.[13] POSIX allows treating a path beginning with two slashes in an implementation-defined manner,[14] though in other cases systems must treat multiple slashes as single slashes.[15] Many applications on Unix-like systems (for example, scp, rcp, and rsync) use resource definitions such as hostname:/directorypath/resource, or URI schemes with the service name (here 'smb'), such as smb://hostname/directorypath/resource. The following examples are for typical, Unix-based file systems. Given the working directory /home/mark/ containing the subdirectory bobapples, relative paths to the subdirectory include ./bobapples and bobapples, and the absolute path is /home/mark/bobapples. The command cd bobapples changes the working directory to the subdirectory. If the working directory were /home/jo, then the relative path ../mark/bobapples would specify the same subdirectory: the double dots .. indicate a move up the directory hierarchy one level to /home, and the rest indicates moving down to mark and then bobapples. The Windows API accepts the slash as a path delimiter. Unlike Unix, which always has a single root directory, a Windows file system has a root for each storage drive.
An absolute path includes a drive letter or uses the UNC format. A path starting with \\?\ does not support forward slashes.[4] A:\Temp\File.txt is an absolute path that specifies a file named File.txt in the directory Temp in the root of drive A:. C:..\File.txt is a relative path that specifies the file File.txt located in the parent of the working directory on drive C:. Folder\SubFolder\File.txt is a relative path that specifies the file File.txt in the directory SubFolder, which is in the directory Folder, which is in the working directory of the current drive. File.txt by itself is a relative path that specifies File.txt in the working directory. \\.\COM1 specifies the first serial port, COM1. A path with forward slashes may need to be surrounded by double quotes to disambiguate it from command-line switches: for example, dir /windows is invalid, but dir "/windows" is valid, and cd is more lenient, allowing cd /windows.
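A sketch of the portable-path approach in C++17's std::filesystem, which plays a role similar to Java's File.separator and Seed7's mapping described earlier; the output shown in the comments is for a Unix host.

#include <filesystem>
#include <iostream>
namespace fs = std::filesystem;

int main() {
    // Portable join: the library inserts the platform's preferred separator.
    fs::path p = fs::path("home") / "mark" / "bobapples";
    std::cout << p << '\n';                // "home/mark/bobapples" on Unix,
                                           // "home\\mark\\bobapples" on Windows
    std::cout << p.is_absolute() << '\n';  // 0: no root element, so relative
    // Resolve a relative path against the working directory.
    fs::path abs = fs::current_path() / p;
    std::cout << abs << '\n';
}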
https://en.wikipedia.org/wiki/Path_(computing)
A client portal is an electronic gateway to a collection of digital files, services, and information, accessible over the Internet through a web browser. The term is most often applied to a sharing mechanism between an organization and its clients.[1] The organization provides a secure entry point, typically via a website, that lets its clients log into an area where they can view, download, and upload private information. Client portals are most prevalently used for the secure exchange of financial information, usually when teams are working remotely. Privacy laws such as the Gramm–Leach–Bliley Act require that organizations encrypt their clients' personally identifiable information that is sent electronically online. Sharing such information through email does not comply with the Gramm–Leach–Bliley Act and other federal privacy laws.[2] Client portals allow users to centralise and virtualise their organization, usually to increase efficiency and communication. Other advantages of client portals, as distinguished from email, include increased file size limits and self-service access to a private repository. Client portals are often used in conjunction with workflow automation and document management to maximize work environment efficiency.[3] Client portals are prevalent in many industries and are not usually specific to any one sector. Owing to the nature of the industry, law firms make up a significant number of client portal users; lawyers are constantly collaborating and interacting with clients, which involves a significant amount of paperwork, so the file sharing functionality is imperative.[4] Some client portal features extend to invoicing, time logging, and expense tracking. These benefits lead a number of small and medium enterprises to use client portals to manage their business operations.[5] Security has been a hot topic surrounding client portals. Many client portals feature 256-bit SSL, similar to that of online banking,[citation needed] but businesses with sensitive information, such as medical data, sometimes express concerns about their data being in the cloud. Businesses still wishing to use client portals usually adopt private cloud solutions and host the software on-premises.
https://en.wikipedia.org/wiki/Client_portal
Network-attached storage (NAS) is a file-level computer data storage server connected to a computer network, providing data access to a heterogeneous group of clients. In this context, the term "NAS" can refer both to the technology and systems involved and to a specialized computer appliance built for such functionality – a NAS appliance or NAS box. NAS contrasts with block-level storage area networks (SAN).

A NAS device is optimised for serving files by its hardware, software, or configuration. It is often manufactured as a computer appliance – a purpose-built specialized computer. NAS systems are networked appliances that contain one or more storage drives, often arranged into logical, redundant storage containers or RAID. Network-attached storage typically provides access to files using network file-sharing protocols such as NFS, SMB, or AFP. From the mid-1990s, NAS devices began gaining popularity as a convenient method of sharing files among multiple computers, as well as a way to remove the responsibility of file serving from other servers on the network; a NAS can provide faster data access, easier administration, and simpler configuration than a general-purpose server used to serve files.[1]

Accompanying a NAS are purpose-built hard disk drives, which are functionally similar to non-NAS drives but may have different firmware, vibration tolerance, or power dissipation to make them more suitable for use in RAID arrays, a technology often used in NAS implementations.[2] For example, some NAS versions of drives support a command extension that allows extended error recovery to be disabled. In a non-RAID application, it may be important for a disk drive to go to great lengths to successfully read a problematic storage block, even if it takes several seconds. In an appropriately configured RAID array, a single bad block on a single drive can be recovered completely via the redundancy encoded across the RAID set (see the parity sketch below). If a drive spends several seconds executing extensive retries, the RAID controller might flag the drive as "down", whereas if it simply replied promptly that the block of data had a checksum error, the RAID controller would use the redundant data on the other drives to correct the error and continue without any problem.

A NAS unit is a computer connected to a network that provides only file-based data storage services to other devices on the network. Although it may technically be possible to run other software on a NAS unit, it is usually not designed to be a general-purpose server. For example, NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often using a browser.[3] A full-featured operating system is not needed on a NAS device, so often a stripped-down operating system is used. NAS systems contain one or more hard disk drives, often arranged into logical, redundant storage containers or RAID. NAS uses file-based protocols such as NFS (popular on UNIX systems), SMB (Server Message Block, used with Microsoft Windows systems), AFP (used with Apple Macintosh computers), or NCP (used with OES and Novell NetWare). NAS units rarely limit clients to a single protocol.

The key difference between direct-attached storage (DAS) and NAS is that DAS is simply an extension of an existing server and is not necessarily networked; as the name suggests, DAS is typically connected via a USB- or Thunderbolt-enabled cable. NAS is designed as an easy and self-contained solution for sharing files over the network.
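The parity sketch referred to above: a toy Python illustration (not any vendor's implementation) of how single-parity redundancy of the RAID 5 kind lets an array rebuild one unreadable block by XOR-ing the surviving blocks:

    def parity(blocks):
        """XOR equal-sized blocks together byte by byte."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data drives
    p = parity(data)                     # block stored on the parity drive

    # Drive 1 promptly reports its block unreadable: rebuild it from
    # the surviving data blocks plus the parity block.
    rebuilt = parity([data[0], data[2], p])
    assert rebuilt == data[1]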
Both DAS and NAS can potentially increase the availability of data by using RAID or clustering. Both NAS and DAS can have varying amounts of cache memory, which greatly affects performance. When comparing the use of NAS with the use of local (non-networked) DAS, the performance of NAS depends mainly on the speed of, and congestion on, the network. Most NAS solutions include the option to install a wide array of software applications to allow better configuration of the system or to add capabilities beyond storage (such as video surveillance, virtualization, and media serving). DAS is typically focused solely on data storage, though additional capabilities may be available from specific vendors.

NAS provides both storage and a file system. This is often contrasted with SAN (storage area network), which provides only block-based storage and leaves file system concerns on the "client" side. SAN protocols include Fibre Channel, iSCSI, ATA over Ethernet (AoE) and HyperSCSI. One way to loosely conceptualize the difference between a NAS and a SAN is that NAS appears to the client operating system as a file server (the client can map network drives to shares on that server), whereas a disk available through a SAN still appears to the client OS as a disk, visible in disk and volume management utilities (along with the client's local disks) and available to be formatted with a file system and mounted. Despite their differences, SAN and NAS are not mutually exclusive and may be combined as a SAN-NAS hybrid, offering both file-level protocols (NAS) and block-level protocols (SAN) from the same system[citation needed]. A shared disk file system can also be run on top of a SAN to provide file system services.

In the early 1980s, the "Newcastle Connection" by Brian Randell and his colleagues at Newcastle University demonstrated and developed remote file access across a set of UNIX machines.[4][5] Novell's NetWare server operating system and NCP protocol were released in 1983. Following the Newcastle Connection, Sun Microsystems' 1984 release of NFS allowed network servers to share their storage space with networked clients. 3Com and Microsoft would develop the LAN Manager software and protocol to further this new market. 3Com's 3Server and 3+Share software was the first purpose-built server for open systems, combining proprietary hardware, software, and multiple disks.

Inspired by the success of file servers from Novell, IBM, and Sun, several firms developed dedicated file servers. While 3Com was among the first firms to build a dedicated NAS for desktop operating systems, Auspex Systems was one of the first to develop a dedicated NFS server for use in the UNIX market. A group of Auspex engineers split away in the early 1990s to create the integrated NetApp FAS, which supported both the Windows SMB and the UNIX NFS protocols and had superior scalability and ease of deployment. This started the market for proprietary NAS devices, now led by NetApp and EMC Celerra.

Starting in the early 2000s, a series of startups emerged offering alternatives to single-filer solutions in the form of clustered NAS – Spinnaker Networks (acquired by NetApp in February 2004), Exanet (acquired by Dell in February 2010), Gluster (acquired by Red Hat in 2011), ONStor (acquired by LSI in 2009), IBRIX (acquired by HP), Isilon (acquired by EMC in November 2010), PolyServe (acquired by HP in 2007), and Panasas, to name a few.
In 2009, NAS vendors (notably CTERA Networks[6][7] and Netgear) began to introduce online backup solutions integrated into their NAS appliances, for online disaster recovery.[8][9]

By 2021, three major types of NAS solutions were offered, all with hybrid cloud models in which data can be stored both on-premises on the NAS and off-site, either on a separate NAS or with a public cloud service provider. The first type is focused on consumer needs, with lower-cost options that typically support 1–5 hot-plug hard drives. The second is focused on small-to-medium-sized businesses; these NAS solutions range from 2 to 24 or more hard drives and are typically offered in tower or rackmount form factors, with pricing that varies greatly depending on the processor, components, and overall features supported. The last type is geared toward enterprises or large businesses and is offered with more advanced software capabilities. NAS solutions are typically sold without hard drives installed, allowing the buyer (or IT department) to select the hard drive cost, size, and quality. The way manufacturers build NAS devices can also be classified into three broad types, ranging from general-purpose computers running NAS software, through embedded systems, to single-chip (ASIC) implementations.

NAS is useful for more than just general centralized storage provided to client computers in environments with large amounts of data. NAS can enable simpler and lower-cost systems such as load-balancing and fault-tolerant email and web server systems by providing storage services. A potential emerging market for NAS is the consumer market, where there are large amounts of multimedia data. Such consumer-market appliances are now commonly available. Unlike their rackmounted counterparts, they are generally packaged in smaller form factors. The price of NAS appliances has fallen sharply in recent[when?] years, offering flexible network-based storage to the home consumer market for little more than the cost of a regular USB or FireWire external hard disk. Many of these home consumer devices are built around ARM, x86 or MIPS processors running an embedded Linux operating system.

A purpose-built backup appliance (PBBA) is a kind of NAS intended for storing backup data. PBBAs typically include data deduplication, compression, RAID 6 or other redundant hardware components, and automated maintenance.[10][11][12][13] A PBBA may also be called a backup and disaster recovery appliance or simply a backup appliance.

Open-source NAS-oriented distributions of Linux and FreeBSD are available. These are designed to be easy to set up on commodity PC hardware, and are typically configured using a web browser. They can run from a virtual machine, Live CD, bootable USB flash drive (Live USB), or from one of the mounted hard drives. They run Samba (an SMB daemon), an NFS daemon, and FTP daemons, all freely available for those operating systems.

Network-attached secure disks (NASD) is a 1997–2001 research project of Carnegie Mellon University, with the goal of providing cost-effective, scalable storage bandwidth.[14] NASD reduces the overhead on the file server (file manager) by allowing storage devices to transfer data directly to clients. Most of the file manager's work is offloaded to the storage disk without integrating the file system policy into the disk. Most client operations, like reads and writes, go directly to the disks; less frequent operations, like authentication, go to the file manager. Disks transfer variable-length objects instead of fixed-size blocks to clients. The file manager provides a time-limited, cachable capability for clients to access the storage objects.
A file access from the client thus first obtains a time-limited capability from the file manager, after which reads and writes go directly to the storage devices.

A clustered NAS is a NAS that uses a distributed file system running simultaneously on multiple servers. The key difference between a clustered NAS and a traditional NAS is the ability to distribute[citation needed] (e.g. stripe) data and metadata across the cluster nodes or storage devices. A clustered NAS, like a traditional one, still provides unified access to the files from any of the cluster nodes, regardless of the actual location of the data.
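As a loose illustration of the striping idea (a deliberately simplified sketch, not any particular product's layout), a deterministic block-to-node mapping lets every node locate every block without consulting a central lookup table:

    NODES = ["node0", "node1", "node2"]
    BLOCK_SIZE = 4  # bytes; unrealistically small, for demonstration

    def node_for_block(index):
        """Round-robin striping: any node can compute this mapping."""
        return NODES[index % len(NODES)]

    data = b"The quick brown fox."
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for i, block in enumerate(blocks):
        print(i, node_for_block(i), block)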
https://en.wikipedia.org/wiki/Network-attached_storage
In computer science, resource contention is a conflict over access to a shared resource such as random-access memory, disk storage, cache memory, internal buses or external network devices. A resource experiencing ongoing contention can be described as oversubscribed.

Resolving resource contention problems is one of the basic functions of operating systems. Various low-level mechanisms can be used to aid this, including locks, semaphores, mutexes and queues. Other techniques that operating systems can apply include intelligent scheduling, application mapping decisions, and page coloring.[1][2] Access to resources is also sometimes regulated by queuing; in the case of computing time on a CPU, the controlling algorithm of the task queue is called a scheduler.

Failure to properly resolve resource contention problems may result in a number of problems, including deadlock, livelock, and thrashing. Resource contention results when multiple processes attempt to use the same shared resource. Access to memory areas is often controlled by semaphores, which allows a pathological situation called a deadlock, in which different threads or processes try to allocate resources already allocated by each other. A deadlock usually leads to a program becoming partially or completely unresponsive.

In recent years, research on contention has focused more on resources in the memory hierarchy, e.g., last-level caches, the front-side bus, and memory socket connections.[citation needed]
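A minimal sketch of lock-ordered acquisition using Python's threading module: acquiring the two locks in a fixed global order prevents the circular wait that produces deadlock (if the second thread instead took lock_b before lock_a, each thread could end up holding one lock while waiting forever for the other):

    import threading

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def worker(name):
        # Fixed ordering: always lock_a before lock_b, so no cycle
        # of threads waiting on each other can form.
        with lock_a:
            with lock_b:
                print(name, "acquired both locks")

    threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()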
https://en.wikipedia.org/wiki/Resource_contention
In computing, time-sharing is the concurrent sharing of a computing resource among many tasks or users by giving each task or user a small slice of processing time. This quick switching between tasks or users gives the illusion of simultaneous execution.[1][2] It enables multi-tasking by a single user or multiple-user sessions. Developed during the 1960s, its emergence as the prominent model of computing in the 1970s represented a major technological shift in the history of computing. By allowing many users to interact concurrently with a single computer, time-sharing dramatically lowered the cost of providing computing capability, made it possible for individuals and organizations to use a computer without owning one,[3] and promoted the interactive use of computers and the development of new interactive applications.

The earliest computers were extremely expensive devices, and very slow. Machines were typically dedicated to a particular set of tasks and operated by control panels, the operator manually entering small programs via switches one at a time. These programs might take hours to run. As computers increased in speed, run times dropped, and soon the time taken to start up the next program became a concern. Newer batch processing software and methodologies, including batch operating systems such as IBSYS (1960), decreased these "dead periods" by queuing up programs ready to run.[4] Comparatively inexpensive card punch or paper tape writers were used by programmers to write their programs "offline". Programs were submitted to the operations team, which scheduled them to be run. Output (generally printed) was returned to the programmer. The complete process might take days, during which time the programmer might never see the computer. Stanford students made a short film humorously critiquing this situation.[5]

The alternative of allowing the user to operate the computer directly was generally far too expensive to consider, because users might have long periods of entering code while the computer remained idle. This situation limited interactive development to those organizations that could afford to waste computing cycles: large universities for the most part.

The concept is claimed to have been first described by Robert Dodds in a letter he wrote in 1949, although he did not use the term time-sharing.[6] Later, John Backus also described the concept, but did not use the term, in the 1954 summer session at MIT.[7] Bob Bemer used the term time-sharing in his 1957 article "How to consider a computer" in Automatic Control Magazine, and it was reported the same year that he used the term in a presentation.[6][8][9] In a paper published in December 1958, W. F. Bauer wrote that "The computers would handle a number of problems concurrently. Organizations would have input-output equipment installed on their own premises and would buy time on the computer much the same way that the average household buys power and water from utility companies."[10]

Christopher Strachey, who became Oxford University's first professor of computation, filed a patent application in the United Kingdom for "time-sharing" in February 1959.[11][12] He gave a paper "Time Sharing in Large Fast Computers"[13] at the first UNESCO Information Processing Conference in Paris in June that year, where he passed the concept on to J. C. R. Licklider.[14] This paper was credited by the MIT Computation Center in 1963 as "the first paper on time-shared computers".[15]

The meaning of the term time-sharing has shifted from its original usage.
Up until 1960, time-sharing was used to refer to multiprogramming without multiple user sessions.[6] Later, it came to mean sharing a computer interactively among multiple users. In 1984 Christopher Strachey wrote that he considered the change in the meaning of the term time-sharing a source of confusion and not what he meant when he wrote his paper in 1959.[6]

There are also examples of systems which provided multiple user consoles, but only for specific applications; they were not general-purpose systems. These include SAGE (1958), SABRE (1960)[6] and PLATO II (1961), created by Donald Bitzer at a public demonstration at Robert Allerton Park near the University of Illinois in early 1961. Bitzer has long said that the PLATO project would have gotten the patent on time-sharing if only the University of Illinois had not lost the patent for two years.[16]

The first interactive, general-purpose time-sharing system usable for software development, the Compatible Time-Sharing System (CTSS), was initiated by John McCarthy at MIT, who wrote a memo proposing it in 1959.[17] Fernando J. Corbató led the development of the system, a prototype of which had been produced and tested by November 1961.[18] Philip M. Morse arranged for IBM to provide a series of their mainframe computers, starting with the IBM 704 and then the IBM 709 product line: the IBM 7090 and IBM 7094.[18] IBM loaned those mainframes to MIT at no cost, along with the staff to operate them, and also provided hardware modifications, mostly in the form of RPQs, as prior customers had already commissioned the modifications.[19][18] There were certain stipulations that governed MIT's use of the loaned IBM hardware: MIT could not charge for use of CTSS,[20] and MIT could only use the IBM computers for eight hours a day; another eight hours were available for other colleges and universities, and IBM could use their computers for the remaining eight hours, although there were some exceptions. In 1963, a second deployment of CTSS was installed on an IBM 7094 that MIT had purchased using ARPA money. This was used to support Multics development at Project MAC.[18]

JOSS began time-sharing service in January 1964.[21] The Dartmouth Time-Sharing System (DTSS) began service in March 1964.[22]

Throughout the late 1960s and the 1970s, computer terminals were multiplexed onto large institutional mainframe computers (centralized computing systems), which in many implementations sequentially polled the terminals to see whether any additional data was available or action was requested by the computer user. Later interconnection technology was interrupt-driven, and some of it used parallel data transfer technologies such as the IEEE 488 standard. Generally, computer terminals were utilized on college properties in much the same places as desktop computers or personal computers are found today. In the earliest days of personal computers, many were in fact used as particularly smart terminals for time-sharing systems.

DTSS's creators wrote in 1968 that "any response time which averages more than 10 seconds destroys the illusion of having one's own computer".[23] Conversely, timesharing users thought that their terminal was the computer,[24] and, unless they received a bill for using the service, rarely thought about how others shared the computer's resources, such as when a large JOSS application caused paging for all users. The JOSS Newsletter often asked users to reduce storage usage.[25] Time-sharing was nonetheless an efficient way to share a large computer. As of 1972[update], DTSS supported more than 100 simultaneous users.
Although more than 1,000 of the 19,503 jobs the system completed on "a particularly busy day" required ten seconds or more of computer time, DTSS was able to handle the jobs because 78% of jobs needed one second or less of computer time. About 75% of 3,197 users used their terminal for 30 minutes or less, during which they used less than four seconds of computer time. A football simulation, among the early mainframe games written for DTSS, used less than two seconds of computer time during the 15 minutes of real time for playing the game.[26]

With the rise of microcomputing in the early 1980s, time-sharing became less significant, because individual microprocessors were sufficiently inexpensive that a single person could have all the CPU time dedicated solely to their needs, even when idle. However, the Internet brought the general concept of time-sharing back into popularity. Expensive corporate server farms costing millions can host thousands of customers all sharing the same common resources. As with the early serial terminals, web sites operate primarily in bursts of activity followed by periods of idle time. This bursting nature permits the service to be used by many customers at once, usually with no perceptible communication delays, unless the servers start to get very busy.

In the 1960s, several companies started providing time-sharing services as service bureaus. Early systems used Teletype Model 33 KSR or ASR or Teletype Model 35 KSR or ASR machines in ASCII environments, and IBM Selectric typewriter-based terminals (especially the IBM 2741) with two different seven-bit codes.[27] They would connect to the central computer by dial-up Bell 103A modems or acoustically coupled modems operating at 10–15 characters per second. Later terminals and modems supported 30–120 characters per second. The time-sharing system would provide a complete operating environment, including a variety of programming language processors, various software packages, file storage, bulk printing, and off-line storage. Users were charged rent for the terminal, a charge for hours of connect time, a charge for seconds of CPU time, and a charge for kilobyte-months of disk storage.

Common systems used for time-sharing included the SDS 940, the PDP-10, the IBM 360, and the GE-600 series. Companies providing this service included GE's GEISCO, the IBM subsidiary The Service Bureau Corporation, Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), AL/COM, Bolt, Beranek, and Newman (BBN) and Time Sharing Ltd. in the UK.[28] By 1968, there were 32 such service bureaus serving the US National Institutes of Health (NIH) alone.[29] The Auerbach Guide to Timesharing (1973) lists 125 different timesharing services using equipment from Burroughs, CDC, DEC, HP, Honeywell, IBM, RCA, Univac, and XDS.[30][31]

In 1975, acting president of Prime Computer Ben F. Robelen told stockholders that "The biggest end-user market currently is time-sharing".[32] For DEC, for a while the second-largest computer company (after IBM), this was also true: their PDP-10 and IBM's 360/67[33] were widely used[34] by commercial timesharing services such as CompuServe, On-Line Systems, Inc. (OLS), Rapidata and Time Sharing Ltd.
The advent of the personal computer marked the beginning of the decline of time-sharing.[citation needed] The economics were such that computer time went from being an expensive resource that had to be shared to being so cheap that computers could be left to sit idle for long periods in order to be available as needed.[citation needed]

Although many time-sharing services simply closed, Rapidata[35][36] held on and became part of National Data Corporation.[37] It was still of sufficient interest in 1982 to be the focus of "A User's Guide to Statistics Programs: The Rapidata Timesharing System".[38] Even as revenue fell by 66%[39] and National Data subsequently developed its own problems, attempts were made to keep this timesharing business going.[40][41][42]

Beginning in 1964, the Multics operating system[43] was designed as a computing utility, modeled on the electrical or telephone utilities. In the 1970s, Ted Nelson's original "Xanadu" hypertext repository was envisioned as such a service.

Time-sharing was the first time that multiple processes, owned by different users, were running on a single machine, and these processes could interfere with one another.[44] For example, one process might alter shared resources which another process relied on, such as a variable stored in memory. When only one user was using the system, this would merely result in possibly wrong output; with multiple users, it might mean that some users got to see information they were not meant to see. To prevent this from happening, an operating system needed to enforce a set of policies that determined which privileges each process had. For example, the operating system might deny access to a certain variable by a certain process. The first international conference on computer security, held in London in 1971, was primarily driven by the time-sharing industry and its customers.[45]

Time-sharing in the form of shell accounts has been considered a risk.[46]
https://en.wikipedia.org/wiki/Time-sharing
The tragedy of the commons is the concept that, if many people enjoy unfettered access to a finite, valuable resource, such as a pasture, they will tend to overuse it and may end up destroying its value altogether. Even if some users exercised voluntary restraint, the other users would merely replace them, the predictable result being a "tragedy" for all. The concept has been widely discussed, and criticised, in economics, ecology and other sciences.

The metaphorical term is the title of a 1968 essay by ecologist Garrett Hardin. The concept itself did not originate with Hardin, but rather extends back to classical antiquity, being discussed by Aristotle. The principal concern of Hardin's essay was overpopulation of the planet. To prevent the inevitable tragedy, he argued, it was necessary to reject the principle (supposedly enshrined in the Universal Declaration of Human Rights) according to which every family has a right to choose the number of its offspring, and to replace it with "mutual coercion, mutually agreed upon".

Some scholars have argued that over-exploitation of the common resource is by no means inevitable, since the individuals concerned may be able to achieve mutual restraint by consensus. Others have contended that the metaphor is inapposite or inaccurate because its exemplar – unfettered access to common land – did not exist historically, the right to exploit common land being controlled by law. The work of Elinor Ostrom, who received the Nobel Prize in Economics, is seen by some economists as having refuted Hardin's claims.[1] Hardin's views on over-population have been criticised as simplistic[2] and racist.[3]

The concept of unrestricted-access resources becoming spent, where personal use does not incur personal expense, was discussed by the philosopher Aristotle,[4] who observed in his Politics that "That which is common to the greatest number has the least care bestowed upon it. Every one thinks chiefly of his own, hardly at all of the common interest; and only when he is himself concerned as an individual."[5]

In 1833, the English economist William Forster Lloyd published "Two Lectures on the Checks to Population",[6] a pamphlet that included a hypothetical example of over-use of a common resource.[7] This was the situation of cattle herders sharing a common parcel of land on which they were each entitled to let their cows graze. He postulated that if a herder put more than his allotted number of cattle on the common, overgrazing could result. For each additional animal, a herder receives the full additional benefit, while the whole group shares the resulting damage to the commons.[8] If all herders made this individually rational economic decision, the common could be depleted or even destroyed, to the detriment of all.[6]

Lloyd's pamphlet was written after the enclosure movement had eliminated the open-field system of common property as the standard model for land exploitation in England (though there remained, and still remain, millions of acres of "common land": see § Commons in historical reality).
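Lloyd's arithmetic can be made concrete with a toy calculation (all numbers here are invented purely for illustration): suppose ten herders share the pasture, each extra animal yields its owner one unit of benefit, and the grazing damage it causes costs the group two units in total.

    HERDERS = 10
    GAIN_PER_ANIMAL = 1.0          # private benefit of one more animal
    SHARED_LOSS_PER_ANIMAL = 2.0   # total damage done by that animal

    # The herder keeps the whole gain but bears only a tenth of the
    # damage, so adding the animal looks individually rational...
    private_payoff = GAIN_PER_ANIMAL - SHARED_LOSS_PER_ANIMAL / HERDERS
    print(private_payoff)   # 0.8 > 0

    # ...even though the group as a whole loses on every extra animal.
    group_payoff = GAIN_PER_ANIMAL - SHARED_LOSS_PER_ANIMAL
    print(group_payoff)     # -1.0 < 0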
Carl Dahlman and others have asserted that Lloyd's description was historically inaccurate, pointing to the fact that the system endured for hundreds of years without producing the disastrous effects he claimed.[9]

In 1968, ecologist Garrett Hardin explored this social dilemma in his article "The Tragedy of the Commons", published in the journal Science.[10] The essay derived its title from the pamphlet by Lloyd, which he cites, on the over-grazing of common land:[11]

Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit – in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.

Hardin discussed problems that cannot be solved by technical means, as distinct from those whose solutions require "a change only in the techniques of the natural sciences,[12] demanding little or nothing in the way of change in human values or ideas of morality". Hardin focused on human population growth, the use of the Earth's natural resources, and the welfare state.[13] Hardin argued that if individuals relied on themselves alone, and not on the relationship between society and man, then people would treat other people as resources, and the world population would keep growing.[14] Parents breeding excessively would leave fewer descendants because they would be unable to provide for each child adequately; such negative feedback is found in the animal kingdom.[13] Hardin said that if the children of improvident parents starved to death, if overbreeding was its own punishment, then there would be no public interest in controlling the breeding of families.[13]

Hardin blamed the welfare state for allowing the tragedy of the commons: where the state provides for children and supports overbreeding as a fundamental human right, a Malthusian catastrophe is inevitable. Consequently, in his article, Hardin lamented the following proposal from the United Nations:[15]

The Universal Declaration of Human Rights describes the family as the natural and fundamental unit of society. [Article 16][16] It follows that any choice and decision with regard to the size of the family must irrevocably rest with the family itself, and cannot be made by anyone else.

In addition, Hardin pointed out the problem of individuals acting in rational self-interest, claiming that if all members of a group used common resources for their own gain and with no regard for others, all resources would eventually be depleted. Overall, Hardin argued against relying on conscience as a means of policing commons, suggesting that this favors selfish individuals – often known as free riders – over those who are more altruistic.[18]

In the context of avoiding over-exploitation of common resources, Hardin concluded by restating Hegel's maxim (which was quoted by Engels), "freedom is the recognition of necessity".[19] He suggested that "freedom" completes the tragedy of the commons.
By recognizing resources as commons in the first place, and by recognizing that, as such, they require management, Hardin believed that humans "can preserve and nurture other and more precious freedoms".[15]

Hardin's article marked the mainstream acceptance of the term "commons" as used to connote a shared resource.[20] As Frank van Laerhoven and Elinor Ostrom have stated: "Prior to the publication of Hardin's article on the tragedy of the commons (1968), titles containing the words 'the commons', 'common pool resources', or 'common property' were very rare in the academic literature."[21] They go on to say: "In 2002, Barrett and Mabry conducted a major survey of biologists to determine which publications in the twentieth century had become classic books or benchmark publications in biology.[22][23] They report that Hardin's 1968 article was the one having the greatest career impact on biologists and is the most frequently cited".[24] However, van Laerhoven and Ostrom point out that Hardin's analysis was based on crucial misconceptions about the nature of common property systems. In systems theory, the commons problem is one of the ten most common system archetypes, and the tragedy-of-the-commons archetype can be illustrated using a causal loop diagram.[25]

Like Lloyd and Thomas Malthus before him, Hardin was primarily interested in the problem of human population growth. But in his essay, he also focused on the use of larger (though finite) resources such as the Earth's atmosphere and oceans, as well as pointing out the "negative commons" of pollution (i.e., instead of dealing with the deliberate privatization of a positive resource, a "negative commons" deals with the deliberate commonization of a negative cost, pollution).

As a metaphor, the tragedy of the commons should not be taken too literally. The "tragedy" is not meant in the word's conventional or theatric sense, nor as a condemnation of the processes that lead to it. Similarly, Hardin's use of "commons" has frequently been misunderstood, leading him to later remark that he should have titled his work "The Tragedy of the Unregulated Commons".[26][27]

The metaphor illustrates the argument that free access and unrestricted demand for a finite resource ultimately reduces the resource through over-exploitation, temporarily or permanently. This occurs because the benefits of exploitation accrue to individuals or groups, each of whom is motivated to maximize use of the resource to the point of becoming reliant on it, while the costs of the exploitation are borne by all those to whom the resource is available (which may be a wider class of individuals than those who are exploiting it). This, in turn, causes demand for the resource to increase, which causes the problem to snowball until the resource collapses (even if it retains a capacity to recover). The rate at which depletion of the resource is realized depends primarily on three factors: the number of users wanting to consume the common in question, the consumptive nature of their uses, and the relative robustness of the common.[28]

The same concept is sometimes called the "tragedy of the fishers", because catching too many fish before or during breeding could cause stocks to plummet.[29]

The tragedy of the commons can be considered in relation to environmental issues such as sustainability.[30] The commons dilemma stands as a model for a great variety of resource problems in society today, such as water, forests,[31] fish, and non-renewable energy sources such as oil, gas, and coal.
Hardin's model posits that the tragedy of the commons may emerge if individuals prioritize self-interest.[32] Government regulations have been instituted to avert resource degradation. However, extensive research spanning decades highlights instances where community-level resource management, operating independently of government intervention, has effectively overseen common resources. In the United States, some fishing communities employ a strategy wherein access to local fishing areas is restricted to accepted members, resembling a private, members-only club: membership is sustained through fee payments, and outsiders are met with resistance, making for a quasi-privatized system.[citation needed]

Another case study involves beavers in Canada, historically crucial for natives who, as stewards, organized to hunt them for food and commerce. Non-native trappers, motivated by fur prices, contributed to resource degradation, wresting control from the indigenous population. Conservation laws enacted in the 1930s in response to declining beaver populations led to the expulsion of trappers, legal acknowledgment of natives, and enforcement of customary laws. This intervention resulted in productive harvests by the 1950s.[33]

Situations exemplifying the "tragedy of the commons" include the overfishing and destruction of the Grand Banks of Newfoundland, the destruction of salmon runs on rivers that have been dammed[34] (most prominently in modern times on the Columbia River in the Northwest United States, and historically in North Atlantic rivers), and the devastation of the sturgeon fishery (in modern Russia, but historically in the United States as well). In terms of water supply, another example is the limited water available in arid regions (e.g., the area of the Aral Sea and the Los Angeles water supply system, especially at Mono Lake and Owens Lake).

In economics, an externality is a cost or benefit that affects a party who did not choose to incur it.[35][36] Negative externalities are a well-known feature of the "tragedy of the commons". For example, driving cars has many negative externalities, including pollution, carbon emissions, and traffic accidents; every time Person A gets in a car, it becomes more likely that Person Z will suffer in each of those areas.[37] Economists often urge the government to adopt policies that "internalize" an externality.[38]

The tragedy of the commons can also refer to the idea of open data.[39] Anonymised data are crucial for useful social research and therefore represent a public resource – better said, a common good – which is liable to exhaustion.[40] Some feel that the law should provide a safe haven for the dissemination of research data, since it can be argued that current data protection policies overburden valuable research without mitigating realistic risks.[41]

An expansive application of the concept can also be seen in Vyse's[42] analysis of differences between countries in their responses to the COVID-19 pandemic.[43] Vyse argues that those who defy public health recommendations can be thought of as spoiling a set of common goods[44] – "the economy, the healthcare system, and the very air we breathe"[45] – for all of us. In a similar vein, it has been argued that higher sickness and mortality rates from COVID-19 in individualistic cultures with less obligatory collectivism[46] are another instance of the "tragedy of the commons".

In the past two decades, scholars have been attempting to apply the concept of the tragedy of the commons to the digital environment.
However, scholars differ on some very basic notions inherent to the tragedy of the commons as applied digitally: the idea of finite resources and the extent of pollution.[21] On the other hand, there seems to be some agreement on the role of the digital divide and on how to solve a potential tragedy of the digital commons.[21]

Many digital resources have properties that make them vulnerable to the tragedy of the commons, including data,[47] virtual artifacts[48] and even limited user attention.[49] Closely related are the physical computational resources, such as CPU, RAM, and network bandwidth, that digital communities on shared servers rely upon and govern.[50] Some scholars argue that digital resources are infinite, and therefore immune to the tragedy of the commons, because downloading a file does not destroy the file in the digital environment,[51] and because files can be replicated and disseminated throughout the digital environment.[52] However, a digital resource can still be considered finite within the context of privacy laws and regulations that limit access to it.[53]

Finite digital resources can thus be digital commons. An example is a database that requires persistent maintenance, such as Wikipedia. As a non-profit, it survives on a network of people contributing to maintain a knowledge base without expectation of direct compensation. This digital resource can become depleted: Wikipedia survives only if it is both contributed to and used as a commons. The motivation for individuals to contribute reflects the theory, because if humans act only in their own immediate interest and no longer participate, the resource becomes inaccurate or depleted. Arguments surrounding the regulation and mitigation requirements for digital resources may come to mirror those for natural resources.[54][55]

This raises the question of whether access itself can be viewed as a finite resource in the context of a digital environment.
Some scholars argue this point, often pointing to a proxy for access that is more concrete and measurable.[56] One such proxy is bandwidth, which can become congested when too many people try to access the digital environment.[52][57] Alternatively, one can think of the network itself as a common resource which can be exhausted through overuse.[58] Therefore, when talking about resources running out in a digital environment, it could be more useful to think in terms of access to the digital environment being restricted in some way; this is called information entropy.[59]

In terms of pollution, some scholars look only at the pollution that occurs in the digital environment itself.[60] They argue that unrestricted use of digital resources can cause an overproduction of redundant data which creates noise and corrupts communication channels within the digital environment.[52] Others argue that the pollution caused by the overuse of digital resources also causes pollution in the physical environment.[61] They argue that unrestricted use of digital resources causes misinformation, fake news, crime, and terrorism, as well as problems of a different nature such as confusion, manipulation, insecurity, and loss of confidence.[62][63]

Scholars disagree on the particularities underlying the tragedy of the digital commons; however, there does seem to be some agreement on the cause and the solution.[21] The cause of the tragedy of the commons occurring in the digital environment is attributed by some scholars to the digital divide.[21] They argue that there is too large a focus on bridging this divide and providing unrestricted access to everyone; such a focus on increasing access without the necessary restrictions causes the exploitation of digital resources for individual self-interest, which underlies any tragedy of the commons.[52][57]

In terms of the solution, scholars agree that cooperation rather than regulation is the best way to mitigate a tragedy of the digital commons.[21] The digital world is not a closed system in which a central authority can regulate the users; as such, some scholars argue that voluntary cooperation must be fostered.[57] This could perhaps be done through a digital governance structure that motivates multiple stakeholders to engage and collaborate in the decision-making process.[63] Other scholars argue more in favor of formal or informal sets of rules, like a code of conduct, to promote ethical behaviour in the digital environment and foster trust.[52][64] As an alternative to managing relations between people, some scholars argue that it is access itself that needs to be properly managed, which includes the expansion of network capacity.[58]

Patents are effectively a limited-time exploitation monopoly given to inventors. Once the period has elapsed, the invention is in principle free to all, and many companies do indeed commercialize such products, by then market-proven. However, around 50% of all patent applications never reach successful commercialization, often due to immature levels of components or marketing failures by the innovators.
Scholars have suggested that since investment is often tied to patentability, such inactive patents form a rapidly growing category of underprivileged technologies and ideas that, under current market conditions, are effectively unavailable for use.[65] Thus, "Under the current system, people are encouraged to register new patents, and are discouraged from using publicly available patents."[65]: 765 The case might be particularly relevant to technologies that are relatively more environmentally or humanly damaging but also somewhat costlier than other alternatives developed contemporaneously.[65]: 766

More general examples of potential and actual tragedies, some alluded to by Hardin, span many other domains. A parallel was drawn in 2006 between the tragedy of the commons and the competing behaviour of parasites that, through acting selfishly, eventually diminish or destroy their common host.[79] The idea has also been applied to areas such as the evolution of virulence or sexual conflict, where males may fatally harm females when competing for matings.[80]

The idea of evolutionary suicide, where adaptation at the level of the individual causes the whole species or population to be driven extinct, can be seen as an extreme form of an evolutionary tragedy of the commons.[81][82] From an evolutionary point of view, the creation of the tragedy of the commons in pathogenic microbes may provide us with advanced therapeutic methods.[83][84]

Microbial ecology studies have also addressed whether resource availability modulates the cooperative or competitive behaviour of bacterial populations. When resource availability is high, bacterial populations become competitive and aggressive with each other, but when environmental resources are low, they tend to be cooperative and mutualistic.[85]

Ecological studies have hypothesised that competitive forces between animals are strongest in high carrying-capacity zones (i.e., near the Equator), where biodiversity is higher because of the abundance of natural resources. This abundance or excess of resources causes animal populations to adopt r reproductive strategies (many offspring, short gestation, less parental care, and a short time until sexual maturity), so competition is affordable for populations; competition could also select populations to exhibit r behaviour in a positive feedback regulation.[86] Contrarily, in low carrying-capacity zones (i.e., far from the Equator), where environmental conditions are harsh, K strategies are common (longer life expectancy and relatively fewer offspring, which tend to be altricial, requiring extensive care by parents when young) and populations tend to have cooperative or mutualistic behaviours. If populations behave competitively in hostile environmental conditions, they are mostly filtered out (die) by environmental selection; hence, populations in hostile conditions are selected to be cooperative.[87]

The effects of climate change have been given as a mass example of the tragedy of the commons.[88] This perspective proposes that the earth, being the commons, has suffered a depletion of natural resources without regard to the externalities – the impact on neighboring and future populations. The collective actions of individuals, organisations, and governments continue to contribute to environmental degradation. Mitigation of the long-term impacts and tipping points requires strict controls or other solutions, but these may come as a loss to different industries. The sustainability of population and industry growth is the subject of the climate change discussion.
The global commons of environmental resource consumption, as in the fossil fuel industry, has been theorised to be not realistically manageable, because irreversible thresholds of impact are crossed before the costs are entirely realised.[89]

The commons dilemma is a specific class of social dilemma in which people's short-term selfish interests are at odds with long-term group interests and the common good.[90] In academia, a range of related terminology has also been used as shorthand for the theory or aspects of it, including resource dilemma, take-some dilemma, and common-pool resource.[91] Commons dilemma researchers have studied conditions under which groups and communities are likely to under- or over-harvest common resources in both the laboratory and the field. Research programs have concentrated on a number of motivational, strategic, and structural factors that might be conducive to management of commons.[92] In game theory, which constructs mathematical models for individuals' behavior in strategic situations, the corresponding "game", developed by Hardin, is known as the Commonize Costs – Privatize Profits Game (CC–PP game).[93]

Kopelman, Weber, and Messick (2002), in a review of the experimental research on cooperation in commons dilemmas, identify nine classes of independent variables that influence cooperation in commons dilemmas: social motives, gender, payoff structure, uncertainty, power and status, group size, communication, causes, and frames.[94] They organize these classes and distinguish between psychological individual differences (stable personality traits) and situational factors (the environment).[95] Situational factors include both the task (social and decision structure) and the perception of the task.[96]

Empirical findings support the theoretical argument that the cultural group is a critical factor that needs to be studied in the context of situational variables.[97][98] Rather than behaving in line with economic incentives, people are likely to approach the decision to cooperate with an appropriateness framework.[99] An expanded, four-factor model of the logic of appropriateness[100][101] suggests that cooperation is better explained by the question: "What does a person like me (identity) do (rules) in a situation like this (recognition) given this culture (group)?"

Strategic factors also matter in commons dilemmas. One often-studied strategic factor is the order in which people take harvests from the resource. In simultaneous play, all people harvest at the same time, whereas in sequential play people harvest from the pool according to a predetermined sequence – first, second, third, etc.[102] There is a clear order effect in the latter games: the harvests of those who come first – the leaders – are higher than the harvests of those coming later – the followers.[103] The interpretation of this effect is that the first players feel entitled to take more. With sequential play, individuals adopt a first-come, first-served rule, whereas with simultaneous play people may adopt an equality rule.[104] Another strategic factor is the ability to build up reputations.[105] Research found that people take less from the common pool in public situations than in anonymous private situations.
Moreover, those who harvest less gain greater prestige and influence within their group.[106]

Hardin stated in his analysis of the tragedy of the commons that "Freedom in a commons brings ruin to all."[107] One of the proposed solutions is to appoint a leader to regulate access to the common.[108] Groups are more likely to endorse a leader when a common resource is being depleted and when managing a common resource is perceived as a difficult task.[109] Groups prefer leaders who are elected, democratic, and prototypical of the group, and these leader types are more successful in enforcing cooperation.[110] A general aversion to autocratic leadership exists, although it may be an effective solution, possibly because of the fear of power abuse and corruption.[111]

The provision of rewards and punishments may also be effective in preserving common resources.[112] Selective punishments for overuse can be effective in promoting domestic water and energy conservation – for example, through installing water and electricity meters in houses.[112] Selective rewards work, provided that they are open to everyone. An experimental carpool lane in the Netherlands failed because car commuters did not feel they were able to organize a carpool.[113] The rewards do not have to be tangible: in Canada, utilities considered putting "smiley faces" on the electricity bills of customers whose consumption was below the average of their neighborhood.[114]

Articulating solutions to the tragedy of the commons is one of the main problems of political philosophy.[115][116] In some situations, locals implement (often complex) social schemes that work well.[117] When these fail, there are many possible governmental solutions, such as privatization, internalizing the externalities, and regulation.[117]

Robert Axelrod contends that even self-interested individuals will often find ways to cooperate, because collective restraint serves both the collective and individual interests.[118] Anthropologist G. N. Appell criticised those who cited Hardin to "impos[e] their own economic and environmental rationality on other social systems of which they have incomplete understanding and knowledge."[119]

Political scientist Elinor Ostrom, who was awarded the 2009 Nobel Memorial Prize in Economic Sciences for her work on the issue, and others revisited Hardin's work in 1999.[120] They found the tragedy of the commons not as prevalent or as difficult to solve as Hardin maintained, since locals have often come up with solutions to the commons problem themselves.[121] For example, another group found that a commons in the Swiss Alps has been run by a collective of farmers there to their mutual and individual benefit since 1517, in spite of the farmers also having access to their own farmland.[122] In general, it is in the interest of the users of a commons to keep it functioning, and so complex social schemes are often invented by the users to maintain it at optimum efficiency.[123][124] Another prominent example is the deliberative process of granting legal personhood to a part of nature, for example a river, with the aim of preserving its water resources and preventing environmental degradation. This process entails that a river is regarded as a legal entity in its own right that can sue against environmental damage done to it, while being represented by an independently appointed guardian advisory group.[125] This has happened as a bottom-up process in New Zealand, where debates initiated by the Whanganui iwi resulted in legal personhood for the river.
The river is considered a living whole, stretching from mountain to sea, and includes not only its physical but also its metaphysical elements.[126]

Similarly, geographer Douglas L. Johnson remarks that many nomadic pastoralist societies of Africa and the Middle East in fact "balanced local stocking ratios against seasonal rangeland conditions in ways that were ecologically sound", reflecting a desire for lower risk rather than higher profit; in spite of this,[127] it was often the case that "the nomad was blamed for problems that were not of his own making and were a product of alien forces."[128] Finding precedent both in the opinions of earlier scholars such as Ibn Khaldun and in widespread antagonistic cultural attitudes towards non-sedentary peoples,[128] governments and international organizations have made use of Hardin's work to help justify restrictions on land access and the eventual sedentarization of pastoral nomads, despite its weak empirical basis.[129] Examining relations between historically nomadic Bedouin Arabs and the Syrian state in the 20th century, Dawn Chatty notes that "Hardin's argument was curiously accepted as the fundamental explanation for the degradation of the steppe land"[130] in development schemes for the arid interior of the country, downplaying the larger role of agricultural overexploitation in desertification as it melded with a prevailing nationalist ideology which viewed nomads as socially backward and economically harmful.[131]

Elinor Ostrom and her colleagues looked at how real-world communities manage communal resources, such as fisheries, land irrigation systems, and farmlands, and they identified a number of factors conducive to successful resource management.[132] One factor is the resource itself; resources with definable boundaries (e.g. land) can be preserved much more easily.[133] A second factor is resource dependence; there must be a perceptible threat of resource depletion, and it must be difficult to find substitutes.[134] The third is the presence of a community; small and stable populations with a thick social network and social norms promoting conservation do better.[123] A final condition is that there be appropriate community-based rules and procedures in place, with built-in incentives for responsible use and punishments for overuse.[135] When the commons is taken over by non-locals, those solutions can no longer be used.[121]

Many of the economic and social structures recommended by Ostrom coincide with those recommended by anarchists, particularly green anarchism.[136] The largest contemporary societies that use these organizational strategies are the Rebel Zapatista Autonomous Municipalities and the Autonomous Administration of North and East Syria, which have been heavily influenced by anarchism and other versions of libertarian and ecological socialism.

Individuals may act in a deliberate way to avoid consumption habits that deplete natural resources. This consciousness promotes the boycotting of products or brands and the seeking of alternative, more sustainable options.
Various well-established theories, such as the theories of kin selection and direct reciprocity, have limitations in explaining patterns of cooperation emerging between unrelated individuals and in non-repeatable short-term interactions.[137][138] Studies have shown that punishment is an efficacious motivator for cooperation among humans.[139][140] Altruistic punishment entails the presence of individuals who punish defectors from a cooperative agreement, although doing so is costly and provides no material gain. These punishments effectively resolve tragedy-of-the-commons scenarios by addressing both first-order free-rider problems (defectors free-riding on cooperators) and second-order free-rider problems (cooperators free-riding on the work of punishers).[141] Such results can only be witnessed when the punishment levels are high enough. While defectors are motivated by self-interest and cooperators feel morally obliged to practice self-restraint, punishers pursue this path when their emotions are clouded by annoyance and anger at free riders.[142]

Governmental solutions are used when the above conditions are not met (such as when a community is larger than the cohesion of its social network).[143] Examples of government regulation include population control, privatization, regulation, and internalizing the externalities.[144]

In Hardin's essay, he proposed that the solution to the problem of overpopulation must be based on "mutual coercion, mutually agreed upon" and result in "relinquishing the freedom to breed". Hardin discussed this topic further in a 1979 book, Managing the Commons, co-written with John A. Baden.[145][146] He framed this prescription in terms of needing to restrict the "reproductive right" in order to safeguard all other rights. Several countries have a variety of population control laws in place.[147]

In the context of United States policy debates, Hardin advocated restrictions on migration, particularly of non-whites. In a 1991 article, he stated:

Popular anthropology came along with its dogma that all cultures are equally good, equally valuable. To say otherwise was to be narrow-minded and prejudiced, to be guilty of the sin of ethnocentrism. In time, a sort of Marxist-Hegelian dialectic took charge of our thinking: ethnocentrism was replaced by what we can only call ethnofugalism—a romantic flight away from our own culture. That which was foreign and strange, particularly if persecuted, became the ideal. Black became beautiful, and prolonged bilingual education replaced naturalization. Immigration lawyers grew rich serving their clients by finding ways around the law of the land to which they (the lawyers) owe their allegiance. Idealistic religious groups, claiming loyalty to a higher power than the nation, openly shielded and transported illegal immigrants.[148]

One solution for some resources is to convert a common good into private property (Coase 1960), giving the new owner an incentive to enforce its sustainability.[149] Libertarians and classical liberals cite the tragedy of the commons as an example of what happens when Lockean property rights to homestead resources are prohibited by a government.[150] They argue that the solution to the tragedy of the commons is to allow individuals to take over the property rights of a resource, that is, to privatize it.[151] In England, this solution was attempted in the inclosure acts.
According toKarl MarxinDas Kapital, this solution leads to increasing numbers of people being pushed into smaller and smaller pockets of common land which has yet to be privatised, thereby merely displacing and exacerbating the problem while putting an increasing number of people in precarious situations.[152]Economic historianBob Allencoined the term "Engels' pause" to describe the period from 1790 to 1840, when British working-class wages stagnated and per-capitagross domestic productexpanded rapidly during a technological upheaval.[153] In a typical example, governmental regulations can limit the amount of a common good that is available for use by any individual.[154]Permit systems for extractive economic activities including mining, fishing, hunting, livestock raising, and timber extraction are examples of this approach.[155]Similarly, limits to pollution are examples of governmental intervention on behalf of the commons.[156]This idea is used by theUnited NationsMoon Treaty,Outer Space TreatyandLaw of the Sea Treatyas well as theUNESCOWorld Heritage Convention(treaty) which involves the international law principle that designates some areas or resources theCommon Heritage of Mankind.[157][158][159] German historianJoachim Radkauthought Hardin advocated strict management of common goods via increased government involvement or international regulation bodies.[160]An asserted impending "tragedy of the commons" is frequently warned of as a consequence of the adoption of policies which restrictprivate propertyand espouse expansion of public property.[161][162] Giving legal rights of personhood to objects in nature is another proposed solution. The idea of giving land a legal personality is intended to enable the democratic system of the rule of law to allow for prosecution, sanction, and reparation for damage to the earth.[163]For example, this has been put into practice in Ecuador in the form of a constitutional principle known as "Pacha Mama" (Mother Earth).[164] Privatization works when the person who owns the property (or rights of access to that property) pays the full price of its exploitation.[165]As discussed above, negative externalities (negative results, such as air or water pollution, that do not proportionately affect the user of the resource) are often a feature driving the tragedy of the commons.[166]Internalizing the externalities, in other words ensuring that the users of a resource pay for all of the consequences of its use, can provide an alternate solution between privatization and regulation.[167]One example is gasoline taxes, which are intended to include both the cost of road maintenance and of air pollution.[168]This solution can provide the flexibility of privatization while minimizing the amount of government oversight and overhead that is needed.[169] Another potential solution is co-shared communities, in which ownership is held partly by the government and partly by the community.[170]Ownership here refers to the planning, sharing, use, benefit, and supervision of the resources, ensuring that power is not concentrated in only one or two hands.[171]Since the involvement of multiple stakeholders is necessary, responsibilities can be shared across them according to their abilities and capacities in terms of human resources, infrastructure development ability, and legal expertise.[172] The status of common land in England, as mentioned in Lloyd's pamphlet, has been widely misunderstood.
Millions of acres were "common land", but this did not mean public land open to everybody, a popular fallacy. There was no such thing as ownerless land. Every parcel of "common" land had a legal owner, who was a private person or corporation. The owner was called thelord of the manor[173](which, likelandlord, was a legal term denoting ownership, not aristocratic status). It was true that there were local people, calledcommoners, defined as those who had a legal right to use his land for some purpose of their own, typically grazing their animals. Certainly their rights were strong, because the lord was not entitled to build on his own land, or fence off any part of it,[174][175]unless he could prove he had left enough pasture for the commoners.[176]But these individuals were not the general public at large: not everyone in the vicinity was a commoner.[173] Furthermore, the commoners' right to graze the lord's land with their animals was restricted by law – precisely in order to prevent overgrazing.[177]If overgrazing did nevertheless occur, which it sometimes did, it was because of incompetent or weak land management,[178]and not because of the pressure of an unlimited right to graze, which did not exist. Hence Christopher Rodgers said that "Hardin's influential thesis on the 'tragedy of the commons' ... has no application to common land in England and Wales. It is based on a false premise". Rodgers, professor of law atNewcastle University, added: Far from suffering a tragedy of the commons in Hardin's sense, common land ... was subject to common law principles of customary origin that promoted 'sustainable management'. These were expressed through property rights, in the form of qualifications on the resource use conferred by property entitlements, and were administered by local manor courts ... Moreover, the administration of customary rules by the manor courts represented a wholly different means for organising the management of common resources than the model posited by Hardin, which stresses the need for exclusive ownership by either individuals or government in order to promote the effective management of the resource.[179] Every productive unit ("manor") had a manorial court; without it, the manor ceased to exist.[180]Manorial courts could fine commoners, and the lord of the manor for that matter,[181]for breaches of customary law, e.g. grazing too many cattle on the land. Customary law varied locally. It could not be altered without the consent of the whole body of the commoners,[173]except by getting an Act of Parliament.[182] By the time of Lloyd's pamphlet (1833), the majority of land in England had beenenclosedand had ceased to be common land.[183]That which remained may not have been good agricultural land anyway,[184]or the best managed. Lloyd takes for granted that common lands were inferior[185]and argues his over-grazing theory to explain it. He does not examine other possible causes, e.g. that common land was difficult to drain, to keep disease-free, and to use for improved cattle breeding.[186] Likewise, Susan Jane Buck Cox argues that the common land example used to argue this economic concept is on very weak historical ground, and misrepresents what she terms the actual "triumph of the commons":[187]the successful common usage of land for many centuries.
She argues that social changes and agricultural innovation, and not the behaviour of the commoners, led to the demise of the commons.[188]In a similar vein, Carl Dahlman argues that commons were effectively managed to prevent overgrazing.[189] Hardin's work is criticised as historically inaccurate in failing to account for thedemographic transition,[190]and for failing to distinguish betweencommon propertyandopen accessresources.[191][192]Radical environmentalistDerrick Jensenclaims the tragedy of the commons is used aspropagandaforprivate ownership.[193][194]He says it has been used by the politicalright wingto hasten the final enclosure of the "common resources" ofthird worldand indigenous people worldwide, as a part of theWashington Consensus.[195]He argues that in real situations, those who abuse the commons would have been warned to desist, and if they failed, would have faced punitive sanctions. He says that rather than being called "The Tragedy of the Commons", it should be called "the Tragedy of the Failure of the Commons".[196] Marxist geographerDavid Harveyhas a similar criticism: "The dispossession of indigenous populations in North America by 'productive' colonists, for instance, was justified because indigenous populations did not produce value",[197]asking: "Why, for instance, do we not focus in Hardin's metaphor on theindividual ownershipof the cattle rather than on the pasture as a common?"[198] Some authors, likeYochai Benkler, say that with the rise of the Internet and digitalisation, an economic system based on the commons becomes possible again.[199]He wrote in his bookThe Wealth of Networksin 2006 that cheap computing power plus networks enable people to produce valuable products through non-commercial processes of interaction: "as human beings and as social beings, rather than as market actors through the price system".[200]He uses the termnetworkedinformation economyto refer to a "system of production, distribution, and consumption of information goods characterized by decentralized individual action carried out through widely distributed, nonmarket means that do not depend on market strategies".[201]He also coined the termcommons-based peer productionfor collaborative efforts based on sharing information.[202]Examples of commons-based peer production are Wikipedia,free and open source softwareandopen-source hardware.[203] The tragedy of the commons has served as a pretext for powerfulprivate companiesand/or governments to introduce regulatory agents oroutsourcingon less powerful entities or governments, for the exploitation of their natural resources.[204][205][206]Powerful companies and governments can easily corrupt and bribe less powerful institutions or governments to allow them to exploit or privatize their resources, which causes more concentration of power and wealth in powerful entities.[207]This phenomenon is known as theresource curse.[208] Other criticisms have focused on Hardin'sracist and eugenicist views, claiming that his arguments are directed towards forciblepopulation control, particularly forpeople of color.[209][210] In certain cases, exploiting a resource more may be a good thing. Carol M. Rose, in a 1986 article, discussed the concept of the "comedy of the commons", where the public property in question exhibits "increasing returns to scale" in usage (hence the phrase, "the more the merrier"),[211]in that the more people use the resource, the higher the benefit to each one. Rose cites as examples commerce and group recreational activities.
According to Rose, public resources with the "comedic" characteristic may suffer from under-investment rather than overuse.[212] A modern example presented by Garrett Richards inenvironmental studiesis that excessivecarbon emissionscan be tackled effectively only when mitigation efforts directly address the issue and are combined with collective effort from the world's economies.[213]Additionally, the more that nations are willing to collaborate and contribute resources, the higher the chances are for successful technological developments.[214]
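The incentive structure at the heart of Hardin's parable, and the effect of the altruistic punishment discussed above, can be illustrated with a minimal payoff sketch in Python. All the numbers below (herd size, benefit, damage, fine) are invented for illustration and are not drawn from the literature; the point is only that a privately profitable choice can be collectively ruinous until a sufficient fine flips the sign.

# Toy payoff model of Hardin's herders: the herder who adds an animal
# keeps the whole benefit but shares the pasture damage with everyone.
# All numbers are illustrative assumptions, not empirical values.

HERDERS = 10
PRIVATE_BENEFIT = 1.0    # gain to the herder who adds one animal
TOTAL_DAMAGE = 1.2       # pasture damage that animal causes, shared by all

def private_payoff(fine: float = 0.0) -> float:
    """Net gain to the one herder who adds an extra animal."""
    return PRIVATE_BENEFIT - TOTAL_DAMAGE / HERDERS - fine

def social_payoff() -> float:
    """Net gain to the community as a whole from that same animal."""
    return PRIVATE_BENEFIT - TOTAL_DAMAGE

if __name__ == "__main__":
    print(f"private gain, no punishment: {private_payoff():+.2f}")         # +0.88
    print(f"gain to the group:           {social_payoff():+.2f}")          # -0.20
    # An altruistic punisher who fines defectors 1.0 makes defection
    # unprofitable, addressing the first-order free rider problem.
    print(f"private gain, fined 1.0:     {private_payoff(fine=1.0):+.2f}")  # -0.12

With these assumptions, adding an animal nets the individual herder +0.88 while costing the group 0.20, so each herder keeps adding animals; a fine of 1.0 makes the same choice privately unprofitable.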
https://en.wikipedia.org/wiki/Tragedy_of_the_commons
Virtual private network(VPN) is anetwork architecturefor virtually extending aprivate network(i.e. anycomputer networkwhich is not the publicInternet) across one or multiple other networks which are either untrusted (as they are not controlled by the entity aiming to implement the VPN) or need to be isolated (thus making the lower network invisible or not directly usable).[1] A VPN can extend access to a private network to users who do not have direct access to it, such as an office network allowing secure access from off-site over the Internet.[2]This is achieved by creating a link betweencomputing devicesand computer networks by the use of networktunneling protocols. It is possible to make a VPN secure to use on top of an insecure communication medium (such as the public Internet) by choosing a tunneling protocol that implementsencryption. This kind of VPN implementation has the benefit of reduced costs and greater flexibility, compared to dedicated communication lines, forremote workers.[3] The termVPNis also used to refer toVPN serviceswhich sell access to their own private networks for internet access by connecting their customers using VPN tunneling protocols. The goal of a virtual private network is to allownetwork hoststo exchange network messages across another network to access private content, as if they were part of the same network. This is done in a way that makes crossing the intermediate network transparent to network applications. Users of a network connectivity service may consider such an intermediate network to be untrusted, since it is controlled by a third party, and might prefer a VPN implemented via protocols that protect the privacy of their communication. In the case of aProvider-provisioned VPN, the goal is not to protect against untrusted networks, but to isolate parts of the provider's own network infrastructure in virtual segments, in ways that make the contents of each segment private with respect to the others. This situation makes many other tunneling protocols suitable for building PPVPNs, even with weak or no security features (like inVLAN). How a VPN works depends on which technologies and protocols the VPN is built upon. Atunneling protocolis used to transfer the network messages from one side to the other. The goal is to take network messages from applications on one side of the tunnel and replay them on the other side. Applications do not need to be modified to let their messages pass through the VPN, because the virtual network or link is made available to the OS. Applications that do implement tunneling orproxyingfeatures for themselves without making such features available as a network interface are not considered VPN implementations, but may achieve the same or a similar end-user goal of exchanging private contents with a remote network. Virtual private network configurations can be classified depending on the purpose of the virtual extension, which makes different tunneling strategies appropriate for different topologies. In the context of site-to-site configurations, the termsintranetandextranetare used to describe two different use cases.[4]Anintranetsite-to-site VPN describes a configuration where the sites connected by the VPN belong to the same organization, whereas anextranetsite-to-site VPN joins sites belonging to multiple organizations. Typically, individuals interact with remote access VPNs, whereas businesses tend to make use of site-to-site connections forbusiness-to-business, cloud computing, andbranch officescenarios.
However, these technologies are not mutually exclusive and, in a significantly complex business network, may be combined. Apart from the general topology configuration, a VPN may also be characterized by several further aspects. A variety of VPN techniques exist to adapt to the above characteristics, each providing different network tunneling capabilities and different security model coverage or interpretation. Operating systemvendors and developers typically offer native support for a selection of VPN protocols. These are subject to change over the years, as some have been proven insecure with respect to modern requirements and expectations, and others have emerged. Desktop, smartphone and other end-user device operating systems usually support configuring remote access VPN from theirgraphicalorcommand-linetools.[5][6][7]However, due to the variety of often non-standard VPN protocols, there exist many third-party applications that implement additional protocols not yet or no longer natively supported by the OS. For instance,Androidlacked nativeIPsec IKEv2support until version 11,[8]and users needed to install third-party apps in order to connect to that kind of VPN. Conversely, Windows does not natively support plain IPsec IKEv1 remote access VPN configuration (commonly used byCiscoandFritz!BoxVPN solutions). Network appliances, such as firewalls, often include VPN gateway functionality for either remote access or site-to-site configurations. Their administration interfaces often facilitate setting up virtual private networks with a selection of supported protocols which have been integrated for an easy out-of-the-box setup. In some cases, like in the open source operating systems devoted to firewalls and network devices (likeOpenWrt,IPFire,PfSenseorOPNsense), it is possible to add support for additional VPN protocols by installing missing software components or third-party apps. Similarly, it is possible to get additional VPN configurations working, even if the OS does not facilitate the setup of that particular configuration, by manually editing internal configurations or by modifying the open source code of the OS itself. For instance, pfSense does not support remote access VPN configurations through its user interface where the OS runs on the remote host, while it provides comprehensive support for configuring it as the central VPN gateway of such a remote-access configuration scenario. Otherwise, commercial appliances with VPN features based on proprietary hardware/software platforms usually support a consistent VPN protocol across their products but do not open up for customizations outside the use cases they were intended to implement. This is often the case for appliances that rely on hardware acceleration of VPNs to provide higher throughput or support a larger number of simultaneously connected users. Whenever a VPN is intended to virtually extend a private network over a third-party untrusted medium, it is desirable that the chosen protocols match a security model that protects the confidentiality and integrity of the traffic and authenticates its endpoints. VPNs are not intended to make connecting users anonymous or unidentifiable from the untrusted medium network provider's perspective. If the VPN makes use of protocols that do provide those confidentiality features, their usage can increase userprivacyby making the untrusted medium owner unable to access the private data exchanged across the VPN. In order to prevent unauthorized users from accessing the VPN, most protocols can be implemented in ways that also enableauthenticationof connecting parties.
This protects the confidentiality, integrity and availability of the joined remote network. Tunnel endpoints can be authenticated in various ways during the VPN access initiation. Authentication can happen immediately on VPN initiation (e.g. by simple whitelisting of the endpoint IP address), or much later, after the tunnels are already active (e.g. with aweb captive portal). Remote-access VPNs, which are typically user-initiated, may usepasswords,biometrics,two-factor authentication, or othercryptographicmethods. People initiating this kind of VPN from unknown arbitrary network locations are also called "road warriors". In such cases, it is not possible to use originating network properties (e.g. IP addresses) as secure authentication factors, and stronger methods are needed. Site-to-site VPNs often use passwords (pre-shared keys) ordigital certificates. Depending on the VPN protocol, they may store the key to allow the VPN tunnel to establish automatically, without intervention from the administrator. A virtual private network is based on a tunneling protocol, and may possibly be combined with other network or application protocols providing extra capabilities and different security model coverage. Trusted VPNs do not use cryptographic tunneling; instead, they rely on the security of a single provider's network to protect the traffic.[24] From a security standpoint, a VPN must either trust the underlying delivery network or enforce security with a mechanism in the VPN itself. Unless the trusted delivery network runs among physically secure sites only, both trusted and secure models need an authentication mechanism for users to gain access to the VPN.[citation needed] Mobile virtual private networksare used in settings where an endpoint of the VPN is not fixed to a singleIP address, but instead roams across various networks such as data networks from cellular carriers or between multipleWi-Fiaccess points without dropping the secure VPN session or losing application sessions.[28]Mobile VPNs are widely used inpublic safety, where they give law-enforcement officers access to applications such ascomputer-assisted dispatchand criminal databases,[29]and in other organizations with similar requirements such asfield service managementand healthcare.[30][need quotation to verify] A limitation of traditional VPNs is that they are point-to-point connections and do not tend to supportbroadcast domains; therefore, communication, software, and networking, which are based onlayer 2and broadcastpackets, such asNetBIOSused inWindows networking, may not be fully supported as on alocal area network. Variants on VPN such asVirtual Private LAN Service(VPLS) and layer 2 tunneling protocols are designed to overcome this limitation.[31]
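As a toy illustration of the tunneling principle described above, the Python sketch below wraps an inner message (standing in for a private-network packet) in a made-up 4-byte header and carries it as opaque payload across an ordinary UDP transport. The header format and magic value are invented for this sketch; it shows encapsulation only, and a real VPN protocol adds encryption, authentication, and key exchange on top.

# Toy encapsulation: an inner packet is prefixed with an outer header
# and carried as opaque payload over another network (here, UDP on
# loopback). This is NOT a real VPN protocol -- no crypto is applied.
import socket
import struct

MAGIC = 0x5650  # arbitrary marker chosen for this toy header

def encapsulate(inner_packet: bytes) -> bytes:
    """Prefix a 4-byte header (marker + length) to the inner packet."""
    return struct.pack("!HH", MAGIC, len(inner_packet)) + inner_packet

def decapsulate(outer_payload: bytes) -> bytes:
    """Strip the toy header and recover the inner packet unchanged."""
    magic, length = struct.unpack("!HH", outer_payload[:4])
    assert magic == MAGIC, "not a tunnel frame"
    return outer_payload[4:4 + length]

if __name__ == "__main__":
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(encapsulate(b"private-net message"), receiver.getsockname())
    frame, _ = receiver.recvfrom(2048)
    print(decapsulate(frame))  # b'private-net message'

The intermediate network only ever sees outer UDP datagrams; the inner message crosses it unchanged, which is the transparency property the text describes.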
https://en.wikipedia.org/wiki/Virtual_private_network
Awebsite(also written as aweb site) is anyweb pagewhose content is identified by a commondomain nameand is published on at least oneweb server. Websites are typically dedicated to a particular topic or purpose, such as news, education, commerce, entertainment, orsocial media.Hyperlinkingbetween web pages guides the navigation of the site, which often starts with ahome page. Themost-visitedsites areGoogle,YouTube, andFacebook. All publicly-accessible websites collectively constitute theWorld Wide Web. There are also private websites that can only be accessed on aprivate network, such as a company's internal website for its employees.Userscan access websites on a range of devices, includingdesktops,laptops,tablets, andsmartphones. Theappused on these devices is called aweb browser. The World Wide Web (WWW) was created in 1989 by the British CERN computer scientistTim Berners-Lee.[1][2]On 30 April 1993,CERNannounced that the World Wide Web would be free to use for anyone, contributing to the immense growth of the Web.[3]Before the introduction of theHypertext Transfer Protocol(HTTP), other protocols such asFile Transfer Protocoland thegopher protocolwere used to retrieve individual files from a server. These protocols offer a simpledirectory structurein which the user navigates and where they choose files to download. Documents were most often presented as plain text files without formatting or were encoded inword processorformats. While "web site" was the original spelling (sometimes capitalized "Web site", since "Web" is a proper noun when referring to the World Wide Web), this variant has become rarely used, and "website" has become the standard spelling. All major style guides, such asThe Chicago Manual of Style[4]and theAP Stylebook,[5]have reflected this change. In February 2009,Netcraft, anInternet monitoringcompany that has tracked Web growth since 1995, reported that there were 215,675,903 websites with domain names and content on them in 2009, compared to just 19,732 websites in August 1995.[6]After reaching 1 billion websites in September 2014 (a milestone confirmed by Netcraft in its October 2014 Web Server Survey, and which Internet Live Stats was the first to announce, as attested by a tweet from the inventor of the World Wide Web himself, Tim Berners-Lee), the number of websites in the world subsequently declined, reverting to a level below 1 billion. This is due to the monthly fluctuations in the count of inactive websites. The number of websites continued growing to over 1 billion by March 2016 and has continued growing since.[7]The Netcraft Web Server Survey in January 2020 reported that there were 1,295,973,827 websites, and in April 2021 reported that there were 1,212,139,815 sites across 10,939,637 web-facing computers, and 264,469,666 unique domains.[8]An estimated 85 percent of all websites are inactive.[9] A static website is one that has Web pages stored on the server in the format that is sent to a client Web browser. It is primarily coded inHypertext Markup Language(HTML);Cascading Style Sheets(CSS) are used to control appearance beyond basic HTML. Images are commonly used to create the desired appearance and as part of the main content. Audio or video might also be considered "static" content if it plays automatically or is generally non-interactive. This type of website usually displays the same information to all visitors.
Similar to handing out a printed brochure to customers or clients, a static website will generally provide consistent, standard information for an extended period of time. Although the website owner may make updates periodically, it is a manual process to edit the text, photos, and other content and may require basic website design skills and software. Simple forms or marketing examples of websites, such as aclassic website, afive-page websiteor abrochure website, are often static websites, because they present pre-defined, static information to the user. This may include information about a company and its products and services through text, photos, animations, audio/video, and navigation menus. Static websites may still useserver side includes(SSI) as an editing convenience, such as sharing a common menu bar across many pages. As the site's behaviorto the readeris still static, this is not considered a dynamic site. A dynamic website is one that changes or customizes itself frequently and automatically. Server-side dynamic pages are generated "on the fly" by computer code that produces the HTML (CSS, which is responsible for appearance, is served as static files). There is a wide range of software systems, such asCGI,Java ServletsandJava Server Pages(JSP),Active Server PagesandColdFusion(CFML), that are available to generatedynamic Web systems and dynamic sites. VariousWeb application frameworksandWeb template systemsare available for general-useprogramming languageslikePerl,PHP,PythonandRubyto make it faster and easier to create complex dynamic websites. A site can display the current state of a dialogue between users, monitor a changing situation, or provide information in some way personalized to the requirements of the individual user. For example, when the front page of a news site is requested, the code running on the webserver might combine stored HTML fragments with news stories retrieved from adatabaseor another website viaRSSto produce a page that includes the latest information. Dynamic sites can be interactive by usingHTML forms, storing and reading backbrowser cookies, or by creating a series of pages that reflect the previous history of clicks. Another example of dynamic content is when a retail website with a database of media products allows a user to input a search request, e.g. for the keywordBeatles. In response, the content of the Web page will change from the way it looked before, displaying a list of Beatles products like CDs, DVDs, and books.Dynamic HTMLusesJavaScriptcode to instruct the Web browser how to interactively modify the page contents. One way to simulate a certain type of dynamic website while avoiding the performance loss of initiating the dynamic engine on a per-user or per-connection basis is to periodically automatically regenerate a large series of static pages. Early websites had only text, and soon after, images. Web browserplug-inswere then used to add audio, video, and interactivity (such as for arich Web applicationthat mirrors the complexity of a desktop application like a word processor). Examples of such plug-ins areMicrosoft Silverlight,Adobe Flash Player,Adobe Shockwave Player, andJava SE.HTML 5includes provisions for audio and video without plugins. JavaScript is also built into most modern web browsers, and allows website creators to send code to the web browser that instructs it how to interactively modify page content and communicate with the web server if needed.
The browser's internal representation of the content is known as theDocument Object Model(DOM). WebGL(Web Graphics Library) is a modern JavaScript API for rendering interactive 3D graphics without the use of plug-ins. It allows interactive content such as 3D animations, visualizations and video explainers to be presented to users in an intuitive way.[10] A 2010-era trend in websites called "responsive design" has improved the viewing experience by providing a layout adapted to the user's device. These websites change their layout according to the device or mobile platform, thus giving a rich user experience.[11] Websites can be divided into two broad categories: static and interactive. Interactive sites are part of theWeb 2.0community of sites and allow for interactivity between the site owner and site visitors or users. Static sites serve or capture information but do not allow engagement with the audience or users directly. Some websites are informational or produced by enthusiasts or for personal use or entertainment. Many websites aim to make money using one or more business models, including advertising, e-commerce, and subscription or freemium models.
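As a minimal sketch of the server-side dynamic page described earlier, the Python standard-library server below generates its HTML on the fly for each request, embedding a query-string parameter and the current server time. The handler name and port are arbitrary choices for this sketch; a production site would use a proper web framework rather than this toy.

# A minimal dynamic page: the HTML is generated per request rather
# than read from a static file. Stdlib-only; illustrative sketch.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import datetime

class DynamicPage(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        name = query.get("name", ["visitor"])[0]
        # The content differs per request: it embeds the query-string
        # parameter and the time at which the page was generated.
        body = (f"<html><body><h1>Hello, {name}!</h1>"
                f"<p>Generated at {datetime.datetime.now():%H:%M:%S}</p>"
                f"</body></html>").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Visit http://127.0.0.1:8000/?name=Ada to see a customized page.
    HTTPServer(("127.0.0.1", 8000), DynamicPage).serve_forever()

Visiting http://127.0.0.1:8000/?name=Ada returns a page that never existed as a file on the server, which is the defining property of a dynamic site.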
https://en.wikipedia.org/wiki/Website
Network-attached storage(NAS) is a file-levelcomputer data storageserver connected to acomputer networkproviding data access to aheterogeneousgroup of clients. In this context, the term "NAS" can refer both to the technology and systems involved, and to a specializedcomputer appliancedevice built for such functionality – aNAS applianceorNAS box. NAS contrasts withblock-levelstorage area networks(SAN). A NAS device is optimised forserving fileseither by its hardware, software, or configuration. It is often manufactured as acomputer appliance– a purpose-built specialized computer. NAS systems are networked appliances that contain one or morestorage drives, often arranged intological, redundant storage containers orRAID. Network-attached storage typically provides access to files using network file sharing protocols such asNFS,SMB, orAFP. From the mid-1990s, NAS devices began gaining popularity as a convenient method of sharing files among multiple computers, as well as a way to remove the responsibility of file serving from other servers on the network; by doing so, a NAS can provide faster data access, easier administration, and simpler configuration compared to using a general-purpose server to serve files.[1] Accompanying a NAS are purpose-builthard disk drives, which are functionally similar to non-NAS drives but may have different firmware, vibration tolerance, or power dissipation to make them more suitable for use in RAID arrays, a technology often used in NAS implementations.[2]For example, some NAS versions of drives support a command extension to allow extended error recovery to be disabled. In a non-RAID application, it may be important for a disk drive to go to great lengths to successfully read a problematic storage block, even if it takes several seconds. In an appropriately configured RAID array, a single bad block on a single drive can be recovered completely via the redundancy encoded across the RAID set. If a drive spends several seconds executing extensive retries, it might cause the RAID controller to flag the drive as "down", whereas if it simply replied promptly that the block of data had a checksum error, the RAID controller would use the redundant data on the other drives to correct the error and continue without any problem. A NAS unit is a computer connected to a network that provides only file-based data storage services to other devices on the network. Although it may technically be possible to run other software on a NAS unit, it is usually not designed to be a general-purpose server. For example, NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often using a browser.[3] A full-featured operating system is not needed on a NAS device, so often a stripped-down operating system is used. NAS systems contain one or more hard disk drives, often arranged into logical, redundant storage containers orRAID. NAS uses file-based protocols such asNFS(popular onUNIXsystems), SMB (Server Message Block) (used withMicrosoft Windowssystems),AFP(used withApple Macintoshcomputers), or NCP (used withOESandNovell NetWare). NAS units rarely limit clients to a single protocol. The key difference betweendirect-attached storage(DAS) and NAS is that DAS is simply an extension to an existing server and is not necessarily networked. As the name suggests, DAS typically is connected via aUSBorThunderbolt-enabled cable. NAS is designed as an easy and self-contained solution for sharing files over the network.
Both DAS and NAS can potentially increase availability of data by usingRAIDorclustering. Both NAS and DAS can have various amounts ofcache memory, which greatly affects performance. When comparing use of NAS with use of local (non-networked) DAS, the performance of NAS depends mainly on the speed of and congestion on the network. Most NAS solutions will include the option to install a wide array of software applications to allow better configuration of the system or to include other capabilities outside of storage (like video surveillance, virtualization, media, etc.). DAS typically is focused solely on data storage, but additional capabilities may be available depending on specific vendor options. NAS provides both storage and afile system. This is often contrasted with SAN (storage area network), which provides only block-based storage and leaves file system concerns on the "client" side. SAN protocols includeFibre Channel,iSCSI,ATA over Ethernet(AoE) andHyperSCSI. One way to loosely conceptualize the difference between a NAS and a SAN is that NAS appears to the client OS (operating system) as afile server(the client canmapnetwork drives to shares on that server) whereas a disk available through a SAN still appears to the client OS as a disk, visible in disk and volume management utilities (along with the client's local disks), and available to be formatted with a file system andmounted. Despite their differences, SAN and NAS are not mutually exclusive and may be combined as a SAN-NAS hybrid, offering both file-level protocols (NAS) and block-level protocols (SAN) from the same system[citation needed]. Ashared disk file systemcan also be run on top of a SAN to provide filesystem services. In the early 1980s, the "Newcastle Connection" byBrian Randelland his colleagues atNewcastle Universitydemonstrated and developed remote file access across a set of UNIX machines.[4][5]Novell'sNetWareserver operating system andNCPprotocol were released in 1983. Following the Newcastle Connection,Sun Microsystems' 1984 release ofNFSallowed network servers to share their storage space with networked clients. 3Com andMicrosoftwould develop theLAN Managersoftware and protocol to further this new market.3Com's3Serverand3+Sharesoftware was the first purpose-built server (including proprietary hardware, software, and multiple disks) for open systems servers. Inspired by the success offile serversfrom Novell,IBM, and Sun, several firms developed dedicated file servers. While 3Com was among the first firms to build a dedicated NAS for desktop operating systems,Auspex Systemswas one of the first to develop a dedicated NFS server for use in the UNIX market. A group of Auspex engineers split away in the early 1990s to create the integratedNetApp FAS, which supported both the Windows SMB and the UNIX NFS protocols and had superiorscalabilityand ease of deployment. This started the market forproprietaryNAS devices now led by NetApp and EMC Celerra. Starting in the early 2000s, a series of startups emerged offering alternative solutions to single filer solutions in the form of clustered NAS – Spinnaker Networks (acquired byNetAppin February 2004),Exanet(acquired byDellin February 2010),Gluster(acquired by RedHat in 2011), ONStor (acquired by LSI in 2009),IBRIX(acquired byHP),Isilon(acquired by EMC – November 2010), PolyServe (acquired byHPin 2007), andPanasas, to name a few.
In 2009, NAS vendors (notably CTERA Networks[6][7]andNetgear) began to introduceonline backupsolutions integrated in their NAS appliances, for online disaster recovery.[8][9] By 2021, three major types of NAS solutions were offered (all with hybrid cloud models where data can be stored both on-premises on the NAS and off site on a separate NAS or through a public cloud service provider). The first type of NAS is focused on consumer needs with lower-cost options that typically support 1–5 hot plug hard drives. The second is focused on small-to-medium-sized businesses – these NAS solutions range from 2–24+ hard drives and are typically offered in tower or rackmount form factors. Pricing can vary greatly depending on the processor, components, and overall features supported. The last type is geared toward enterprises or large businesses and is offered with more advanced software capabilities. NAS solutions are typically sold without hard drives installed to allow the buyer (or IT departments) to select the hard drive cost, size, and quality. The way manufacturers make NAS devices can be classified into three types. NAS is useful for more than just general centralized storage provided to client computers in environments with large amounts of data. NAS can enable simpler and lower-cost systems such asload-balancingand fault-tolerant email and web server systems by providing storage services. A potential emerging market for NAS is the consumer market, where there are large amounts of multimedia data. Such consumer market appliances are now commonly available. Unlike theirrackmountedcounterparts, they are generally packaged in smaller form factors. The price of NAS appliances has fallen sharply in recent[when?]years, offering flexible network-based storage to the home consumer market for little more than the cost of a regularUSBorFireWireexternal hard disk. Many of these home consumer devices are built aroundARM, x86 orMIPSprocessors running anembedded Linuxoperating system. Apurpose-built backup appliance(PBBA) is a kind of NAS intended for storingbackupdata. PBBAs typically includedata deduplication, compression,RAID 6or other redundant hardware components, and automated maintenance.[10][11][12][13]A PBBA may also be called abackup and disaster recovery applianceor simply abackupappliance. Open-sourceNAS-oriented distributions ofLinuxandFreeBSDare available. These are designed to be easy to set up on commodity PC hardware, and are typically configured using a web browser. They can run from avirtual machine,Live CD,bootableUSB flash drive (Live USB), or from one of the mounted hard drives. They runSamba(anSMBdaemon), anNFSdaemon, andFTPdaemons, which are freely available for those operating systems. Network-attached secure disks(NASD) is a 1997–2001 research project ofCarnegie Mellon University, with the goal of providing cost-effective scalablestoragebandwidth.[14]NASD reduces the overhead on the fileserver(file manager) by allowing storage devices to transfer data directly toclients. Most of the file manager's work is offloaded to the storage disk without integrating the file system policy into the disk. Most client operations like Read/Write go directly to the disks; less frequent operations like authentication go to the file manager. Disks transfer variable-length objects instead of fixed-size blocks to clients. The file manager provides a time-limited cacheable capability for clients to access the storage objects.
A file access from the client to the disks follows a defined sequence. Aclustered NASis a NAS that uses a distributed file system running simultaneously on multiple servers. The key difference between a clustered and a traditional NAS is the ability to distribute[citation needed](e.g. stripe) data andmetadataacross the cluster nodes or storage devices. Clustered NAS, like a traditional one, still provides unified access to the files from any of the cluster nodes, regardless of the actual location of the data.
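The redundancy mechanism mentioned earlier, in which a RAID controller rebuilds an unreadable block from the redundant data on the other drives, can be sketched with simple XOR parity (the scheme used by RAID 5). The four-byte blocks below are toy values chosen for illustration; real arrays add parity rotation, write ordering, and caching.

# XOR-parity sketch: the parity block is the XOR of the data blocks in
# a stripe, so any one lost block can be rebuilt from the survivors.
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
parity = xor_blocks(stripe)            # stored on a fourth drive

# Drive 1 reports an unreadable block: rebuild it from the others.
survivors = [stripe[0], stripe[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == stripe[1]
print(rebuilt)  # b'BBBB'

Because XOR is its own inverse, combining the surviving blocks with the parity block recovers the missing data exactly, which is why a controller can answer the read promptly instead of waiting on lengthy drive-level retries.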
https://en.wikipedia.org/wiki/Clustered_NAS
Astorage area network(SAN) orstorage networkis acomputer networkwhich provides access to consolidated,block-level data storage. SANs are primarily used to accessdata storagedevices, such asdisk arraysandtape librariesfromserversso that the devices appear to theoperating systemasdirect-attached storage. A SAN typically is a dedicated network of storage devices not accessible through thelocal area network(LAN). Although a SAN provides only block-level access,file systemsbuilt on top of SANs do provide file-level access and are known asshared-disk file systems. Newer SAN configurations enable hybrid SAN[1]and allow traditional block storage that appears as local storage but also object storage for web services through APIs. Storage area networks (SANs) are sometimes referred to asnetwork behind the servers[2]: 11and historically developed out of acentralized data storagemodel, but with its owndata network. A SAN is, at its simplest, a dedicated network for data storage. In addition to storing data, SANs allow for the automaticbackupof data, and the monitoring of the storage as well as the backup process.[3]: 16–17A SAN is a combination of hardware and software.[3]: 9It grew out of data-centricmainframe architectures, where clients in a network can connect to severalserversthat store different types of data.[3]: 11To scale storage capacities as the volumes of data grew,direct-attached storage(DAS) was developed, wheredisk arraysorjust a bunch of disks(JBODs) were attached to servers. In this architecture, storage devices can be added to increase storage capacity. However, the server through which the storage devices are accessed is asingle point of failure, and a large part of the LAN network bandwidth is used for accessing, storing and backing up data. To solve the single point of failure issue, adirect-attached shared storagearchitecture was implemented, where several servers could access the same storage device.[3]: 16–17 DAS was the first network storage system and is still widely used where data storage requirements are not very high. Out of it developed thenetwork-attached storage(NAS) architecture, where one or more dedicatedfile serveror storage devices are made available in a LAN.[3]: 18Therefore, the transfer of data, particularly for backup, still takes place over the existing LAN. If more than a terabyte of data was stored at any one time, LAN bandwidth became a bottleneck.[3]: 21–22Therefore, SANs were developed, where a dedicated storage network was attached to the LAN, and terabytes of data are transferred over a dedicated high speed and bandwidth network. Within the SAN, storage devices are interconnected. Transfer of data between storage devices, such as for backup, happens behind the servers and is meant to be transparent.[3]: 22In a NAS architecture data is transferred using theTCPandIPprotocols overEthernet. Distinct protocols were developed for SANs, such asFibre Channel,iSCSI,Infiniband. Therefore, SANs often have their own network and storage devices, which have to be bought, installed, and configured. This makes SANs inherently more expensive than NAS architectures.[3]: 29 SANs have their own networking devices, such as SAN switches. To access the SAN, so-called SAN servers are used, which in turn connect to SANhost adapters. Within the SAN, a range of data storage devices may be interconnected, such as SAN-capable disk arrays, JBODs andtape libraries.[3]: 32, 35–36 Servers that allow access to the SAN and its storage devices are said to form thehost layerof the SAN. 
Such servers have host adapters, which are cards that attach to slots on the servermotherboard(usually PCI slots) and run with a correspondingfirmwareanddevice driver. Through the host adapters theoperating systemof the server can communicate with the storage devices in the SAN.[4]: 26 In Fibre Channel deployments, a cable connects to the host adapter through thegigabit interface converter(GBIC). GBICs are also used on switches and storage devices within the SAN, and they convert digital bits into light impulses that can then be transmitted over the Fibre Channel cables. Conversely, the GBIC converts incoming light impulses back into digital bits. The predecessor of the GBIC was called the gigabit link module (GLM).[4]: 27 The fabric layer consists of SAN networking devices that includeSAN switches, routers, protocol bridges, gateway devices, and cables. SAN network devices move data within the SAN, or between aninitiator, such as an HBA port of a server, and atarget, such as the port of a storage device. When SANs were first built, hubs were the only devices that were Fibre Channel capable, but Fibre Channel switches were developed and hubs are now rarely found in SANs. Switches have the advantage over hubs that they allow all attached devices to communicate simultaneously, as a switch provides a dedicated link to connect all its ports with one another.[4]: 34When SANs were first built, Fibre Channel had to be implemented over copper cables; these days, multimodeoptical fibre cablesare used in SANs.[4]: 40 SANs are usually built with redundancy, so SAN switches are connected with redundant links. SAN switches connect the servers with the storage devices and are typically non-blocking, allowing transmission of data across all attached wires at the same time.[4]: 29For redundancy purposes, SAN switches are set up in ameshed topology. A single SAN switch can have as few as 8 ports and up to 32 ports with modular extensions.[4]: 35So-called director-class switches can have as many as 128 ports.[4]: 36 In switched SANs, the Fibre Channel switched fabric protocol FC-SW-6 is used, under which every device in the SAN has a hardcodedWorld Wide Name(WWN) address in the host bus adapter (HBA). If a device is connected to the SAN, its WWN is registered in the SAN switch name server.[4]: 47In place of a WWN, or worldwide port name (WWPN), SAN Fibre Channel storage device vendors may also hardcode a worldwide node name (WWNN). The ports of storage devices often have a WWN starting with 5, while the bus adapters of servers start with 10 or 21.[4]: 47 The serializedSmall Computer Systems Interface(SCSI) protocol is often used on top of the Fibre Channel switched fabric protocol in servers and SAN storage devices. TheInternet Small Computer Systems Interface(iSCSI) over Ethernet and theInfinibandprotocols may also be found implemented in SANs, but are often bridged into the Fibre Channel SAN. However, Infiniband and iSCSI storage devices, in particular disk arrays, are available.[4]: 47–48 The various storage devices in a SAN are said to form thestorage layer. It can include a variety ofhard diskandmagnetic tapedevices that store data. In SANs, disk arrays are joined through aRAID, which makes a lot of hard disks look and perform like one big storage device.[4]: 48Every storage device, or even a partition on that storage device, has alogical unit number(LUN) assigned to it. This is a unique number within the SAN.
Every node in the SAN, be it a server or another storage device, can access the storage by referencing the LUN. The LUNs allow for the storage capacity of a SAN to be segmented and for the implementation of access controls. A particular server, or a group of servers, may, for example, be only given access to a particular part of the SAN storage layer, in the form of LUNs. When a storage device receives a request to read or write data, it will check its access list to establish whether the node, identified by its LUN, is allowed to access the storage area, also identified by a LUN.[4]: 148–149LUN masking is a technique whereby the host bus adapter and the SAN software of a server restrict the LUNs for which commands are accepted. In doing so, LUNs that should never be accessed by the server are masked.[4]: 354Another method to restrict server access to particular SAN storage devices is fabric-based access control, or zoning, which is enforced by the SAN networking devices and servers. Under zoning, server access is restricted to storage devices that are in a particular SAN zone.[5] A mapping layer to other protocols is used to form a network. Storage networks may also be built usingSerial Attached SCSI(SAS) andSerial ATA(SATA) technologies. SAS evolved from SCSI direct-attached storage. SATA evolved fromParallel ATAdirect-attached storage. SAS and SATA devices can be networked usingSAS expanders. TheStorage Networking Industry Association(SNIA) defines a SAN as "a network whose primary purpose is the transfer of data between computer systems and storage elements". But a SAN does not just consist of a communication infrastructure, it also has a softwaremanagement layer. This software organizes the servers, storage devices, and the network so that data can be transferred and stored. Because a SAN does not usedirect attached storage(DAS), the storage devices in the SAN are not owned and managed by a server.[2]: 11A SAN allows a server to access a large data storage capacity and this storage capacity may also be accessible by other servers.[2]: 12Moreover, SAN software must ensure that data is directly moved between storage devices within the SAN, with minimal server intervention.[2]: 13 SAN management software is installed on one or more servers and management clients on the storage devices. Two approaches have developed in SAN management software: in-band and out-of-band management. In-band means that management data between server and storage devices is transmitted on the same network as the storage data, while out-of-band means that management data is transmitted over dedicated links.[2]: 174SAN management software will collect management data from all storage devices in the storage layer. This includes information on read and write failures, storage capacity bottlenecks and failure of storage devices. SAN management software may integrate with theSimple Network Management Protocol(SNMP).[2]: 176 In 1999, the Common Information Model (CIM), an open standard, was introduced for managing storage devices and providing interoperability. The web-based version of CIM is called Web-Based Enterprise Management (WBEM) and defines SAN storage device objects and process transactions. Use of these protocols involves a CIM object manager (CIMOM) to manage objects and interactions, and allows for the central management of SAN storage devices.
Basic device management for SANs can also be achieved through theStorage Management Interface Specification(SMI-S), where CIM objects and processes are registered in a directory. Software applications and subsystems can then draw on this directory.[2]: 177Management software applications are also available to configure SAN storage devices, allowing, for example, the configuration of zones and LUNs.[2]: 178 Ultimately, SAN networking and storage devices are available from many vendors and every SAN vendor has its own management and configuration software. Common management in SANs that include devices from different vendors is only possible if vendors make theapplication programming interface(API) for their devices available to other vendors. In such cases, upper-level SAN management software can manage the SAN devices from other vendors.[2]: 180 In a SAN, data is transferred, stored and accessed on a block level. As such, a SAN does not providedata fileabstraction, onlyblock-level storageand operations. Server operating systems maintain their ownfile systemson their own dedicated, non-shared LUNs on the SAN, as though they were local to themselves. If multiple systems were simply to attempt to share a LUN, these would interfere with each other and quickly corrupt the data. Any planned sharing of data on different computers within a LUN requires software.File systemshave been developed to work with SAN software to providefile-level access. These are known asshared-disk file systems. Video editingsystems require very high data transfer rates and very low latency. SANs in media and entertainment are often referred to as serverless due to the nature of the configuration, which places the video workflow (ingest, editing, playout) desktop clients directly on the SAN rather than attaching them to servers. Control of data flow is managed by a distributed file system. Per-node bandwidth usage control, sometimes referred to asquality of service(QoS), is especially important in video editing as it ensures fair and prioritized bandwidth usage across the network. SAN Storage QoS enables the desired storage performance to be calculated and maintained for network customers accessing the device. A number of factors can affect SAN QoS. Alternatively,over-provisioningcan be used to provide additional capacity to compensate for peak network traffic loads. However, where network loads are not predictable, over-provisioning can eventually cause all bandwidth to be fully consumed and latency to increase significantly, resulting in SAN performance degradation. Storage virtualizationis the process of abstracting logical storage from physical storage. The physical storage resources are aggregated into storage pools, from which the logical storage is created. It presents to the user a logical space for data storage and transparently handles the process of mapping it to the physical location, a concept calledlocation transparency. This is implemented in modern disk arrays, often using vendor-proprietary technology. However, the goal of storage virtualization is to group multiple disk arrays from different vendors, scattered over a network, into a single storage device. The single storage device can then be managed uniformly.[8]
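The LUN masking and zoning checks described above amount to an access list consulted on every request. The Python sketch below models that check; the WWPNs (note the 21/10 prefixes used by server adapters, as mentioned earlier) and the LUN numbers are invented for illustration.

# Toy model of LUN masking: a storage controller keeps an access list
# mapping each initiator's WWPN to the LUNs it may address, and
# rejects everything else. All identifiers here are made up.

ACCESS_LIST = {
    "21:00:00:e0:8b:05:05:04": {0, 1},   # database server: LUNs 0 and 1
    "10:00:00:00:c9:22:fc:7a": {2},      # backup server: LUN 2 only
}

def authorize(initiator_wwpn: str, lun: int) -> bool:
    """Return True if the initiator may read/write the given LUN."""
    return lun in ACCESS_LIST.get(initiator_wwpn, set())

print(authorize("21:00:00:e0:8b:05:05:04", 1))   # True
print(authorize("21:00:00:e0:8b:05:05:04", 2))   # False: LUN is masked
print(authorize("10:00:00:00:c9:33:aa:01", 0))   # False: unknown WWPN

Fabric-based zoning applies the same idea one layer down, in the switches, so that an unauthorized initiator never even reaches the storage device's port.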
https://en.wikipedia.org/wiki/Storage_area_network
Direct-attached storage(DAS) isdigital storagedirectly attached to thecomputeraccessing it, as opposed to storage accessed over a computer network (i.e.network-attached storage). DAS consists of one or more storage units such ashard drives,solid-state drives, oroptical disc drives, within anexternal enclosure. The term "DAS" is aretronymto contrast withstorage area network(SAN) andnetwork-attached storage(NAS). A typical DAS system is made of adata storage device(for exampleenclosuresholding a number ofhard disk drives) connected directly to a computer through ahost bus adapter(HBA). Between those two points there is no network device (like a hub, switch, or router), and this is the main characteristic of DAS. The mainprotocolsused for DAS connections areParallel ATA,SATA,eSATA,[1]NVMe,Parallel SCSI,SAS,USB, andIEEE 1394. Most functions found in modern storage do not depend on whether the storage is attached directly to servers (DAS), or via a network (SAN and NAS). In enterprise environments, direct-attached storage systems can utilize storage devices that have higher endurance in terms of data workload capability, along with greater scalability in the capacity that storage arrays can achieve, compared to consumer-grade NAS and other storage devices.[2] The key difference between DAS and NAS is that DAS storage does not incorporate any network hardware and related operating environment to provide a facility to share storage resources independently of the host, and so is only available via the host to which the DAS is attached. DAS is typically considered much faster than NAS due to lower latency in the type of host connection, although contemporary network and direct connection throughput typically exceeds the raw read/write performance of the storage units themselves. ASAN (storage area network)has more in common with a DAS than a NAS, the key difference being that DAS is a 1:1 relationship between storage and host, whereas a SAN is many-to-many.
https://en.wikipedia.org/wiki/Direct-attached_storage
Peer-to-peer file sharingis the distribution and sharing ofdigital mediausingpeer-to-peer(P2P) networking technology. P2P file sharing allows users to access media files such as books, music, movies, and games using a P2P software program that searches for other connected computers on a P2P network to locate the desired content.[1]The nodes (peers) of such networks are end-user computers and distribution servers (not required). In the early days, file sharing was done predominantly by client-server transfers from web pages,FTPandIRC, beforeNapsterpopularised a Windows application that allowed users to both upload and download with a freemium-style service. Record companies and artists called for its shutdown and FBI raids followed.Napsterhad been incredibly popular at its peak, spawning a grass-roots movement following from themixtapescene of the '80s, and its closure left a significant gap in music availability for its followers. After much discussion on forums and in chat-rooms, it was decided thatNapsterhad been vulnerable due to its reliance on centralised servers and their physical location, and thus competing groups raced to build a decentralised peer-to-peer system. Peer-to-peer file sharing technology has evolved through several design stages, from early networks likeGnutella, which popularized the technology in several iterations that used various front ends such asKazaa,LimewireandWinMX, throughEdonkey, and on to later models like theBitTorrentprotocol. Microsoft uses it for Update distribution (Windows 10) and online video games use it as their content distribution network for downloading large amounts of data without incurring the dramatic costs for bandwidth inherent in providing just a single source. Several factors contributed to the widespread adoption and facilitation of peer-to-peer file sharing. These included increasing Internet bandwidth, the widespread digitization of physical media, and the increasing capabilities of residential personal computers. Users are able to transfer one or more files from one computer to another across the Internet through variousfile transfersystems and other file-sharing networks.[1] Napster's central index server indexed the users and their shared content. When someone searched for a file, the server searched all available copies of that file and presented them to the user. The files would be transferred directly between private computers (peers/nodes). A limitation was that only music files could be shared.[2]Because this process occurred on a central server, however, Napster was held liable for copyright infringement and shut down in July 2001. It later reopened as a pay service.[3] After Napster was shut down, peer-to-peer services were invented such asGnutellaandKazaa. These services also allowed users to download files other than music, such as movies and games.[2] NapsterandeDonkey2000both used a central server-based model. These systems relied on the operation of the respective central servers, and thus were susceptible to centralized shutdown. Their demise led to the rise of networks likeLimewire,Kazaa,Morpheus,Gnutella, andGnutella2, which are able to operate without any central servers, eliminating the central vulnerability by connecting users remotely to each other. However, these networks still relied on specific, centrally distributed client programs, so they could be crippled by taking legal action against a sufficiently large number of publishers of the client programs. Sharman Networks, the publisher of Kazaa, has been inactive since 2006.
StreamCast Networks, the publisher of Morpheus, shut down on April 22, 2008. LimeWire LLC was shut down in late 2010 or early 2011. This cleared the way for the dominance of the BitTorrent protocol, which differs from its predecessors in two major ways. The first is that no individual, group, or company owns the protocol or the terms "Torrent" or "BitTorrent", meaning that anyone can write and distribute client software that works with the network. The second is that BitTorrent clients have no search functionality of their own. Instead, users must rely on third-party websites like isoHunt or The Pirate Bay to find "torrent" files, which function like maps that tell the client how to find and download the files that the user actually wants. These two characteristics combined offer a level of decentralization that makes BitTorrent practically impossible to shut down. File-sharing networks are sometimes organized into three "generations" based on these different levels of decentralization.[4][5] Darknets, including networks like Freenet, are sometimes considered to be third-generation file-sharing networks.[6]

Peer-to-peer file sharing is also efficient in terms of cost.[7] The system administration overhead is smaller because the user is the provider, and usually the provider is the administrator as well. Hence each network can be monitored by the users themselves. At the same time, large servers sometimes require more storage, and this increases the cost, since the storage has to be rented or bought exclusively for a server. However, peer-to-peer file sharing usually does not require a dedicated server.[8]

There is ongoing discussion about the economic impact of P2P file sharing. Norbert Michel, a policy analyst at The Heritage Foundation, said that studies had produced "disparate estimates of file sharing's impact on album sales".[9]

In the book The Wealth of Networks, Yochai Benkler states that peer-to-peer file sharing is economically efficient and that the users pay the full transaction cost and marginal cost of such sharing even if it "throws a monkey wrench into the particular way in which our society has chosen to pay musicians and recording executives. This trades off efficiency for longer-term incentive effects for the recording industry. However, it is efficient within the normal meaning of the term in economics in a way that it would not have been had Jack and Jane used subsidized computers or network connections".[10]

A calculation example. With peer-to-peer file sharing:

{\displaystyle {\text{total cost}}={\frac {\text{filesize}}{\text{customers}}}\times {\text{cost-per-byte}}}

With a classical content delivery network:

{\displaystyle {\text{total cost}}={\text{filesize}}\times {\text{customers}}\times {\text{cost-per-byte}}}

The economic effect of copyright infringement through peer-to-peer file sharing on music revenue has been controversial and difficult to determine.
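As a rough numeric illustration of the two cost formulas above (the file size, customer count, and per-byte tariff below are invented for demonstration; only the formulas come from the text), the following sketch compares the distributor's bandwidth cost under the peer-to-peer model with that of a single-source content delivery network:

```python
# Illustrative comparison of the two cost formulas above.
# The numbers are made up for demonstration purposes.

filesize = 700 * 1024 * 1024      # bytes (e.g. a 700 MB file)
customers = 10_000                # number of downloaders
cost_per_byte = 1e-10             # dollars per byte (hypothetical tariff)

# Peer-to-peer: peers re-share what they download, so the original
# provider's share of the total transfer shrinks as customers grow.
p2p_cost = (filesize / customers) * cost_per_byte

# Single-source CDN: every customer is served the whole file
# from the provider's own bandwidth.
cdn_cost = filesize * customers * cost_per_byte

print(f"P2P total cost: ${p2p_cost:.6f}")
print(f"CDN total cost: ${cdn_cost:.2f}")
```

The contrast is the point of the example: the CDN cost grows linearly with the number of customers, while under the P2P formula the original provider's cost shrinks as more customers join and re-share.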
Unofficial studies found that file sharing had a negative impact on record sales.[11][12][13][14][15] It has proven difficult to untangle the cause-and-effect relationships among a number of different trends, including an increase in legal online purchases of music; illegal file sharing; a drop in the prices of compact discs; and the closure of many independent music stores with a concomitant shift to sales by big-box retailers.[16]

The Motion Picture Association (MPAA) reported that American studios lost $2.373 billion in 2005 (equivalent to $3.82 billion in 2024), representing approximately one third of the total cost of film piracy in the United States.[17] The MPAA's estimate was doubted by commentators, since it was based on the assumption that one download was equivalent to one lost sale, and downloaders might not purchase the movie if illegal downloading were not an option.[18][19][20] Due to the private nature of the study, the figures could not be publicly checked for methodology or validity.[21][22][23] In January 2008, as the MPAA was lobbying for a bill which would compel universities to crack down on piracy, the MPAA admitted that its figures on piracy in colleges had been inflated by up to 300%.[24][25]

A 2010 study, commissioned by the International Chamber of Commerce and conducted by the independent Paris-based economics firm TERA, estimated that unlawful downloading of music, film and software cost Europe's creative industries several billion dollars in revenue each year.[26] A further TERA study predicted losses due to piracy reaching as much as 1.2 million jobs and €240 billion in retail revenue by 2015 if the trend continued. Researchers applied a substitution rate of ten percent to the volume of copyright infringements per year. This rate corresponded to the number of units potentially traded if unlawful file sharing were eliminated and did not occur.[27] Piracy of popular software and operating systems has been common, even in regions with strong intellectual property enforcement, such as the United States or the European Union.[28]

In 2004, an estimated 70 million people participated in online file sharing.[29] According to a CBS News poll, nearly 70 percent of 18- to 29-year-olds thought file sharing was acceptable in some circumstances, and 58 percent of all Americans who followed the file sharing issue considered it acceptable in at least some circumstances.[30] In January 2006, 32 million Americans over the age of 12 had downloaded at least one feature-length movie from the Internet, 80 percent of whom had done so exclusively over P2P.
Of the population sampled, 60 percent felt that downloading copyrighted movies off the Internet did not constitute a very serious offense; however, 78 percent believed taking a DVD from a store without paying for it constituted a very serious offense.[31]

In July 2008, 20 percent of Europeans used file sharing networks to obtain music, while 10 percent used paid-for digital music services such as iTunes.[32] In February 2009, a survey undertaken by Tiscali in the UK found that 75 percent of the English public polled were aware of what was legal and illegal in relation to file sharing, but there was a divide as to where they felt the legal burden should be placed: 49 percent of people believed P2P companies should be held responsible for illegal file sharing on their networks, and 18 percent viewed individual file sharers as the culprits.[33]

According to an earlier poll, 75 percent of young voters in Sweden (18–20) supported file sharing when presented with the statement: "I think it is OK to download files from the Net, even if it is illegal." Of the respondents, 38 percent said they "adamantly agreed" while 39 percent said they "partly agreed".[34] An academic study among American and European college students found that users of file-sharing technologies were relatively anti-copyright, and that copyright enforcement created backlash, hardening pro-file-sharing beliefs among users of these technologies.[35]

Communities have a prominent role in many peer-to-peer networks and applications, such as BitTorrent, Gnutella and DC++. Different elements contribute to the formation, development and stability of these communities, including interests, user attributes, cost reduction, user motivation and the dimension of the community.

Peer communities are formed on the basis of common interests. For Khambatti, Ryu and Dasgupta, common interests can be labelled as attributes "which are used to determine the peer communities in which a particular peer can participate".[36] These attributes can be classified in two ways: explicit and implicit attributes. Explicit values are information that peers provide about themselves to a specific community, such as their interest in a subject or their taste in music. With implicit values, users do not directly express information about themselves, although it is still possible to find information about a specific user by examining his or her past queries and searches carried out in a P2P network. Khambatti, Ryu and Dasgupta divide these interests further into three classes: personal, claimed and group attributes.[36]

A full set of attributes (common interests) of a specific peer is defined as personal attributes, and is the collection of information a peer has about him or herself. Peers may decide not to disclose information about themselves to maintain their privacy and online security. It is for this reason that the authors specify that "a subset of...attributes is explicitly claimed public by a peer", and they define such attributes as "claimed attributes".[36] The third category of interests is group attributes, which are "location or affiliation oriented" and are needed to form a "basis for communities", an example being the "domain name of an internet connection", which acts as an online location and group identifier for certain users.

Cost reduction influences the sharing component of P2P communities.
Users who share do so in an attempt "to reduce...costs", as made clear by Cunningham, Alexander and Adilov.[37] In their work Peer-to-Peer File Sharing Communities, they explain that "the act of sharing is costly since any download from a sharer implies that the sharer is sacrificing bandwidth".[37] As sharing is the basis of P2P communities such as Napster, and without it "the network collapses", users share despite its costs in order to lower their own costs, particularly those associated with searching and with the congestion of internet servers.[37]

User motivation and the size of the P2P community contribute to its sustainability and activity. In her work Motivating Participation in Peer to Peer Communities, Vassileva studies these two aspects through an experiment carried out at the University of Saskatchewan (Canada), where a P2P application (COMUTELLA) was created and distributed among students. In her view, motivation is "a crucial factor" in encouraging users to participate in an online P2P community, particularly because the "lack of a critical mass of active users" in the form of a community will not allow a P2P sharing system to function properly.[38]

Usefulness is an aspect valued by users when joining a P2P community. The specific P2P system must be perceived as "useful" by the user and must be able to fulfil his or her needs and serve his or her interests. Consequently, the "size of the community of users defines the level of usefulness" and "the value of the system determines the number of users".[38] This two-way process is described by Vassileva as a feedback loop, and it allowed for the birth of file-sharing systems like Napster and KaZaA. However, in her research Vassileva also found that "incentives are needed for the users in the beginning", particularly for motivating users and getting them into the habit of staying online.[38] This can be done, for example, by providing the system with a wide range of resources or by having an experienced user assist a less experienced one.

Users participating in P2P systems can be classified in different ways. According to Vassileva, users can be classified by their participation in the P2P system. There are five types of users to be found: users who create services, users who enable services, users who facilitate search, users who allow communication, and users who are uncooperative and free-ride.[38]

In the first instance, the user creates new resources or services and offers them to the community. In the second, the user provides the community with disk space "to store files for downloads" or with "computing resources" to facilitate a service provided by another user.[38] In the third, the user provides a list of relationships to help other users find specific files or services. In the fourth, the user participates actively in the "protocol of the network", contributing to keeping the network together. In the last case, the user does not contribute to the network: he or she downloads what is needed and goes offline as soon as the service is no longer needed, thus free-riding on the network and community resources.[38]

Corporations continue to combat the use of the internet as a tool to illegally copy and share various files, especially copyrighted music. The Recording Industry Association of America (RIAA) has been active in leading campaigns against infringers.
Lawsuits have been launched against individuals as well as programs such as Napster in order to "protect" copyright owners.[39] One effort of the RIAA has been to plant decoy users to monitor the use of copyrighted material from a firsthand perspective.[40]

In early June 2002, researcher Nathaniel Good at HP Labs demonstrated that user interface design issues could contribute to users inadvertently sharing personal and confidential information over P2P networks.[41][42][43]

In 2003, Congressional hearings before the House Committee on Government Reform (Overexposed: The Threats to Privacy & Security on File Sharing Networks)[44] and the Senate Judiciary Committee (The Dark Side of a Bright Idea: Could Personal and National Security Risks Compromise the Potential of P2P File-Sharing Networks?)[45] were convened to address and discuss the issue of inadvertent sharing on peer-to-peer networks and its consequences for consumer and national security.

Researchers have examined potential security risks, including the release of personal information, bundled spyware, and viruses downloaded from the network.[46][47] Some proprietary file sharing clients have been known to bundle malware, though open source programs typically have not. Some open source file sharing packages have even provided integrated anti-virus scanning.[48]

Since approximately 2004 the threat of identity theft had become more prevalent, and in July 2008 there was another inadvertent revelation of vast amounts of personal information through P2P sites. The "names, dates of birth, and Social Security numbers of about 2,000 of (an investment) firm's clients" were exposed, "including [those of] Supreme Court Justice Stephen Breyer."[49] A drastic increase in inadvertent P2P file sharing of personal and sensitive information became evident in 2009, at the beginning of President Obama's administration, when the blueprints for the helicopter Marine One were made available to the public through a security breach via a P2P file sharing site. Access to this information has the potential to be detrimental to US security.[49] Furthermore, shortly before this security breach, the Today show had reported that more than 150,000 tax returns, 25,800 student loan applications and 626,000 credit reports had been inadvertently made available through file sharing.[49]

The United States government then attempted to make users more aware of the potential risks involved with P2P file sharing programs[50] through legislation such as H.R. 1319, the Informed P2P User Act, in 2009.[51] Under this act, individuals would have to be made aware of the risks associated with peer-to-peer file sharing before purchasing software, with informed consent of the user required prior to use of such programs. In addition, the act would allow users to block and remove P2P file sharing software from their computers at any time,[52] with the Federal Trade Commission enforcing regulations. US-CERT also warns of the potential risks.[53]

The act of file sharing is not illegal per se, and peer-to-peer networks are also used for legitimate purposes. The legal issues in file sharing involve violating the laws of copyright. Most discussions about the legality of file sharing concern solely copyrighted material. Many countries have fair use exceptions that permit limited use of copyrighted material without acquiring permission from the rights holders. Such uses include commentary, news reporting, research and scholarship.
Copyright laws are territorial: they do not extend beyond the territory of a specific state unless that state is a party to an international agreement. Most countries today are parties to at least one such agreement.

In the area of privacy, recent court rulings seem to indicate that there can be no expectation of privacy in data exposed over peer-to-peer file-sharing networks. In a 39-page ruling released November 8, 2013, US District Court Judge Christina Reiss denied the motion to suppress evidence gathered by authorities without a search warrant through an automated peer-to-peer search tool.[54]

Media industries have made efforts to curtail the spread of copyrighted materials through P2P systems. Initially, the corporations were able to successfully sue distribution platforms such as Napster and have them shut down. Additionally, they litigated against users who prominently shared copyrighted materials en masse. However, as more decentralized systems such as FastTrack were developed, this proved to be unenforceable, and the millions of users worldwide who use P2P systems illegally made it impractical to seek widespread legal action. One major countermeasure involves distributing polluted files into the P2P network. For instance, one may distribute unrelated files that carry the metadata of a copyrighted media file. This way, users who download the media receive something unrelated to what they were expecting.[55]
https://en.wikipedia.org/wiki/Peer-to-peer_file_sharing
In computing, a shared resource, or network share, is a computer resource made available from one host to other hosts on a computer network.[1][2] It is a device or piece of information on a computer that can be remotely accessed from another computer transparently, as if it were a resource in the local machine. Network sharing is made possible by inter-process communication over the network.[2][3]

Some examples of shareable resources are computer programs, data, storage devices, and printers: e.g. shared file access (also known as disk sharing and folder sharing), shared printer access, shared scanner access, etc. The shared resource is called a shared disk, shared folder or shared document.

The term file sharing traditionally means shared file access, especially in the context of operating systems and LAN and Intranet services, for example in Microsoft Windows documentation.[4] However, as BitTorrent and similar applications became available in the early 2000s, the term file sharing increasingly became associated with peer-to-peer file sharing over the Internet.

Shared file and printer access requires an operating system on the client that supports access to resources on a server, an operating system on the server that supports access to its resources from a client, and an application layer (in the four- or five-layer TCP/IP reference model) file sharing protocol and transport layer protocol to provide that shared access. Modern operating systems for personal computers include distributed file systems that support file sharing, while hand-held computing devices sometimes require additional software for shared file access. The most common such file systems and protocols are: The "primary operating system" is the operating system on which the file sharing protocol in question is most commonly used.

On Microsoft Windows, a network share is provided by the Windows network component "File and Printer Sharing for Microsoft Networks", using Microsoft's SMB (Server Message Block) protocol. Other operating systems might also implement that protocol; for example, Samba is an SMB server running on Unix-like operating systems and some other non-MS-DOS/non-Windows operating systems such as OpenVMS. Samba can be used to create network shares which can be accessed, using SMB, from computers running Microsoft Windows. An alternative approach is a shared disk file system, where each computer has access to the "native" filesystem on a shared disk drive. Shared resource access can also be implemented with Web-based Distributed Authoring and Versioning (WebDAV).

The share can be accessed by client computers through some naming convention, such as UNC (Universal Naming Convention), used on DOS and Windows PC computers. This implies that a network share can be addressed according to the following:

\\ServerComputerName\ShareName

where ServerComputerName is the WINS name, DNS name or IP address of the server computer, and ShareName may be a folder or file name, or its path. The shared folder can also be given a ShareName that is different from the folder's local name at the server side. For example, \\ServerComputerName\c$ usually denotes a drive with drive letter C: on a Windows machine.

A shared drive or folder is often mapped at the client PC computer, meaning that it is assigned a drive letter on the local PC computer. For example, the drive letter H: is typically used for the user home directory on a central file server.

A network share can become a security liability when access to the shared files is gained (often by devious means) by those who should not have access to them.
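As a quick illustration of the UNC convention just described, a client can address a share directly by its UNC path. The server and share names in this sketch are hypothetical; on a real network, substitute the server's WINS/DNS name or IP address and the actual share name:

```python
# Minimal sketch of addressing a network share by UNC path on Windows.
# "fileserver" and "public" are hypothetical names.

import os

unc_path = r"\\fileserver\public"     # \\ServerComputerName\ShareName

if os.path.exists(unc_path):          # resolved over SMB, not the local disk
    for entry in os.listdir(unc_path):
        print(entry)
else:
    print(f"Share {unc_path} is not reachable (or access was denied).")
```

The same path string works in Explorer or a command prompt; mapping it to a drive letter (as described below for H:) merely gives the share a local alias.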
Many computer worms have spread through network shares. Network shares would consume extensive communication capacity over non-broadband network access, so shared printer and file access is normally prohibited in firewalls from computers outside the local area network or enterprise Intranet. However, by means of virtual private networks (VPN), shared resources can securely be made available for certified users outside the local network.

A network share is typically made accessible to other users by marking any folder or file as shared, or by changing the file system permissions or access rights in the properties of the folder. For example, a file or folder may be accessible only to one user (the owner), to system administrators, to a certain group of users, or to the public, i.e. to all logged-in users. The exact procedure varies by platform.

In operating system editions for homes and small offices, there may be a special pre-shared folder that is accessible to all users with a user account and password on the local computer. Network access to the pre-shared folder can be turned on. In the English version of the Windows XP Home Edition operating system, the pre-shared folder is named Shared documents, typically with the path C:\Documents and Settings\All users\Shared documents. In Windows Vista and Windows 7, the pre-shared folder is named Public documents, typically with the path C:\Users\Public\Public documents.[6]

In home and small office networks, a decentralized approach is often used, where every user may make their local folders and printers available to others. This approach is sometimes denoted a Workgroup or peer-to-peer network topology, since the same computer may be used as client as well as server. In large enterprise networks, a centralized file server or print server, sometimes denoted the client–server paradigm, is typically used. A client process on the local user computer takes the initiative to start the communication, while a server process on the file server or print server remote computer passively waits for requests to start a communication session. In very large networks, a storage area network (SAN) approach may be used. Online storage on a server outside the local network is currently an option, especially for homes and small office networks.

Shared file access should not be confused with file transfer using the File Transfer Protocol (FTP) or the Bluetooth IrDA OBject EXchange (OBEX) protocol. Shared access involves automatic synchronization of folder information whenever a folder is changed on the server, and may provide server-side file searching, while file transfer is a more rudimentary service.[7] Shared file access is normally considered a local area network (LAN) service, while FTP is an Internet service.

Shared file access is transparent to the user, as if it were a resource in the local file system, and supports a multi-user environment. This includes concurrency control or locking of a remote file while a user is editing it, and file system permissions.

Shared file access involves, but should not be confused with, file synchronization and other information synchronization. Internet-based information synchronization may, for example, use the SyncML language. Shared file access is based on server-side pushing of folder information, and is normally used over an "always on" Internet socket.
File synchronization allows the user to be offline from time to time, and is normally based on agent software that polls synchronized machines at reconnect, and sometimes repeatedly at a certain time interval, to discover differences. Modern operating systems often include a local cache of remote files, allowing offline access and synchronization when reconnected.

The first international heterogeneous network for resource sharing was the 1973 interconnection of the ARPANET with early British academic networks through the computer science department at University College London (UCL).[8][9][10]
https://en.wikipedia.org/wiki/Disk_sharing
A distributed file system for cloud is a file system that allows many clients to have access to data and supports operations (create, delete, modify, read, write) on that data. Each data file may be partitioned into several parts called chunks. Each chunk may be stored on a different remote machine, facilitating the parallel execution of applications. Typically, data is stored in files in a hierarchical tree, where the nodes represent directories. There are several ways to share files in a distributed architecture: each solution must be suitable for a certain type of application, depending on how complex the application is. Meanwhile, the security of the system must be ensured. Confidentiality, availability and integrity are the main keys for a secure system.

Users can share computing resources through the Internet thanks to cloud computing, which is typically characterized by scalable and elastic resources – such as physical servers, applications and any services that are virtualized and allocated dynamically. Synchronization is required to make sure that all devices are up-to-date.

Distributed file systems enable many big, medium, and small enterprises to store and access their remote data as they do local data, facilitating the use of variable resources. Today, there are many implementations of distributed file systems.

The first file servers were developed by researchers in the 1970s. Sun Microsystems' Network File System became available in the 1980s. Before that, people who wanted to share files used the sneakernet method, physically transporting files on storage media from place to place. Once computer networks started to proliferate, it became obvious that the existing file systems had many limitations and were unsuitable for multi-user environments. Users initially used FTP to share files.[1] FTP first ran on the PDP-10 at the end of 1973. Even with FTP, files needed to be copied from the source computer onto a server and then from the server onto the destination computer. Users were required to know the physical addresses of all computers involved with the file sharing.[2]

Modern data centers must support large, heterogeneous environments, consisting of large numbers of computers of varying capacities. Cloud computing coordinates the operation of all such systems, with techniques such as data center networking (DCN), the MapReduce framework, which supports data-intensive computing applications in parallel and distributed systems, and virtualization techniques that provide dynamic resource allocation, allowing multiple operating systems to coexist on the same physical server.

Cloud computing provides large-scale computing thanks to its ability to provide the needed CPU and storage resources to the user with complete transparency. This makes cloud computing particularly suited to support different types of applications that require large-scale distributed processing. Such data-intensive computing needs a high-performance file system that can share data between virtual machines (VM).[3]

Cloud computing dynamically allocates the needed resources, releasing them once a task is finished, requiring users to pay only for needed services, often via a service-level agreement.
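To make the chunking idea concrete, here is a minimal sketch of splitting a file into fixed-size chunks that could then be placed on different machines. The 64 MiB size mirrors the GFS default discussed later; the file name, server names, and round-robin placement are purely illustrative:

```python
# Minimal sketch of splitting a file into fixed-size chunks, the basic
# unit that a distributed file system scatters across remote machines.

import os

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MiB, mirroring GFS's default

def split_into_chunks(path, servers, chunk_size=CHUNK_SIZE):
    """Yield (chunk_index, server, data) placements for one file."""
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            yield index, servers[index % len(servers)], data  # round-robin placement
            index += 1

# Demo with a tiny file and a 1 KiB chunk size so it runs instantly.
with open("demo.bin", "wb") as f:
    f.write(os.urandom(3 * 1024 + 100))

for idx, server, data in split_into_chunks("demo.bin", ["cs-01", "cs-02", "cs-03"], 1024):
    print(f"chunk {idx}: {len(data):4d} bytes -> {server}")
```

Because consecutive chunks land on different servers, a client can fetch several parts of the same file in parallel, which is the motivation for chunking in the first place.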
Cloud computing and cluster computing paradigms are becoming increasingly important to industrial data processing and scientific applications such as astronomy and physics, which frequently require the availability of large numbers of computers to carry out experiments.[4]

Most distributed file systems are built on the client-server architecture, but other, decentralized solutions exist as well.

Network File System (NFS) uses a client-server architecture, which allows sharing of files between a number of machines on a network as if they were located locally, providing a standardized view. The NFS protocol allows heterogeneous client processes, possibly running on different machines and under different operating systems, to access files on a distant server, ignoring the actual location of files. Relying on a single server results in the NFS protocol suffering from potentially low availability and poor scalability. Using multiple servers does not solve the availability problem, since each server works independently.[5] The model of NFS is a remote file service. This model is also called the remote access model, which is in contrast with the upload/download model: The file system used by NFS is almost the same as the one used by Unix systems. Files are hierarchically organized into a naming graph in which directories and files are represented by nodes.

A cluster-based architecture ameliorates some of the issues in client-server architectures, improving the execution of applications in parallel. The technique used here is file striping: a file is split into multiple chunks, which are "striped" across several storage servers. The goal is to allow access to different parts of a file in parallel. If the application does not benefit from this technique, then it would be more convenient to store different files on different servers. However, when it comes to organizing a distributed file system for large data centers, such as Amazon and Google, that offer services to web clients allowing multiple operations (reading, updating, deleting, ...) on a large number of files distributed among a large number of computers, cluster-based solutions become more beneficial. Note that having a large number of computers may mean more hardware failures.[7] Two of the most widely used distributed file systems (DFS) of this type are the Google File System (GFS) and the Hadoop Distributed File System (HDFS). The file systems of both are implemented by user-level processes running on top of a standard operating system (Linux in the case of GFS).[8]

Google File System (GFS) and Hadoop Distributed File System (HDFS) are specifically built for handling batch processing on very large data sets. For that, the following hypotheses must be taken into account:[9]

Load balancing is essential for efficient operation in distributed environments. It means distributing work among different servers,[11] fairly, in order to get more work done in the same amount of time and to serve clients faster. In a system containing N chunkservers in a cloud (N being 1000, 10000, or more), where a certain number of files are stored, each file is split into several parts or chunks of fixed size (for example, 64 megabytes), the load of each chunkserver being proportional to the number of chunks hosted by the server.[12] In a load-balanced cloud, resources can be used efficiently while maximizing the performance of MapReduce-based applications.
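Under the load model just described, where a chunkserver's load is proportional to the number of chunks it hosts, imbalance can be spotted by comparing each server's chunk count against the fleet average. A minimal sketch, with hypothetical server names, counts, and tolerance:

```python
# Sketch of the load model described above: a chunkserver's load is
# taken to be proportional to the number of chunks it hosts, and a
# server is flagged when it deviates too far from the fleet average.

chunks_hosted = {"cs-01": 1200, "cs-02": 310, "cs-03": 950, "cs-04": 2100}

average = sum(chunks_hosted.values()) / len(chunks_hosted)
TOLERANCE = 0.25  # flag servers more than 25% away from the average

for server, count in sorted(chunks_hosted.items()):
    ratio = count / average
    if ratio > 1 + TOLERANCE:
        state = "overloaded (candidate to shed chunks)"
    elif ratio < 1 - TOLERANCE:
        state = "underloaded (candidate to receive chunks)"
    else:
        state = "balanced"
    print(f"{server}: {count} chunks ({ratio:.2f}x average) - {state}")
```

A real rebalancer must additionally minimize the cost of moving chunks, which, as the next section notes, is what makes the general problem hard.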
In a cloud computing environment, failure is the norm,[13][14] and chunkservers may be upgraded, replaced, and added to the system. Files can also be dynamically created, deleted, and appended. That leads to load imbalance in a distributed file system, meaning that the file chunks are not distributed equitably between the servers.

Distributed file systems in clouds such as GFS and HDFS rely on central or master servers or nodes (Master for GFS and NameNode for HDFS) to manage the metadata and the load balancing. The master rebalances replicas periodically: data must be moved from one DataNode/chunkserver to another if free space on the first server falls below a certain threshold.[15] However, this centralized approach can become a bottleneck for those master servers if they become unable to manage a large number of file accesses, as it increases their already heavy loads. The load-rebalancing problem is NP-hard.[16]

In order to get a large number of chunkservers to work in collaboration, and to solve the problem of load balancing in distributed file systems, several approaches have been proposed, such as reallocating file chunks so that the chunks can be distributed as uniformly as possible while reducing the movement cost as much as possible.[12]

Google, one of the biggest internet companies, has created its own distributed file system, named Google File System (GFS), to meet the rapidly growing demands of Google's data processing needs, and it is used for all cloud services. GFS is a scalable distributed file system for data-intensive applications. It provides fault-tolerant, high-performance data storage to a large number of clients accessing it simultaneously. GFS uses MapReduce, which allows users to create programs and run them on multiple machines without thinking about parallelization and load-balancing issues. GFS architecture is based on having a single master server for multiple chunkservers and multiple clients.[17]

The master server, running in a dedicated node, is responsible for coordinating storage resources and managing files' metadata (the equivalent of, for example, inodes in classical file systems).[9] Each file is split into multiple chunks of 64 megabytes. Each chunk is stored in a chunk server. A chunk is identified by a chunk handle, a globally unique 64-bit number that is assigned by the master when the chunk is first created.

The master maintains all of the files' metadata, including file names, directories, and the mapping of files to the list of chunks that contain each file's data. The metadata is kept in the master server's main memory, along with the mapping of files to chunks. Updates to this data are logged to an operation log on disk. This operation log is replicated onto remote machines. When the log becomes too large, a checkpoint is made and the main-memory data is stored in a B-tree structure to facilitate mapping back into the main memory.[18]

To facilitate fault tolerance, each chunk is replicated onto multiple (by default, three) chunk servers.[19] A chunk is available on at least one chunk server. The advantage of this scheme is simplicity. The master is responsible for allocating the chunk servers for each chunk, and is contacted only for metadata information. For all other data, the client has to interact with the chunk servers. The master keeps track of where a chunk is located.
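A minimal sketch of the master's in-memory metadata as described above (the data structures, names, and handle-allocation scheme here are illustrative, not Google's actual implementation): the master maps each file to an ordered list of chunk handles, and each handle to the chunkservers holding its replicas.

```python
# Minimal sketch of GFS-style master metadata (illustrative structures,
# not Google's actual implementation). The master holds, in memory:
#   - file namespace -> ordered list of chunk handles
#   - chunk handle   -> locations of its replicas
import itertools

class Master:
    _handles = itertools.count(1)  # stand-in for unique 64-bit handle allocation

    def __init__(self):
        self.file_chunks = {}      # "/logs/web.0" -> [handle, handle, ...]
        self.chunk_locations = {}  # handle -> ["cs-01", "cs-07", "cs-12"]

    def create_chunk(self, filename, replica_servers):
        """Allocate a new chunk handle for a file and record its replicas."""
        handle = next(self._handles)
        self.file_chunks.setdefault(filename, []).append(handle)
        self.chunk_locations[handle] = list(replica_servers)
        return handle

    def lookup(self, filename, chunk_index):
        """Metadata-only query: where does chunk N of this file live?
        The client then talks to the chunkservers directly for the data."""
        handle = self.file_chunks[filename][chunk_index]
        return handle, self.chunk_locations[handle]

m = Master()
m.create_chunk("/logs/web.0", ["cs-01", "cs-07", "cs-12"])  # three replicas by default
print(m.lookup("/logs/web.0", 0))
```

The key design point visible here is that the master answers only small metadata queries; the bulk data path bypasses it entirely, which is what keeps a single master from becoming a throughput bottleneck.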
However, it does not attempt to maintain the chunk locations precisely, but only occasionally contacts the chunk servers to see which chunks they have stored.[20] This allows for scalability and helps prevent bottlenecks due to increased workload.[21]

In GFS, most files are modified by appending new data rather than overwriting existing data. Once written, the files are usually only read sequentially rather than randomly, and that makes this DFS the most suitable for scenarios in which many large files are created once but read many times.[22][23]

When a client wants to write to or update a file, the master will assign a replica, which will be the primary replica if it is the first modification. The process of writing is composed of two steps:[9]

Consequently, we can differentiate two types of flows: the data flow and the control flow. The data flow is associated with the sending phase and the control flow is associated with the writing phase. This assures that the primary chunk server takes control of the write order. Note that when the master assigns the write operation to a replica, it increments the chunk version number and informs all of the replicas containing that chunk of the new version number. Chunk version numbers allow for update error detection, identifying replicas that missed an update because their chunk server was down.[24]

Some new Google applications did not work well with the 64-megabyte chunk size. To solve that problem, GFS started, in 2004, to implement the Bigtable approach.[25]

HDFS, developed by the Apache Software Foundation, is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes). Its architecture is similar to GFS's, i.e. a client-server architecture. HDFS is normally installed on a cluster of computers. The design concept of Hadoop is informed by Google's, with Google File System, Google MapReduce and Bigtable being implemented by Hadoop Distributed File System (HDFS), Hadoop MapReduce, and Hadoop Base (HBase), respectively.[26] Like GFS, HDFS is suited for scenarios with write-once-read-many file access, and supports file appends and truncates in lieu of random reads and writes, to simplify data coherency issues.[27]

An HDFS cluster consists of a single NameNode and several DataNode machines. The NameNode, a master server, manages and maintains the metadata of the storage DataNodes in its RAM. DataNodes manage storage attached to the nodes that they run on. The NameNode and DataNode are software designed to run on everyday-use machines, which typically run under a Linux OS. HDFS can be run on any machine that supports Java, and therefore can run either the NameNode or the DataNode software.[28]

On an HDFS cluster, a file is split into one or more equal-size blocks, except that the last block may be smaller. Each block is stored on multiple DataNodes, and each may be replicated on multiple DataNodes to guarantee availability. By default, each block is replicated three times, a process called "block-level replication".[29]

The NameNode manages the file system namespace operations such as opening, closing, and renaming files and directories, and regulates file access. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for servicing read and write requests from the file system's clients, managing block allocation or deletion, and replicating blocks.[30]

When a client wants to read or write data, it contacts the NameNode, and the NameNode checks where the data should be read from or written to.
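The stale-replica detection via chunk version numbers described above can be sketched as follows. This is an illustrative reduction, not GFS's actual protocol, and all server names are hypothetical: the master bumps a version on every write grant, and any replica still reporting an older version is known to have missed an update.

```python
# Illustrative sketch of chunk version numbers for stale-replica
# detection (a simplification, not GFS's actual protocol).

chunk_version = {"chunk-42": 7}        # master's current version per chunk
replica_versions = {                   # last version each replica applied
    "cs-01": 7,
    "cs-07": 7,
    "cs-12": 7,
}

def grant_write(chunk, live_servers):
    """Master grants a write: bump the version, notify reachable replicas."""
    chunk_version[chunk] += 1
    for server in live_servers:
        replica_versions[server] = chunk_version[chunk]

# cs-12 is down during the write grant, so it misses the version bump.
grant_write("chunk-42", live_servers={"cs-01", "cs-07"})

current = chunk_version["chunk-42"]
stale = [s for s, v in replica_versions.items() if v < current]
print(f"current version {current}; stale replicas to refresh: {stale}")
```

When the downed server comes back and reports version 7 against the master's version 8, the master knows that replica must be refreshed (or garbage-collected) rather than served to clients.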
After that, the client has the location of the DataNode and can send read or write requests to it.

HDFS is typically characterized by its compatibility with data-rebalancing schemes. In general, managing the free space on a DataNode is very important. Data must be moved from one DataNode to another if free space is not adequate; and in the case of creating additional replicas, data should be moved to assure system balance.[29]

Distributed file systems can be optimized for different purposes. Some, such as those designed for internet services, including GFS, are optimized for scalability. Other designs for distributed file systems support performance-intensive applications usually executed in parallel.[31] Some examples include: MapR File System (MapR-FS), Ceph-FS, Fraunhofer File System (BeeGFS), Lustre File System, IBM General Parallel File System (GPFS), and Parallel Virtual File System.

MapR-FS is a distributed file system that is the basis of the MapR Converged Platform, with capabilities for distributed file storage, a NoSQL database with multiple APIs, and an integrated message-streaming system. MapR-FS is optimized for scalability, performance, reliability, and availability. Its file storage capability is compatible with the Apache Hadoop Distributed File System (HDFS) API, but with several design characteristics that distinguish it from HDFS. Among the most notable differences are that MapR-FS is a fully read/write filesystem with metadata for files and directories distributed across the namespace, so there is no NameNode.[32][33][34][35][36]

Ceph-FS is a distributed file system that provides excellent performance and reliability.[37] It answers the challenges of dealing with huge files and directories, coordinating the activity of thousands of disks, providing parallel access to metadata on a massive scale, manipulating both scientific and general-purpose workloads, authenticating and encrypting on a large scale, and scaling up or down dynamically due to frequent device decommissioning, device failures, and cluster expansions.[38]

BeeGFS is the high-performance parallel file system from the Fraunhofer Competence Centre for High Performance Computing. The distributed metadata architecture of BeeGFS has been designed to provide the scalability and flexibility needed to run HPC and similar applications with high I/O demands.[39]

Lustre File System has been designed and implemented to deal with the issue of bottlenecks traditionally found in distributed systems. Lustre is characterized by its efficiency, scalability, and redundancy.[40] GPFS was also designed with the goal of removing such bottlenecks.[41]

High performance in distributed file systems requires efficient communication between computing nodes and fast access to the storage systems. Operations such as open, close, read, write, send, and receive need to be fast to ensure that performance. For example, each read or write request accesses disk storage, which introduces seek, rotational, and network latencies.[42]

The data communication (send/receive) operations transfer data from the application buffer to the machine kernel, with TCP controlling the process and being implemented in the kernel. However, in case of network congestion or errors, TCP may not send the data directly. While transferring data from a buffer in the kernel to the application, the machine does not read the byte stream from the remote machine.
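The free-space rebalancing rule described for HDFS-style systems (move data off a DataNode when its free space drops below a threshold) can be sketched like this, with made-up node names, capacities, and threshold:

```python
# Sketch of the free-space rebalancing rule described above: when a
# DataNode's free space falls below a threshold, blocks are moved to
# the node with the most free space. Names and numbers are made up.

FREE_SPACE_THRESHOLD = 0.20  # rebalance when free space drops under 20%

nodes = {  # node -> (used_bytes, capacity_bytes)
    "dn-01": (950, 1000),
    "dn-02": (400, 1000),
    "dn-03": (700, 1000),
}

def free_fraction(node):
    used, capacity = nodes[node]
    return (capacity - used) / capacity

for node in sorted(nodes):
    if free_fraction(node) < FREE_SPACE_THRESHOLD:
        # Pick the destination with the most free space.
        target = max(nodes, key=free_fraction)
        print(f"{node} is {free_fraction(node):.0%} free -> "
              f"migrate blocks to {target} ({free_fraction(target):.0%} free)")
```

In a real cluster this decision also weighs replica placement constraints and network cost, but the trigger condition is exactly this kind of per-node free-space check.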
In fact, TCP is responsible for buffering the data for the application.[43]

Choosing the buffer size, for file reading and writing, or file sending and receiving, is done at the application level. The buffer is maintained using a circular linked list.[44] It consists of a set of BufferNodes. Each BufferNode has a DataField. The DataField contains the data and a pointer called NextBufferNode that points to the next BufferNode. To find the current position, two pointers are used, CurrentBufferNode and EndBufferNode, representing the last write and read positions. If the BufferNode has no free space, it will send a wait signal to the client to wait until there is available space.[45]

More and more users have multiple devices with ad hoc connectivity. The data sets replicated on these devices need to be synchronized among an arbitrary number of servers. This is useful for backups and also for offline operation. Indeed, when user network conditions are not good, the user device will selectively replicate a part of the data that will be modified later, offline. Once the network conditions become good, the device is synchronized.[46] Two approaches exist to tackle the distributed synchronization issue: user-controlled peer-to-peer synchronization and cloud master-replica synchronization.[46]

In cloud computing, the most important security concepts are confidentiality, integrity, and availability ("CIA"). Confidentiality becomes indispensable in order to keep private data from being disclosed. Integrity ensures that data is not corrupted.[47]

Confidentiality means that data and computation tasks are confidential: neither the cloud provider nor other clients can access the client's data. Much research has been done on confidentiality, because it is one of the crucial points that still presents challenges for cloud computing. A lack of trust in cloud providers is a related issue.[48] The infrastructure of the cloud must ensure that customers' data will not be accessed by unauthorized parties. The environment becomes insecure if the service provider can do all of the following:[49]

The geographic location of data helps determine privacy and confidentiality. The location of clients should be taken into account. For example, clients in Europe won't be interested in using datacenters located in the United States, because that affects the guarantee of the confidentiality of data. In order to deal with that problem, some cloud computing vendors have included the geographic location of the host as a parameter of the service-level agreement made with the customer,[50] allowing users to choose for themselves the locations of the servers that will host their data.

Another approach to confidentiality involves data encryption.[51] Otherwise, there will be serious risk of unauthorized use. A variety of solutions exists, such as encrypting only sensitive data,[52] and supporting only some operations, in order to simplify computation.[53] Furthermore, cryptographic techniques and tools such as FHE are used to preserve privacy in the cloud.[47]

Integrity in cloud computing implies data integrity as well as computing integrity. Such integrity means that data has to be stored correctly on cloud servers and, in case of failures or incorrect computing, that problems have to be detected. Data integrity can be affected by malicious events or by administration errors (e.g.
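The buffer structure just described can be rendered as a short sketch. The field names (DataField, NextBufferNode, CurrentBufferNode, EndBufferNode) follow the text; the "wait signal" is reduced to an exception here for brevity, so this is an illustration of the data structure, not the cited implementation:

```python
# Sketch of the circular linked list of BufferNodes described above.

class BufferNode:
    def __init__(self):
        self.DataField = None        # payload held by this node
        self.NextBufferNode = None   # pointer to the next node in the ring

class CircularBuffer:
    def __init__(self, size):
        nodes = [BufferNode() for _ in range(size)]
        for i, node in enumerate(nodes):
            node.NextBufferNode = nodes[(i + 1) % size]  # close the ring
        self.CurrentBufferNode = nodes[0]  # next slot to write
        self.EndBufferNode = nodes[0]      # next slot to read
        self.size = size
        self.free = size

    def write(self, data):
        if self.free == 0:
            raise BlockingIOError("wait: no free BufferNode")  # the wait signal
        self.CurrentBufferNode.DataField = data
        self.CurrentBufferNode = self.CurrentBufferNode.NextBufferNode
        self.free -= 1

    def read(self):
        if self.free == self.size:
            return None                    # nothing buffered
        data = self.EndBufferNode.DataField
        self.EndBufferNode.DataField = None
        self.EndBufferNode = self.EndBufferNode.NextBufferNode
        self.free += 1
        return data

buf = CircularBuffer(4)
buf.write(b"chunk-0")
buf.write(b"chunk-1")
print(buf.read())  # b'chunk-0'
```

The ring shape is what lets the writer and reader chase each other through a fixed pool of nodes without ever allocating or freeing memory on the hot path.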
during backup and restore, data migration, or changing memberships in P2P systems).[54]

Integrity is easy to achieve using cryptography (typically through message authentication codes, or MACs, on data blocks).[55]

There exist checking mechanisms that verify data integrity. For instance:

Availability is generally achieved by replication.[61][62][63][64] Meanwhile, consistency must be guaranteed. However, consistency and availability cannot be achieved at the same time; each is prioritized at some sacrifice of the other. A balance must be struck.[65]

Data must have an identity to be accessible. For instance, Skute[61] is a mechanism based on key/value storage that allows dynamic data allocation in an efficient way. Each server must be identified by a label in the form continent-country-datacenter-room-rack-server. The server can reference multiple virtual nodes, with each node having a selection of data (or multiple partitions of multiple data). Each piece of data is identified by a key space which is generated by a one-way cryptographic hash function (e.g. MD5) and is localised by the hash function value of this key. The key space may be partitioned into multiple partitions, with each partition referring to a piece of data. To perform replication, virtual nodes must be replicated and referenced by other servers. To maximize data durability and data availability, the replicas must be placed on different servers, and every server should be in a different geographical location, because data availability increases with geographical diversity. The process of replication includes an evaluation of space availability, which must be above a certain minimum threshold on each chunk server. Otherwise, data are replicated to another chunk server. Each partition i has an availability value given by the following formula:

{\displaystyle avail_{i}=\sum _{i=0}^{|s_{i}|}\sum _{j=i+1}^{|s_{i}|}conf_{i}\cdot conf_{j}\cdot diversity(s_{i},s_{j})}

where s_i are the servers hosting the replicas, conf_i and conf_j are the confidence values of servers i and j (relying on technical factors such as hardware components and non-technical ones like the economic and political situation of a country), and diversity(s_i, s_j) is the geographical distance between s_i and s_j.[66]

Replication is a great solution to ensure data availability, but it costs too much in terms of memory space.[67] DiskReduce[67] is a modified version of HDFS, based on RAID technology (RAID-5 and RAID-6), that allows asynchronous encoding of replicated data. Indeed, there is a background process which looks for widely replicated data and deletes extra copies after encoding it. Another approach is to replace replication with erasure coding.[68] In addition, to ensure data availability there are many approaches that allow for data recovery. In fact, data must be coded, and if it is lost, it can be recovered from fragments which were constructed during the coding phase.[69] Some other approaches that apply different mechanisms to guarantee availability are the Reed-Solomon code of Microsoft Azure and RaidNode for HDFS. Also, Google is still working on a new approach based on an erasure-coding mechanism.[70]

There is no RAID implementation for cloud storage.[68]

The cloud computing economy is growing rapidly.
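A small numeric sketch of the availability formula above (the confidence values and distances below are invented for illustration): summing conf_i · conf_j · diversity(s_i, s_j) over all replica pairs rewards placements that combine trustworthy servers with wide geographical spread.

```python
# Numeric sketch of the pairwise availability formula above.
# Confidence values and distances are invented for illustration.

from itertools import combinations

replicas = {
    "eu-paris-dc1":   0.95,  # server label -> confidence
    "us-east-dc3":    0.90,
    "asia-tokyo-dc2": 0.85,
}

def diversity(a, b):
    """Stand-in for geographical distance between two servers (in 1000 km)."""
    distances = {
        ("eu-paris-dc1", "us-east-dc3"): 6.0,
        ("eu-paris-dc1", "asia-tokyo-dc2"): 9.7,
        ("us-east-dc3", "asia-tokyo-dc2"): 10.9,
    }
    return distances.get((a, b)) or distances[(b, a)]

# avail_i = sum over replica pairs of conf_i * conf_j * diversity(s_i, s_j)
avail = sum(replicas[a] * replicas[b] * diversity(a, b)
            for a, b in combinations(replicas, 2))
print(f"availability score: {avail:.2f}")
```

Moving two replicas into the same datacenter would drive their pairwise diversity, and hence their contribution to the score, toward zero, which is exactly why the text insists on geographically dispersed placement.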
US government spending on cloud computing is growing at around a 40% compound annual growth rate (CAGR) and is expected to reach 7 billion dollars by 2015.[71]

More and more companies have been utilizing cloud computing to manage massive amounts of data and to overcome the lack of storage capacity, and because it enables them to use such resources as a service, ensuring that their computing needs will be met without having to invest in infrastructure (the pay-as-you-go model).[72]

Every application provider has to periodically pay the cost of each server where replicas of its data are stored. The cost of a server is determined by the quality of the hardware, the storage capacities, and its query-processing and communication overhead.[73] Cloud computing allows providers to scale their services according to client demands.

The pay-as-you-go model has also eased the burden on startup companies that wish to benefit from compute-intensive business. Cloud computing also offers an opportunity to many third-world countries that wouldn't otherwise have such computing resources. Cloud computing can lower IT barriers to innovation.[74]

Despite the wide utilization of cloud computing, efficient sharing of large volumes of data in an untrusted cloud is still a challenge.
https://en.wikipedia.org/wiki/Distributed_file_system_for_cloud
The Gopher protocol (/ˈɡoʊfər/) is a communication protocol designed for distributing, searching, and retrieving documents in Internet Protocol networks. The design of the Gopher protocol and user interface is menu-driven, and presented an alternative to the World Wide Web in its early stages, but ultimately fell into disfavor, yielding to the Hypertext Transfer Protocol (HTTP). The Gopher ecosystem is often regarded as the effective predecessor of the World Wide Web.[1][2]

The Gopher protocol was invented by a team led by Mark P. McCahill[3] at the University of Minnesota. It offers some features not natively supported by the Web and imposes a much stronger hierarchy on the documents it stores. Its text menu interface is well-suited to computing environments that rely heavily on remote text-oriented computer terminals, which were still common at the time of its creation in 1991, and the simplicity of its protocol facilitated a wide variety of client implementations. Gopher's hierarchical structure provided a platform for the first large-scale electronic library connections.[4] The Gopher protocol is still in use by enthusiasts, and although it has been almost entirely supplanted by the Web, a small population of actively maintained servers remains.[2]

The Gopher system was released in mid-1991 by Mark P. McCahill, Farhad Anklesaria, Paul Lindner, Daniel Torrey, and Bob Alberti of the University of Minnesota in the United States.[5] Its central goals were, as stated in RFC 1436:

Gopher combines document hierarchies with collections of services, including WAIS, the Archie and Veronica search engines, and gateways to other information systems such as the File Transfer Protocol (FTP) and Usenet. The general interest in campus-wide information systems (CWISs) in higher education at the time,[6] and the ease of setting up Gopher servers to create an instant CWIS with links to other sites' online directories and resources, were the factors contributing to Gopher's rapid adoption.

The name was coined by Anklesaria as a play on several meanings of the word "gopher".[7] The University of Minnesota mascot is the gopher,[8] a gofer is an assistant who "goes for" things, and a gopher burrows through the ground to reach a desired location.[9]

The World Wide Web was in its infancy in 1991, and Gopher services quickly became established.[10] By the late 1990s, Gopher had ceased expanding. Several factors contributed to Gopher's stagnation: Gopher remains in active use by its enthusiasts, and there have been attempts to revive Gopher on modern platforms and mobile devices. One attempt is The Overbite Project,[17] which hosts various browser extensions and modern clients.

The conceptualization of knowledge in "Gopher space" or a "cloud" as specific information in a particular file, and the prominence of FTP, influenced the technology and the resulting functionality of Gopher. Gopher is designed to function and to appear much like a mountable read-only global network file system (and software, such as gopherfs, is available that can actually mount a Gopher server as a FUSE resource). At a minimum, whatever can be done with data files on a CD-ROM can be done on Gopher.

A Gopher system consists of a series of hierarchical hyperlinkable menus. The choice of menu items and titles is controlled by the administrator of the server.
Similar to a file on a Web server, a file on a Gopher server can be linked to as a menu item from any other Gopher server. Many servers take advantage of this inter-server linking to provide a directory of other servers that the user can access.

The Gopher protocol was first described in RFC 1436. The Internet Assigned Numbers Authority (IANA) has assigned Transmission Control Protocol (TCP) port 70 to the Gopher protocol. The protocol is simple to negotiate, making it possible to browse without using a client. First, the client establishes a TCP connection with the server on port 70, the standard gopher port. The client then sends a string followed by a carriage return followed by a line feed (a "CR + LF" sequence). This is the selector, which identifies the document to be retrieved. If the item selector is an empty line, the default directory is selected. The server then replies with the requested item and closes the connection. According to the protocol, before the connection closes, the server should send a full stop (i.e., a period character) on a line by itself. However, not all servers conform to this part of the protocol, and a server may close a connection without returning a final full stop.

The main type of reply from the server is a text or binary resource. Alternatively, the resource can be a menu: a form of structured text resource providing references to other resources. Because of the simplicity of the Gopher protocol, tools such as netcat make it possible to download Gopher content easily from the command line (a raw request of this kind is sketched below). The protocol is also supported by cURL since 7.21.2-DEV, which was released in 2010.[23]

The selector string in the request can optionally be followed by a tab character and a search string. This is used by item type 7.

Gopher menu items are defined by lines of tab-separated values in a text file. This file is sometimes called a gophermap. As the source code of a gopher menu, a gophermap is roughly analogous to an HTML file for a web page. Each tab-separated line (called a selector line) gives the client software a description of the menu item: what it is, what it is called, and where it leads to. The client displays the menu items in the order that they appear in the gophermap.

The first character in a selector line indicates the item type, which tells the client what kind of file or protocol the menu item points to. This helps the client decide what to do with it. Gopher's item types are a more basic precursor to the media type system used by the Web and email attachments. The item type is followed by the user display string (a description or label that represents the item in the menu); the selector (a path or other string for the resource on the server); the hostname (the domain name or IP address of the server); and the network port. All lines in a gopher menu are terminated by "CR + LF".

Example of a selector line in a menu source:

1Floodgap Home	/home	gopher.floodgap.com	70

This selector line generates a link to the "/home" directory at the subdomain gopher.floodgap.com, on port 70. The item type of 1 indicates that the linked resource is a Gopher menu itself. The string "Floodgap Home" is what the client will show to the user when visiting the example menu.

In a Gopher menu's source code, a one-character code indicates what kind of content the client should expect. This code may be either a digit or a letter of the alphabet; letters are case-sensitive. The technical specification for Gopher, RFC 1436, defines 14 item types.
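A raw Gopher request, per the exchange described above, is small enough to script directly. Here is a minimal sketch over a plain TCP socket; gopher.floodgap.com is a well-known public server used for illustration, any host and selector can be substituted, and an empty selector requests the root menu:

```python
# Minimal Gopher request over a raw TCP socket, following the exchange
# described above: send one selector line, read until the server closes.

import socket

def gopher_fetch(host, selector="", port=70):
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(selector.encode("ascii") + b"\r\n")  # selector + CR+LF
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:        # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks)

# An empty selector requests the server's default (root) menu.
menu = gopher_fetch("gopher.floodgap.com")
print(menu.decode("utf-8", errors="replace")[:400])
```

This is functionally the same exchange a netcat one-liner performs; there is no handshake, no headers, and no state beyond the single request and reply.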
The later Gopher+ specification defined an additional 3 types.[24]Item type3is anerror codeforexception handling. Gopher client authors improvised item typesh(HTML),i(informational message), ands(sound file) after the publication of RFC 1436. Browsers like Netscape Navigator and early versions of Microsoft Internet Explorer would prepend the item type code to the selector as described inRFC4266, so that the type of the gopher item could be determined by the URL itself. Most gopher browsers still available use these prefixes in their URLs. Consider an example gopher session where the user requests a gopher menu (/Referenceon the first line). The gopher menu sent back from the server is a sequence of lines, each of which describes an item that can be retrieved. Most clients will display these ashypertextlinks, and so allow the user to navigate through gopherspace by following the links.[5]This menu includes a text resource (itemtype0on the third line), multiple links to submenus (itemtype1, on the second line as well as lines 4–6) and a non-standard information message (from line 7 on), broken down to multiple lines by providing dummy values for selector, host and port. Historically, to create a link to a Web server, "GET /" was used as a pseudo-selector to emulate anHTTPGET request.[26]John Goerzen created an addition[27]to the Gopher protocol, commonly referred to as "URLlinks", that allows links to any protocol that supports URLs. For example, to create a link tohttp://gopher.quux.org/, the item type ish, the display string is the title of the link, the item selector is "URL:http://gopher.quux.org/", and the domain and port are that of the originating Gopher server (so that clients that do not support URL links will query the server and receive an HTML redirection page). Gopher+ is a forward-compatible enhancement to the Gopher protocol. Gopher+ works by sendingmetadatabetween the client and the server. The enhancement was never widely adopted by Gopher servers.[28][29][30]The client sends a tab followed by a +. A Gopher+ server will respond with a status line followed by the content the client requested. An item is marked as supporting Gopher+ in a Gopher directory listing by a tab followed by a + after the port (this is the case for some of the items in the example above). Gopher+ defines a number of other features as well. Gopher clients fall into two groups: clients, libraries, and utilities primarily designed to access gopher resources; and clients like web browsers, libraries, and utilities primarily designed to access World Wide Web resources, but which maintain(ed) gopher support. Browsers with no native Gopher support can still access servers using one of the available Gopher toHTTPgateways or aproxy serverthat converts Gopher menus intoHTML; known proxies are the Floodgap Public Gopher proxy and Gopher Proxy. Similarly, certain server packages such as GN and PyGopherd have built-in Gopher toHTTPinterfaces.Squid Proxysoftware gateways anygopher://URL to HTTP content, enabling any browser or web agent to access gopher content easily. ForMozillaFirefoxandSeaMonkey, Overbite[17]extensions extend Gopher browsing and support the current versions of the browsers (Firefox Quantum v ≥57 and equivalent versions of SeaMonkey): OverbiteWX includes support for accessing Gopher servers not on port 70 using a whitelist and forCSO/ph queries, while OverbiteFF always uses port 70. ForChromiumandGoogle Chrome, Burrow[38]is available. It redirectsgopher://URLs to a proxy.
In the past, an Overbite proxy-based extension for these browsers was available but is no longer maintained and does not work with the current (>23) releases.[17]ForKonqueror, Kio gopher[43]is available. As the bandwidth-sparing, simple interface of Gopher can be a good match formobile phonesandpersonal digital assistants(PDAs),[44]the early 2010s saw a renewed interest in native Gopher clients for popularsmartphones. Gopher's popularity was at its height at a time when there were still many equally competing computer architectures and operating systems. As a result, there are several Gopher clients available forAcorn RISC OS,AmigaOS, AtariMiNT,Conversational Monitor System(CMS),DOS,classic Mac OS,MVS,NeXT,OS/2 Warp, mostUnix-likeoperating systems,VMS,Windows 3.x, andWindows 9x.GopherVRwas a client designed for 3D visualization, and there is even a Gopher client inMOO.[45][46]Most such clients arehard-codedto work onTransmission Control Protocol(TCP)port70.[47] Because the protocol is trivial to implement in a basic fashion, there are many server packages still available, and some are still maintained.
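To make the wire protocol described above concrete, here is a minimal sketch of a Gopher request in Python. It is an illustration, not a reference implementation; the host gopher.floodgap.com and the empty root selector are taken from the examples in this article, and any conforming server and selector would work the same way:

    import socket

    def gopher_fetch(host, selector="", port=70):
        # Connect, send the selector terminated by CR+LF, then read
        # until the server closes the connection, as the protocol requires.
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(selector.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)

    # An empty selector requests the server's default (root) menu.
    for line in gopher_fetch("gopher.floodgap.com").decode("latin-1").splitlines():
        if line == ".":                # a lone full-stop ends the reply
            break
        fields = line.split("\t")      # type+display, selector, host, port
        if fields[0]:
            print(fields[0][0], fields[0][1:], *fields[1:])

The same exchange can be performed by hand with netcat, as noted earlier, since a transaction is nothing more than one selector line followed by the server's reply.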
https://en.wikipedia.org/wiki/Gopher_(protocol)
The following lists identify, characterize, and link to more thorough information onfile systems. Many olderoperating systemssupport only their one "native" file system, which does not bear any name apart from the name of the operating system itself. Disk file systems are usually block-oriented. Files in a block-oriented file system are sequences of blocks, often featuring fully random-access read, write, and modify operations. Some of these file systems have built-in checksumming and either mirroring or parity for extra redundancy on one or several block devices. Solid-state media, such asflash memory, are similar to disks in their interfaces, but have different problems. At a low level, they require special handling such aswear levelingand differenterror detection and correctionalgorithms. Typically a device such as asolid-state drivehandles such operations internally and therefore a regular file system can be used. However, for certain specialized installations (embedded systems, industrial applications) a file system optimized for plain flash memory is advantageous. Inrecord-oriented file systems, files are stored as a collection ofrecords. They are typically associated withmainframeandminicomputeroperating systems. Programs read and write whole records, rather than bytes or arbitrary byte ranges, and can seek to a record boundary but not within records. The more sophisticated record-oriented file systems have more in common with simpledatabasesthan with other file systems. Shared-disk file systems (also calledshared-storage file systems,SAN file systems, orcluster file systems) are primarily used in astorage area networkwhere all nodes directly access theblock storagewhere the file system is located. This makes it possible for nodes to fail without affecting access to the file system from the other nodes. Shared-disk file systems are normally used in ahigh-availability clustertogether with storage on hardwareRAID. Shared-disk file systems normally do not scale over 64 or 128 nodes. Shared-disk file systems may besymmetric, wheremetadatais distributed among the nodes, orasymmetric, with centralizedmetadataservers. Distributed file systemsare also called network file systems. Many implementations have been made; they are location dependent and haveaccess control lists(ACLs), unless otherwise stated below. Distributed fault-tolerant file systems replicate data between nodes (between servers or servers/clients) forhigh availabilityandoffline(disconnected) operation. Distributedparallelfile systems stripe data over multiple servers for high performance. They are normally used inhigh-performance computing (HPC). Some of the distributed parallel file systems use anobject storage device(OSD) (in Lustre called OST) for chunks of data together with centralizedmetadataservers. Distributed file systems which are bothparallelandfault-tolerantstripe and replicate data over multiple servers for high performance and to maintaindata integrity; even if a server fails, no data is lost. Such file systems are used in bothhigh-performance computing (HPC)andhigh-availability clusters. All file systems listed here focus onhigh availability,scalabilityand high performance unless otherwise stated below. Some systems still in development may be calledcooperative storage clouds. Finally, some entries are not really file systems; they allow access to file systems from an operating system standpoint.
https://en.wikipedia.org/wiki/List_of_file_systems#Distributed_file_systems
CacheFSis a family of software technologies designed to speed updistributed file systemfile access for networked computers. They store copies (caches) of files on secondary memory, typically a localhard disk, so that if a file is accessed again, it can be fetched locally at much higher speeds than networks typically allow. CacheFS software is used on severalUnix-likeoperating systems. The original Unix version was developed bySun Microsystemsin 1993. Another version was written for Linux and released in 2003. Network filesystems are dependent on anetworklink and a remoteserver; obtaining a file from such afilesystemcan be significantly slower than getting the file locally. For this reason, it can be desirable to cache data from these filesystems on a local disk, thus potentially speeding up future accesses to that data by avoiding the need to go to the network and fetch it again. The software has to check that the remote file has not changed since it was cached, but this is much faster than reading the whole file again. Spriteused large disk block caches, located in main memory, to achieve high performance in its file system; however, the term CacheFS has found little or no use to describe caches in main memory. The first CacheFS implementation, in 6502 assembler, was a write-through cache developed by Mathew R Mathews at Grossmont College. It was used from fall 1986 to spring 1990 on three diskless 64 kB main memory Apple IIe computers to cache files from a Nestar file server onto Big Board, a 1 MB DRAM secondary memory device partitioned into CacheFS and TmpFS. The computers ran Pineapple DOS, an Apple DOS 3.3 derivative developed in the course of a follow-on to WR Bornhorst's NSF-funded Instructional Computing System. Pineapple DOS features, including caching, were unnamed; the name CacheFS was introduced seven years later by Sun Microsystems. The first Unix CacheFS implementation was developed bySun Microsystemsand released in theSolaris 2.3operating system release in 1993, as part of an expanded feature set for theNFSor Network File System suite known asOpen Network Computing Plus (ONC+).[1]It was subsequently used in other UNIX operating systems such asIRIX(starting with the 5.3 release in 1994).[2][3] Linuxoperating systems now commonly use a new version of CacheFS developed by David Howells. Howells appears to have rewritten CacheFS from scratch, not using Sun's original code. The Linux CacheFS currently is designed to operate onAndrew File SystemandNetwork File System(NFS) filesystems. Because of its similar naming to FS-Cache, CacheFS'terminologyis confusing to outsiders. FS-Cache is the cache facility/layer that sits between network filesystems such as NFS or AFS and cache backends such as CacheFS; it passes requests from the network filesystem to the backend, which handles the actual data storage and retrieval. CacheFS itself is a filesystem backend for the FS-Cache facility: ablock devicecan be used as a cache by simplymountingit, needing no special activation, and it is deactivated by unmounting it. An alternative backend is a daemon using an existing filesystem (ext3withuser_xattr) as a cache; that cache is bound with "cachefilesd -s". As of 2010, work on the project appeared to have stalled, and some people were attempting to revive the code and bring it up to date.[4] The facility (known as FS-Cache) is designed to be as transparent as possible to a user of the system. Applications should just be able to use NFS files as normal, without any knowledge of there being a cache.
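As a usage sketch (assuming a Linux distribution that ships Howells' FS-Cache together with the cachefilesd daemon): the administrator points /etc/cachefilesd.conf at a cache directory residing on a filesystem mounted with the user_xattr option, starts the daemon (binding the cache with "cachefilesd -s", as noted above), and then mounts NFS shares with the fsc mount option, for instance mount -t nfs -o fsc server:/export /mnt/data. Only mounts carrying the fsc option are cached; applications then access the files as normal, with no knowledge of the cache.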
https://en.wikipedia.org/wiki/CacheFS
Incomputer science, arecord-oriented filesystemis afile systemwhere data is stored as collections ofrecords. This is in contrast to a byte-oriented filesystem, where the data is treated as an unformatted stream ofbytes. There are several different possible record formats; the details vary depending on the particular system. In general the formats can be fixed-length or variable-length, with different physical organizations or padding mechanisms;metadatamay be associated with the file records to define the record length, or the data may be part of the record. Differentaccess methodsfor records may be provided; for example, records may be retrieved insequential order, bykey, or by record number. Record-oriented filesystems are frequently associated with mainframe operating systems, such asOS/360 and successors[1]andDOS/360 and successors, and midrange operating systems, such asRSX-11andVMS. However, they originated earlier in software such asInput/Output Control System(IOCS).[2]Records, sometimes called logical records, are often written together in blocks, sometimes called physical records; this is the norm for direct access and tape devices, but files onunit recorddevices are normally unblocked, i.e., there is only one record per block. Record-oriented filesystems can be supported on media other than direct access devices. A deck of punched cards can be considered a record-oriented file. A magnetic tape is an example of a medium that can support records of uniform length or variable length. In a record file system, a programmer designs the records that may be used in a file. All application programs accessing the file, whether adding, reading, or updating records, share an understanding of the design of the records. In DOS/360, OS/360, and their successors there is no restriction on the bit patterns composing the data record, i.e. there is no delimiter character; this is not always true in other software, e.g., certain record types for the RCA File Control Processor (FCP) on the 301, 501, 601 and 3301. The file comes into existence when a file create request is issued to the filesystem. Some information about the file may be included with the create request. This information may specify that the file has fixed-length records (all records are the same size) along with the size of the records. Alternatively, the specification may state that the records are of variable length, along with the maximum record length. Additional information including blocking factor, binary vs. text and the maximum number of records may be specified. It may be permitted to read only the beginning of a record; the next sequential read returns the next collection of data (record) that the writer intended to be grouped together. It may also be permitted to write only the beginning of a record. In these cases, the record is padded with binary zeros or with spaces, depending on whether the file is recognized as a binary file or a text file. Some operating systems require that library routines specific to the record format be included in the program. This means that a program originally expected to read a variable-length record file cannot read a fixed-length file. These operating systems must provide file system utilities for converting files between one format and another. This means copying the file (which requires additional storage space, time and coordination) may be necessary. Other operating systems include various routines and associate the appropriate routine, based on the file organization, at execution time.
In either case, significant amounts of code to manage records must be provided in protected routines to ensure file integrity. An alternative to a record-oriented file is a stream file, in which the file system treats a file as an unstructured sequence of bytes. The applications may, but need not, impose a record structure. This approach significantly reduces the size and complexity of the library and reduces the number of utilities required to maintain files. A common application convention fortext filesrepresented as streams is to use anew linedelimiterto separate or terminate records, commonlyCR, CRLF or LF. Unfortunately, the CPU time required to parse for the record delimiter is significant, and the exclusion of the record delimiter pattern from the data is frequently undesirable. An alternate convention is to include a length field in each record. The writer application is responsible for imposing any record structure and the reader application is responsible for separating out the records. A record-oriented file has several advantages. After a program writes a collection of data as a record, the program that reads that record has the understanding of those data as a collection. Often a file will contain several related records in sequence; after the program reads the beginning of the sequence, the next sequential read returns the next collection of data (record) that the writer intended to be grouped together. Another advantage is that the record has a length, and there is usually no restriction on the bit patterns composing the data record, i.e. there is no delimiter character. There is usually a cost associated with record-oriented files. For fixed-length records, some records may have unused space, while for variable-length records the delimiter or length field takes up space. Variable-length blocks may have overhead due to delimiters or length fields. In addition, there is overhead imposed by the device. On a magnetic tape, overhead typically takes the form of an inter-record gap. On a direct access device with fixed-length sectors, there may be unused space in the last sector of a block. On a direct access device with variable-length physical records, that overhead typically takes the form of metadata and inter-record gaps. On a file composed of varying-length records, a maximum record length is defined to determine the size of the length metadata associated with each record. A major advantage of record-oriented file systems is that they abstract the files kept on paper in earlier times. A record might contain data associated with a particular entity, e.g., a building, contact, employee, part, or venue. A second motivator for the idea of record orientation is that it is in some sense the more natural orientation for persistent storage on a non-volatile but slow physical storage device. Most physical storage devices can communicate only in units of a block. Significant portions of modern operating system kernels and associated device drivers are devoted to hiding the naturally structured and delimited (and in some sense a block is just a physical record) nature of physical storage devices. It is not coincidental that record-oriented file systems arose earlier in the history of computing than byte-stream-oriented file systems, when the capabilities for abstraction were far less.
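As an illustration of the contrast drawn above, the following Python sketch reads both kinds of files: fixed-length records, and variable-length records carrying a length field. The 80-byte record size and the 2-byte big-endian length prefix are arbitrary choices for the example, not part of any particular operating system's format:

    import struct

    RECLEN = 80  # fixed record length agreed on by all programs using the file

    def read_fixed(path, reclen=RECLEN):
        # Fixed-length records: every read returns exactly one record.
        with open(path, "rb") as f:
            while True:
                rec = f.read(reclen)
                if len(rec) < reclen:
                    break
                yield rec

    def read_length_prefixed(path):
        # Variable-length records, each preceded by a 2-byte length field,
        # so no delimiter pattern has to be excluded from the data itself.
        with open(path, "rb") as f:
            while True:
                header = f.read(2)
                if len(header) < 2:
                    break
                (length,) = struct.unpack(">H", header)
                yield f.read(length)

Note how the reader and writer must share the record design (record length or prefix format) out of band, exactly as the article describes for record-oriented files.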
https://en.wikipedia.org/wiki/Record-oriented_filesystem
Batch renamingis a form ofbatch processingused torenamemultiplecomputer filesand folders in an automated fashion, in order to save time and reduce the amount of work involved. Some sort ofsoftwareis required to do this. Such software can be more or less advanced, but most have the same basic functions. Batch renaming can also be referred to as 'mass file renaming', renaming 'en masse' and 'bulk renaming'. Most batch renamers share a basic set of functions to manipulate the filenames. Some batch rename software can do more than just renaming filenames. Features include changing the dates of files and changing the file attributes (such as the write-protected attribute). There are many situations where batch renaming software can be useful. There are also a few problems to take into consideration when renaming a file list (→ means: renamed to). A rename can collide with an existing file:
file01 → file02 (file02 already exists in the file system)
Two files can be renamed to the same name:
file01 → file03
file02 → file03 (file03 is already used)
Renames can also form a cycle, in which every target name is already taken:
file01 → file02 (file02 already exists in the file system)
file02 → file03 (file03 already exists in the file system)
file03 → file01 (file01 already exists in the file system)
Two-pass renaming uses a temporary filename (one that doesn't exist in the file system), as shown here (a code sketch of this scheme follows below):
file01 → file01_AAAAA
file02 → file02_AAAAB
file03 → file03_AAAAC
file01_AAAAA → file02
file02_AAAAB → file03
file03_AAAAC → file01
It solves the cycle renaming problem. If this approach is to be used, care should be taken not to exceed any filename length limits during the rename, and also to ensure that the temporary names do not clash with any existing files. This is a list of notable batch renaming programs in the form of a comparison table.
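The two-pass scheme above can be sketched in a few lines of Python; the temporary suffix is an arbitrary placeholder, and a real tool must also ensure that the temporary names are unique, do not collide with existing files, and stay within filename length limits:

    import os

    def two_pass_rename(mapping, tmp_suffix="_AAAAA"):
        # Pass 1: move every source file out of the way under a temporary
        # name; this breaks any cycles in the mapping.
        for old in mapping:
            os.rename(old, old + tmp_suffix)
        # Pass 2: move the temporary files to their final names.
        for old, new in mapping.items():
            os.rename(old + tmp_suffix, new)

    # The cycle example from above:
    # two_pass_rename({"file01": "file02", "file02": "file03", "file03": "file01"})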
https://en.wikipedia.org/wiki/Batch_renaming
The following tables compare general and technical information for a number of notablefile managers. This table shows theoperating systemsthat the file managers can run on, without emulation. Information about what common file manager views are implemented natively (without third-party add-ons). Note that the "Column View" does not refer to theMiller Columnsbrowsing / visualization technique that can be applied totree structures/ folders. Twin-panel file managers have obligatory connected panels where an action in one panel results in a reaction in the second. Konqueror supports multiple panels divided horizontally, vertically or both, but these panels do not act as twin panels by default (the user has to mark the panels they want to act as twin panels). Information on what networking protocols the file managers support. Note that many of these protocols might be supported, in part or in whole, by software layers below the file manager, rather than by the file manager itself; for example, themacOSFinder doesn't implement those protocols, and the Windows Explorer doesn't implement most of them; they just make ordinary file system calls to access remote files, and Konqueror either uses ordinary file system calls orKIOslave calls to access remote files. Some functions, such as browsing for servers or shares, might be implemented in the file manager even if most functions are implemented below the file manager. Information on what basic file features the file managers support. Information on what file searching features the file managers support. Regular expressions allow nested Boolean searches, so all file managers supporting RegExp search implicitly also support Boolean searches. Information on which parts of the application can be extended by plugins.
https://en.wikipedia.org/wiki/Comparison_of_file_managers
Adisk utilityis autility programthat allows a user to perform various functions on acomputer disk, such asdisk partitioningandlogical volume management, as well as multiple smaller tasks such as changingdrive lettersand othermount points, renaming volumes,disk checking, anddisk formatting, which are otherwise handled separately by multiple other built-incommands.[1]Eachoperating system(OS) has its own basic disk utility, and there are also separate programs which can recognize and adjust the differentfilesystemsof multiple OSes. Types of disk utilities include disk checkers, disk cleaners and disk space analyzers. Disk cleanersare computer programs that find and delete potentially unnecessary or potentially unwanted files from a computer. The purpose of such deletion may be to free up disk space, to eliminate clutter or to protect privacy. Unnecessary files that consume disk space includetemporary files,trash, oldbackupsandweb cachesmade by web browsers. Privacy risks includeHTTP cookies,local shared objects,log filesor any other trace that may tell which computer program opened which files. Disk cleaners must not be confused withantivirus software(which deletes malware),registry cleaners(which clean theMicrosoft WindowsRegistry) ordata erasuresoftware (which securely deletes files), although multifunction software (such as those included below) may fit into all these categories. Adisk compression utilityincreases the amount of information that can be stored on a hard disk drive of given size. Unlike afile compressionutility, which compresses only specified files and requires the user to designate the files to be compressed, an on-the-fly disk compression utility works automatically without the user needing to be aware of its existence. When information needs to be stored to the hard disk, the utility will compress the information. When information needs to be read, the utility will decompress the information. A disk compression utility overrides the standard operating system routines. Since all software applications access the hard disk using these routines, they continue to work after disk compression has been installed. The compression/expansion process adds a small amount of overhead to disk access and may complicate error recovery on the affected volume. Also, if the compression utility's device driver were uninstalled or became corrupted, all data on the disk would be lost. Disk compression utilities were popular especially in the early 1990s, when microcomputer hard disks were still relatively small (20 to 80 megabytes).[2]Hard drives were also rather expensive at the time, costing roughly 10 USD per megabyte. For the users who bought disk compression applications, the software proved in the short term to be a more economical means of acquiring more disk space, as opposed to replacing their current drive with a larger one. A good disk compression utility could, on average, double the available space with negligible speed loss. Disk compression fell into disuse by the late 1990s, as advances in hard drive technology and manufacturing led to increased capacities and lower prices. Adisk checkeris a utility program which can scan ahard diskto findfilesor areas that are corrupted in some way, or were not correctly saved, and eliminate them for a more efficiently operating hard drive. This is not to be confused with adisk cleaner, which can find files that are unnecessary for computer operation, or that take up considerable amounts of space.
Some disk checkers can perform a whole surface scan to attempt to find any possiblebad sectors, whereas others scan only the logical structure of files on the hard disk. Operating systems often include one such tool. Disk formattinganddisk partitioningtoolsare responsible for generating low-level disk layouts andfile systems. Operating systems typically supply one or more programs performing these functions as part of their standard install. Adisk space analyzer(ordisk usage analysis software) is asoftware utilityfor the visualization ofdiskspace usage, obtained by getting the size of eachfolder(including sub-folders) and eachfilein a folder or drive. Most of these applications analyze this information to generate a graphical chart showing disk usage distribution according to folders or other user-defined criteria. Some disk space analyzers like DiskReport allow analysis of the history of size and file count for each folder, to help find growing folders.
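As a sketch of what a disk space analyzer does internally, the following Python fragment computes cumulative folder sizes in the manner described above; it is a simplification that ignores hard links, symbolic links and permission errors:

    import os

    def folder_sizes(root):
        # Walk bottom-up so each folder's total can include its sub-folders.
        totals = {}
        for dirpath, dirnames, filenames in os.walk(root, topdown=False):
            size = sum(
                os.path.getsize(os.path.join(dirpath, name))
                for name in filenames
                if os.path.isfile(os.path.join(dirpath, name))
            )
            size += sum(totals.get(os.path.join(dirpath, d), 0) for d in dirnames)
            totals[dirpath] = size
        return totals

    # Print the ten largest folders under the current directory.
    for path, size in sorted(folder_sizes(".").items(), key=lambda kv: -kv[1])[:10]:
        print(f"{size:12d}  {path}")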
https://en.wikipedia.org/wiki/Disk_space_analyzer
Incomputing, thedesktop metaphoris aninterface metaphorwhich is a set of unifying concepts used bygraphical user interfacesto help users interact more easily with the computer.[1]The desktop metaphor treats thecomputer monitoras if it is the top of the user'sdesk, upon whichobjectssuch asdocumentsandfoldersof documents can be placed. A document can be opened into awindow, which represents a paper copy of the document placed on the desktop. Small applications calleddesk accessoriesare also available, such as a desk calculator or notepad, etc. The desktop metaphor itself has been extended and stretched with various implementations ofdesktop environments, since access to features andusabilityof the computer are usually more important than maintaining the 'purity' of themetaphor. Hence one can find trash cans on the desktop, as well as disks and network volumes (which can be thought of asfiling cabinets, not something normally found on a desktop). Other features such asmenu barsortaskbarshave no direct counterpart on a real-world desktop, though this may vary by environment and the function provided; for instance, a familiarwall calendarcan sometimes be displayed or otherwise accessed via a taskbar or menu bar belonging to the desktop. The desktop metaphor was first introduced byAlan Kay, David C. Smith, and others atXerox PARCin 1970 and elaborated in a series of innovative software applications developed by PARC scientists throughout the ensuing decade. The first computer to use an early version of the desktop metaphor was the experimentalXerox Alto,[2][3]and the first commercial computer that adopted this kind of interface was theXerox Star. The use ofwindow controlsto contain related information predates the desktop metaphor, with a primitive version appearing inDouglas Engelbart's "Mother of All Demos",[4]though it was incorporated by PARC in the environment of theSmalltalklanguage.[5] One of the first desktop-like interfaces on the market was a program calledMagic DeskI. Built as a cartridge for theCommodore 64home computerin 1983, a very primitive GUI presented alow resolutionsketch of a desktop, complete with telephone, drawers, calculator, etc. The user made their choices by moving aspritedepicting a pointing hand, using the samejoystickthe user may have used forvideo gaming. Onscreen options were chosen by pushing the fire button on the joystick. The Magic Desk I program featured a graphically emulatedtypewriter, complete with audio effects. Other applications included a calculator, arolodexorganiser, and aterminal emulator. Files could be archived into the drawers of the desktop. Atrashcanwas also present. The first computer to popularise the desktop metaphor, using it as a standard feature over the earliercommand-line interface, was theApple Macintoshin 1984. The desktop metaphor is ubiquitous in modern-day personal computing; it is found in mostdesktop environmentsof modern operating systems:Windowsas well asmacOS,Linux, and otherUnix-likesystems. BeOSobserved the desktop metaphor more strictly than many other systems. For example, external hard drives appeared on the 'desktop', while internal ones were accessed by clicking on aniconrepresenting the computer itself. By comparison, the Mac OS places all drives on the desktop itself by default, while in Windows the user can access the drives through an icon labelled "Computer". Amigaterminology for its desktop metaphor was taken directly from workshop jargon.
The desktop was calledWorkbench, programs were calledtools, small applications (applets) were utilities, directories were drawers, etc. Icons of objects were animated, and directories were shown as drawers that were represented as either open or closed. As in theclassic Mac OSandmacOSdesktop, an icon for afloppy diskorCD-ROMwould appear on the desktop when the disk was inserted into the drive, as it was a virtual counterpart of a physical floppy disk or CD-ROM on the surface of a workbench. Thepaper paradigmrefers to theparadigmused by most modern computers and operating systems. The paper paradigm usually consists of black text on a white background, files within folders, and a "desktop". The paper paradigm was created by many individuals and organisations, such asDouglas Engelbart,Xerox PARC, andApple Computer, and was an attempt to make computers more user-friendly by making them resemble the common workplace of the time (with papers, folders, and a desktop).[6]It was first presented to the public by Engelbart in 1968, in what is now referred to as "The Mother of All Demos". From John Siracusa:[7] Back in 1984, explanations of the originalMacinterface to users who had never seen aGUIbefore inevitably included an explanation oficonsthat went something like this: "This icon represents your file on disk." But to the surprise of many, users very quickly discarded any semblance of indirection. This icon is my file. My file is this icon. One is not a "representation of" or an "interface to" the other. Such relationships were foreign to most people, and constituted unnecessary mental baggage when there was a much more simple and direct connection to what they knew of reality. Since then, many aspects of computers have wandered away from the paper paradigm by implementing features such as "shortcuts" to files,hypertext, and non-spatial file browsing. A shortcut (a link to a file that acts as a redirecting proxy, not the actual file) and hypertext have no real-world equivalent. Non-spatial file browsing may also confuse novice users, as they can often have more than one window representing the same folder open at the same time, something that is impossible in reality. These and other departures from real-world equivalents are violations of the pure paper paradigm.
https://en.wikipedia.org/wiki/Desktop_metaphor
Incomputing,spatial navigationis the ability to navigate between focusable elements, such ashyperlinksand form controls, within a structured document oruser interfaceaccording to their spatial location. This method is widely used inapplication softwarelikecomputer games. In the past,Web browsershave usedtabbing navigationto change the focus within an interface, by pressing thetab keyof acomputer keyboardto focus on the next element (or⇧ Shift+Tab ↹to focus on the previous one). The order is based on that in the source document. For HTML without any style, this method usually works because the spatial locations of the elements follow the same order as the source document. However, with the introduction of style via presentational attributes orstyle sheetssuch asCSS, this type of navigation is being used less often. Spatial navigation uses thearrow keys(with one or moremodifier keysheld) to navigate on the "2D plane" of the interface. For example, pressing the "up" arrow key will focus on the closest focusable element above the current element. In many cases, this could save many key presses. This accessibility feature is available in a number of applications, e.g. theVivaldi web browser.[1]For Vivaldi users, this allows a faster way to "jump" to different areas in long web pages or articles without manually scrolling and scanning with their eyes. Some examples, as noted above, include theTab ↹key to jump to the next input field, but also the⇧ Shiftkey with arrow keys (↑,↓,→,←) to jump to various links and text headers. Doug Turner (Mozilla), theMinimolead developer, has created a couple of specialMozilla Firefoxbuilds with this feature. Eventually, this may be built in as a default part of Firefox. Nightly builds ofWebKit(thelayout engineused byAppleSafariandGoogle Chrome, among others) now[2]have support for spatial navigation. In games, such navigation is represented by (for example)camera-relative movement.
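A minimal sketch of the idea in Python is shown below: given the bounding boxes of the focusable elements, it picks the nearest one lying in the direction of the pressed arrow key. Real implementations, such as those in browsers, use more elaborate scoring that also weighs alignment and overlap; this only conveys the basic principle:

    def navigate(current, candidates, direction):
        # Boxes are (x, y, width, height); y grows downwards as on screens.
        cx, cy = current[0] + current[2] / 2, current[1] + current[3] / 2
        best, best_dist = None, float("inf")
        for box in candidates:
            if box == current:
                continue
            bx, by = box[0] + box[2] / 2, box[1] + box[3] / 2
            dx, dy = bx - cx, by - cy
            # Discard candidates that do not lie in the requested direction.
            if direction == "up" and dy >= 0:
                continue
            if direction == "down" and dy <= 0:
                continue
            if direction == "left" and dx >= 0:
                continue
            if direction == "right" and dx <= 0:
                continue
            dist = dx * dx + dy * dy  # squared Euclidean distance
            if dist < best_dist:
                best, best_dist = box, dist
        return best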
https://en.wikipedia.org/wiki/Spatial_navigation
The following is a comparison of notablefile systemdefragmentationsoftware.
https://en.wikipedia.org/wiki/List_of_defragmentation_software
TheFAT file systemis afile systemused onMS-DOSand theWindows 9xfamily of operating systems.[3]It continues to be used onmobile devicesandembedded systems, and thus is a well-suited file system for data exchange between computers and devices of almost any type and age from 1981 through to the present. A FAT file system is composed of four regions: the reserved sectors (including the Boot Sector), the file allocation table(s), the root directory region (a fixed area on FAT12 and FAT16 only), and the data region. Important information from the Boot Sector is accessible through an operating system structure called theDrive Parameter Block(DPB) in DOS and OS/2. The total count of reserved sectors is indicated by a field inside the Boot Sector, and is usually 32 on FAT32 file systems.[4] For FAT32 file systems, the reserved sectors include aFile System Information Sectorat logical sector 1 and aBackup Boot Sectorat logical sector 6. While many other vendors have continued to utilize a single-sector setup (logical sector 0 only) for the bootstrap loader, Microsoft's boot sector code has grown to span logical sectors 0 and 2 since the introduction of FAT32, with logical sector 0 depending on sub-routines in logical sector 2. The Backup Boot Sector area consists of three logical sectors, 6, 7, and 8, as well. In some cases, Microsoft also uses sector 12 of the reserved sectors area for an extended boot loader. FAT useslittle-endianformat for all entries in the header (except for, where explicitly mentioned, some entries on Atari ST boot sectors) and the FAT(s).[5]It is possible to allocate more FAT sectors than necessary for the number of clusters. The end of the last sector of each FAT copy can be unused if there are no corresponding clusters. The total number of sectors (as noted in the boot record) can be larger than the number of sectors used by data (clusters × sectors per cluster), FATs (number of FATs × sectors per FAT), the root directory (n/a for FAT32), and hidden sectors including the boot sector; this would result in unused sectors at the end of the volume. If a partition contains more sectors than the total number of sectors occupied by the file system, this would also result in unused sectors at the end of the partition, after the volume. On non-partitionedstorage devices, such asfloppy disks, theBoot Sector(VBR) is the first sector (logical sector 0 with physical CHS address 0/0/1 or LBA address 0). For partitioned storage devices such as hard disks, the Boot Sector is the first sector of a partition, as specified in the partition table of the device. Since DOS 2.0, valid x86-bootable disks must start with either a short jump followed by a NOP (opstringsequence0xEB 0x?? 0x90[6][7]as seen since DOS 3.0[nb 1]and on DOS 1.1[8][9]) or a near jump (0xE9 0x?? 0x??[6][7]as seen on most (Compaq,TeleVideo) DOS 2.x formatted disks as well as on some (Epson,Olivetti) DOS 3.1 disks). For backward compatibility, MS-DOS, PC DOS and DR-DOS also accept a jump (0x69 0x?? 0x??)[6][7][10]on removable disks. On hard disks, DR DOS additionally accepts the swapped JMPS sequence starting with a NOP (0x90 0xEB 0x??),[10]whereas MS-DOS/PC DOS do not. (See below for Atari ST compatibility.)
The presence of one of these opstring patterns (in combination with a test for a valid media descriptor value at offset0x015) serves as an indicator to DOS 3.3 and higher that some kind of BPB is present (although the exact size should not be determined from the jump target, since some boot sectors contain private boot loader data following the BPB), while for DOS 1.x (and some DOS 3.0) volumes, they will have to fall back to the DOS 1.x method to detect the format via the media byte in the FAT (in logicalsector 1). Although officially documented as free for OEM use, MS-DOS/PC DOS (since 3.1), Windows 95/98/SE/ME and OS/2 check this field to determine which other parts of the boot record can be relied upon and how to interpret them. Therefore, setting the OEM label to arbitrary or bogus values may cause MS-DOS, PC DOS and OS/2 to not recognize the volume properly and cause data corruption on writes.[11][12][13]Common examples are "IBM␠␠3.3", "MSDOS5.0", "MSWIN4.1", "IBM␠␠7.1", "mkdosfs␠", and "FreeDOS␠". Some vendors store licensing info or access keys in this entry. The Volume Tracker in Windows 95/98/SE/ME will overwrite the OEM label with "?????IHC" signatures (a left-over from "␠OGACIHC" for "Chicago") even on a seemingly read-only disk access (such as aDIR A:) if the medium is not write-protected. Given the dependency on certain values explained above, this may, depending on the actual BPB format and contents, cause MS-DOS/PC DOS and OS/2 to no longer recognize a medium and throw error messages, despite the fact that the medium is not defective and can still be read without problems under other operating systems. Windows 9xreads such self-marked disks without any problems, but may report odd values for parameters that do not exist or are unused when the disk was formatted to an older BPB specification; for example, the disk serial number (which exists only for disks formatted on DOS 5.0 or later) will, after the OEM label has been overwritten with?????IHC, be reported as0000-0000or whatever other value happens to be stored in the serial number field on a disk formatted under another system.[14]This applies only to removable disk drives. Some boot loaders make adjustments or refuse to pass control to a boot sector depending on certain values detected here (e.g., NEWLDR offset0x018). The boot ROM of theWang Professional Computerwill only treat a disk as bootable if the first four characters of the OEM label are "Wang". Similarly, the ROM BIOS of thePhilips :YESwill only boot from a disk if the first four characters of the OEM label are ":YES". If, in anFAT32 EBPB, the signature at sector offset0x042is0x29and both total sector entries are 0, the file system entry may serve as a 64-bit total sector count entry and the OEM label entry may be used as an alternative file system type instead of the normal entry at offset0x052. In a similar fashion, if this entry is set to "EXFAT␠␠␠", it indicates the usage of anexFAT BPBlocated at sector offset0x040to0x077, whereasNTFSvolumes use "NTFS␠␠␠␠"[15]to indicate anNTFS BPB. (In conjunction with at least aDOS 3.31 BPB, someGPTboot loaders (likeBootDuet) use0x1FA–0x1FDto store the high 4 bytes of thehidden sectorsfor volumes located outside the first 2³²−1 sectors. Since this location may contain code or other data in other boot sectors, it may not be written to when0x1F9–0x1FDdo not all contain zero.)
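The detection heuristic described above can be sketched as follows in Python, for a raw sector read from a disk image; this is only an illustration of the opstring-plus-media-descriptor test, not a complete or authoritative BPB detector:

    def looks_like_bpb(sector0):
        # DOS 3.3+ heuristic: a short jump followed by a NOP, or a near jump,
        # at offset 0, combined with a plausible media descriptor byte
        # (0xF0-0xFF) at offset 0x015.
        jump_ok = ((sector0[0] == 0xEB and sector0[2] == 0x90)  # JMPS ?? / NOP
                   or sector0[0] == 0xE9)                       # near JMP
        media_ok = sector0[0x15] >= 0xF0
        return jump_ok and media_ok

    # Usage sketch against a floppy image file:
    # with open("floppy.img", "rb") as f:
    #     print(looks_like_bpb(f.read(512)))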
If this belongs to a boot volume, the DR-DOS 7.07 enhanced MBR can be configured (see NEWLDR offset0x014) to dynamically update this entry to the DL value provided at boot time or the value stored in the partition table. This enables booting off alternative drives, even when theVBRcode ignores the DL value. This signature must be located at fixed sector offset0x1FEfor sector sizes 512 or higher. If the physical sector size is larger, it may be repeated at the end of the physical sector. Atari STswill assume a disk to be Atari68000bootable if the checksum over the 256big-endianwords of the boot sector equals0x1234.[17][nb 3]If the boot loader code is IBM compatible, it is important to ensure that the checksum over the boot sector does not match this checksum by accident. If this happens to be the case, changing an unused bit (e.g., before or after the boot code area) can be used to ensure this condition is not met. In rare cases, a reversed signature0xAA 0x55has been observed on disk images. This can be the result of a faulty implementation in the formatting tool based on faulty documentation,[nb 2]but it may also indicate a swapped byte order of the disk image, which might have occurred in transfer between platforms using a differentendianness. BPB values and FAT12, FAT16 and FAT32 file systems are meant to uselittle-endianrepresentation only, and there are no known implementations of variants usingbig-endianvalues instead. If the logical sector size is larger than 512 bytes, the remainder is not included in the checksum and is typically zero-filled.[17]Since some PC operating systems erroneously do not accept FAT-formatted floppies if the0x55 0xAA[nb 2]signature is not present here, it is advisable to place the0x55 0xAAin this place (and add an IBM compatible boot loader or stub) and to use an unused word in the private data or boot code area or the serial number in order to ensure that the checksum0x1234[nb 3]is not matched (unless the sharedfat codeoverlay would be both IBM PC and Atari ST executable at the same time). The minimum allowed value for non-bootable FAT12/FAT16 volumes with up to 65,535 logical sectors is 32 bytes, or 64 bytes for more than 65,535 logical sectors. The minimum practical value is 128. Some pre-DOS 3.31 OEM versions of DOS used logical sector sizes up to 8192 bytes forlogical sectored FATs. Atari STGEMDOSsupports logical sector sizes between 512 and 4096.[17]DR-DOS supports booting off FAT12/FAT16 volumes with logical sector sizes up to 32 KB and INT 13h implementations supporting physical sectors up to 1024 bytes/sector.[nb 4]The minimum logical sector size for standard FAT32 volumes is 512 bytes, which can be reduced down to 128 bytes without support for theFS Information Sector. Floppy drives and controllers use physical sector sizes of 128, 256, 512 and 1024 bytes (e.g., PC/AX). TheAtari Portfoliosupports a sector size of 512 for volumes larger than 64 KB, 256 bytes for volumes larger than 32 KB and 128 bytes for smaller volumes.Magneto-optical drivesused sector sizes of 512, 1024 and 2048 bytes. In 2005, someSeagatecustom hard disks used sector sizes of 1024 bytes instead of the default 512 bytes.[18]Advanced Formathard disks use 4096 bytes per sector (4Kn) since 2010, but are also able to emulate 512-byte sectors (512e) for a transitional period. Linux, and by extension Android, supports a far larger logical sector size, officially documented in the man page for the filesystem utilities as up to 32 KB.
For most DOS-based operating systems, the maximum cluster size remains at 32 KB (or 64 KB) even for sector sizes larger than 512 bytes. For logical sector sizes of 1 KB, 2 KB and 4 KB, Windows NT 4.0 supports cluster sizes of 128 KB, while for 2 KB and 4 KB sectors the cluster size can reach 256 KB. Some versions of DR-DOS provide limited support for 128 KB clusters with 512 bytes/sector using a sectors/cluster value of 0. MS-DOS/PC DOS will hang on startup if this value is erroneously specified as 0.[19]: INT 21h AX=53h Since DR-DOS 7.0x FAT32 formatted volumes use a single-sector boot sector, FS info sector and backup sector, some volumes formatted under DR-DOS use a value of 4 here. Volumes declaring 2 FATs in this entry will never be treated asTFATvolumes. If the value differs from 2, some Microsoft operating systems may attempt to mount the volume as a TFAT volume and use the second cluster (cluster 1) of the first FAT to determine the TFAT status. A value of 0 without aFAT32 EBPB(no signature0x29or0x28at offset0x042) may also indicate a variable-sized root directory in some non-standard FAT12 and FAT16 implementations, which store the root directory start cluster in thecluster 1entry in the FAT.[20]This extension, however, is not supported by mainstream operating systems,[20]as it can conflict with other uses of the cluster 1 entry for maintenance flags, the current end-of-chain marker, orTFATextensions. This value must be adjusted so that directory entries always consume full logical sectors, given that eachdirectory entrytakes up 32 bytes. MS-DOS/PC DOS require this value to be a multiple of 16. The maximum value supported on floppy disks is 240,[6]and the maximum value supported by MS-DOS/PC DOS on hard disks is 512.[6]DR-DOS supports booting off FAT12/FAT16 volumes if the boot file is located in the first 2048 root directory entries. This value must reflect the media descriptor stored (in the entry forcluster 0) in the first byte of each copy of the FAT. Certain operating systems before DOS 3.2 (86-DOS,MS-DOS/PC DOS1.x andMSX-DOSversion 1.0) ignore the boot sector parameters altogether and use the media descriptor value from the first byte of the FAT to choose among internally pre-defined parameter templates. It must be greater than or equal to0xF0since DOS 4.0.[6] On removable drives, DR-DOS will assume the presence of a BPB if this value is greater than or equal to0xF0,[6]whereas for fixed disks, it must be0xF8to assume the presence of a BPB. Initially, these values were meant to be used as bit flags; for any removable media without a recognized BPB format and a media descriptor of either0xF8or0xFAto0xFF, MS-DOS/PC DOS treats bit 1 as a flag to choose a 9-sectors-per-track format rather than an 8-sectors-per-track format, and bit 0 as a flag to indicate double-sided media.[7]Values0x00to0xEFand0xF1to0xF7are reserved and must not be used. DOS 3.0 BPB: The following extensions were documented since DOS 3.0; however, they were already supported by some issues of DOS 2.11.[28]MS-DOS 3.10 still supported the DOS 2.0 format, but could use the DOS 3.0 format as well. A zero entry indicates that this entry is reserved, but not used. A bug in all versions of MS-DOS/PC DOS up to and including 7.10 causes these operating systems to crash for CHS geometries with 256 heads, therefore almost all BIOSes choose a maximum of 255 heads only. A zero entry indicates that this entry is reserved, but not used. It must not be used if the logical sectors entry at offset0x013is zero.
DOS 3.2 BPB: Officially, MS-DOS 3.20 still used the DOS 3.0 format, butSYSandFORMATwere adapted to support a 6 bytes longer format already (of which not all entries were used). It must not be used if the logical sectors entry at offset0x013is zero. DOS 3.31 BPB: Officially introduced with DOS 3.31 and not used by DOS 3.2, some DOS 3.2 utilities were designed to be aware of this new format already. Official documentation recommends trusting these values only if the logical sectors entry at offset0x013is zero. A zero entry indicates that this entry is reserved, but not used. A value of 0 may indicate LBA-only access, but may cause a divide-by-zero exception in some boot loaders, which can be avoided by storing a neutral value of 1 here, if no CHS geometry can be reasonably emulated. A bug in all versions of MS-DOS/PC DOS up to and including 7.10 causes these operating systems to crash for CHS geometries with 256 heads, therefore almost all BIOSes choose a maximum of 255 heads only. A zero entry indicates that this entry is reserved, but not used. A value of 0 may indicate LBA-only access, but may cause a divide-by-zero exception in some boot loaders, which can be avoided by storing a neutral value of 1 here, if no CHS geometry can be reasonably emulated. If this belongs to anAdvanced Active Partition(AAP) selected at boot time, the BPB entry will be dynamically updated by the enhanced MBR to reflect the "relative sectors" value in the partition table, stored at offset0x1B6in the AAP or NEWLDR MBR, so that it becomes possible to boot the operating system fromEBRs. (SomeGPTboot loaders (likeBootDuet) use boot sector offsets0x1FA–0x1FDto store the high 4 bytes of a 64-bit hidden sectors value for volumes located outside the first 2³²−1 sectors.) For partitioned media, if this and the entry at0x013are both 0 (as seen on some DOS 3.x FAT16 volumes), many operating systems (including MS-DOS/PC DOS) will retrieve the value from the corresponding partition's entry (at offset0xC) in theMBRinstead. If both of these entries are 0 on volumes using aFAT32 EBPBwith signature0x29, values exceeding the 4,294,967,295 (2³²−1) limit (e.g. someDR-DOSvolumes with 32-bit cluster entries) can use a 64-bit entry at offset0x052instead. A simple formula translates a volume's given cluster numberCNto a logical sector numberLSN:[24][25][26] LSN = SSA + (CN − 2) × SC, where the size of the system area SSA = RSC + FN × SF + ceil((32 × RDE) / SS) is determined from the number of reserved sectors RSC, the number of FATs FN, the sectors per FAT SF, the number of root directory entries RDE, the sector size SS, and the sectors per cluster SC. On unpartitioned media the volume's number of hidden sectors is zero and thereforeLSNandLBAaddresses become the same for as long as a volume's logical sector size is identical to the underlying medium's physical sector size. Under these conditions, it is also simple to translate betweenCHSaddresses andLSNsas well: LSN = SPT × (HN + (NOS × TN)) + SN − 1, where the sectors per trackSPTare stored at offset0x018, and the number of sidesNOSat offset0x01A. Track numberTN, head numberHN, and sector numberSNcorrespond toCylinder-head-sector: the formula gives the known CHS-to-LBAtranslation. A further structure, used by FAT12 and FAT16 since OS/2 1.0 and DOS 4.0, is known as theExtended BIOS Parameter Block(EBPB) (bytes below sector offset0x024are the same as for the DOS 3.31 BPB): A similar entry existed (only) in DOS 3.2 to 3.31 boot sectors at sector offset0x1FD. If this belongs to a boot volume, the DR-DOS 7.07 enhanced MBR can be configured (see NEWLDR offset0x014) to dynamically update this EBPB entry to the DL value provided at boot time or the value stored in the partition table. This enables booting off alternative drives, even when theVBRcode ignores the DL value.
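The cluster-to-sector translation given above can be expressed directly in code. The following Python sketch derives the system area size SSA from the BPB fields and checks the result against the well-known layout of a standard 1.44 MB floppy:

    from math import ceil

    def cluster_to_lsn(cn, bytes_per_sector, sectors_per_cluster,
                       reserved_sectors, num_fats, sectors_per_fat,
                       root_dir_entries):
        # SSA covers the reserved sectors, the FATs, and the (FAT12/FAT16)
        # root directory, whose entries are 32 bytes each. Data clusters
        # are numbered starting at 2.
        root_dir_sectors = ceil(32 * root_dir_entries / bytes_per_sector)
        ssa = reserved_sectors + num_fats * sectors_per_fat + root_dir_sectors
        return ssa + (cn - 2) * sectors_per_cluster

    # A standard 1.44 MB FAT12 floppy (512 B/sector, 1 sector/cluster,
    # 1 reserved sector, two 9-sector FATs, 224 root entries): cluster 2
    # maps to logical sector 33, the first sector of the data area.
    assert cluster_to_lsn(2, 512, 1, 1, 2, 9, 224) == 33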
Typically the serial number "xxxx-xxxx" is created by a 16-bit addition of both DX values returned by INT 21h/AH=2Ah (get system date)[nb 5]and INT 21h/AH=2Ch (get system time)[nb 5]for the high word and another 16-bit addition of both CX values for the low word of the serial number. Alternatively, some DR-DOS disk utilities provide a/#option to generate a human-readable time stamp "mmdd-hhmm" built from BCD-encoded 8-bit values for the month, day, hour and minute instead of a serial number. Not available if the signature at0x026is set to0x28. This area was used by boot sectors of DOS 3.2 to 3.3 to store a private copy of theDisk Parameter Table(DPT) instead of using the INT 1Eh pointer to retrieve the ROM table as in later issues of the boot sector. The re-usage of this location for the mostly cosmetic partition volume label minimized problems if some older system utilities would still attempt to patch the former DPT. This entry is meant for display purposes only and must not be used by the operating system to identify the type of the file system. Nevertheless, it is sometimes used for identification purposes by third-party software and therefore the values should not differ from those officially used. Supported since OS/2 1.2 and MS-DOS 4.0 and higher. Not available if the signature at0x026is set to0x28. In essence, FAT32 inserts 28 bytes into the EBPB, followed by the remaining 26 (or sometimes only 7)EBPBbytes as shown above for FAT12 and FAT16. Microsoft and IBM operating systems determine the type of FAT file system used on a volume solely by the number of clusters, not by the used BPB format or the indicated file system type; that is, it is technically possible to use a "FAT32 EBPB" also for FAT12 and FAT16 volumes as well as a DOS 4.0 EBPB for small FAT32 volumes. Since such volumes were found to be created by Windows operating systems under some odd conditions,[nb 6]operating systems should be prepared to cope with these hybrid forms. The byte at offset0x026in this entry should never become0x28or0x29in order to avoid any misinterpretation with the EBPB format under non-FAT32-aware operating systems. Fortunately, under normal circumstances (sector size of 512 bytes), this cannot happen, as a FAT32 file system has at most 0xFFFFFF6 = 268435446 clusters. One FAT sector holds 512 / 4 = 128 cluster descriptors, so at most ceil(268435446 / 128) = 2097152 = 0x200000 sectors would be needed, making the third byte of the number of FAT sectors at most 0x20, which is less than the forbidden 0x28 and 0x29 values. DR-DOS 7.07 FAT32 boot sectors with dual LBA and CHS support utilize bits 15–8 to store an access flag and part of a message. These bits contain either the bit pattern0110:1111b(lower-case letter 'o', bit 13 set for CHS access) or0100:1111b(upper-case letter 'O', bit 13 cleared for LBA access). The byte is also used for the second character in a potential "No␠IBMBIO␠␠COM" error message (see offset0x034), displayed either in mixed or upper case, thereby indicating which access type failed. Formatting tools or non-DR SYS-type tools may clear these bits, but other disk tools should leave bits 15–8 unchanged. A cluster value of0is not officially allowed and can never indicate a valid root directory start cluster. Some non-standard FAT32 implementations may treat it as an indicator to search for a fixed-sized root directory where it would be expected on FAT16 volumes; see offset0x011.
Some FAT32 implementations support a slight variation of Microsoft's specification in making the FS Information Sector optional by specifying a value of0xFFFF[19](or0x0000) in this entry. Since logical sector 0 can never be a valid FS Information Sector, but FS Information Sectors use the same signature as found on many boot sectors, file system implementations should never attempt to use logical sector 0 as an FS Information Sector and should instead assume that the feature is unsupported on that particular volume. Without an FS Information Sector, the minimum allowedlogical sector sizeof FAT32 volumes can be reduced down to 128 bytes for special purposes. Since DR-DOS 7.0x FAT32 formatted volumes use a single-sector boot sector, some volumes formatted under DR-DOS use a value of 2 here. Values of0x0000[4](and/or0xFFFF[19]) are reserved and indicate that no backup sector is available. DR-DOS 7.07 FAT32 boot sectors use these 12 bytes to store the filename of the "IBMBIO␠␠COM"[nb 8]file to be loaded (up to the first 29,696 bytes or the actual file size, whichever is smaller) and executed by the boot sector, followed by a terminating NUL (0x00) character. This is also part of an error message, indicating the actual boot file name and access method (see offset0x028). exFAT BPBsare located at sector offset0x040to0x077, overlapping all the remaining entries of a standard FAT32 EBPB including this one. They can be detected via their OEM label signature "EXFAT␠␠␠" at sector offset0x003. In this case, the bytes at0x00Bto0x03Fare normally set to0x00. This area may hold format filler byte0xF6[nb 7]artifacts after partitioning with MS-DOS FDISK when the volume has not yet been formatted. Most FAT32 file system implementations do not support an alternative signature of0x28[15]to indicate a shortened form of the FAT32 EBPB with only the serial number following (and no Volume Label and File system type entries), but since these 19 mostly unused bytes might serve different purposes in some scenarios, implementations should accept0x28as an alternative signature and then fall back to using the directory volume label in the file system instead of the one in the EBPB, for compatibility with potential extensions. Not available if the signature at offset0x042is set to0x28. If both total logical sectors entries at offset0x020and0x013are 0 on volumes using aFAT32 EBPBwith signature0x29, volumes with more than 4,294,967,295 (2³²−1) sectors (e.g. someDR-DOSvolumes with 32-bit cluster entries) can use this entry as a 64-bittotal logical sectorsentry instead. In this case, the OEM label at sector offset0x003may be retrieved as a new-stylefile system typeinstead. Versions of DOS before 3.2 totally or partially relied on themedia descriptor bytein the BPB or theFAT IDbyte in cluster 0 of the first FAT in order to determine FAT12 diskette formats even if a BPB is present. Depending on the FAT ID found and the drive type detected, they default to using one of several internally pre-defined BPB prototypes instead of the values actually stored in the BPB.[nb 1] Originally, the FAT ID was meant to be a bit flag with all bits set except for bit 2 cleared to indicate an 80-track (vs. 40-track) format, bit 1 cleared to indicate a 9-sector (vs. 8-sector) format, and bit 0 cleared to indicate a single-sided (vs. double-sided) format,[7]but this scheme was not followed by all OEMs and became obsolete with the introduction of hard disks and high-density formats.
Also, the various 8-inch formats supported by 86-DOS and MS-DOS do not fit this scheme. Microsoft recommends distinguishing between the two 8-inch formats for FAT ID 0xFE by trying to read a single-density address mark. If this results in an error, the medium must be double-density.[23]

The table does not list a number of incompatible 8-inch and 5.25-inch FAT12 floppy formats supported by 86-DOS, which differ either in the size of the directory entries (16 bytes vs. 32 bytes) or in the extent of the reserved sectors area (several whole tracks vs. one logical sector only).

The implementation of a single-sided 315 KB FAT12 format used in MS-DOS for the Apricot PC and F1e[34] had a different boot sector layout, to accommodate that computer's non-IBM-compatible BIOS. The jump instruction and OEM name were omitted, and the MS-DOS BPB parameters (offsets 0x00B-0x017 in the standard boot sector) were located at offset 0x050. The Portable, F1, PC duo and Xi FD supported a non-standard double-sided 720 KB FAT12 format instead.[34] The differences in the boot sector layout and media IDs made these formats incompatible with many other operating systems. The geometry parameters for these formats are:

Later versions of Apricot MS-DOS gained the ability to read and write disks with the standard boot sector in addition to those with the Apricot one. These formats were also supported by DOS Plus 2.1e/g for the Apricot ACT series.

The DOS Plus adaptation for the BBC Master 512 supported two FAT12 formats on 80-track, double-sided, double-density 5.25" drives, which did not use conventional boot sectors at all. 800 KB data disks omitted a boot sector and began with a single copy of the FAT.[35] The first byte of the relocated FAT in logical sector 0 was used to determine the disk's capacity. 640 KB boot disks began with a miniature ADFS file system containing the boot loader, followed by a single FAT.[35][36] Also, the 640 KB format differed by using physical CHS sector numbers starting with 0 (not 1, as common) and incrementing sectors in the order sector-track-head (not sector-head-track, as common).[36] The FAT started at the beginning of the next track. These differences make these formats unrecognizable by other operating systems. The geometry parameters for these formats are:

DOS Plus for the Master 512 could also access standard PC disks formatted to 180 KB or 360 KB, using the first byte of the FAT in logical sector 1 to determine the capacity.

The DEC Rainbow 100 (all variations) supported one FAT12 format on 80-track, single-sided, quad-density 5.25" drives. The first two tracks were reserved for the boot loader, but contained neither an MBR nor a BPB (MS-DOS used a static in-memory BPB instead). The boot sector (track 0, side 0, sector 1) was Z80 code beginning with DI (0xF3). The 8088 bootstrap was loaded by the Z80. Track 1, side 0, sector 2 starts with the media/FAT ID byte 0xFA. Unformatted disks use 0xE5 instead. The file system starts on track 2, side 0, sector 1. There are 2 copies of the FAT and 96 entries in the root directory. In addition, there is a physical-to-logical track mapping to implement a 2:1 sector interleaving. The disks were formatted with the physical sectors numbered in order from 1 to 10 on each track after the reserved tracks, but the logical sectors from 1 to 10 were stored in physical sectors 1, 6, 2, 7, 3, 8, 4, 9, 5, 10.[37]

The "FS Information Sector" was introduced in FAT32[38] for speeding up access times of certain operations (in particular, getting the amount of free space).
It is located at a logical sector number specified in the FAT32 EBPB boot record at position 0x030 (usually logical sector 1, immediately after the boot record itself). As long as the FS Information Sector is located in logical sector 1 (the location where the FAT typically started in FAT12 and FAT16 file systems with only one reserved sector), the presence of this signature ensures that early versions of DOS will never attempt to mount a FAT32 volume, as they expect the values in cluster 0 and cluster 1 to follow certain bit patterns which are not met by this signature.

The sector's data may be outdated and not reflect the current media contents, because not all operating systems update or use this sector, and even if they do, the contents are not valid when the medium has been ejected without properly unmounting the volume or after a power failure. Therefore, operating systems should first inspect a volume's optional shutdown status bitflags residing in the FAT entry of cluster 1 or the FAT32 EBPB at offset 0x041, and ignore the data stored in the FS Information Sector if these bitflags indicate that the volume was not properly unmounted before. This does not cause any problems other than a possible speed penalty for the first free space query or data cluster allocation; see fragmentation.

If this sector is present on a FAT32 volume, the minimum allowed logical sector size is 512 bytes, whereas otherwise it would be 128 bytes. Some FAT32 implementations support a slight variation of Microsoft's specification by making the FS Information Sector optional by specifying a value of 0xFFFF[19] (or 0x0000) in the entry at offset 0x030.

A volume's data area is divided into identically sized clusters: small blocks of contiguous space. Cluster sizes vary depending on the type of FAT file system being used and the size of the drive; typical cluster sizes range from 2 to 32 KiB.[39]

Each file may occupy one or more clusters depending on its size. Thus, a file is represented by a chain of clusters (referred to as a singly linked list). These clusters are not necessarily stored adjacent to one another on the disk's surface but are often instead fragmented throughout the Data Region.

Each version of the FAT file system uses a different size for FAT entries. Smaller numbers result in a smaller FAT, but waste space in large partitions by needing to allocate in large clusters.

The FAT12 file system uses 12 bits per FAT entry, thus two entries span 3 bytes. It is consistently little-endian: if those three bytes are considered as one little-endian 24-bit number, the 12 least significant bits represent the first entry (e.g. cluster 0) and the 12 most significant bits the second (e.g. cluster 1). In other words, while the low eight bits of the first cluster in the row are stored in the first byte, the top four bits are stored in the low nibble of the second byte, whereas the low four bits of the subsequent cluster in the row are stored in the high nibble of the second byte and its higher eight bits in the third byte.

The FAT16 file system uses 16 bits per FAT entry, thus one entry spans two bytes in little-endian byte order. The FAT32 file system uses 32 bits per FAT entry, thus one entry spans four bytes in little-endian byte order. The four top bits of each entry are reserved for other purposes; they are cleared during formatting and should not be changed otherwise. They must be masked off before interpreting the entry as a 28-bit cluster address.
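The packing rules above can be made concrete with a short C sketch that reads the entry for a given cluster out of an in-memory byte image of the FAT. The function names are illustrative, and error handling is omitted:

#include <stdint.h>

/* Sketch: read one FAT entry from a byte image of the FAT. */
uint32_t fat12_entry(const uint8_t *fat, uint32_t cluster)
{
    uint32_t off = cluster + cluster / 2;          /* cluster * 1.5, rounded down */
    uint16_t pair = fat[off] | (fat[off + 1] << 8);
    return (cluster & 1) ? (pair >> 4)             /* odd cluster: high 12 bits */
                         : (pair & 0x0FFF);        /* even cluster: low 12 bits */
}

uint32_t fat16_entry(const uint8_t *fat, uint32_t cluster)
{
    return fat[2 * cluster] | (fat[2 * cluster + 1] << 8);
}

uint32_t fat32_entry(const uint8_t *fat, uint32_t cluster)
{
    uint32_t raw = fat[4 * cluster]
                 | ((uint32_t)fat[4 * cluster + 1] << 8)
                 | ((uint32_t)fat[4 * cluster + 2] << 16)
                 | ((uint32_t)fat[4 * cluster + 3] << 24);
    return raw & 0x0FFFFFFF;                       /* mask off the reserved top 4 bits */
}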
The File Allocation Table (FAT) is a contiguous number of sectors immediately following the area of reserved sectors. It represents a list of entries that map to each cluster on the volume. Each entry records one of four things: the cluster number of the next cluster in a chain, a special end-of-chain marker, a special marker for a bad cluster, or zero for an unused cluster.

For very early versions of DOS to recognize the file system, the system must have been booted from the volume or the volume's FAT must start with the volume's second sector (logical sector 1 with physical CHS address 0/0/2 or LBA address 1), that is, immediately following the boot sector. Operating systems assume this hard-wired location of the FAT in order to find the FAT ID in the FAT's cluster 0 entry on DOS 1.0-1.1 FAT diskettes, where no valid BPB is found.

The first two entries in a FAT store special values:

The first entry (cluster 0 in the FAT) holds the FAT ID since MS-DOS 1.20 and PC DOS 1.1 (allowed values 0xF0-0xFF with 0xF1-0xF7 reserved) in bits 7-0, which is also copied into the BPB of the boot sector, offset 0x015, since DOS 2.0. The remaining 4 bits (if FAT12), 8 bits (if FAT16) or 20 bits (if FAT32, the 4 MSB bits are zero) of this entry are always 1. These values were arranged so that the entry would also function as a "trap-all" end-of-chain marker for all data clusters holding a value of zero. Additionally, for FAT IDs other than 0xFF (and 0x00) it is possible to determine the correct nibble and byte order (to be) used by the file system driver; however, the FAT file system officially uses a little-endian representation only, and there are no known implementations of variants using big-endian values instead. 86-DOS 0.42 up to MS-DOS 1.14 used hard-wired drive profiles instead of a FAT ID, but used this byte to distinguish between media formatted with 32-byte or 16-byte directory entries, as they were used prior to 86-DOS 0.42.

The second entry (cluster 1 in the FAT) nominally stores the end-of-cluster-chain marker as used by the formatter, but in practice always holds 0xFFF / 0xFFFF / 0x0FFFFFFF; that is, with the exception of bits 31-28 on FAT32 volumes, these bits are normally always set. Some Microsoft operating systems, however, set these bits if the volume is not the volume holding the running operating system (that is, use 0xFFFFFFFF instead of 0x0FFFFFFF here).[40] (In conjunction with alternative end-of-chain markers, the lowest bits 2-0 can become zero for the lowest allowed end-of-chain marker 0xFF8 / 0xFFF8 / 0x?FFFFFF8; bit 3 should be reserved as well, given that clusters 0xFF0 / 0xFFF0 / 0x?FFFFFF0 and higher are officially reserved. Some operating systems may not be able to mount some volumes if any of these bits are not set; therefore the default end-of-chain marker should not be changed.) For DOS 1 and 2, the entry was documented as reserved for future use.

Since DOS 7.1, the two most significant bits of this cluster entry may hold two optional bitflags representing the current volume status on FAT16 and FAT32, but not on FAT12 volumes. These bitflags are not supported by all operating systems, but operating systems supporting this feature would set these bits on shutdown and clear the most significant bit on startup: If bit 15 (on FAT16) or bit 27 (on FAT32)[41] is not set when mounting the volume, the volume was not properly unmounted before shutdown or ejection and thus is in an unknown and possibly "dirty" state.[27] On FAT32 volumes, the FS Information Sector may hold outdated data and thus should not be used.
The operating system would then typically run SCANDISK or CHKDSK on the next startup[nb 9][41] (but not on insertion of removable media) to ensure and possibly reestablish the volume's integrity. If bit 14 (on FAT16) or bit 26 (on FAT32)[41] is cleared, the operating system has encountered disk I/O errors on startup,[41] a possible indication of bad sectors. Operating systems aware of this extension will interpret this as a recommendation to carry out a surface scan (SCANDISK) on the next boot.[27][41] (A similar set of bitflags exists in the FAT12/FAT16 EBPB at offset 0x1A or the FAT32 EBPB at offset 0x36. While the cluster 1 entry can be accessed by file system drivers once they have mounted the volume, the EBPB entry is available even when the volume is not mounted, and is thus easier to use by disk block device drivers or partitioning tools.)

If the number of FATs in the BPB is not set to 2, the second cluster entry in the first FAT (cluster 1) may also reflect the status of a TFAT volume for TFAT-aware operating systems. If the cluster 1 entry in that FAT holds the value 0, this may indicate that the second FAT represents the last known valid transaction state and should be copied over the first FAT, whereas the first FAT should be copied over the second FAT if all bits are set.

Some non-standard FAT12/FAT16 implementations utilize the cluster 1 entry to store the starting cluster of a variable-sized root directory (typically 2[33]). This may occur when the number of root directory entries in the BPB holds a value of 0 and no FAT32 EBPB is found (no signature 0x29 or 0x28 at offset 0x042).[20] This extension, however, is not supported by mainstream operating systems,[20] as it conflicts with other possible uses of the cluster 1 entry. Most conflicts can be ruled out if this extension is only allowed for FAT12 volumes with fewer than 0xFEF clusters and FAT16 volumes with fewer than 0x3FEF clusters and 2 FATs.

Because these first two FAT entries store special values, there are no data clusters 0 or 1. The first data cluster (after the root directory, if FAT12/FAT16) is cluster 2,[33] marking the beginning of the data area.

FAT entry values:

Otherwise, if this value occurs in cluster chains (e.g. in directory entries of zero length or deleted files), file system implementations should treat this like an end-of-chain marker.[7]

If this value occurs in on-disk cluster chains, file system implementations should treat this like an end-of-chain marker. MS-DOS/PC DOS 3.3 and higher treats a value of 0xFF0[nb 10][6] on FAT12 (but not on FAT16 or FAT32) volumes as an additional end-of-chain marker similar to 0xFF8-0xFFF.[6] For compatibility with MS-DOS/PC DOS, file systems should avoid using data cluster 0xFF0 in cluster chains on FAT12 volumes (that is, treat it as a reserved cluster similar to 0xFF7). (NB. The correspondence of the low byte of the cluster number with the FAT ID and media descriptor values is the reason why these cluster values are reserved.)
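A file system driver typically folds these rules into a small classifier when walking a cluster chain. The following C sketch covers the FAT16 case under the conventions described above; the enum and function names are illustrative, and checks against the volume's actual maximum cluster number (see below) would still be needed:

#include <stdint.h>

typedef enum { CL_FREE, CL_NEXT, CL_RESERVED, CL_BAD, CL_EOC } cluster_kind;

/* Sketch: classify a FAT16 entry value read from the FAT. */
cluster_kind classify_fat16(uint16_t v)
{
    if (v == 0x0000) return CL_FREE;      /* unused cluster */
    if (v == 0xFFF7) return CL_BAD;       /* bad cluster marker */
    if (v >= 0xFFF8) return CL_EOC;       /* end of cluster chain */
    if (v >= 0xFFF0) return CL_RESERVED;  /* officially reserved values */
    return CL_NEXT;                       /* number of the next cluster in the chain */
}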
The cutover values for the maximum number of clusters for FAT12 and FAT16 file systems are defined such that the highest possible data cluster values (0xFF5 and 0xFFF5,[6] respectively) will always be smaller than this value.[6] Therefore, this value cannot normally occur in cluster chains, but if it does, it may be treated as a normal data cluster, since 0xFF7 could have been a non-standard data cluster on FAT12 volumes before the introduction of the bad cluster marker with DOS 2.0 or the introduction of FAT16 with DOS 3.0,[7] and 0xFFF7 could have been a non-standard data cluster on FAT16 volumes before the introduction of FAT32 with DOS 7.10. Theoretically, 0x0FFFFFF7 can be part of a valid cluster chain on FAT32 volumes, but disk utilities should avoid creating FAT32 volumes where this condition could occur. The file system should avoid allocating this cluster for files.[7] Disk utilities must not attempt to restore "lost clusters" holding this value in the FAT, but count them as bad clusters.

File system implementations should check cluster values in cluster chains against the maximum allowed cluster value calculated from the actual size of the volume and treat higher values as if they were end-of-chain markers as well. (The low byte of the cluster number conceptually corresponds with the FAT ID and media descriptor values;[7] see the note above for MS-DOS/PC DOS special usage of 0xFF0[nb 10] on FAT12 volumes.[6])

FAT32 uses 28 bits for cluster numbers. The remaining 4 bits in the 32-bit FAT entry are usually zero, but are reserved and should be left untouched. A standards-conformant FAT32 file system driver or maintenance tool must not rely on the upper 4 bits being zero, and it must strip them off before evaluating the cluster number in order to cope with possible future expansions where these bits may be used for other purposes. They must not be cleared by the file system driver when allocating new clusters, but should be cleared during a reformat.

Aside from the root directory table in FAT12 and FAT16 file systems, which occupies the special Root Directory Region location, all directory tables are stored in the data region. The actual number of entries in a directory stored in the data region can grow by adding another cluster to the chain in the FAT.

A directory table is a special type of file that represents a directory (also known as a folder). Since 86-DOS 0.42,[46] each file or (since MS-DOS 1.40 and PC DOS 2.0) subdirectory stored within it is represented by a 32-byte entry in the table. Each entry records the name, extension, attributes (archive, directory, hidden, read-only, system and volume), the address of the first cluster of the file/directory's data, the size of the file/directory, and the date[46] and (since PC DOS 1.1) also the time of last modification. Earlier versions of 86-DOS used 16-byte directory entries only, supporting no files larger than 16 MB and no time of last modification.[46]

The FAT file system itself does not impose any limits on the depth of a subdirectory tree for as long as there are free clusters available to allocate the subdirectories; however, the internal Current Directory Structure (CDS) under MS-DOS/PC DOS limits the absolute path of a directory to 66 characters (including the drive letter, but excluding the NUL byte delimiter),[24][25][26] thereby also limiting the maximum supported depth of subdirectories to 32, whichever limit is reached first.
Concurrent DOS, Multiuser DOS and DR DOS 3.31 to 6.0 (up to and including the 1992-11 updates) do not store absolute paths to working directories internally and therefore do not show this limitation.[47] The same applies to Atari GEMDOS, but the Atari Desktop does not support more than 8 sub-directory levels. Most applications aware of this extension support paths up to at least 127 bytes. FlexOS, 4680 OS and 4690 OS support a length of up to 127 bytes as well, allowing depths of up to 60 levels.[48] PalmDOS, DR DOS 6.0 (since BDOS 7.1) and higher, Novell DOS, and OpenDOS sport an MS-DOS-compatible CDS and therefore have the same length limits as MS-DOS/PC DOS.

Each entry can be preceded by "fake entries" to support a VFAT long filename (LFN); see further below.

Legal characters for DOS short filenames include the following: uppercase letters A-Z and digits 0-9, code point values greater than 127, the space character (although embedded spaces were poorly supported by command line tools and are best avoided), and the special characters ! # $ % & ' ( ) - @ ^ _ ` { } ~. This excludes the following ASCII characters: " * + , . / : ; < = > ? \ [ ] | as well as the control characters 0-31. (A check for these rules is sketched after these notes.)

Character 229 (0xE5) was not allowed as the first character in a filename in DOS 1 and 2 due to its use as the free entry marker. A special case was added to circumvent this limitation with DOS 3.0 and higher.

The following additional characters are allowed on Atari's GEMDOS, but should be avoided for compatibility with MS-DOS/PC DOS:

The semicolon (;) should be avoided in filenames under DR DOS 3.31 and higher, PalmDOS, Novell DOS, OpenDOS, Concurrent DOS, Multiuser DOS, System Manager and REAL/32, because it may conflict with the syntax to specify file and directory passwords: "...\DIRSPEC.EXT;DIRPWD\FILESPEC.EXT;FILEPWD". The operating system will strip off one[47] (and, since DR-DOS 7.02, also two) semicolons and pending passwords from the filenames before storing them on disk. (The command processor 4DOS uses semicolons for include lists and requires the semicolon to be doubled for password-protected files with any commands supporting wildcards.[47])

The at-sign character (@) is used for filelists by many DR-DOS, PalmDOS, Novell DOS, OpenDOS, Multiuser DOS, System Manager and REAL/32 commands, as well as by 4DOS, and may therefore sometimes be difficult to use in filenames.[47]

Under Multiuser DOS and REAL/32, the exclamation mark (!) is not a valid filename character, since it is used to separate multiple commands in a single command line.[47]

Under IBM 4680 OS and 4690 OS, the following characters are not allowed in filenames:

Additionally, the following special characters are not allowed in the first, fourth, fifth and eighth character of a filename, as they conflict with the host command processor (HCP) and input sequence table build file names:

The DOS file names are in the current OEM character set: this can have surprising effects if characters handled in one way for a given code page are interpreted differently for another code page (DOS command CHCP) with respect to lower and upper case, sorting, or validity as a file name character.

Before Microsoft added support for long filenames and creation/access time stamps, bytes 0x0C-0x15 of the directory entry were used by other operating systems to store additional metadata; most notably, the operating systems of the Digital Research family stored file passwords, access rights, owner IDs, and file deletion data there. While Microsoft's newer extensions are not fully compatible with these extensions by default, most of them can coexist in third-party FAT implementations (at least on FAT12 and FAT16 volumes).
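The short-name character rules above can be sketched in C as a per-character validity check. This is a minimal sketch that ignores OEM code page specifics and DR-family password syntax; the function name is illustrative:

#include <stdbool.h>
#include <string.h>

/* Sketch: is 'c' a legal character inside an 8.3 short filename field? */
bool legal_sfn_char(unsigned char c)
{
    if (c == 0) return false;              /* NUL terminates, never part of a name */
    if (c >= 128) return true;             /* code points above 127 are allowed    */
    if (c >= 'A' && c <= 'Z') return true; /* lowercase is not stored in short names */
    if (c >= '0' && c <= '9') return true;
    /* space (0x20) is technically legal but deliberately rejected here */
    return strchr("!#$%&'()-@^_`{}~", c) != NULL;
}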
32-byte directory entries, both in the Root Directory Region and in subdirectories, are of the following format (see also 8.3 filename; a C sketch of this layout is given after these notes).

The first byte can have the following special values: 0x00 marks the entry as free and indicates that no allocated entries follow; 0xE5 marks the entry as erased and available; 0x05 indicates that the real first character of the name is 0xE5; and 0x2E marks a dot entry ("." or "..").

Under DR DOS 6.0 and higher, including PalmDOS, Novell DOS and OpenDOS, 0x05 is also used for pending delete files under DELWATCH. Once they are removed from the deletion tracking queue, the first character of an erased file is replaced by 0xE5.

The value 0xE5 was chosen for this purpose in 86-DOS because 8-inch CP/M floppies came pre-formatted with this value filled in and so could be used to store files out of the box.[42][nb 12]

Versions of DOS prior to 5.0 scan directory tables from top to bottom. In order to increase the chances of successful file undeletion, DOS 5.0 and higher will remember the position of the last written directory entry and use this as a starting point for directory table scans.

Deliberately setting this bit for files which will not be written to (executables, shared libraries and data files) may help avoid problems with concurrent file access in multi-tasking, multi-user or network environments with applications not specifically designed to work in such environments (i.e. non-SHARE-enabled programs). The DCF digital camera file system standard utilizes the Read Only attribute to allow directories or individual files (DCF objects) to be marked as "protected" from deletion by the user.[52]

Under DR DOS 3.31 and higher, PalmDOS, Novell DOS, OpenDOS, Concurrent DOS, Multiuser DOS and REAL/32, password-protected files and directories also have the hidden attribute set.[47] Password-aware operating systems should not hide password-protected files from directory views, even if this bit is set. The password protection mechanism does not depend on the hidden attribute being set up to and including DR-DOS 7.03, but if the hidden attribute is set, it should not be cleared for any password-protected files.

Under DR DOS 6.0 and higher, including PalmDOS, Novell DOS and OpenDOS, pending delete files and directories under DELWATCH have the volume attribute set until they are purged or undeleted.[47]

An attribute combination of 0x0F is used to designate a VFAT long file name entry since MS-DOS 7.0. Older versions of DOS can mistake this for a directory volume label, as they take the first entry with the volume attribute set as the volume label. This problem can be avoided if a directory volume label is enforced as part of the format process; for this reason some disk tools explicitly write a dummy "NO␠NAME␠␠␠␠" directory volume label when the user does not specify a volume label.[nb 13] Since volume labels normally don't have the system attribute set at the same time, it is possible to distinguish between volume labels and VFAT LFN entries. The attribute combination 0x0F could occasionally also occur as part of a valid pending delete file under DELWATCH; however, on FAT12 and FAT16 volumes, VFAT LFN entries always have the cluster value at 0x1A set to 0x0000 and the length entry at 0x1C is never 0x00000000, whereas the entry at 0x1A is always non-zero for pending delete files under DELWATCH. This check does not work on FAT32 volumes.

Double usage for the creation time (10 ms units) and a file character does not create a conflict, since the creation time is no longer important for deleted files.
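Since the layout table itself did not survive in this copy, the following C sketch gives the standard 32-byte entry with the field offsets as documented in Microsoft's FAT specification. The struct and field names are illustrative; the Digital Research reuses of bytes 0x0C-0x15 described above overlay the creation and access time fields:

#include <stdint.h>

#pragma pack(push, 1)
/* Sketch: classic FAT 32-byte directory entry (offsets in comments). */
typedef struct {
    uint8_t  name[11];    /* 0x00: 8.3 name, space-padded, no dot        */
    uint8_t  attr;        /* 0x0B: attribute flags (0x0F = VFAT LFN)     */
    uint8_t  nt_reserved; /* 0x0C: case bits on NT; DR-family reuse      */
    uint8_t  ctime_tenth; /* 0x0D: creation time, 10 ms units            */
    uint16_t ctime;       /* 0x0E: creation time                         */
    uint16_t cdate;       /* 0x10: creation date                         */
    uint16_t adate;       /* 0x12: last access date                      */
    uint16_t cluster_hi;  /* 0x14: high 16 bits of start cluster (FAT32) */
    uint16_t mtime;       /* 0x16: last modification time                */
    uint16_t mdate;       /* 0x18: last modification date                */
    uint16_t cluster_lo;  /* 0x1A: low 16 bits of start cluster          */
    uint32_t size;        /* 0x1C: file size in bytes                    */
} fat_dirent;
#pragma pack(pop)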
If bits 15-11 > 23 or bits 10-5 > 59 or bits 4-0 > 29 here, or when bits 12-0 at offset 0x14 hold an access bitmap and this is not a FAT32 volume or a volume using OS/2 Extended Attributes, then this entry actually holds a password hash; otherwise it can be assumed to be a file creation time.

The usage for the creation date of existing files does not conflict with the last modified time of deleted files, because they are never used at the same time. For the same reason, the usage for the record size of existing files and the last modified time of deleted files does not conflict. Creation dates and record sizes cannot be used at the same time; however, both are stored only on file creation and never changed later on, thereby limiting the conflict to FlexOS, 4680 OS and 4690 OS systems accessing files created under foreign operating systems, as well as potential display or file sorting problems on systems trying to interpret a record size as a creation time. To avoid the conflict, the storage of creation dates should be an optional feature of operating systems supporting it.

The usage for the owner IDs of existing files does not conflict with the last modified date stamp of deleted files, because they are never used at the same time.[47] The usage of the last modified date stamp for deleted files does not conflict with the access date, since access dates are no longer important for deleted files. However, owner IDs and access dates cannot be used at the same time.

The storage of the high two bytes of the first cluster of a file on FAT32 partially conflicts with access rights bitmaps.

Entries with the Volume Label flag, the subdirectory entry ".." pointing to the FAT12 and FAT16 root, and empty files with size 0 should have first cluster 0. VFAT LFN entries also have this entry set to 0; on FAT12 and FAT16 volumes this can be used as part of a detection mechanism to distinguish between pending delete files under DELWATCH and VFAT LFNs; see above.

VFAT LFN entries never store the value 0x00000000 here. This can be used as part of a detection mechanism to distinguish between pending delete files under DELWATCH and VFAT LFNs; see above.

The FlexOS-based operating systems IBM 4680 OS and IBM 4690 OS support unique distribution attributes stored in some bits of the previously reserved areas in the directory entries:[62]

Some incompatible extensions found in some operating systems include:

The FAT12, FAT16, FAT16B, and FAT32 variants of the FAT file systems have clear limits based on the number of clusters and the number of sectors per cluster (1, 2, 4, ..., 128).
For the typical value of 512 bytes per sector:

FAT12 requirements: 3 sectors on each copy of FAT for every 1,024 clusters
FAT16 requirements: 1 sector on each copy of FAT for every 256 clusters
FAT32 requirements: 1 sector on each copy of FAT for every 128 clusters

FAT12 range: 1 to 4,084 clusters: 1 to 12 sectors per copy of FAT
FAT16 range: 4,085 to 65,524 clusters: 16 to 256 sectors per copy of FAT
FAT32 range: 65,525 to 268,435,444 clusters: 512 to 2,097,152 sectors per copy of FAT

FAT12 minimum: 1 sector per cluster × 1 cluster = 512 bytes (0.5 KiB)
FAT16 minimum: 1 sector per cluster × 4,085 clusters = 2,091,520 bytes (2,042.5 KB)
FAT32 minimum: 1 sector per cluster × 65,525 clusters = 33,548,800 bytes (32,762.5 KB)

FAT12 maximum: 64 sectors per cluster × 4,084 clusters = 133,824,512 bytes (≈ 127 MB)
[FAT12 maximum: 128 sectors per cluster × 4,084 clusters = 267,649,024 bytes (≈ 255 MB)]
FAT16 maximum: 64 sectors per cluster × 65,524 clusters = 2,147,090,432 bytes (≈ 2,047 MB)
[FAT16 maximum: 128 sectors per cluster × 65,524 clusters = 4,294,180,864 bytes (≈ 4,095 MB)]
FAT32 maximum: 8 sectors per cluster × 268,435,444 clusters = 1,099,511,578,624 bytes (≈ 1,024 GB)
FAT32 maximum: 16 sectors per cluster × 268,173,557 clusters = 2,196,877,778,944 bytes (≈ 2,046 GB)
[FAT32 maximum: 32 sectors per cluster × 134,152,181 clusters = 2,197,949,333,504 bytes (≈ 2,047 GB)]
[FAT32 maximum: 64 sectors per cluster × 67,092,469 clusters = 2,198,486,024,192 bytes (≈ 2,047 GB)]
[FAT32 maximum: 128 sectors per cluster × 33,550,325 clusters = 2,198,754,099,200 bytes (≈ 2,047 GB)]

Because each FAT32 entry occupies 32 bits (4 bytes), the maximal number of clusters (268,435,444) requires 2,097,152 FAT sectors for a sector size of 512 bytes. 2,097,152 is 0x200000, and storing this value needs more than two bytes. Therefore, FAT32 introduced a new 32-bit value in the FAT32 boot sector immediately following the 32-bit value for the total number of sectors introduced in the FAT16B variant.

The boot record extensions introduced with DOS 4.0 start with a magic 40 (0x28) or 41 (0x29). Typically, FAT drivers look only at the number of clusters to distinguish FAT12, FAT16, and FAT32: the human-readable strings identifying the FAT variant in the boot record are ignored, because they exist only for media formatted with DOS 4.0 or later.

Determining the number of directory entries per cluster is straightforward: each entry occupies 32 bytes, which results in 16 entries per sector for a sector size of 512 bytes. The DOS 5 RMDIR/RD command removes the initial "." (this directory) and ".." (parent directory) entries in subdirectories directly; therefore a sector size of 32 on a RAM disk is possible for FAT12, but requires 2 or more sectors per cluster. A FAT12 boot sector without the DOS 4 extensions needs 29 bytes before the first unnecessary FAT16B 32-bit number of hidden sectors; this leaves three bytes for the (on a RAM disk unused) boot code and the magic 0x55 0xAA at the end of all boot sectors. On Windows NT the smallest supported sector size is 128.

On Windows NT operating systems, the FORMAT command options /A:128K and /A:256K correspond to the maximal cluster size 0x80 (128 sectors per cluster) with sector sizes of 1024 and 2048 bytes, respectively. For the common sector size of 512 bytes, /A:64K yields 128 sectors per cluster.

Both editions of each of ECMA-107[24] and ISO/IEC 9293[25][26] specify a Max Cluster Number MAX determined by the formula MAX = 1 + trunc((TS − SSA) / SC), and reserve cluster numbers MAX + 1 up to 4086 (0xFF6, FAT12) and later 65526 (0xFFF6, FAT16) for future standardization.
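The cluster-count ranges above determine the FAT variant, and Microsoft's specification (quoted next) uses exactly these thresholds. A minimal C sketch, taking the computed count of data clusters:

#include <stdint.h>

typedef enum { FAT12, FAT16, FAT32 } fat_type;

/* Sketch: the FAT variant follows from the count of data clusters alone
   (thresholds per Microsoft's EFI FAT32 specification). */
fat_type fat_type_from_clusters(uint32_t count_of_clusters)
{
    if (count_of_clusters < 4085)  return FAT12;
    if (count_of_clusters < 65525) return FAT16;
    return FAT32;
}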
Microsoft's EFI FAT32 specification[4] states that any FAT file system with less than 4085 clusters is FAT12, else any FAT file system with less than 65,525 clusters is FAT16, and otherwise it is FAT32. The entry for cluster 0 at the beginning of the FAT must be identical to the media descriptor byte found in the BPB, whereas the entry for cluster 1 reflects the end-of-chain value used by the formatter for cluster chains (0xFFF, 0xFFFF or 0x0FFFFFFF). The entries for cluster numbers 0 and 1 end at a byte boundary even for FAT12, e.g., 0xF9FFFF for media descriptor 0xF9. The first data cluster is 2,[33] and consequently the last cluster MAX gets number MAX+1. This results in data cluster numbers 2...4085 (0xFF5) for FAT12, 2...65525 (0xFFF5) for FAT16, and 2...268435445 (0x0FFFFFF5) for FAT32. The only available values reserved for future standardization are therefore 0xFF6 (FAT12) and 0xFFF6 (FAT16).

As noted below, "less than 4085" is also used for Linux implementations,[44] or as Microsoft's FAT specification puts it:[4] "...when it says <, it does not mean <=. Note also that the numbers are correct. The first number for FAT12 is 4085; the second number for FAT16 is 65525. These numbers and the '<' signs are not wrong."

The FAT file system does not contain built-in mechanisms which prevent newly written files from becoming scattered across the partition.[65] On volumes where files are created and deleted frequently or their lengths are often changed, the medium will become increasingly fragmented over time.

While the design of the FAT file system does not cause any organizational overhead in disk structures or reduce the amount of free storage space with increased amounts of fragmentation, as occurs with external fragmentation, the time required to read and write fragmented files increases, as the operating system has to follow the cluster chains in the FAT (with parts having to be loaded into memory first, in particular on large volumes) and read the corresponding data physically scattered over the whole medium. This reduces the chances for the low-level block device driver to perform multi-sector disk I/O or initiate larger DMA transfers, thereby effectively increasing I/O protocol overhead as well as arm movement and head settle times inside the disk drive. Also, file operations will become slower with growing fragmentation, as it takes increasingly longer for the operating system to find files or free clusters.

Other file systems, e.g., HPFS or exFAT, use free space bitmaps that indicate used and available clusters, which can then be quickly looked up in order to find free contiguous areas. Another solution is the linkage of all free clusters into one or more lists (as is done in Unix file systems). Instead, the FAT has to be scanned as an array to find free clusters, which can lead to performance penalties with large disks. In fact, searching for files in large subdirectories or computing the free disk space on FAT volumes is one of the most resource-intensive operations, as it requires reading the directory tables or even the entire FAT linearly.
Since the total number of clusters and the size of their entries in the FAT were still small on FAT12 and FAT16 volumes, this could still be tolerated most of the time, considering that the introduction of more sophisticated disk structures would have also increased the complexity and memory footprint of the real-mode operating systems with their minimum total memory requirements of 128 KB or less (such as DOS) for which FAT was originally designed and optimized.

With the introduction of FAT32, long seek and scan times became more apparent, particularly on very large volumes. A possible justification suggested by Microsoft's Raymond Chen for limiting the maximum size of FAT32 partitions created on Windows was the time required to perform a "DIR" operation, which always displays the free disk space as the last line.[66] Displaying this line took longer and longer as the number of clusters increased. FAT32 therefore introduced a special file system information sector where the previously computed amount of free space is preserved over power cycles, so that the free space counter needs to be recalculated only when a removable FAT32-formatted medium gets ejected without first unmounting it, or if the system is switched off without properly shutting down the operating system; this is a problem mostly visible with pre-ATX-style PCs, on plain DOS systems, and with some battery-powered consumer products.

With the huge cluster sizes (16 KB, 32 KB, 64 KB) forced by larger FAT partitions, internal fragmentation in the form of disk space wasted by file slack due to cluster overhang (as files are rarely exact multiples of the cluster size) starts to be a problem as well, especially when there are a great many small files.

Various optimizations and tweaks to the implementation of FAT file system drivers, block device drivers and disk tools have been devised to overcome most of the performance bottlenecks in the file system's inherent design without having to change the layout of the on-disk structures.[67][68] They can be divided into on-line and off-line methods and work by trying to avoid fragmentation in the file system in the first place, by deploying methods to better cope with existing fragmentation, and by reordering and optimizing the on-disk structures. With optimizations in place, the performance on FAT volumes can often reach that of more sophisticated file systems in practical scenarios, while at the same time retaining the advantage of being accessible even on very small or old systems.

DOS 3.0 and higher will not immediately reuse the disk space of deleted files for new allocations, but will instead seek previously unused space before starting to use the disk space of previously deleted files as well. This not only helps to maintain the integrity of deleted files for as long as possible, but also speeds up file allocations and avoids fragmentation, since never-before-allocated disk space is always unfragmented.
DOS accomplishes this by keeping a pointer to the last allocated cluster on each mounted volume in memory and starting the search for free space from this location upwards instead of at the beginning of the FAT, as was still done by DOS 2.x.[13] If the end of the FAT is reached, the search wraps around to continue at the beginning of the FAT until either free space has been found or the original position has been reached again without any free space having been found.[13] These pointers are initialized to point to the start of the FATs after bootup,[13] but on FAT32 volumes, DOS 7.1 and higher will attempt to retrieve the last position from the FS Information Sector. This mechanism is defeated, however, if an application often deletes and recreates temporary files, as the operating system would then try to maintain the integrity of void data, effectively causing more fragmentation in the end.[13] In some DOS versions, a special API function to create temporary files can be used to avoid this problem.

Additionally, directory entries of deleted files are marked 0xE5 since DOS 3.0.[42] DOS 5.0 and higher will start to reuse these entries only when previously unused directory entries have been used up in the table and the system would otherwise have to expand the table itself.[6]

Since DOS 3.3 the operating system provides means to improve the performance of file operations with FASTOPEN by keeping track of the position of recently opened files or directories in various forms of lists (MS-DOS/PC DOS) or hash tables (DR-DOS), which can reduce file seek and open times significantly. Before DOS 5.0, special care must be taken when using such mechanisms in conjunction with disk defragmentation software bypassing the file system or disk drivers.

Windows NT allocates disk space to files on FAT in advance, selecting large contiguous areas, but in case of a failure, files which were being appended to will appear larger than they were ever written, with a lot of random data at the end.

Other high-level mechanisms may read in and process larger parts or the complete FAT on startup or on demand when needed and dynamically build up in-memory tree representations of the volume's file structures different from the on-disk structures.[67][68] On volumes with many free clusters, this may occupy even less memory than an image of the FAT itself. In particular on highly fragmented or filled volumes, seeks become much faster than with linear scans over the actual FAT, even if an image of the FAT were stored in memory. Also, by operating on the logically high level of files and cluster chains instead of on the sector or track level, it becomes possible to avoid some degree of file fragmentation in the first place, or to carry out local file defragmentation and reordering of directory entries based on their names or access patterns in the background.

Some of the perceived problems with fragmentation of FAT file systems also result from performance limitations of the underlying block device drivers, which become more visible the less memory is available for sector buffering and track blocking/deblocking: While the single-tasking DOS had provisions for multi-sector reads and track blocking/deblocking, the operating system and the traditional PC hard disk architecture (only one outstanding input/output request at a time and no DMA transfers) originally did not contain mechanisms which could alleviate fragmentation by asynchronously prefetching the next data while the application was processing the previous chunks.
Such features became available later. Later DOS versions also provided built-in support for look-ahead sector buffering and came with dynamically loadable disk caching programs working on the physical or logical sector level, often utilizing EMS or XMS memory and sometimes providing adaptive caching strategies or even running in protected mode through DPMS or Cloaking to increase performance by gaining direct access to the cached data in linear memory rather than through conventional DOS APIs. Write-behind caching was often not enabled by default with Microsoft software (if present), given the problem of data loss in case of a power failure or crash, made easier by the lack of hardware protection between applications and the system.

VFAT Long File Names (LFNs) are stored on a FAT file system using a trick: adding additional entries into the directory before the normal file entry. The additional entries are marked with the Volume Label, System, Hidden, and Read Only attributes (yielding 0x0F), which is a combination that is not expected in the MS-DOS environment and is therefore ignored by MS-DOS programs and third-party utilities. Notably, a directory containing only volume labels is considered empty and is allowed to be deleted; such a situation appears if files created with long names are deleted from plain DOS. This method is very similar to the DELWATCH method of utilizing the volume attribute to hide pending delete files for possible future undeletion since DR DOS 6.0 (1991) and higher. It is also similar to a publicly discussed method for storing long filenames on Ataris and under Linux in 1992.[69][70]

Because older versions of DOS could mistake LFN names in the root directory for the volume label, VFAT was designed to create a blank volume label in the root directory before adding any LFN name entries (if a volume label did not already exist).[nb 13]

Each phony entry can contain up to 13 UCS-2 characters (26 bytes) by using fields in the record which otherwise contain the file size or time stamps (but not the starting cluster field; for compatibility with disk utilities, the starting cluster field is set to a value of 0; see 8.3 filename for additional explanations). Up to 20 of these 13-character entries may be chained, supporting a maximum length of 255 UCS-2 characters.[55]

If the position of the LFN's last character is not at a directory entry boundary (13, 26, 39, ...), a 0x0000 terminator is added in the next character position. Then, if that terminator is also not at the boundary, the remaining character positions are filled with 0xFFFF. No directory entry containing a lone terminator will exist. LFN entries use the following format:

If there are multiple LFN entries required to represent a file name, the entry representing the end of the filename comes first. The sequence number of this entry has bit 6 (0x40) set to represent that it is the last logical LFN entry, and it has the highest sequence number. The sequence number decreases in the following entries. The entry representing the start of the filename has sequence number 1. A value of 0xE5 is used to indicate that the entry is deleted. On FAT12 and FAT16 volumes, testing for the value at 0x1A to be zero and the value at 0x1C to be non-zero can be used to distinguish between VFAT LFNs and pending delete files under DELWATCH.
For example, a filename like "File with very long filename.ext" would be formatted like this:

A checksum also allows verification of whether a long file name matches the 8.3 name; such a mismatch could occur if a file was deleted and re-created using DOS in the same directory position. The checksum is calculated using the algorithm shown at the end of this section. (pFCBName is a pointer to the name as it appears in a regular directory entry, i.e. the first eight characters are the filename, and the last three are the extension. The dot is implicit. Any unused space in the filename is padded with space characters (ASCII 0x20). For example, "Readme.txt" would be stored as "README␠␠TXT".)

If a filename contains only lowercase letters, or is a combination of a lowercase basename with an uppercase extension (or vice versa), has no special characters, and fits within the 8.3 limits, a VFAT entry is not created on Windows NT and later versions of Windows such as XP. Instead, two bits in byte 0x0C of the directory entry are used to indicate that the filename should be considered as entirely or partially lowercase. Specifically, bit 4 means lowercase extension and bit 3 lowercase basename, which allows for combinations such as "example.TXT" or "HELLO.txt" but not "Mixed.txt". Few other operating systems support this. It creates a backwards-compatibility problem with older Windows versions (Windows 95 / 98 / 98 SE / ME) that see all-uppercase filenames if this extension has been used, and which can therefore change the name of a file when it is transported between operating systems, such as on a USB flash drive. Current 2.6.x versions of Linux will recognize this extension when reading (source: kernel 2.6.18 /fs/fat/dir.c and /fs/vfat/namei.c); the mount option shortname determines whether this feature is used when writing.[71]
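The checksum routine referenced above did not survive in this copy. The commonly documented VFAT algorithm computes an 8-bit sum over the 11 bytes of the 8.3 name, rotating the accumulator right by one bit before each addition; a C sketch:

#include <stdint.h>

/* Sketch: VFAT checksum over the 11-byte 8.3 name (as commonly documented). */
uint8_t lfn_checksum(const uint8_t *pFCBName)
{
    uint8_t sum = 0;
    for (int i = 11; i; i--)
        sum = ((sum & 1) << 7) + (sum >> 1) + *pFCBName++; /* rotate right, then add */
    return sum;
}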
https://en.wikipedia.org/wiki/FAT_file_fragmentation
A disk compression software utility increases the amount of information that can be stored on a hard disk drive of given size. Unlike a file compression utility, which compresses only specified files and requires the user to designate the files to be compressed, an on-the-fly disk compression utility works automatically through resident software without the user needing to be aware of its existence. On-the-fly disk compression is therefore also known as transparent, real-time or online disk compression.

When information needs to be stored to the hard disk, the utility compresses the information. When information needs to be read, the utility decompresses the information. A disk compression utility overrides the standard operating system routines. Since all software applications access the hard disk using these routines, they continue to work after disk compression has been installed.

Disk compression utilities were especially popular in the early 1990s, when microcomputer hard disks were still relatively small (20 to 80 megabytes). Hard drives were also rather expensive at the time, costing roughly 10 USD per megabyte. For the users who bought disk compression applications, the software proved, in the short term, to be a more economical means of acquiring disk space than replacing their current drive with a larger one. A good disk compression utility could, on average, double the available space with negligible speed loss. Disk compression fell into disuse by the late 1990s, as advances in hard drive technology and manufacturing led to increased capacities and lower prices.

Some of the initial disk compression solutions were hardware-assisted and utilized add-on compressor/decompressor coprocessor cards in addition to a software driver. Known solutions include:

With increasing PC processor power, software-only solutions began to reach or even outperform hardware-assisted solutions in most scenarios.

These compression utilities were sold independently; a user had to specifically choose to install and configure the software. The idea of bundling disk compression into new machines appealed to resellers and users: resellers liked that they could claim more storage space, and users liked that they did not have to configure the software. Bundled utilities included (in chronological order):

While Windows XP, from Microsoft, includes both native support and a command named compact that compresses files on NTFS systems, this is not implemented as a separate "compressed drive" like those above.

Disk compression usually creates a single large file, which becomes a virtual hard drive. This is similar to how a single physical hard drive can be partitioned into multiple virtual drives. The compressed drive is accessed via a device driver. All drives would initially be empty. The utility to create a drive would usually offer to "compress a current drive". This meant the utility would:

Usually certain system files would not be transferred. For example, OS swap files would remain only on the host drive. A device driver had to be loaded to access the compressed drive. A compressed drive C: required changes to the boot process as follows:

On systems with slower hard drives, disk compression could actually increase system performance. This was accomplished in two ways: if the system had to wait frequently for hard drive access to complete (I/O bound), converting the hard drive to compressed drives could speed up the system significantly. Compression and decompression of the data increase the CPU utilization.
If the system was already CPU bound, disk compression decreased overall performance.[11] Some common drawbacks to using disk compression:
https://en.wikipedia.org/wiki/Disk_compression
This is a list of file formats used by computers, organized by type. The filename extension is usually noted in parentheses if it differs from the file format's name or abbreviation. Many operating systems do not limit filenames to one extension shorter than 4 characters, as was common with some operating systems that supported the File Allocation Table (FAT) file system. Examples of operating systems that do not impose this limit include Unix-like systems, and Microsoft Windows NT, 95-98, and ME, which have no three-character limit on extensions for 32-bit or 64-bit applications on file systems other than pre-Windows 95 and Windows NT 3.5 versions of the FAT file system. Some filenames are given extensions longer than three characters. While MS-DOS and NT always treat the suffix after the last period in a file's name as its extension, in UNIX-like systems the final period does not necessarily mean that the text after the last period is the file's extension.[1]

Some file formats, such as .txt or .text, may be listed multiple times.

Computer-aided is a prefix for several categories of tools (e.g., design, manufacture, engineering) which assist professionals in their respective fields (e.g., machining, architecture, schematics). Computer-aided design (CAD) software assists engineers, architects and other design professionals in project design. Electronic design automation (EDA), or electronic computer-aided design (ECAD), is specific to the field of electrical engineering.

Files output from Automatic Test Equipment or post-processed from such.

These files store formatted text and plain text.

These file formats allow for the rapid creation of new binary file formats.

Raster or bitmap files store images as a group of pixels. Vector graphics use geometric primitives such as points, lines, curves, and polygons to represent images. 3D graphics formats store 3D models for use in real-time or non-real-time 3D rendering. Object extensions:

Formats of files used for bibliographic information (citation) management.

Molecular biology and bioinformatics:

Authentication and general encryption formats are listed here. This section shows file formats for encrypted general data, rather than a specific program's data. Password files (sometimes called keychain files) contain lists of other passwords, usually encrypted.

List of common file formats of data for video games on systems that support filesystems, most commonly PC games.

These formats are used by the video game osu!. These formats are used by the video game Minecraft. Formats used by games based on the TrackMania engine. Formats used by games based on the Doom engine. Formats used by games based on the Quake engine. Formats used by games based on the Unreal engine. Formats used by games based on this engine. Formats used by Diablo by Blizzard Entertainment. Formats used by Bohemia Interactive: Operation Flashpoint, ARMA 2, VBS2. Formats used by Valve: Half-Life 2, Counter-Strike: Source, Day of Defeat: Source, Half-Life 2: Episode One, Team Fortress 2, Half-Life 2: Episode Two, Portal, Left 4 Dead, Left 4 Dead 2, Alien Swarm, Portal 2, Counter-Strike: Global Offensive, Titanfall, Insurgency, Titanfall 2, Day of Infamy. Formats used in Metal Gear Rising: Revengeance, Bayonetta, Vanquish, Nier: Automata.

List of the most common filename extensions used when a game's ROM image or storage medium is copied from an original read-only memory (ROM) device to an external memory such as a hard disk for backup purposes or for making the game playable with an emulator.
In the case of cartridge-based software, if the platform-specific extension is not used, then the filename extensions ".rom" or ".bin" are usually used to clarify that the file contains a copy of the content of a ROM. ROM, disk or tape images usually do not consist of one file or ROM, but rather contain an entire file or ROM structure within one file on the backup medium.[36]

Static

Dynamically generated

These file formats are fairly well defined by long-term use or a general standard, but the content of each file is often highly specific to particular software or has been extended by further standards for specific uses. These are filename extensions and broad types frequently reused with differing formats, or with no specific format, by different programs.
https://en.wikipedia.org/wiki/List_of_file_formats
Lists of filename extensions include:
https://en.wikipedia.org/wiki/List_of_filename_extensions
.properties is a file extension for files mainly used in Java-related technologies to store the configurable parameters of an application. They can also be used for storing strings for internationalization and localization; these are known as Property Resource Bundles.

Each parameter is stored as a pair of strings, one storing the name of the parameter (called the key), and the other storing the value.

Unlike many popular file formats, there is no RFC for .properties files, and specification documents are not always clear, most likely due to the simplicity of the format. Each line in a .properties file normally stores a single property. Several formats are possible for each line, including key=value, key = value, key:value, and key value. Single quotes or double quotes are considered part of the string. Trailing space is significant and presumed to be trimmed as required by the consumer.

Comment lines in .properties files are denoted by the number sign (#) or the exclamation mark (!) as the first non-blank character, in which case all remaining text on that line is ignored. The backslash is used to escape a character. An example of a properties file is provided at the end of this article.

Before Java 9, the encoding of a .properties file was ISO-8859-1, also known as Latin-1. All non-ASCII characters must be entered by using Unicode escape characters, e.g. \uHHHH where HHHH is a hexadecimal index of the character in the Unicode character set. This allows for using .properties files as resource bundles for localization. A non-Latin-1 text file can be converted to a correct .properties file by using the native2ascii tool that is shipped with the JDK, or by using a tool such as po2prop,[1] which manages the transformation from a bilingual localization format into .properties escaping.

An alternative to using Unicode escape characters for non-Latin-1 characters in ISO 8859-1 encoded Java .properties files is to use the JDK's XML Properties file format, which is UTF-8 encoded by default and was introduced starting with Java 1.5.[2] Another alternative is to create a custom control that provides a custom encoding.[3] In Java 9 and newer, the default encoding specifically for property resource bundles is UTF-8, and if an invalid UTF-8 byte sequence is encountered, it falls back to ISO-8859-1.[4][5]

Editing .properties files can be done with any text editor, such as those typically installed on various operating systems, including Notepad on Windows or Emacs, Vim, etc. on Linux systems. Third-party tools are also available with additional functionality specific to editing .properties files, such as:

Apache Flex uses .properties files as well, but there they are UTF-8 encoded.[6]

In Apache mod_jk's uriworkermap.properties format, an exclamation mark ("!") denotes a negation operator when used as the first non-blank character in a line.[7]

Perl's CPAN contains Config::Properties to interface with a .properties file.[8]

SAP uses .properties files for localization within their framework SAPUI5 and its open-source variant OpenUI5.[9]

There are many Node.js (JavaScript/TypeScript) options available in Npm's package manager.[10] PHP also has many package options available.[11]
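The example referred to above was lost in this copy; the following sketch illustrates the syntax rules just described (the keys and values are invented for illustration):

# This is a comment; so is the next line.
! Comments start with '#' or '!' as the first non-blank character.
website = https://en.wikipedia.org/
language : English
# Key and value may also be separated by whitespace alone:
topic .properties files
# A backslash escapes the next character, so delimiters can appear in keys:
key\:with\=delimiters = value
# A backslash at the end of a line continues the value on the next line:
message = Hello, \
          World!
# Non-Latin-1 characters can be written as Unicode escapes (before Java 9):
greeting = \u00a1Hola!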
https://en.wikipedia.org/wiki/.properties
A clustered file system (CFS) is a file system which is shared by being simultaneously mounted on multiple servers. There are several approaches to clustering, most of which do not employ a clustered file system (only direct-attached storage for each node). Clustered file systems can provide features like location-independent addressing and redundancy which improve reliability or reduce the complexity of the other parts of the cluster. Parallel file systems are a type of clustered file system that spread data across multiple storage nodes, usually for redundancy or performance.[1]

A shared-disk file system uses a storage area network (SAN) to allow multiple computers to gain direct disk access at the block level. Access control and translation from the file-level operations that applications use to the block-level operations used by the SAN must take place on the client node. The most common type of clustered file system, the shared-disk file system, provides a consistent and serializable view of the file system by adding mechanisms for concurrency control, avoiding corruption and unintended data loss even when multiple clients try to access the same files at the same time. Shared-disk file systems commonly employ some sort of fencing mechanism to prevent data corruption in case of node failures, because an unfenced device can cause data corruption if it loses communication with its sister nodes and tries to access the same information other nodes are accessing.

The underlying storage area network may use any of a number of block-level protocols, including SCSI, iSCSI, HyperSCSI, ATA over Ethernet (AoE), Fibre Channel, network block device, and InfiniBand. There are different architectural approaches to a shared-disk file system. Some distribute file information across all the servers in a cluster (fully distributed).[2]

Distributed file systems do not share block-level access to the same storage but use a network protocol.[3][4] These are commonly known as network file systems, even though they are not the only file systems that use the network to send data.[5] Distributed file systems can restrict access to the file system depending on access lists or capabilities on both the servers and the clients, depending on how the protocol is designed.

The difference between a distributed file system and a distributed data store is that a distributed file system allows files to be accessed using the same interfaces and semantics as local files, for example mounting/unmounting, listing directories, read/write at byte boundaries, and the system's native permission model. Distributed data stores, by contrast, require using a different API or library and have different semantics (most often those of a database).[6]

Distributed file systems may aim for "transparency" in a number of aspects. That is, they aim to be "invisible" to client programs, which "see" a system similar to a local file system. Behind the scenes, the distributed file system handles locating files, transporting data, and potentially providing the other features listed below.

The Incompatible Timesharing System used virtual devices for transparent inter-machine file system access in the 1960s. More file servers were developed in the 1970s. In 1976, Digital Equipment Corporation created the File Access Listener (FAL), an implementation of the Data Access Protocol as part of DECnet Phase II, which became the first widely used network file system.
In 1984,Sun Microsystemscreated the file system called "Network File System" (NFS) which became the first widely usedInternet Protocolbased network file system.[4]Other notable network file systems areAndrew File System(AFS),Apple Filing Protocol(AFP),NetWare Core Protocol(NCP), andServer Message Block(SMB) which is also known as Common Internet File System (CIFS). In 1986,IBMannounced client and server support for Distributed Data Management Architecture (DDM) for theSystem/36,System/38, and IBM mainframe computers runningCICS. This was followed by the support forIBM Personal Computer,AS/400, IBM mainframe computers under theMVSandVSEoperating systems, andFlexOS. DDM also became the foundation forDistributed Relational Database Architecture, also known as DRDA. There are manypeer-to-peernetwork protocolsfor open-sourcedistributed file systems for cloudor closed-source clustered file systems, e. g.:9P,AFS,Coda,CIFS/SMB,DCE/DFS, WekaFS,[7]Lustre, PanFS,[8]Google File System,Mnet,Chord Project. Network-attached storage (NAS) provides both storage and a file system, like a shared disk file system on top of a storage area network (SAN). NAS typically uses file-based protocols (as opposed to block-based protocols a SAN would use) such asNFS(popular onUNIXsystems), SMB/CIFS (Server Message Block/Common Internet File System) (used with MS Windows systems),AFP(used withApple Macintoshcomputers), orNCP(used withOESandNovell NetWare). The failure of disk hardware or a given storage node in a cluster can create asingle point of failurethat can result indata lossor unavailability.Fault toleranceand high availability can be provided throughdata replicationof one sort or another, so that data remains intact and available despite the failure of any single piece of equipment. For examples, see the lists ofdistributed fault-tolerant file systemsanddistributed parallel fault-tolerant file systems. A commonperformancemeasurementof a clustered file system is the amount of time needed to satisfy service requests. In conventional systems, this time consists of a disk-access time and a small amount ofCPU-processing time. But in a clustered file system, a remote access has additional overhead due to the distributed structure. This includes the time to deliver the request to a server, the time to deliver the response to the client, and for each direction, a CPU overhead of running thecommunication protocolsoftware. Concurrency control becomes an issue when more than one person or client is accessing the same file or block and want to update it. Hence updates to the file from one client should not interfere with access and updates from other clients. This problem is more complex with file systems due to concurrent overlapping writes, where different writers write to overlapping regions of the file concurrently.[9]This problem is usually handled byconcurrency controlorlockingwhich may either be built into the file system or provided by an add-on protocol. IBM mainframes in the 1970s could share physical disks and file systems if each machine had its own channel connection to the drives' control units. In the 1980s,Digital Equipment Corporation'sTOPS-20andOpenVMSclusters (VAX/ALPHA/IA64) included shared disk file systems.[10]
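The locking approach mentioned above is exposed to applications on POSIX systems as advisory file locks. A minimal Python sketch follows (the file name is hypothetical, and whether such a lock is actually enforced across cluster nodes depends on the file system and protocol in use):

    import fcntl

    def append_record(path, data):
        # Take an exclusive advisory lock before writing; a second process
        # calling this blocks in lockf() until the first releases the lock.
        with open(path, "ab") as f:
            fcntl.lockf(f, fcntl.LOCK_EX)
            try:
                f.write(data)
                f.flush()
            finally:
                fcntl.lockf(f, fcntl.LOCK_UN)

    append_record("/tmp/shared.log", b"update from one client\n")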
https://en.wikipedia.org/wiki/Clustered_file_system
In computer science, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Shared memory is an efficient means of passing data between programs. Depending on context, programs may run on a single processor or on multiple separate processors. Using memory for communication inside a single program, e.g. among its multiple threads, is also referred to as shared memory. In computer hardware, shared memory refers to a (typically large) block of random access memory (RAM) that can be accessed by several different central processing units (CPUs) in a multiprocessor computer system. Shared memory systems may use uniform memory access (UMA), in which all the processors share the physical memory uniformly; non-uniform memory access (NUMA), in which memory access time depends on the memory location relative to a processor; or cache-only memory architecture (COMA), in which the local memories serve as caches.[1] A shared memory system is relatively easy to program since all processors share a single view of data and the communication between processors can be as fast as memory accesses to the same location. The issue with shared memory systems is that many CPUs need fast access to memory and will likely cache memory, which has two complications: the CPU-to-memory connection can become a bottleneck, and cache coherence must be maintained, since whenever one cache is updated with information that may be used by other processors, the change must be propagated to them, or the processors will work with incoherent data. Technologies like crossbar switches, Omega networks, HyperTransport or front-side bus can be used to dampen the bottleneck effects. In case of a Heterogeneous System Architecture (a processor architecture that integrates different types of processors, such as CPUs and GPUs, with shared memory), the memory management unit (MMU) of the CPU and the input–output memory management unit (IOMMU) of the GPU have to share certain characteristics, like a common address space. The alternatives to shared memory are distributed memory and distributed shared memory, each having a similar set of issues. In computer software, shared memory is either a method of inter-process communication (IPC), i.e. a way of exchanging data between programs running at the same time, where one process creates an area in RAM which other processes can access, or a method of conserving memory space by directing accesses to what would ordinarily be copies of a piece of data to a single instance instead, typically by using virtual memory mappings, as with shared libraries. Since both processes can access the shared memory area like regular working memory, this is a very fast way of communication (as opposed to other mechanisms of IPC such as named pipes, Unix domain sockets or CORBA). On the other hand, it is less scalable, as for example the communicating processes must be running on the same machine (of other IPC methods, only Internet domain sockets, not Unix domain sockets, can use a computer network), and care must be taken to avoid issues if processes sharing memory are running on separate CPUs and the underlying architecture is not cache coherent. IPC by shared memory is used for example to transfer images between the application and the X server on Unix systems, or inside the IStream object returned by CoMarshalInterThreadInterfaceInStream in the COM libraries under Windows. Dynamic libraries are generally held in memory once and mapped to multiple processes, and only pages that had to be customized for the individual process (because a symbol resolved differently there) are duplicated, usually with a mechanism known as copy-on-write that transparently copies the page when a write is attempted, and then lets the write succeed on the private copy. Compared to multiple address space operating systems, memory sharing, especially of procedures or pointer-based structures, is simpler in single address space operating systems.[2] POSIX provides a standardized API for using shared memory, POSIX Shared Memory. This uses the function shm_open from sys/mman.h.[3] POSIX interprocess communication (part of the POSIX:XSI Extension) includes the shared-memory functions shmat, shmctl, shmdt and shmget.[4][5] Unix System V provides an API for shared memory as well; this uses shmget from sys/shm.h. BSD systems provide "anonymous mapped memory" which can be used by several processes. The shared memory created by shm_open is persistent.
It stays in the system until explicitly removed by a process. This has a drawback in that if the process crashes and fails to clean up shared memory it will stay until system shutdown; that limitation is not present in an Android-specific implementation dubbedashmem.[6] POSIX also provides themmapAPI for mapping files into memory; a mapping can be shared, allowing the file's contents to be used as shared memory. Linux distributions based on the 2.6 kernel and later offer /dev/shm as shared memory in the form of aRAM disk, more specifically as a world-writable directory (a directory in which every user of the system can create files) that is stored in memory. Both theRedHatandDebianbased distributions include it by default. Support for this type of RAM disk is completely optional within the kernelconfiguration file.[7] On Windows, one can useCreateFileMappingandMapViewOfFilefunctions to map a region of a file into memory in multiple processes.[8] Some C++ libraries provide a portable and object-oriented access to shared memory functionality. For example,Boostcontains the Boost.Interprocess C++ Library[9]andQtprovides the QSharedMemory class.[10] For programming languages with POSIX bindings (say, C/C++), shared memory regions can be created and accessed by calling the functions provided by the operating system. Other programming languages may have their own ways of using these operating facilities for similar effect. For example,PHPprovides anAPIto create shared memory, similar toPOSIXfunctions.[11]
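A minimal sketch of the IPC style described above, using Python's standard multiprocessing.shared_memory module (built on POSIX shm_open on Unix-like systems; the segment name is invented for the example):

    from multiprocessing import shared_memory

    # Process A: create a named segment and write into it.
    shm = shared_memory.SharedMemory(name="demo_shm", create=True, size=64)
    shm.buf[:5] = b"hello"

    # Process B (possibly a different program) attaches by name.
    other = shared_memory.SharedMemory(name="demo_shm")
    print(bytes(other.buf[:5]))   # b'hello'
    other.close()

    # As noted above, POSIX shared memory is persistent until removed:
    # close() merely detaches, while unlink() actually deletes the segment.
    shm.close()
    shm.unlink()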
https://en.wikipedia.org/wiki/Shared_memory
In computing (specifically data transmission and data storage), a block,[1] sometimes called a physical record, is a sequence of bytes or bits, usually containing some whole number of records, having a fixed length known as the block size.[2] Data thus structured are said to be blocked. The process of putting data into blocks is called blocking, while deblocking is the process of extracting data from blocks. Blocked data is normally stored in a data buffer, and read or written a whole block at a time. Blocking reduces the overhead and speeds up the handling of the data stream.[3] For some devices, such as magnetic tape and CKD disk devices, blocking reduces the amount of external storage required for the data. Blocking is almost universally employed when storing data to 9-track magnetic tape, NAND flash memory, and rotating media such as floppy disks, hard disks, and optical discs. Most file systems are based on a block device, which is a level of abstraction for the hardware responsible for storing and retrieving specified blocks of data, though the block size in file systems may be a multiple of the physical block size. This leads to space inefficiency due to internal fragmentation, since file lengths are often not integer multiples of block size, and thus the last block of a file may remain partially empty; the resulting unused space is called slack space. Some newer file systems, such as Btrfs and FreeBSD UFS2, attempt to solve this through techniques called block suballocation and tail merging. Other file systems such as ZFS support variable block sizes.[4][5] Block storage is normally abstracted by a file system or database management system (DBMS) for use by applications and end users. The physical or logical volumes accessed via block I/O may be devices internal to a server, directly attached via SCSI or Fibre Channel, or distant devices accessed via a storage area network (SAN) using a protocol such as iSCSI or AoE. DBMSes often use their own block I/O for improved performance and recoverability as compared to layering the DBMS on top of a file system. On Linux the default block size for most file systems is 4096 bytes; the stat command, part of GNU Core Utilities, can be used to check the block size. In Rust a block can be read with the read_exact method.[6] In Python a block can be read with the read method. In C# a block can be read with the FileStream class.[7]
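Whole-block reads and the slack-space arithmetic described above can be sketched in Python; os.statvfs reports the block size of the file system holding a path (analogous to the stat command mentioned above), and the path names here are placeholders:

    import os

    block = os.statvfs("/").f_bsize        # file system block size, typically 4096 on Linux
    print("block size:", block)

    # Read a file one whole block at a time.
    with open("/etc/hostname", "rb") as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            # ... process chunk ...

    # Internal fragmentation: the last block of a file is usually only partially used.
    size = os.path.getsize("/etc/hostname")
    allocated = -(-size // block) * block  # size rounded up to whole blocks
    print("slack space:", allocated - size, "bytes")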
https://en.wikipedia.org/wiki/Block_storage
Incomputing, afile systemorfilesystem(often abbreviated toFSorfs) governsfileorganization and access. Alocalfile system is a capability of anoperating systemthat services the applications running on the samecomputer.[1][2]Adistributed file systemis aprotocolthat provides file access betweennetworkedcomputers. A file system provides adata storageservicethat allowsapplicationsto sharemass storage. Without a file system, applications could access the storage inincompatibleways that lead toresource contention,data corruptionanddata loss. There are many file systemdesignsandimplementations– with various structure and features and various resulting characteristics such as speed, flexibility, security, size and more. File systems have been developed for many types ofstorage devices, includinghard disk drives(HDDs),solid-state drives(SSDs),magnetic tapesandoptical discs.[3] A portion of the computermain memorycan be set up as aRAM diskthat serves as a storage device for a file system. File systems such astmpfscan store files invirtual memory. Avirtualfile system provides access to files that are either computed on request, calledvirtual files(seeprocfsandsysfs), or are mapping into another, backing storage. Fromc.1900and before the advent of computers the termsfile system,filing systemandsystem for filingwere used to describe methods of organizing, storing and retrieving paper documents.[4]By 1961, the termfile systemwas being applied to computerized filing alongside the original meaning.[5]By 1964, it was in general use.[6] A local file system'sarchitecturecan be described aslayers of abstractioneven though a particular file system design may not actually separate the concepts.[7] Thelogical file systemlayer provides relatively high-level access via anapplication programming interface(API) for file operations including open, close, read and write – delegating operations to lower layers. This layer manages open file table entries and per-process file descriptors.[8]It provides file access, directory operations, security and protection.[7] Thevirtual file system, an optional layer, supports multiple concurrent instances of physical file systems, each of which is called a file system implementation.[8] Thephysical file systemlayer provides relatively low-level access to a storage device (e.g. disk). It reads and writesdata blocks, providesbufferingand othermemory managementand controls placement of blocks in specific locations on the storage medium. This layer usesdevice driversorchannel I/Oto drive the storage device.[7] Afile name, orfilename, identifies a file to consuming applications and in some cases users. A file name is unique so that an application can refer to exactly one file for a particular name. If the file system supports directories, then generally file name uniqueness is enforced within the context of each directory. In other words, a storage can contain multiple files with the same name, but not in the same directory. Most file systems restrict the length of a file name. Some file systems match file names ascase sensitiveand others as case insensitive. For example, the namesMYFILEandmyfilematch the same file for case insensitive, but different files for case sensitive. Most modern file systems allow a file name to contain a wide range of characters from theUnicodecharacter set. Some restrict characters such as those used to indicate special attributes such as a device, device type, directory prefix, file path separator, or file type. 
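Whether a given file system matches names case-sensitively, as described above, can be probed at run time. A small Python sketch (it creates and removes a throw-away file in the directory being tested):

    import os
    import tempfile

    def is_case_sensitive(directory):
        # Create a file named 'probe_...' and check whether the
        # upper-cased name refers to the same file.
        fd, path = tempfile.mkstemp(prefix="probe_", dir=directory)
        os.close(fd)
        try:
            swapped = os.path.join(os.path.dirname(path),
                                   os.path.basename(path).upper())
            return not os.path.exists(swapped)
        finally:
            os.remove(path)

    print(is_case_sensitive(tempfile.gettempdir()))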
File systems typically support organizing files intodirectories, also calledfolders, which segregate files into groups. This may be implemented by associating the file name with an index in atable of contentsor aninodein aUnix-likefile system. Directory structures may be flat (i.e. linear), or allow hierarchies by allowing a directory to contain directories, called subdirectories. The first file system to support arbitrary hierarchies of directories was used in theMulticsoperating system.[9]The native file systems of Unix-like systems also support arbitrary directory hierarchies, as do,Apple'sHierarchical File Systemand its successorHFS+inclassic Mac OS, theFATfile system inMS-DOS2.0 and later versions of MS-DOS and inMicrosoft Windows, theNTFSfile system in theWindows NTfamily of operating systems, and the ODS-2 (On-Disk Structure-2) and higher levels of theFiles-11file system inOpenVMS. In addition to data, the file content, a file system also manages associatedmetadatawhich may include but is not limited to: A file system stores associated metadata separate from the content of the file. Most file systems store the names of all the files in one directory in one place—the directory table for that directory—which is often stored like any other file. Many file systems put only some of the metadata for a file in the directory table, and the rest of the metadata for that file in a completely separate structure, such as theinode. Most file systems also store metadata not associated with any one particular file. Such metadata includes information about unused regions—free space bitmap,block availability map—and information aboutbad sectors. Often such information about anallocation groupis stored inside the allocation group itself. Additional attributes can be associated on file systems, such asNTFS,XFS,ext2,ext3, some versions ofUFS, andHFS+, usingextended file attributes. Some file systems provide for user defined attributes such as the author of the document, the character encoding of a document or the size of an image. Some file systems allow for different data collections to be associated with one file name. These separate collections may be referred to asstreamsorforks. Apple has long used a forked file system on the Macintosh, and Microsoft supports streams in NTFS. Some file systems maintain multiple past revisions of a file under a single file name; the file name by itself retrieves the most recent version, while prior saved version can be accessed using a special naming convention such as "filename;4" or "filename(-4)" to access the version four saves ago. Seecomparison of file systems § Metadatafor details on which file systems support which kinds of metadata. A local file system tracks which areas of storage belong to which file and which are not being used. When a file system creates a file, it allocates space for data. Some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows. To delete a file, the file system records that the file's space is free; available to use for another file. A local file system manages storage space to provide a level of reliability and efficiency. Generally, it allocates storage device space in a granular manner, usually multiple physical units (i.e.bytes). 
For example, inApple DOSof the early 1980s, 256-byte sectors on 140 kilobyte floppy disk used atrack/sector map.[citation needed] The granular nature results in unused space, sometimes calledslack space, for each file except for those that have the rare size that is a multiple of the granular allocation.[10]For a 512-byte allocation, the average unused space is 256 bytes. For 64 KB clusters, the average unused space is 32 KB. Generally, the allocation unit size is set when the storage is configured. Choosing a relatively small size compared to the files stored, results in excessive access overhead. Choosing a relatively large size results in excessive unused space. Choosing an allocation size based on the average size of files expected to be in the storage tends to minimize unusable space. As a file system creates, modifies and deletes files, the underlying storage representation may becomefragmented. Files and the unused space between files will occupy allocation blocks that are not contiguous. A file becomes fragmented if space needed to store its content cannot be allocated in contiguous blocks. Free space becomes fragmented when files are deleted.[11] This is invisible to the end user and the system still works correctly. However this can degrade performance on some storage hardware that work better with contiguous blocks such ashard disk drives. Other hardware such assolid-state drivesare not affected by fragmentation. A file system often supports access control of data that it manages. The intent of access control is often to prevent certain users from reading or modifying certain files. Access control can also restrict access by program in order to ensure that data is modified in a controlled way. Examples include passwords stored in the metadata of the file or elsewhere andfile permissionsin the form of permission bits,access control lists, orcapabilities. The need for file system utilities to be able to access the data at the media level to reorganize the structures and provide efficient backup usually means that these are only effective for polite users but are not effective against intruders. Methods for encrypting file data are sometimes included in the file system. This is very effective since there is no need for file system utilities to know the encryption seed to effectively manage the data. The risks of relying on encryption include the fact that an attacker can copy the data and use brute force to decrypt the data. Additionally, losing the seed means losing the data. Some operating systems allow a system administrator to enabledisk quotasto limit a user's use of storage space. A file system typically ensures that stored data remains consistent in both normal operations as well as exceptional situations like: Recovery from exceptional situations may include updating metadata, directory entries and handling data that was buffered but not written to storage media. A file system might record events to allow analysis of issues such as: Many file systems access data as a stream ofbytes. Typically, to read file data, a program provides amemory bufferand the file system retrieves data from the medium and then writes the data to the buffer. A write involves the program providing a buffer of bytes that the file system reads and then stores to the medium. Some file systems, or layers on top of a file system, allow a program to define arecordso that a program can read and write data as a structure; not an unorganized sequence of bytes. 
If afixed lengthrecord definition is used, then locating the nthrecord can be calculated mathematically, which is relatively fast compared to parsing the data for record separators. An identification for each record, also known as a key, allows a program to read, write and update records without regard to their location in storage. Such storage requires managing blocks of media, usually separating key blocks and data blocks. Efficient algorithms can be developed with pyramid structures for locating records.[12] Typically, a file system can be managed by the user via various utility programs. Some utilities allow the user to create, configure and remove an instance of a file system. It may allow extending or truncating the space allocated to the file system. Directory utilities may be used to create, rename and deletedirectory entries, which are also known asdentries(singular:dentry),[13]and to alter metadata associated with a directory. Directory utilities may also include capabilities to create additional links to a directory (hard linksinUnix), to rename parent links (".." inUnix-likeoperating systems),[clarification needed]and to create bidirectional links to files. File utilities create, list, copy, move and delete files, and alter metadata. They may be able to truncate data, truncate or extend space allocation, append to, move, and modify files in-place. Depending on the underlying structure of the file system, they may provide a mechanism to prepend to or truncate from the beginning of a file, insert entries into the middle of a file, or delete entries from a file. Utilities to free space for deleted files, if the file system provides an undelete function, also belong to this category. Some file systems defer operations such as reorganization of free space, secure erasing of free space, and rebuilding of hierarchical structures by providing utilities to perform these functions at times of minimal activity. An example is the file systemdefragmentationutilities. Some of the most important features of file system utilities are supervisory activities which may involve bypassing ownership or direct access to the underlying device. These include high-performance backup and recovery, data replication, and reorganization of various data structures and allocation tables within the file system. Utilities, libraries and programs usefile system APIsto make requests of the file system. These include data transfer, positioning, updating metadata, managing directories, managing access specifications, and removal. Frequently, retail systems are configured with a single file system occupying the entirestorage device. Another approach is topartitionthe disk so that several file systems with different attributes can be used. One file system, for use as browser cache or email storage, might be configured with a small allocation size. This keeps the activity of creating and deleting files typical of browser activity in a narrow area of the disk where it will not interfere with other file allocations. Another partition might be created for the storage of audio or video files with a relatively large block size. Yet another may normally be setread-onlyand only periodically be set writable. 
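The fixed-length record access described earlier in this section translates directly into seek offsets. A Python sketch with a hypothetical 64-byte record format:

    RECORD_LEN = 64   # hypothetical fixed record length in bytes

    def read_record(f, n):
        # Record n starts at byte n * RECORD_LEN, so no scanning for
        # record separators is needed; this is what makes lookup fast.
        f.seek(n * RECORD_LEN)
        return f.read(RECORD_LEN)

    def write_record(f, n, data):
        assert len(data) == RECORD_LEN
        f.seek(n * RECORD_LEN)
        f.write(data)

    with open("records.dat", "w+b") as f:
        write_record(f, 3, b"x" * RECORD_LEN)
        print(read_record(f, 3)[:8])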
Some file systems, such as ZFS and APFS, support multiple file systems sharing a common pool of free blocks, allowing several file systems with different attributes without having to reserve a fixed amount of space for each file system.[14][15] A third approach, which is mostly used in cloud systems, is to use "disk images" to house additional file systems, with the same attributes or not, within another (host) file system as a file. A common example is virtualization: one user can run an experimental Linux distribution (using the ext4 file system) in a virtual machine under his/her production Windows environment (using NTFS). The ext4 file system resides in a disk image, which is treated as a file (or multiple files, depending on the hypervisor and settings) in the NTFS host file system. Having multiple file systems on a single system has the additional benefit that in the event of a corruption of a single file system, the remaining file systems will frequently still be intact. This includes virus destruction of the system file system or even a system that will not boot. File system utilities which require dedicated access can be effectively completed piecemeal. In addition, defragmentation may be more effective. Several system maintenance utilities, such as virus scans and backups, can also be processed in segments. For example, it is not necessary to back up the file system containing videos along with all the other files if none have been added since the last backup. As for the image files, one can easily "spin off" differential images which contain only "new" data written to the master (original) image. Differential images can be used for both safety concerns (as a "disposable" system that can be quickly restored if destroyed or contaminated by a virus, as the old image can be removed and a new image created in a matter of seconds, even without automated procedures) and quick virtual machine deployment (since the differential images can be quickly spawned using a script in batches). A disk file system takes advantage of the ability of disk storage media to randomly address data in a short amount of time. Additional considerations include the speed of accessing data following that initially requested and the anticipation that the following data may also be requested. This permits multiple users (or processes) access to various data on the disk without regard to the sequential location of the data. Examples include FAT (FAT12, FAT16, FAT32), exFAT, NTFS, ReFS, HFS and HFS+, HPFS, APFS, UFS, ext2, ext3, ext4, XFS, btrfs, Files-11, Veritas File System, VMFS, ZFS, ReiserFS, NSS and ScoutFS. Some disk file systems are journaling file systems or versioning file systems. ISO 9660 and Universal Disk Format (UDF) are two common formats that target Compact Discs, DVDs and Blu-ray discs. Mount Rainier is an extension to UDF, supported since the 2.6 series of the Linux kernel and since Windows Vista, that facilitates rewriting to DVDs. A flash file system considers the special abilities, performance and restrictions of flash memory devices. Frequently a disk file system can use a flash memory device as the underlying storage media, but it is much better to use a file system specifically designed for a flash device.[16] A tape file system is a file system and tape format designed to store files on tape. Magnetic tapes are sequential storage media with significantly longer random data access times than disks, posing challenges to the creation and efficient management of a general-purpose file system.
In a disk file system there is typically a master file directory, and a map of used and free data regions. Any file additions, changes, or removals require updating the directory and the used/free maps. Random access to data regions is measured in milliseconds so this system works well for disks. Tape requires linear motion to wind and unwind potentially very long reels of media. This tape motion may take several seconds to several minutes to move the read/write head from one end of the tape to the other. Consequently, a master file directory and usage map can be extremely slow and inefficient with tape. Writing typically involves reading the block usage map to find free blocks for writing, updating the usage map and directory to add the data, and then advancing the tape to write the data in the correct spot. Each additional file write requires updating the map and directory and writing the data, which may take several seconds to occur for each file. Tape file systems instead typically allow for the file directory to be spread across the tape intermixed with the data, referred to asstreaming, so that time-consuming and repeated tape motions are not required to write new data. However, a side effect of this design is that reading the file directory of a tape usually requires scanning the entire tape to read all the scattered directory entries. Most data archiving software that works with tape storage will store a local copy of the tape catalog on a disk file system, so that adding files to a tape can be done quickly without having to rescan the tape media. The local tape catalog copy is usually discarded if not used for a specified period of time, at which point the tape must be re-scanned if it is to be used in the future. IBM has developed a file system for tape called theLinear Tape File System. The IBM implementation of this file system has been released as the open-sourceIBM Linear Tape File System — Single Drive Edition (LTFS-SDE)product. The Linear Tape File System uses a separate partition on the tape to record the index meta-data, thereby avoiding the problems associated with scattering directory entries across the entire tape. Writing data to a tape, erasing, or formatting a tape is often a significantly time-consuming process and can take several hours on large tapes.[a]With many data tape technologies it is not necessary to format the tape before over-writing new data to the tape. This is due to the inherently destructive nature of overwriting data on sequential media. Because of the time it can take to format a tape, typically tapes are pre-formatted so that the tape user does not need to spend time preparing each new tape for use. All that is usually necessary is to write an identifying media label to the tape before use, and even this can be automatically written by software when a new tape is used for the first time. Another concept for file management is the idea of a database-based file system. Instead of, or in addition to, hierarchical structured management, files are identified by their characteristics, like type of file, topic, author, or similarrich metadata.[17] IBM DB2 for i[18](formerly known as DB2/400 and DB2 for i5/OS) is a database file system as part of the object basedIBM i[19]operating system (formerly known as OS/400 and i5/OS), incorporating asingle level storeand running on IBM Power Systems (formerly known as AS/400 and iSeries), designed by Frank G. Soltis IBM's former chief scientist for IBM i. Around 1978 to 1988 Frank G. 
Soltis and his team at IBM Rochester successfully designed and applied technologies like the database file system, something that others such as Microsoft later failed to accomplish.[20] These technologies are informally known as 'Fortress Rochester'[citation needed] and were in a few basic aspects extended from early mainframe technologies, but in many ways more advanced from a technological perspective[citation needed]. Some other projects that are not "pure" database file systems use some aspects of a database file system. Some programs need to either make multiple file system changes, or, if one or more of the changes fail for any reason, make none of the changes. For example, a program which is installing or updating software may write executables, libraries, and/or configuration files. If some of the writing fails and the software is left partially installed or updated, the software may be broken or unusable. An incomplete update of a key system utility, such as the command shell, may leave the entire system in an unusable state. Transaction processing introduces the atomicity guarantee, ensuring that operations inside of a transaction are either all committed or the transaction can be aborted and the system discards all of its partial results. This means that if there is a crash or power failure, after recovery, the stored state will be consistent. Either the software will be completely installed or the failed installation will be completely rolled back, but an unusable partial install will not be left on the system. Transactions also provide the isolation guarantee[clarification needed], meaning that operations within a transaction are hidden from other threads on the system until the transaction commits, and that interfering operations on the system will be properly serialized with the transaction. Windows, beginning with Vista, added transaction support to NTFS, in a feature called Transactional NTFS, but its use is now discouraged.[21] There are a number of research prototypes of transactional file systems for UNIX systems, including the Valor file system,[22] Amino,[23] LFS,[24] and a transactional ext3 file system on the TxOS kernel,[25] as well as transactional file systems targeting embedded systems, such as TFFS.[26] Ensuring consistency across multiple file system operations is difficult, if not impossible, without file system transactions. File locking can be used as a concurrency control mechanism for individual files, but it typically does not protect the directory structure or file metadata. For instance, file locking cannot prevent TOCTTOU race conditions on symbolic links. File locking also cannot automatically roll back a failed operation, such as a software upgrade; this requires atomicity. Journaling file systems are one technique used to introduce transaction-level consistency to file system structures. Journal transactions are not exposed to programs as part of the OS API; they are only used internally to ensure consistency at the granularity of a single system call. Data backup systems typically do not provide support for direct backup of data stored in a transactional manner, which makes the recovery of reliable and consistent data sets difficult. Most backup software simply notes what files have changed since a certain time, regardless of the transactional state shared across multiple files in the overall dataset.
As a workaround, some database systems simply produce an archived state file containing all data up to that point, and the backup software only backs that up and does not interact directly with the active transactional databases at all. Recovery requires separate recreation of the database from the state file after the file has been restored by the backup software. Anetwork file systemis a file system that acts as a client for a remote file access protocol, providing access to files on a server. Programs using local interfaces can transparently create, manage and access hierarchical directories and files in remote network-connected computers. Examples of network file systems include clients for theNFS,[27]AFS,SMBprotocols, and file-system-like clients forFTPandWebDAV. Ashared disk file systemis one in which a number of machines (usually servers) all have access to the same external disk subsystem (usually astorage area network). The file system arbitrates access to that subsystem, preventing write collisions.[28]Examples includeGFS2fromRed Hat,GPFS, now known as Spectrum Scale, from IBM,SFSfrom DataPlow,CXFSfromSGI,StorNextfromQuantum Corporationand ScoutFS from Versity. Some file systems expose elements of the operating system as files so they can be acted on via thefile system API. This is common inUnix-likeoperating systems, and to a lesser extent in other operating systems. Examples include: In the 1970s disk and digital tape devices were too expensive for some earlymicrocomputerusers. An inexpensive basic data storage system was devised that used commonaudio cassettetape. When the system needed to write data, the user was notified to press "RECORD" on the cassette recorder, then press "RETURN" on the keyboard to notify the system that the cassette recorder was recording. The system wrote a sound to provide time synchronization, thenmodulated soundsthat encoded a prefix, the data, achecksumand a suffix. When the system needed to read data, the user was instructed to press "PLAY" on the cassette recorder. The system wouldlistento the sounds on the tape waiting until a burst of sound could be recognized as the synchronization. The system would then interpret subsequent sounds as data. When the data read was complete, the system would notify the user to press "STOP" on the cassette recorder. It was primitive, but it (mostly) worked. Data was stored sequentially, usually in an unnamed format, although some systems (such as theCommodore PETseries of computers) did allow the files to be named. Multiple sets of data could be written and located by fast-forwarding the tape and observing at the tape counter to find the approximate start of the next data region on the tape. The user might have to listen to the sounds to find the right spot to begin playing the next data region. Some implementations even included audible sounds interspersed with the data. In a flat file system, there are nosubdirectories; directory entries for all files are stored in a single directory. Whenfloppy diskmedia was first available this type of file system was adequate due to the relatively small amount of data space available.CP/Mmachines featured a flat file system, where files could be assigned to one of 16user areasand generic file operations narrowed to work on one instead of defaulting to work on all of them. 
These user areas were no more than special attributes associated with the files; that is, it was not necessary to define a specific quota for each of these areas, and files could be added to groups for as long as there was still free storage space on the disk. The early Apple Macintosh also featured a flat file system, the Macintosh File System (MFS). It was unusual in that the file management program (Macintosh Finder) created the illusion of a partially hierarchical filing system on top of MFS. This structure required every file to have a unique name, even if it appeared to be in a separate folder. IBM DOS/360 and OS/360 store entries for all files on a disk pack (volume) in a directory on the pack called a Volume Table of Contents (VTOC). While simple, flat file systems become awkward as the number of files grows and make it difficult to organize data into related groups of files. A recent addition to the flat file system family is Amazon's S3, a remote storage service, which is intentionally simplistic to allow users the ability to customize how their data is stored. The only constructs are buckets (imagine a disk drive of unlimited size) and objects (similar, but not identical, to the standard concept of a file). Advanced file management is allowed by being able to use nearly any character (including '/') in the object's name, and the ability to select subsets of the bucket's content based on identical prefixes. An operating system (OS) typically supports one or more file systems. Sometimes an OS and its file system are so tightly interwoven that it is difficult to describe them independently. An OS typically provides file system access to the user. Often an OS provides a command line interface, such as the Unix shell, Windows Command Prompt and PowerShell, and OpenVMS DCL. An OS often also provides graphical user interface file browsers such as the macOS Finder and Windows File Explorer. Unix-like operating systems create a virtual file system, which makes all the files on all the devices appear to exist in a single hierarchy. This means, in those systems, there is one root directory, and every file existing on the system is located under it somewhere. Unix-like systems can use a RAM disk or network shared resource as their root directory. Unix-like systems assign a device name to each device, but this is not how the files on that device are accessed. Instead, to gain access to files on another device, the operating system must first be informed where in the directory tree those files should appear. This process is called mounting a file system. For example, to access the files on a CD-ROM, one must tell the operating system "Take the file system from this CD-ROM and make it appear under such-and-such directory." The directory given to the operating system is called the mount point; it might, for example, be /media. The /media directory exists on many Unix systems (as specified in the Filesystem Hierarchy Standard) and is intended specifically for use as a mount point for removable media such as CDs, DVDs, USB drives or floppy disks. It may be empty, or it may contain subdirectories for mounting individual devices. Generally, only the administrator (i.e. root user) may authorize the mounting of file systems. Unix-like operating systems often include software and tools that assist in the mounting process and provide additional functionality. Some of these strategies have been coined "auto-mounting" as a reflection of their purpose.
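The mounting step described above is ordinarily performed with the mount(8) utility and requires administrator privileges. Invoked from Python, with the device and mount point as placeholders, it might look like this:

    import subprocess

    # Attach the ISO 9660 file system on an optical drive under /media/cdrom.
    subprocess.run(
        ["mount", "-t", "iso9660", "/dev/sr0", "/media/cdrom"],
        check=True,
    )

    # ... use the files under /media/cdrom ...

    subprocess.run(["umount", "/media/cdrom"], check=True)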
Linuxsupports numerous file systems, but common choices for the system disk on a block device include the ext* family (ext2,ext3andext4),XFS,JFS, andbtrfs. For raw flash without aflash translation layer(FTL) orMemory Technology Device(MTD), there areUBIFS,JFFS2andYAFFS, among others.SquashFSis a common compressed read-only file system. Solarisin earlier releases defaulted to (non-journaled or non-logging)UFSfor bootable and supplementary file systems. Solaris defaulted to, supported, and extended UFS. Support for other file systems and significant enhancements were added over time, includingVeritas SoftwareCorp. (journaling)VxFS, Sun Microsystems (clustering)QFS, Sun Microsystems (journaling) UFS, and Sun Microsystems (open source, poolable, 128 bit compressible, and error-correcting)ZFS. Kernel extensions were added to Solaris to allow for bootable VeritasVxFSoperation. Logging orjournalingwas added to UFS in Sun'sSolaris 7. Releases ofSolaris 10, Solaris Express,OpenSolaris, and other open source variants of the Solaris operating system later supported bootableZFS. Logical Volume Managementallows for spanning a file system across multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Legacy environments in Solaris may useSolaris Volume Manager(formerly known asSolstice DiskSuite). Multiple operating systems (including Solaris) may useVeritas Volume Manager. Modern Solaris based operating systems eclipse the need for volume management through leveraging virtual storage pools inZFS. macOS (formerly Mac OS X)uses theApple File System(APFS), which in 2017 replaced a file system inherited fromclassic Mac OScalledHFS Plus(HFS+). Apple also uses the term "Mac OS Extended" for HFS+.[29]HFS Plus is ametadata-rich andcase-preservingbut (usually)case-insensitivefile system. Due to the Unix roots of macOS, Unix permissions were added to HFS Plus. Later versions of HFS Plus added journaling to prevent corruption of the file system structure and introduced a number of optimizations to the allocation algorithms in an attempt to defragment files automatically without requiring an external defragmenter. File names can be up to 255 characters. HFS Plus usesUnicodeto store file names. On macOS, thefiletypecan come from thetype code, stored in file's metadata, or thefilename extension. HFS Plus has three kinds of links: Unix-stylehard links, Unix-stylesymbolic links, andaliases. Aliases are designed to maintain a link to their original file even if they are moved or renamed; they are not interpreted by the file system itself, but by the File Manager code inuserland. macOS 10.13 High Sierra, which was announced on June 5, 2017, at Apple's WWDC event, uses theApple File Systemonsolid-state drives. macOS also supported theUFSfile system, derived from theBSDUnix Fast File System viaNeXTSTEP. However, as ofMac OS X Leopard, macOS could no longer be installed on a UFS volume, nor can a pre-Leopard system installed on a UFS volume be upgraded to Leopard.[30]As ofMac OS X LionUFS support was completely dropped. Newer versions of macOS are capable of reading and writing to the legacyFATfile systems (16 and 32) common on Windows. They are also capable ofreadingthe newerNTFSfile systems for Windows. In order towriteto NTFS file systems on macOS versions prior toMac OS X Snow Leopardthird-party software is necessary. 
Mac OS X 10.6 (Snow Leopard) and later allow writing to NTFS file systems, but only after a non-trivial system setting change (third-party software exists that automates this).[31] Finally, macOS supports reading and writing of theexFATfile system since Mac OS X Snow Leopard, starting from version 10.6.5.[32] OS/21.2 introduced theHigh Performance File System(HPFS). HPFS supports mixed case file names in differentcode pages, long file names (255 characters), more efficient use of disk space, an architecture that keeps related items close to each other on the disk volume, less fragmentation of data,extent-basedspace allocation, aB+ treestructure for directories, and the root directory located at the midpoint of the disk, for faster average access. Ajournaled filesystem(JFS) was shipped in 1999. PC-BSDis a desktop version of FreeBSD, which inheritsFreeBSD'sZFSsupport, similarly toFreeNAS. The new graphical installer ofPC-BSDcan handle/ (root) on ZFSandRAID-Zpool installs anddisk encryptionusingGeliright from the start in an easy convenient (GUI) way. The current PC-BSD 9.0+ 'Isotope Edition' has ZFS filesystem version 5 and ZFS storage pool version 28. Plan 9 from Bell Labstreats everything as a file and accesses all objects as a file would be accessed (i.e., there is noioctlormmap): networking, graphics, debugging, authentication, capabilities, encryption, and other services are accessed via I/O operations onfile descriptors. The9Pprotocol removes the difference between local and remote files. File systems in Plan 9 are organized with the help of private, per-process namespaces, allowing each process to have a different view of the many file systems that provide resources in a distributed system. TheInfernooperating system shares these concepts with Plan 9. Windows makes use of theFAT,NTFS,exFAT,Live File SystemandReFSfile systems (the last of these is only supported and usable inWindows Server 2012,Windows Server 2016,Windows 8,Windows 8.1, andWindows 10; Windows cannot boot from it). Windows uses adrive letterabstraction at the user level to distinguish one disk or partition from another. For example, thepathC:\WINDOWSrepresents a directoryWINDOWSon the partition represented by the letter C. Drive C: is most commonly used for the primaryhard disk drivepartition, on which Windows is usually installed and from which it boots. This "tradition" has become so firmly ingrained that bugs exist in many applications which make assumptions that the drive that the operating system is installed on is C. The use of drive letters, and the tradition of using "C" as the drive letter for the primary hard disk drive partition, can be traced toMS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in turn derived fromCP/Min the 1970s, and ultimately from IBM'sCP/CMSof 1967. The family ofFATfile systems is supported by almost all operating systems for personal computers, including all versions ofWindowsandMS-DOS/PC DOS,OS/2, andDR-DOS. (PC DOS is an OEM version of MS-DOS, MS-DOS was originally based onSCP's86-DOS. DR-DOS was based onDigital Research'sConcurrent DOS, a successor ofCP/M-86.) The FAT file systems are therefore well-suited as a universal exchange format between computers and devices of most any type and age. The FAT file system traces its roots back to an (incompatible) 8-bit FAT precursor inStandalone Disk BASICand the short-livedMDOS/MIDASproject.[citation needed] Over the years, the file system has been expanded fromFAT12toFAT16andFAT32. 
Various features have been added to the file system includingsubdirectories,codepagesupport,extended attributes, andlong filenames. Third parties such as Digital Research have incorporated optional support for deletion tracking, and volume/directory/file-based multi-user security schemes to support file and directory passwords and permissions such as read/write/execute/delete access rights. Most of these extensions are not supported by Windows. The FAT12 and FAT16 file systems had a limit on the number of entries in theroot directoryof the file system and had restrictions on the maximum size of FAT-formatted disks orpartitions. FAT32 addresses the limitations in FAT12 and FAT16, except for the file size limit of close to 4 GB, but it remains limited compared to NTFS. FAT12, FAT16 and FAT32 also have a limit of eight characters for the file name, and three characters for the extension (such as.exe). This is commonly referred to as the8.3 filenamelimit.VFAT, an optional extension to FAT12, FAT16 and FAT32, introduced inWindows 95andWindows NT 3.5, allowed long file names (LFN) to be stored in the FAT file system in a backwards compatible fashion. NTFS, introduced with theWindows NToperating system in 1993, allowedACL-based permission control. Other features also supported byNTFSinclude hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, and reparse points (directories working as mount-points for other file systems, symlinks, junctions, remote storage links). exFAThas certain advantages over NTFS with regard tofile system overhead.[citation needed] exFAT is not backward compatible with FAT file systems such as FAT12, FAT16 or FAT32. The file system is supported with newer Windows systems, such as Windows XP, Windows Server 2003, Windows Vista, Windows 2008, Windows 7, Windows 8, Windows 8.1, Windows 10 and Windows 11. exFAT is supported in macOS starting with version 10.6.5 (Snow Leopard).[32]Support in other operating systems is sparse since implementing support for exFAT requires a license. exFAT is the only file system that is fully supported on both macOS and Windows that can hold files larger than 4 GB.[33][34] Prior to the introduction ofVSAM,OS/360systems implemented a hybrid file system. The system was designed to easily supportremovable disk packs, so the information relating to all files on one disk (volumein IBM terminology) is stored on that disk in aflat system filecalled theVolume Table of Contents(VTOC). The VTOC stores all metadata for the file. Later a hierarchical directory structure was imposed with the introduction of theSystem Catalog, which can optionally catalog files (datasets) on resident and removable volumes. The catalog only contains information to relate a dataset to a specific volume. If the user requests access to a dataset on an offline volume, and they have suitable privileges, the system will attempt to mount the required volume. Cataloged and non-cataloged datasets can still be accessed using information in the VTOC, bypassing the catalog, if the required volume id is provided to the OPEN request. Still later the VTOC was indexed to speed up access. The IBMConversational Monitor System(CMS) component ofVM/370uses a separate flat file system for eachvirtual disk(minidisk). File data and control information are scattered and intermixed. The anchor is a record called theMaster File Directory(MFD), always located in the fourth block on the disk. 
Originally CMS used fixed-length 800-byte blocks, but later versions used larger blocks, up to 4K. Access to a data record requires two levels of indirection, where the file's directory entry (called a File Status Table (FST) entry) points to blocks containing a list of addresses of the individual records. Data on the AS/400 and its successors consists of system objects mapped into the system virtual address space in a single-level store. Many types of objects are defined, including the directories and files found in other file systems. File objects, along with other types of objects, form the basis of the AS/400's support for an integrated relational database. File systems limit storable data capacity; the limits are generally driven by the typical size of storage devices at the time the file system is designed and anticipated into the foreseeable future. Since storage sizes have increased at a near-exponential rate (see Moore's law), newer storage devices often exceed existing file system limits within only a few years after introduction. This requires new file systems with ever increasing capacity. With higher capacity, the need for capabilities and therefore complexity increases as well. File system complexity typically varies proportionally with available storage capacity. Capacity issues aside, the file systems of early 1980s home computers with 50 KB to 512 KB of storage would not be a reasonable choice for modern storage systems with hundreds of gigabytes of capacity. Likewise, modern file systems would not be a reasonable choice for these early systems, since the complexity of modern file system structures would quickly consume the limited capacity of early storage systems. It may be advantageous or necessary to have files in a different file system than the one in which they currently exist. Reasons include the need for an increase in the space requirements beyond the limits of the current file system. The depth of path may need to be increased beyond the restrictions of the file system. There may be performance or reliability considerations. Providing access to another operating system which does not support the existing file system is another reason. In some cases conversion can be done in-place, although migrating the file system is more conservative, as it involves creating a copy of the data, and is recommended.[39] On Windows, FAT and FAT32 file systems can be converted to NTFS via the convert.exe utility, but not the reverse.[39] On Linux, ext2 can be converted to ext3 (and converted back), and ext3 can be converted to ext4 (but not back),[40] and both ext3 and ext4 can be converted to btrfs, and converted back until the undo information is deleted.[41] These conversions are possible due to using the same format for the file data itself, and relocating the metadata into empty space, in some cases using sparse file support.[41] Migration has the disadvantage of requiring additional space, although it may be faster. The best case is if there is unused space on media which will contain the final file system. For example, to migrate a FAT32 file system to an ext2 file system, a new ext2 file system is created. Then the data from the FAT32 file system is copied to the ext2 one, and the old file system is deleted. An alternative, when there is not sufficient space to retain the original file system until the new one is created, is to use a work area (such as removable media). This takes longer but has the benefit of producing a backup.
Inhierarchical file systems, files are accessed by means of apaththat is a branching list of directories containing the file. Different file systems have different limits on the depth of the path. File systems also have a limit on the length of an individual file name. Copying files with long names or located in paths of significant depth from one file system to another may cause undesirable results. This depends on how the utility doing the copying handles the discrepancy.
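On POSIX systems these limits can be queried at run time, for example from Python (the reported values vary by file system):

    import os

    # Maximum length of a single file name component on the root file system
    print(os.pathconf("/", "PC_NAME_MAX"))   # commonly 255

    # Maximum length of a relative path string
    print(os.pathconf("/", "PC_PATH_MAX"))   # commonly 4096 on Linux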
https://en.wikipedia.org/wiki/File_storage
Cloud storage is a model of computer data storage in which data, said to be on "the cloud", is stored remotely in logical pools and is accessible to users over a network, typically the Internet. The physical storage spans multiple servers (sometimes in multiple locations), and the physical environment is typically owned and managed by a cloud computing provider. These cloud storage providers are responsible for keeping the data available and accessible, and the physical environment secured, protected, and running. People and organizations buy or lease storage capacity from the providers to store user, organization, or application data. Cloud storage services may be accessed through a colocated cloud computing service, a web service application programming interface (API) or by applications that use the API, such as cloud desktop storage, a cloud storage gateway or Web-based content management systems. Cloud computing is believed to have been invented by J. C. R. Licklider in the 1960s with his work on ARPANET to connect people and data from anywhere at any time.[1] In 1983, CompuServe offered its consumer users a small amount of disk space that could be used to store any files they chose to upload.[2] In 1994, AT&T launched PersonaLink Services, an online platform for personal and business communication and entrepreneurship. The storage was one of the first to be all web-based, and was referenced in their commercials with the line "you can think of our electronic meeting place as the cloud."[3] Amazon Web Services introduced their cloud storage service Amazon S3 in 2006, and it has gained widespread recognition and adoption as the storage supplier to popular services such as SmugMug, Dropbox, and Pinterest. In 2005, Box announced an online file sharing and personal cloud content management service for businesses.[4] Cloud storage is based on highly virtualized infrastructure and is like broader cloud computing in terms of interfaces, near-instant elasticity and scalability, multi-tenancy, and metered resources. Cloud storage services can be used from an off-premises service (Amazon S3) or deployed on-premises (ViON Capacity Services).[5] There are three types of cloud storage: a hosted object storage service, file storage, and block storage. Each of these cloud storage types offers its own unique advantages. Examples of object storage services that can be hosted and deployed with cloud storage characteristics include Amazon S3, Oracle Cloud Storage and Microsoft Azure Storage, object storage software like OpenStack Swift, object storage systems like EMC Atmos, EMC ECS and Hitachi Content Platform, and distributed storage research projects like OceanStore[6] and VISION Cloud.[7] Examples of file storage services include Amazon Elastic File System (EFS) and Qumulo Core,[8] used for applications that need access to shared files and require a file system. This storage is often supported with a Network Attached Storage (NAS) server, used for large content repositories, development environments, media stores, or user home directories. A block storage service like Amazon Elastic Block Store (EBS) is used for other enterprise applications like databases and often requires dedicated, low-latency storage for each host. This is comparable in certain respects to direct attached storage (DAS) or a storage area network (SAN). Cloud storage is typically made up of many distributed resources that act as one, is highly fault-tolerant through redundancy and the distribution of data, is highly durable through the creation of versioned copies, and is usually eventually consistent with regard to data replicas.[6] Outsourcing data storage increases the attack surface area.[17] There are several options available to avoid security issues. One option is to use a private cloud instead of a public cloud.
Outsourcing data storage increases the attack surface area.[17] There are several options available to avoid such security issues. One option is to use a private cloud instead of a public cloud. Another option is to ingest data in an encrypted format, where the key is held within the on-premises infrastructure; to this end, access is often by way of on-premises cloud storage gateways that can encrypt the data prior to transfer.[21] Companies are not permanent, and the services and products they provide can change. Outsourcing data storage to another company therefore needs careful investigation, and nothing is ever certain; even contracts set in stone can be worthless when a company ceases to exist or its circumstances change. Companies can change in many ways that affect their customers' stored data.[22][23][24] Typically, cloud storage Service Level Agreements (SLAs) do not encompass all forms of service interruption. Exclusions typically include planned maintenance, downtime resulting from external factors such as network issues, human errors like misconfiguration, natural disasters, force majeure events, or security breaches. Customers usually bear the responsibility of monitoring SLA compliance and must file claims for any unmet SLAs within a designated timeframe. Customers should also be aware of how deviations from SLAs are calculated, as these parameters may vary by service within the same provider; such requirements can place a considerable burden on customers. SLA percentages and conditions can likewise differ across services within the same provider, with some services lacking any SLA altogether. In cases of service interruption due to hardware failure at the cloud provider, providers typically do not offer monetary compensation; instead, eligible users may receive credits as outlined in the corresponding SLA.[26][27][28][29] Hybrid cloud storage is a term for a storage infrastructure that combines on-premises storage resources with cloud storage. The on-premises storage is usually managed by the organization, while the public cloud storage provider is responsible for the management and security of the data stored in the cloud.[37] Hybrid cloud storage can be implemented by an on-premises cloud storage gateway that presents a file system or object storage interface that users can access in the same way they would access a local storage system. The gateway transparently transfers data to and from the cloud storage service, providing low-latency access to the data through a local cache.[21] Hybrid cloud storage can be used to supplement an organization's internal storage resources, or it can serve as the primary storage infrastructure. In either case, it can provide organizations with greater flexibility and scalability than traditional on-premises storage infrastructure.[37] Its benefits include the ability to cache frequently used data on-site for quick access while inactive cold data is stored off-site in the cloud, which can save space, reduce storage costs, and improve performance. Additionally, hybrid cloud storage can provide greater redundancy and fault tolerance, as data is stored in both on-premises and cloud storage infrastructure.[37]
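Returning to the client-side encryption option above: a minimal sketch using OpenSSL's EVP interface, in which the AES-256-GCM key never leaves the on-premises side (a 12-byte IV and adequately sized output buffer are assumed):

    #include <openssl/evp.h>

    int encrypt_for_upload(const unsigned char key[32], const unsigned char iv[12],
                           const unsigned char *in, int in_len,
                           unsigned char *out, unsigned char tag[16]) {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len, total;
        EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, out, &len, in, in_len);  /* ciphertext body */
        total = len;
        EVP_EncryptFinal_ex(ctx, out + total, &len);
        total += len;
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag); /* integrity tag */
        EVP_CIPHER_CTX_free(ctx);
        return total; /* only ciphertext, IV and tag are sent to the provider */
    }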
https://en.wikipedia.org/wiki/Cloud_storage
Object access method (OAM) is an access method under z/OS designed for the storage of large numbers of large files, such as images. It has a number of distinguishing features compared to, for example, VSAM. OAM is used in conjunction with IBM Db2; an example use case would be storing medical images in a Db2 database running under z/OS. OAM was created in the 1980s "as a prototype product for an insurance company to replace microfiche". Initially OAM supported optical storage and magnetic disks. In the 1990s, support for magnetic tape was added, and in 2011 support was added for storage of objects in a z/OS Unix file system, either zFS or NFS.[2] In the 1990s, Object Access Method was used by the Canadian Intellectual Property Office to store documents related to patent processing.[3]
https://en.wikipedia.org/wiki/Object_access_method
9P (or the Plan 9 Filesystem Protocol or Styx) is a network protocol developed for the Plan 9 from Bell Labs distributed operating system as the means of connecting the components of a Plan 9 system. Files are key objects in Plan 9: they represent windows, network connections, processes, and almost anything else available in the operating system. 9P was revised for the 4th edition of Plan 9 under the name 9P2000, containing various improvements, such as the removal of certain filename restrictions, the addition of a 'last modifier' metadata field for directories, and authentication files.[1] The latest version of the Inferno operating system also uses 9P2000. The Inferno file protocol was originally called Styx, but technically it has always been a variant of 9P. A server implementation of 9P for Unix, called u9fs,[2][3] is included in the Plan 9 distribution. A 9P OS X client kernel extension is provided by Mac9P.[4] A kernel client driver implementing 9P with some extensions for Linux is part of the v9fs project. 9P and its derivatives have also found application in embedded environments, such as the Styx-on-a-Brick project for Lego Mindstorms bricks.[5] Many of Plan 9's applications take the form of 9P file servers. Outside of Plan 9, the 9P protocol is still used when a lightweight remote file system is required, as in the mounting sketch below.
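The mounting sketch referred to above: on Linux, the v9fs client attaches a 9P export with the mount(2) system call. The server address, mount point, and option string here are illustrative, and CAP_SYS_ADMIN is required:

    #include <sys/mount.h>

    int attach_9p(void) {
        /* Roughly: mount -t 9p -o trans=tcp,port=564,version=9p2000.L 10.0.0.1 /mnt/9 */
        return mount("10.0.0.1", "/mnt/9", "9p", 0,
                     "trans=tcp,port=564,version=9p2000.L");
    }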
https://en.wikipedia.org/wiki/9P_(protocol)
An Advanced Encryption Standard instruction set (AES instruction set) is a set of instructions specifically designed to perform AES encryption and decryption operations efficiently. These instructions are typically found in modern processors and can greatly accelerate AES operations compared to software implementations. An AES instruction set includes instructions for key expansion, encryption, and decryption using various key sizes (128-bit, 192-bit, and 256-bit). The instruction set is often implemented as a set of instructions that each perform a single round of AES, along with a special version for the last round, which uses a slightly different method. When AES is implemented as an instruction set instead of as software, it can have improved security, as its side-channel attack surface is reduced.[1] AES-NI (the Intel Advanced Encryption Standard New Instructions) was the first major implementation. AES-NI is an extension to the x86 instruction set architecture for microprocessors from Intel and AMD, proposed by Intel in March 2008.[2] A wider version of AES-NI, the AVX-512 Vector AES instructions (VAES), is found in AVX-512.[3] A range of Intel processors support the AES-NI instruction set,[5] as do several AMD processors. AES support with unprivileged processor instructions is also available in the latest SPARC processors (T3, T4, T5, M5, and forward) and in the latest ARM processors. The SPARC T4 processor, introduced in 2011, has user-level instructions implementing AES rounds;[13] these instructions are in addition to higher-level encryption commands. The ARMv8-A processor architecture, announced in 2011, including the ARM Cortex-A53 and A57 (but not previous v7 processors like the Cortex A5, 7, 8, 9, 11, 15[citation needed]), also has user-level instructions which implement AES rounds.[14] VIA x86 CPUs and AMD Geode use driver-based accelerated AES handling instead (see Crypto API (Linux)). Some chips support AES hardware acceleration without supporting AES-NI. Programming information is available in the ARM Architecture Reference Manual ARMv8, for the ARMv8-A architecture profile (section A2.3, "The Armv8 Cryptographic Extension").[20] The Marvell Kirkwood was the embedded core of a range of SoCs from Marvell Technology; these SoC CPUs (ARM, mv_cesa in Linux) use driver-based accelerated AES handling (see Crypto API (Linux)). The scalar and vector cryptographic instruction set extensions for the RISC-V architecture were ratified in 2022 and 2023 respectively, allowing RISC-V processors to implement hardware acceleration for AES, GHASH, SHA-256, SHA-512, SM3, and SM4. Before the AES-specific instructions were available on RISC-V, a number of RISC-V chips included integrated AES co-processors. Since Power ISA v2.07, the instructions vcipher and vcipherlast implement one round of AES directly.[30] IBM z9 or later mainframe processors support AES as single-opcode (KM, KMC) AES ECB/CBC instructions via IBM's CryptoExpress hardware.[31] These single-instruction AES versions are therefore easier to use than Intel's NI ones, but may not be extended to implement other algorithms based on AES round functions (such as the Whirlpool and Grøstl hash functions).
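With compiler intrinsics, applying one AESENC round per expanded key yields a full AES-128 block encryption. A minimal sketch assuming the 11 round keys have already been expanded (compile with -maes on GCC/Clang):

    #include <immintrin.h>

    __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11]) {
        block = _mm_xor_si128(block, rk[0]);        /* initial AddRoundKey */
        for (int i = 1; i < 10; i++)
            block = _mm_aesenc_si128(block, rk[i]); /* one full round each */
        return _mm_aesenclast_si128(block, rk[10]); /* last round skips MixColumns */
    }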
In AES-NI Performance Analyzed, Patrick Schmid and Achim Roos found "impressive results from a handful of applications already optimized to take advantage of Intel's AES-NI capability".[34] A performance analysis using the Crypto++ security library showed an increase in AES/GCM throughput from approximately 28.0 cycles per byte to 3.5 cycles per byte, relative to a Pentium 4 with no acceleration.[35][36][failed verification][better source needed] Most modern compilers can emit AES instructions, and much security and cryptography software supports the AES instruction set, including notable core infrastructure. A fringe use of the AES instruction set involves using it on block ciphers with a similarly structured S-box, using an affine transform to convert between the two; SM4, Camellia and ARIA have been accelerated this way using AES-NI.[52][53][54] The AVX-512 Galois Field New Instructions (GFNI) allow implementing these S-boxes in a more direct way.[55] New cryptographic algorithms have been constructed to specifically use parts of the AES algorithm, so that the AES instruction set can be used for speedups. The AEGIS family, which offers authenticated encryption, runs with at least twice the speed of AES.[56] AEGIS is an "additional finalist for high-performance applications" in the CAESAR competition.[57]
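Software built to exploit AES-NI typically probes for the extension at run time and falls back to a portable implementation otherwise. A sketch using a GCC/Clang builtin; the two kernel functions are hypothetical placeholders:

    extern void encrypt_aesni(void);     /* AESENC-based fast path */
    extern void encrypt_portable(void);  /* constant-time software fallback */

    void encrypt_dispatch(void) {
        if (__builtin_cpu_supports("aes"))
            encrypt_aesni();
        else
            encrypt_portable();
    }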
https://en.wikipedia.org/wiki/AES_instruction_set
The FMA instruction set is an extension to the 128- and 256-bit Streaming SIMD Extensions instructions in the x86 microprocessor instruction set to perform fused multiply–add (FMA) operations.[1] There are two variants, FMA3 and FMA4, whose instructions have almost identical functionality but are not compatible. Both contain fused multiply–add instructions for floating-point scalar and SIMD operations, but FMA3 instructions have three operands, while FMA4 ones have four. The FMA operation has the form d = round(a · b + c), where the round function performs a rounding to allow the result to fit within the destination register if there are too many significant bits to fit within the destination. The four-operand form (FMA4) allows a, b, c and d to be four different registers, while the three-operand form (FMA3) requires that d be the same register as a, b or c. The three-operand form makes the code shorter and the hardware implementation slightly simpler, while the four-operand form provides more programming flexibility. See the XOP instruction set for more discussion of compatibility issues between Intel and AMD. The supported commands cover fused multiply–add, multiply–subtract, and their negated and alternating (addsub/subadd) variants. The explicit order of operands is included in each FMA3 mnemonic using the numbers "132", "213", and "231", together with the operand format (packed or scalar) and size (single or double); this results in mnemonics such as VFMADD213PS. The incompatibility between Intel's FMA3 and AMD's FMA4 is due to both companies changing plans without coordinating coding details with each other: AMD changed their plans from FMA3 to FMA4 while Intel changed their plans from FMA4 to FMA3 at almost the same time. Different compilers provide different levels of support for FMA, as in the sketch below.
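The sketch mentioned above: with compiler intrinsics the operand-order digits are hidden, and the compiler chooses a concrete FMA3 form to suit register allocation (compile with -mfma):

    #include <immintrin.h>

    /* d = round(a*b + c) on eight packed floats, with a single rounding step. */
    __m256 fma8(__m256 a, __m256 b, __m256 c) {
        return _mm256_fmadd_ps(a, b, c); /* emitted as VFMADD{132,213,231}PS */
    }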
https://en.wikipedia.org/wiki/FMA3_instruction_set
https://en.wikipedia.org/wiki/FMA4_instruction_set
Advanced Vector Extensions(AVX, also known asGesher New Instructionsand thenSandy Bridge New Instructions) areSIMDextensions to thex86instruction set architectureformicroprocessorsfromIntelandAdvanced Micro Devices(AMD). They were proposed by Intel in March 2008 and first supported by Intel with theSandy Bridge[1]microarchitecture shipping in Q1 2011 and later by AMD with theBulldozer[2]microarchitecture shipping in Q4 2011. AVX provides new features, new instructions, and a new coding scheme. AVX2(also known asHaswell New Instructions) expands most integer commands to 256 bits and introduces new instructions. They were first supported by Intel with theHaswellmicroarchitecture, which shipped in 2013. AVX-512expands AVX to 512-bit support using a newEVEX prefixencoding proposed by Intel in July 2013 and first supported by Intel with theKnights Landingco-processor, which shipped in 2016.[3][4]In conventional processors, AVX-512 was introduced withSkylakeserver and HEDT processors in 2017. AVX uses sixteen YMM registers to perform a single instruction on multiple pieces of data (seeSIMD). Each YMM register can hold and do simultaneous operations (math) on: The width of the SIMD registers is increased from 128 bits to 256 bits, and renamed from XMM0–XMM7 to YMM0–YMM7 (inx86-64mode, from XMM0–XMM15 to YMM0–YMM15). The legacySSEinstructions can still be utilized via theVEX prefixto operate on the lower 128 bits of the YMM registers. AVX introduces a three-operand SIMD instruction format calledVEX coding scheme, where the destination register is distinct from the two source operands. For example, anSSEinstruction using the conventional two-operand forma←a+bcan now use a non-destructive three-operand formc←a+b, preserving both source operands. Originally, AVX's three-operand format was limited to the instructions with SIMD operands (YMM), and did not include instructions with general purpose registers (e.g. EAX). It was later used for coding new instructions on general purpose registers in later extensions, such asBMI. VEX coding is also used for instructions operating on the k0-k7 mask registers that were introduced withAVX-512. Thealignmentrequirement of SIMD memory operands is relaxed.[5]Unlike their non-VEX coded counterparts, most VEX coded vector instructions no longer require their memory operands to be aligned to the vector size. Notably, theVMOVDQAinstruction still requires its memory operand to be aligned. The newVEX coding schemeintroduces a new set of code prefixes that extends theopcodespace, allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The VEX prefix can also be used on the legacy SSE instructions giving them a three-operand form, and making them interact more efficiently with AVX instructions without the need forVZEROUPPERandVZEROALL. The AVX instructions support both 128-bit and 256-bit SIMD. The 128-bit versions can be useful to improve old code without needing to widen the vectorization, and avoid the penalty of going from SSE to AVX, they are also faster on some early AMD implementations of AVX. This mode is sometimes known as AVX-128.[6] These AVX instructions are in addition to the ones that are 256-bit extensions of the legacy 128-bit SSE instructions; most are usable on both 128-bit and 256-bit operands. Issues regarding compatibility between future Intel and AMD processors are discussed underXOP instruction set. 
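A minimal sketch of the three-operand, non-destructive style that VEX coding makes possible, and of the relaxed alignment rules described above (compile with -mavx):

    #include <immintrin.h>

    /* c = a + b; both sources survive, unlike the two-operand legacy SSE form. */
    __m256 add8(__m256 a, __m256 b) {
        return _mm256_add_ps(a, b);
    }

    /* VEX-coded loads tolerate unaligned addresses; loadu makes that explicit. */
    __m256 load8(const float *p) {
        return _mm256_loadu_ps(p);
    }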
AVX adds new register-state through the 256-bit wide YMM register file, so explicitoperating systemsupport is required to properly save and restore AVX's expanded registers betweencontext switches. The following operating system versions support AVX: Advanced Vector Extensions 2 (AVX2), also known asHaswell New Instructions,[24]is an expansion of the AVX instruction set introduced in Intel'sHaswell microarchitecture. AVX2 makes the following additions: Sometimes three-operandfused multiply-accumulate(FMA3) extension is considered part of AVX2, as it was introduced by Intel in the same processor microarchitecture. This is a separate extension using its ownCPUIDflag and is described onits own pageand not below. AVX-512are 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for x86 instruction set architecture proposed byIntelin July 2013.[3] AVX-512 instructions are encoded with the newEVEX prefix. It allows 4 operands, 8 new 64-bitopmask registers, scalar memory mode with automatic broadcast, explicit rounding control, and compressed displacement memoryaddressing mode. The width of the register file is increased to 512 bits and total register count increased to 32 (registers ZMM0-ZMM31) in x86-64 mode. AVX-512 consists of multiple instruction subsets, not all of which are meant to be supported by all processors implementing them. The instruction set consists of the following: Only the core extension AVX-512F (AVX-512 Foundation) is required by all implementations, though all current implementations also support CD (conflict detection). All central processors with AVX-512 also support VL, DQ and BW. The ER, PF, 4VNNIW and 4FMAPS instruction set extensions are currently only implemented in Intel computing coprocessors. The updated SSE/AVX instructions in AVX-512F use the same mnemonics as AVX versions; they can operate on 512-bit ZMM registers, and will also support 128/256 bit XMM/YMM registers (with AVX-512VL) and byte, word, doubleword and quadword integer operands (with AVX-512BW/DQ and VBMI).[26]: 23 [28] ^Note 1: Intel does not officially support AVX-512 family of instructions on theAlder Lakemicroprocessors. In early 2022, Intel began disabling in silicon (fusing off) AVX-512 in Alder Lake microprocessors to prevent customers from enabling AVX-512.[29]In older Alder Lake family CPUs with some legacy combinations of BIOS and microcode revisions, it was possible to execute AVX-512 family instructions when disabling all the efficiency cores which do not contain the silicon for AVX-512.[30][31][32] AVX-VNNI is aVEX-coded variant of theAVX512-VNNIinstruction set extension. Similarly, AVX-IFMA is aVEX-coded variant ofAVX512-IFMA. These extensions provide the same sets of operations as their AVX-512 counterparts, but are limited to 256-bit vectors and do not support any additional features ofEVEXencoding, such as broadcasting, opmask registers or accessing more than 16 vector registers. These extensions allow support of VNNI and IFMA operations even when fullAVX-512support is not implemented in the processor. AVX10, announced in July 2023,[38]is a new, "converged" AVX instruction set. It addresses several issues of AVX-512, in particular that it is split into too many parts[39](20 feature flags). 
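Before turning to AVX10's details, a small sketch of the opmask registers that EVEX encoding adds. Lanes cleared in the mask keep the value of src, i.e. merge masking (compile with -mavx512f):

    #include <immintrin.h>

    __m512 masked_add(__m512 src, __mmask16 k, __m512 a, __m512 b) {
        return _mm512_mask_add_ps(src, k, a, b);
    }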
The initial technical paper also made 512-bit vectors optional to support, but as of revision 3.0 the vector length enumeration is removed and 512-bit vectors are mandatory.[40] AVX10 presents a simplified CPUID interface to test for instruction support, consisting of the AVX10 version number (indicating the set of instructions supported, with later versions always being a superset of an earlier one).[41] For example, AVX10.2 indicates that a CPU is capable of the second version of AVX10.[42] Initial revisions of the AVX10 technical specification also included the maximum supported vector length as part of the ISA extension name, e.g. AVX10.2/256 would mean a second version of AVX10 with vector length up to 256 bits, but later revisions made that unnecessary. The first version of AVX10, notated AVX10.1, does not introduce any instructions or encoding features beyond what is already in AVX-512 (specifically, in Intel Sapphire Rapids: AVX-512F, CD, VL, DQ, BW, IFMA, VBMI, VBMI2, BITALG, VNNI, GFNI, VPOPCNTDQ, VPCLMULQDQ, VAES, BF16, FP16). For CPUs supporting AVX10 and 512-bit vectors, all legacy AVX-512 feature flags remain set so that applications supporting AVX-512 can continue using AVX-512 instructions.[42] AVX10.1 was first released in Intel Granite Rapids[42] (Q3 2024) and AVX10.2 will be available in Diamond Rapids.[43] APX is a new extension. It is not focused on vector computation, but provides RISC-like extensions to the x86-64 architecture by doubling the number of general-purpose registers to 32 and introducing three-operand instruction formats. AVX is only tangentially affected, as APX introduces extended operands.[44][45] Since AVX instructions are wider, they consume more power and generate more heat. Executing heavy AVX instructions at high CPU clock frequencies may affect CPU stability due to excessive voltage droop during load transients. Some Intel processors have provisions to reduce the Turbo Boost frequency limit when such instructions are being executed; this reduction happens even if the CPU has not reached its thermal and power consumption limits. On Skylake and its derivatives, the throttling is divided into three levels.[66][67] The frequency transition can be soft or hard: a hard transition means the frequency is reduced as soon as such an instruction is spotted, while a soft transition means the frequency is reduced only after reaching a threshold number of matching instructions. The limit is per-thread.[66] In Ice Lake, only two levels persist.[68] Rocket Lake processors do not trigger frequency reduction upon executing any kind of vector instruction, regardless of the vector size,[68] though downclocking can still happen for other reasons, such as reaching thermal and power limits. Downclocking means that using AVX in a mixed workload with an Intel processor can incur a frequency penalty; avoiding the use of wide and heavy instructions helps minimize the impact in these cases. AVX-512VL allows for using 256-bit or 128-bit operands in AVX-512 instructions, making it a sensible default for mixed loads.[69] On supported and unlocked variants of processors that down-clock, the clock ratio reduction offsets (typically called AVX and AVX-512 offsets) are adjustable and may be turned off entirely (set to 0x) via Intel's Overclocking/Tuning utility or in the BIOS if supported there.[70]
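Because support and frequency behavior vary across these feature levels, portable code usually selects a path at run time. A sketch using GCC/Clang builtins; the kernel functions are hypothetical placeholders:

    extern void kernel_avx512(void), kernel_avx2(void), kernel_scalar(void);

    void run_best(void) {
        if (__builtin_cpu_supports("avx512f"))
            kernel_avx512();
        else if (__builtin_cpu_supports("avx2"))
            kernel_avx2();
        else
            kernel_scalar();
    }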
https://en.wikipedia.org/wiki/Advanced_Vector_Extensions
wolfSSLis a small, portable, embedded SSL/TLS library targeted for use by embedded systems developers. It is anopen sourceimplementation ofTLS(SSL 3.0, TLS 1.0, 1.1, 1.2, 1.3, andDTLS1.0, 1.2, and 1.3) written in theC programming language. It includes SSL/TLS client libraries and an SSL/TLS server implementation as well as support for multiple APIs, including those defined bySSLandTLS. wolfSSL also includes anOpenSSLcompatibility interface with the most commonly used OpenSSL functions.[4][5] wolfSSL is currently available forMicrosoft Windows,Linux,macOS,Solaris,ESP32,ESP8266,ThreadX,VxWorks,FreeBSD,NetBSD,OpenBSD,embedded Linux,Yocto Project,OpenEmbedded,WinCE,Haiku,OpenWrt,iPhone,Android,Wii, andGameCubethrough DevKitPro support,QNX,MontaVista,Tronvariants,NonStop OS,OpenCL, Micrium'sMicroC/OS-II,FreeRTOS,SafeRTOS,Freescale MQX,Nucleus,TinyOS,TI-RTOS,HP-UX, uTasker, uT-kernel, embOS,INtime,mbed,RIOT, CMSIS-RTOS, FROSTED,Green Hills INTEGRITY, Keil RTX, TOPPERS, PetaLinux,Apache Mynewt, andPikeOS.[6] The genesis of wolfSSL dates to 2004.OpenSSLwas available at the time, and was dual licensed under theOpenSSL Licenseand theSSLeay license.[7]yaSSL, alternatively, was developed and dual-licensed under both a commercial license and the GPL.[8]yaSSL offered a more modern API, commercial style developer support and was complete with an OpenSSL compatibility layer.[4]The first major user of wolfSSL/CyaSSL/yaSSL wasMySQL.[9]Through bundling with MySQL, yaSSL has achieved extremely high distribution volumes in the millions. In February 2019,Daniel Stenberg, the creator ofcURL, was hired by the wolfSSL project to work on cURL.[10] The wolfSSL lightweight SSL library implements the following protocols:[11] Protocol Notes: wolfSSL uses the following cryptography libraries: By default, wolfSSL uses the cryptographic services provided by wolfCrypt.[13]wolfCrypt ProvidesRSA,ECC,DSS,Diffie–Hellman,EDH,NTRU(deprecated and removed),DES,Triple DES,AES(CBC, CTR, CCM, GCM),Camellia,IDEA,ARC4,HC-128,ChaCha20,MD2,MD4,MD5,SHA-1,SHA-2,SHA-3,BLAKE2,RIPEMD-160,Poly1305, Random Number Generation, Large Integer support, base 16/64 encoding/decoding, and post-quantum cryptographic algorithms:ML-KEM(certified under FIPS 203) and ML-DSA (certified under FIPS 204). wolfCrypt also includes support for the recentX25519andEd25519algorithms. wolfCrypt acts as a back-end crypto implementation for several popular software packages and libraries, includingMIT Kerberos[14](where it can be enabled using a build option). CyaSSL+ includesNTRU[15]public key encryption. The addition of NTRU in CyaSSL+ was a result of the partnership between yaSSL and Security Innovation.[15]NTRU works well in mobile and embedded environments due to the reduced bit size needed to provide the same security as other public key systems. In addition, it's not known to be vulnerable to quantum attacks. Several cipher suites utilizing NTRU are available with CyaSSL+ including AES-256, RC4, and HC-128. wolfSSL supports the followingSecure Elements: wolfSSL supports the following hardware technologies: The following tables list wolfSSL's support for using various devices' hardware encryption with various algorithms. 
[Flattened tables elided: the original article tabulates wolfSSL's use of hardware encryption across devices and algorithms (AES with 128/192/256-bit block sizes, DES/3DES, hashing, ECC over 192- to 521-bit curves including NIST P-256, and random number generation). Platforms named in the residue include Intel and AMD x86 (Xeon and Core families), the NXP MCF547X and MCF548X with the Cryptographic Accelerator and Assurance Module (CAAM), Kinetis K50/K60/K70/K80 (ARM Cortex-M4 core), STM32 F1/F2/F4/L1/W series (ARM Cortex-M3/M4), "III/V PX" processors, "Embedded Connectivity" parts, an ARM Cortex-M4F device, a 32-bit ARM Cortex-M0 SoC family, and the ATECC508A secure element (compatible with any MPU or MCU, including Atmel SMART and AVR MCUs).] wolfSSL supports a number of certifications and is dual licensed.
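A minimal wolfSSL TLS 1.3 client sketch over an already-connected TCP socket; error handling is omitted and the CA file path is illustrative:

    #include <wolfssl/options.h>
    #include <wolfssl/ssl.h>

    int tls13_connect(int sockfd) {
        wolfSSL_Init();
        WOLFSSL_CTX *ctx = wolfSSL_CTX_new(wolfTLSv1_3_client_method());
        wolfSSL_CTX_load_verify_locations(ctx, "ca-cert.pem", NULL);
        WOLFSSL *ssl = wolfSSL_new(ctx);
        wolfSSL_set_fd(ssl, sockfd);  /* bind the session to the TCP socket */
        return wolfSSL_connect(ssl) == WOLFSSL_SUCCESS ? 0 : -1;
    }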
https://en.wikipedia.org/wiki/WolfSSL
The F16C[1] (previously/informally known as CVT16) instruction set is an x86 instruction set architecture extension which provides support for converting between half-precision and standard IEEE single-precision floating-point formats. The CVT16 instruction set, announced by AMD on May 1, 2009,[2] is an extension to the 128-bit SSE core instructions in the x86 and AMD64 instruction sets. CVT16 is a revision of part of the SSE5 instruction set proposal announced on August 30, 2007, which is supplemented by the XOP and FMA4 instruction sets. This revision makes the binary coding of the proposed new instructions more compatible with Intel's AVX instruction extensions, while the functionality of the instructions is unchanged. In recent documents, the name F16C is formally used in both Intel and AMD x86-64 architecture specifications. There are variants that convert four floating-point values in an XMM register or eight floating-point values in a YMM register. The two instructions, abbreviations for "vector convert packed half to packed single" and vice versa, are VCVTPH2PS and VCVTPS2PH. The 8-bit immediate argument to VCVTPS2PH selects the rounding mode: values 0–4 select nearest, down, up, truncate, and the mode set in MXCSR.RC, respectively. Support for these instructions is indicated by bit 29 of ECX after CPUID with EAX=1.
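With intrinsics, a sketch of a round trip through half precision (compile with -mf16c); the immediate selects the narrowing rounding mode described above:

    #include <immintrin.h>

    __m128 roundtrip4(__m128 f) {
        __m128i h = _mm_cvtps_ph(f, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
        return _mm_cvtph_ps(h); /* widen the four FP16 values back to single */
    }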
https://en.wikipedia.org/wiki/F16C
Intel MPX(Memory Protection Extensions) are a discontinued set of extensions to thex86instruction set architecture. Withcompiler,runtime libraryandoperating systemsupport, Intel MPX claimed to enhance security tosoftwareby checkingpointer referenceswhose normal compile-time intentions are maliciously exploited at runtime due tobuffer overflows. In practice, there have been too many flaws discovered in the design for it to be useful, and support has been deprecated or removed from most compilers and operating systems.Intelhas listed MPX as removed in 2019 and onward hardware in section 2.5 of its Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 1.[1] Intel MPX introduces new boundsregisters, and newinstruction setextensions that operate on these registers. Additionally, there is a new set of "bound tables" that store bounds beyond what can fit in the bounds registers.[2][3][4][5][6] MPX uses four new 128-bit bounds registers,BND0toBND3, each storing a pair of 64-bit lower bound (LB) and upper bound (UB) values of a buffer. The upper bound is stored inones' complementform, withBNDMK(create bounds) andBNDCU(check upper bound) performing the conversion. The architecture includes two configuration registersBNDCFGx(BNDCFGUin user space andBNDCFGSin kernel mode), and a status registerBNDSTATUS, which provides a memory address and error code in case of an exception.[7][8] Two-level address translation is used for storing bounds in memory. The top layer consists of a Bounds Directory (BD) created on the application startup. Each BD entry is either empty or contains a pointer to a dynamically created Bounds Table (BT), which in turn contains a set of pointer bounds along with the linear addresses of the pointers. The bounds load (BNDLDX) and store (BNDSTX) instructions transparently perform the address translation and access bounds in the proper BT entry.[7][8] Intel MPX was introduced as part of theSkylakemicroarchitecture.[9] IntelGoldmontmicroarchitecture also supports Intel MPX.[9] A study examined a detailed cross-layer dissection of the MPX system stack and comparison with three prominent software-based memory protection mechanisms (AddressSanitizer, SAFECode, and SoftBound) and presents the following conclusions.[8] In addition, a review concluded MPX was not production ready, andAddressSanitizerwas a better option.[8]A review by Kostya Serebryany at Google, AddressSanitizer's developer,[22]had similar findings.[23] Another study[24]exploring the scope ofSpectreandMeltdownsecurity vulnerabilities discovered that Meltdown can be used to bypass Intel MPX, using the Bound Range Exceeded (#BR) hardware exception. According to their publication, the researchers were able to leak information through a Flush+Reload covert channel from an out-of-bound access on an array safeguarded by the MPX system. Their Proof Of Concept has not been publicly disclosed.
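A conceptual C sketch of the check that MPX instrumentation performed, with the bounds pair playing the role of a BND register and abort() standing in for the #BR exception. This is not the MPX API itself, which compilers have since removed:

    #include <stdlib.h>

    struct bounded_ptr { char *p, *lb, *ub; }; /* pointer plus its LB/UB pair */

    char load_checked(struct bounded_ptr bp, long i) {
        /* BNDCL/BNDCU-style checks inserted before every dereference */
        if (bp.p + i < bp.lb || bp.p + i > bp.ub)
            abort();
        return bp.p[i];
    }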
https://en.wikipedia.org/wiki/Memory_Protection_Extensions
AArch64orARM64is the64-bitExecution state of theARM architecture family. It was first introduced with theArmv8-Aarchitecture, and has had many extension updates.[2] An Execution state, in ARMv8-A, ARMv8-R, and ARMv9-A, defines the number ofbitsin the primaryprocessor registers, the availableinstruction sets, and other aspects of the processor's execution environment. In those versions of the Arm architecture, there are two Execution states, the 64-bit AArch64 Execution state and the 32-bit AArch32 Execution state.[3] Extension: Data gathering hint (ARMv8.0-DGH). AArch64 was introduced in ARMv8-A and is included in subsequent versions of ARMv8-A, and in all versions of ARMv9-A. It was also introduced in ARMv8-R as an option, after its introduction in ARMv8-A; it is not included in ARMv8-M. The main opcode for selecting which group an A64 instruction belongs to is at bits 25–28. Announced in October 2011,[5]ARMv8-Arepresents a fundamental change to the ARM architecture. It adds an optional 64-bit Execution state, named "AArch64", and the associated new "A64" instruction set, in addition to a 32-bit Execution state, "AArch32", supporting the 32-bit "A32" (original 32-bit Arm) and "T32" (Thumb/Thumb-2) instruction sets. The latter instruction sets provideuser-spacecompatibility with the existing 32-bit ARMv7-A architecture. ARMv8-A allows 32-bit applications to be executed in a 64-bit OS, and a 32-bit OS to be under the control of a 64-bithypervisor.[1]ARM announced theirCortex-A53andCortex-A57cores on 30 October 2012.[6]Applewas the first to release an ARMv8-A compatible core (Cyclone) in a consumer product (iPhone 5S).AppliedMicro, using anFPGA, was the first to demo ARMv8-A.[7]The first ARMv8-ASoCfromSamsungis the Exynos 5433 used in theGalaxy Note 4, which features two clusters of four Cortex-A57 and Cortex-A53 cores in abig.LITTLEconfiguration; but it will run only in AArch32 mode.[8]ARMv8-A includes the VFPv3/v4 and advanced SIMD (Neon) as standard features in both AArch32 and AArch64. It also adds cryptography instructions supportingAES,SHA-1/SHA-256andfinite field arithmetic.[9] An ARMv8-A processor can support one or both of AArch32 and AArch64; it may support AArch32 and AArch64 at lower Exception levels and only AArch64 at higher Exception levels.[10]For example, the ARM Cortex-A32 supports only AArch32,[11]theARM Cortex-A34supports only AArch64,[12]and theARM Cortex-A72supports both AArch64 and AArch32.[13]An ARMv9-A processor must support AArch64 at all Exception levels, and may support AArch32 at EL0.[10] In December 2014, ARMv8.1-A,[14]an update with "incremental benefits over v8.0", was announced. The enhancements fell into two categories: changes to the instruction set, and changes to the exception model and memory translation. Instruction set enhancements included the following: Enhancements for the exception model and memory translation system included the following: ARMv8.2-A was announced in January 2016.[17]Its enhancements fall into four categories: The Scalable Vector Extension (SVE) is "an optional extension to the ARMv8.2-A architecture and newer" developed specifically for vectorization ofhigh-performance computingscientific workloads.[18][19]The specification allows for variable vector lengths to be implemented from 128 to 2048 bits. The extension is complementary to, and does not replace, theNEONextensions. 
A 512-bit SVE variant has already been implemented on theFugaku supercomputerusing theFujitsu A64FXARM processor; this computer[20]was the fastest supercomputer in the world for two years, from June 2020[21]to May 2022.[22]A more flexible version, 2x256 SVE, was implemented by theAWS Graviton3ARM processor. SVE is supported byGCC, with GCC 8 supporting automatic vectorization[19]and GCC 10 supporting C intrinsics. As of July 2020[update],LLVMandclangsupport C and IR intrinsics. ARM's own fork of LLVM supports auto-vectorization.[23] In October 2016, ARMv8.3-A was announced. Its enhancements fell into six categories:[24] ARMv8.3-A architecture is now supported by (at least) theGCC7 compiler.[29] In November 2017, ARMv8.4-A was announced. Its enhancements fell into these categories:[30][31][32] In September 2018, ARMv8.5-A was announced. Its enhancements fell into these categories:[34][35][36] On 2 August 2019,GoogleannouncedAndroidwould adopt Memory Tagging Extension (MTE).[38] In March 2021, ARMv9-A was announced. ARMv9-A's baseline is all the features from ARMv8.5.[39][40][41]ARMv9-A also adds: In September 2019, ARMv8.6-A was announced. Its enhancements fell into these categories:[34][47] For example, fine-grained traps, Wait-for-Event (WFE) instructions, EnhancedPAC2 and FPAC. The bfloat16 extensions for SVE and Neon are mainly for deep learning use.[49] In September 2020, ARMv8.7-A was announced. Its enhancements fell into these categories:[34][50] In September 2021, ARMv8.8-A and ARMv9.3-A were announced. Their enhancements fell into these categories:[34][52] LLVM15 supports ARMv8.8-A and ARMv9.3-A.[53] In September 2022, ARMv8.9-A and ARMv9.4-A were announced, including:[54] In October 2023, ARMv9.5-A was announced, including:[55] In October 2024, ARMv9.6-A was announced, including:[56] TheARM-Rarchitecture, specifically the Armv8-R profile, is designed to address the needs of real-time applications, where predictable and deterministic behavior is essential. This profile focuses on delivering high performance, reliability, and efficiency in embedded systems where real-time constraints are critical. With the introduction of optional AArch64 support in the Armv8-R profile, the real-time capabilities have been further enhanced. The Cortex-R82[57]is the first processor to implement this extended support, bringing several new features and improvements to the real-time domain.[58]
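Returning to SVE: its central idea is the vector-length-agnostic loop, where the same binary adapts to any hardware vector width from 128 to 2048 bits. A minimal ACLE intrinsics sketch (compile with -march=armv8-a+sve):

    #include <arm_sve.h>
    #include <stdint.h>

    void vadd(float *c, const float *a, const float *b, int64_t n) {
        for (int64_t i = 0; i < n; i += svcntw()) {    /* 32-bit lanes per vector */
            svbool_t pg = svwhilelt_b32_s64(i, n);     /* predicate covers the tail */
            svfloat32_t va = svld1_f32(pg, a + i);
            svfloat32_t vb = svld1_f32(pg, b + i);
            svst1_f32(pg, c + i, svadd_f32_m(pg, va, vb));
        }
    }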
https://en.wikipedia.org/wiki/AArch64#Scalable_Vector_Extension_(SVE)
ARM(stylised in lowercase asarm, formerly an acronym forAdvanced RISC Machinesand originallyAcorn RISC Machine) is a family ofRISCinstruction set architectures(ISAs) forcomputer processors.Arm Holdingsdevelops the ISAs and licenses them to other companies, who build the physical devices that use the instruction set. It also designs and licensescoresthat implement these ISAs. Due to their low costs, low power consumption, and low heat generation, ARM processors are useful for light, portable, battery-powered devices, includingsmartphones,laptops, andtablet computers, as well asembedded systems.[3][4][5]However, ARM processors are also used fordesktopsandservers, includingFugaku, the world's fastestsupercomputerfrom 2020[6]to 2022. With over 230 billion ARM chips produced,[7][8]since at least 2003, and with its dominance increasing every year[update], ARM is the most widely used family of instruction set architectures.[9][4][10][11][12] There have been several generations of the ARM design. The original ARM1 used a32-bitinternal structure but had a 26-bitaddress spacethat limited it to 64 MB ofmain memory. This limitation was removed in the ARMv3 series, which has a 32-bit address space, and several additional generations up to ARMv7 remained 32-bit. Released in 2011, the ARMv8-A architecture added support for a64-bitaddress space and 64-bit arithmetic with its new 32-bit fixed-length instruction set.[13]Arm Holdings has also released a series of additional instruction sets for different roles: the "Thumb" extensions add both 32- and 16-bit instructions for improvedcode density, whileJazelleadded instructions for directly handlingJava bytecode. More recent changes include the addition ofsimultaneous multithreading(SMT) for improved performance orfault tolerance.[14] Acorn Computers' first widely successful design was theBBC Micro, introduced in December 1981. This was a relatively conventional machine based on theMOS Technology 6502CPU but ran at roughly double the performance of competing designs like theApple IIdue to its use of fasterdynamic random-access memory(DRAM). Typical DRAM of the era ran at about 2 MHz; Acorn arranged a deal withHitachifor a supply of faster 4 MHz parts.[15] Machines of the era generally shared memory between the processor and theframebuffer, which allowed the processor to quickly update the contents of the screen without having to perform separateinput/output(I/O). As the timing of the video display is exacting, the video hardware had to have priority access to that memory. Due to a quirk of the 6502's design, the CPU left the memory untouched for half of the time. Thus by running the CPU at 1 MHz, the video system could read data during those down times, taking up the total 2 MHz bandwidth of the RAM. In the BBC Micro, the use of 4 MHz RAM allowed the same technique to be used, but running at twice the speed. This allowed it to outperform any similar machine on the market.[16] 1981 was also the year that theIBM Personal Computerwas introduced. Using the recently introducedIntel 8088, a16-bitCPU compared to the 6502's8-bitdesign, it offered higher overall performance. Its introduction changed the desktop computer market radically: what had been largely a hobby and gaming market emerging over the prior five years began to change to a must-have business tool where the earlier 8-bit designs simply could not compete. 
Even newer32-bitdesigns were also coming to market, such as theMotorola 68000[17]andNational Semiconductor NS32016.[18] Acorn began considering how to compete in this market and produced a new paper design named theAcorn Business Computer. They set themselves the goal of producing a machine with ten times the performance of the BBC Micro, but at the same price.[19]This would outperform and underprice the PC. At the same time, the recent introduction of theApple Lisabrought thegraphical user interface(GUI) concept to a wider audience and suggested the future belonged to machines with a GUI.[20]The Lisa, however, cost $9,995, as it was packed with support chips, large amounts of memory, and ahard disk drive, all very expensive then.[21] The engineers then began studying all of the CPU designs available. Their conclusion about the existing 16-bit designs was that they were a lot more expensive and were still "a bit crap",[22]offering only slightly higher performance than their BBC Micro design. They also almost always demanded a large number of support chips to operate even at that level, which drove up the cost of the computer as a whole. These systems would simply not hit the design goal.[22]They also considered the new 32-bit designs, but these cost even more and had the same issues with support chips.[23]According toSophie Wilson, all the processors tested at that time performed about the same, with about a 4 Mbit/s bandwidth.[24][a] Two key events led Acorn down the path to ARM. One was the publication of a series of reports from theUniversity of California, Berkeley, which suggested that a simple chip design could nevertheless have extremely high performance, much higher than the latest 32-bit designs on the market.[25]The second was a visit bySteve Furberand Sophie Wilson to theWestern Design Center, a company run byBill Menschand his sister, which had become the logical successor to the MOS team and was offering new versions like theWDC 65C02. The Acorn team saw high school students producing chip layouts on Apple II machines, which suggested that anyone could do it.[26][27]In contrast, a visit to another design firm working on modern 32-bit CPU revealed a team with over a dozen members who were already on revision H of their design and yet it still contained bugs.[b]This cemented their late 1983 decision to begin their own CPU design, the Acorn RISC Machine.[28] The originalBerkeley RISCdesigns were in some sense teaching systems, not designed specifically for outright performance. To the RISC's basic register-heavy and load/store concepts, ARM added a number of the well-received design notes of the 6502. Primary among them was the ability to quickly serveinterrupts, which allowed the machines to offer reasonableinput/outputperformance with no added external hardware. To offer interrupts with similar performance as the 6502, the ARM design limited its physicaladdress spaceto 64 MB of total addressable space, requiring 26 bits of address. As instructions were 4 bytes (32 bits) long, and required to be aligned on 4-byte boundaries, the lower 2 bits of an instruction address were always zero. This meant theprogram counter(PC) only needed to be 24 bits, allowing it to be stored along with the eight bitprocessor flagsin a single 32-bit register. That meant that upon receiving an interrupt, the entire machine state could be saved in a single operation, whereas had the PC been a full 32-bit value, it would require separate operations to store the PC and the status flags. 
This decision halved the interrupt overhead.[29] Another change, and among the most important in terms of practical real-world performance, was the modification of theinstruction setto take advantage ofpage mode DRAM. Recently introduced, page mode allowed subsequent accesses of memory to run twice as fast if they were roughly in the same location, or "page", in the DRAM chip. Berkeley's design did not consider page mode and treated all memory equally. The ARM design added special vector-like memory access instructions, the "S-cycles", that could be used to fill or save multiple registers in a single page using page mode. This doubled memory performance when they could be used, and was especially important for graphics performance.[30] The Berkeley RISC designs usedregister windowsto reduce the number of register saves and restores performed inprocedure calls; the ARM design did not adopt this. Wilson developed the instruction set, writing a simulation of the processor inBBC BASICthat ran on a BBC Micro with asecond 6502 processor.[31][32]This convinced Acorn engineers they were on the right track. Wilson approached Acorn's CEO,Hermann Hauser, and requested more resources. Hauser gave his approval and assembled a small team to design the actual processor based on Wilson's ISA.[33]The official Acorn RISC Machine project started in October 1983. Acorn choseVLSI Technologyas the "silicon partner", as they were a source of ROMs and custom chips for Acorn. Acorn provided the design and VLSI provided the layout and production. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985.[3]Known as ARM1, these versions ran at 6 MHz.[34] The first ARM application was as a second processor for the BBC Micro, where it helped in developing simulation software to finish development of the support chips (VIDC, IOC, MEMC), and sped up theCAD softwareused in ARM2 development. Wilson subsequently rewroteBBC BASICin ARMassembly language. The in-depth knowledge gained from designing the instruction set enabled the code to be very dense, making ARM BBC BASIC an extremely good test for any ARM emulator. The result of the simulations on the ARM1 boards led to the late 1986 introduction of the ARM2 design running at 8 MHz, and the early 1987 speed-bumped version at 10 to 12 MHz.[c]A significant change in the underlying architecture was the addition of aBooth multiplier, whereas formerly multiplication had to be carried out in software.[36]Further, a new Fast Interrupt reQuest mode, FIQ for short, allowed registers 8 through 14 to be replaced as part of the interrupt itself. This meant FIQ requests did not have to save out their registers, further speeding interrupts.[37] The first use of the ARM2 were in ARM Evaluations systems, supplied as a second processor for BBC Micro and Master machines, from July 1986,[38]internal Acorn A500 development machines,[39]and theAcorn Archimedespersonal computer models A305, A310, and A440, launched on the 6th June 1987. According to theDhrystonebenchmark, the ARM2 was roughly seven times the performance of a typical 7 MHz 68000-based system like theAmigaorMacintosh SE. It was twice as fast as anIntel 80386running at 16 MHz, and about the same speed as a multi-processorVAX-11/784superminicomputer. 
The only systems that beat it were theSun SPARCandMIPS R2000RISC-basedworkstations.[40]Further, as the CPU was designed for high-speed I/O, it dispensed with many of the support chips seen in these machines; notably, it lacked any dedicateddirect memory access(DMA) controller which was often found on workstations. The graphics system was also simplified based on the same set of underlying assumptions about memory and timing. The result was a dramatically simplified design, offering performance on par with expensive workstations but at a price point similar to contemporary desktops.[40] The ARM2 featured a32-bitdata bus,26-bitaddress space and 27 32-bitregisters, of which 16 are accessible at any one time (including thePC).[41]The ARM2 had atransistor countof just 30,000,[42]compared to Motorola's six-year-older 68000 model with around 68,000. Much of this simplicity came from the lack ofmicrocode, which represents about one-quarter to one-third of the 68000's transistors, and the lack of (like most CPUs of the day) acache. This simplicity enabled the ARM2 to have a low power consumption and simpler thermal packaging by having fewer powered transistors. Nevertheless, ARM2 offered better performance than the contemporary 1987IBM PS/2 Model 50, which initially utilised anIntel 80286, offering 1.8 MIPS @ 10 MHz, and later in 1987, the 2 MIPS of the PS/2 70, with itsIntel 386DX @ 16 MHz.[43][44] A successor, ARM3, was produced with a 4 KB cache, which further improved performance.[45]The address bus was extended to 32 bits in the ARM6, but program code still had to lie within the first 64 MB of memory in 26-bit compatibility mode, due to the reserved bits for the status flags.[46] In the late 1980s,Apple ComputerandVLSI Technologystarted working with Acorn on newer versions of the ARM core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd.,[47][48][49]which became ARM Ltd. when its parent company,Arm Holdingsplc, floated on theLondon Stock ExchangeandNasdaqin 1998.[50]The new Apple–ARM work would eventually evolve into the ARM6, first released in early 1992. Apple used the ARM6-based ARM610 as the basis for theirApple NewtonPDA. In 1994, Acorn used the ARM610 as the maincentral processing unit(CPU) in theirRiscPCcomputers.DEClicensed the ARMv4 architecture and produced theStrongARM.[51]At 233MHz, this CPU drew only one watt (newer versions draw far less). This work was later passed to Intel as part of a lawsuit settlement, and Intel took the opportunity to supplement theiri960line with the StrongARM. Intel later developed its own high performance implementation namedXScale, which it has since sold toMarvell. Transistor count of the ARM core remained essentially the same throughout these changes; ARM2 had 30,000 transistors,[52]while ARM6 grew only to 35,000.[53] In 2005, about 98% of all mobile phones sold used at least one ARM processor.[54]In 2010, producers of chips based on ARM architectures reported shipments of 6.1 billionARM-based processors, representing 95% ofsmartphones, 35% ofdigital televisionsandset-top boxes, and 10% ofmobile computers. 
In 2011, the 32-bit ARM architecture was the most widely used architecture in mobile devices and the most popular 32-bit one in embedded systems.[55]In 2013, 10 billion were produced[56]and "ARM-based chips are found in nearly 60 percent of the world's mobile devices".[57] Arm Holdings's primary business is sellingIP cores, which licensees use to createmicrocontrollers(MCUs),CPUs, andsystems-on-chipsbased on those cores. Theoriginal design manufacturercombines the ARM core with other parts to produce a complete device, typically one that can be built in existingsemiconductor fabrication plants(fabs) at low cost and still deliver substantial performance. The most successful implementation has been theARM7TDMIwith hundreds of millions sold.Atmelhas been a precursor design center in the ARM7TDMI-based embedded system. The ARM architectures used in smartphones, PDAs and othermobile devicesrange from ARMv5 toARMv8-A. In 2009, some manufacturers introduced netbooks based on ARM architecture CPUs, in direct competition with netbooks based onIntel Atom.[58] Arm Holdings offers a variety of licensing terms, varying in cost and deliverables. Arm Holdings provides to all licensees an integratable hardware description of the ARM core as well as complete software development toolset (compiler,debugger,software development kit), and the right to sell manufacturedsiliconcontaining the ARM CPU. SoC packages integrating ARM's core designs include Nvidia Tegra's first three generations, CSR plc's Quatro family, ST-Ericsson's Nova and NovaThor, Silicon Labs's Precision32 MCU, Texas Instruments'sOMAPproducts, Samsung's Hummingbird andExynosproducts, Apple'sA4,A5, andA5X, andNXP'si.MX. Fablesslicensees, who wish to integrate an ARM core into their own chip design, are usually only interested in acquiring a ready-to-manufacture verifiedsemiconductor intellectual property core. For these customers, Arm Holdings delivers agate netlistdescription of the chosen ARM core, along with an abstracted simulation model and test programs to aid design integration and verification. More ambitious customers, including integrated device manufacturers (IDM) and foundry operators, choose to acquire the processor IP insynthesizableRTL(Verilog) form. With the synthesizable RTL, the customer has the ability to perform architectural level optimisations and extensions. This allows the designer to achieve exotic design goals not otherwise possible with an unmodified netlist (high clock speed, very low power consumption, instruction set extensions, etc.). While Arm Holdings does not grant the licensee the right to resell the ARM architecture itself, licensees may freely sell manufactured products such as chip devices, evaluation boards and complete systems.Merchant foundriescan be a special case; not only are they allowed to sell finished silicon containing ARM cores, they generally hold the right to re-manufacture ARM cores for other customers. Arm Holdings prices its IP based on perceived value. Lower performing ARM cores typically have lower licence costs than higher performing cores. In implementation terms, a synthesisable core costs more than a hard macro (blackbox) core. Complicating price matters, a merchant foundry that holds an ARM licence, such as Samsung or Fujitsu, can offer fab customers reduced licensing costs. In exchange for acquiring the ARM core through the foundry's in-house design services, the customer can reduce or eliminate payment of ARM's upfront licence fee. 
Compared to dedicated semiconductor foundries (such asTSMCandUMC) without in-house design services, Fujitsu/Samsung charge two- to three-times more per manufacturedwafer.[citation needed]For low to mid volume applications, a design service foundry offers lower overall pricing (through subsidisation of the licence fee). For high volume mass-produced parts, the long term cost reduction achievable through lower wafer pricing reduces the impact of ARM's NRE (non-recurring engineering) costs, making the dedicated foundry a better choice. Companies that have developed chips with cores designed by Arm includeAmazon.com'sAnnapurna Labssubsidiary,[59]Analog Devices,Apple,AppliedMicro(now:MACOM Technology Solutions[60]),Atmel,Broadcom,Cavium,Cypress Semiconductor,Freescale Semiconductor(nowNXP Semiconductors),Huawei,Intel,[dubious–discuss]Maxim Integrated,Nvidia,NXP,Qualcomm,Renesas,Samsung Electronics,ST Microelectronics,Texas Instruments, andXilinx. In February 2016, ARM announced the Built on ARM Cortex Technology licence, often shortened to Built on Cortex (BoC) licence. This licence allows companies to partner with ARM and make modifications to ARM Cortex designs. These design modifications will not be shared with other companies. These semi-custom core designs also have brand freedom, for exampleKryo 280. Companies that are current licensees of Built on ARM Cortex Technology includeQualcomm.[61] Companies can also obtain an ARMarchitectural licencefor designing their own CPU cores using the ARM instruction sets. These cores must comply fully with the ARM architecture. Companies that have designed cores that implement an ARM architecture include Apple, AppliedMicro (now:Ampere Computing), Broadcom,Cavium(now: Marvell),Digital Equipment Corporation, Intel, Nvidia, Qualcomm, Samsung Electronics,Fujitsu, and NUVIA Inc. (acquired by Qualcomm in 2021). On 16 July 2019, ARM announced ARM Flexible Access. ARM Flexible Access provides unlimited access to included ARMintellectual property(IP) for development. Per product licence fees are required once a customer reaches foundry tapeout or prototyping.[62][63] 75% of ARM's most recent IP over the last two years are included in ARM Flexible Access. As of October 2019: Arm provides a list of vendors who implement ARM cores in their design (application specific standard products (ASSP), microprocessor and microcontrollers).[105] ARM cores are used in a number of products, particularlyPDAsandsmartphones. Somecomputingexamples areMicrosoft'sfirst generation Surface,Surface 2andPocket PCdevices (following2002),Apple'siPads, andAsus'sEee Pad Transformertablet computers, and severalChromebooklaptops. Others include Apple'siPhonesmartphonesandiPodportable media players,Canon PowerShotdigital cameras,Nintendo Switchhybrid, theWiisecurity processor and3DShandheld game consoles, andTomTomturn-by-turnnavigation systems. In 2005, Arm took part in the development ofManchester University's computerSpiNNaker, which used ARM cores to simulate thehuman brain.[106] ARM chips are also used inRaspberry Pi,BeagleBoard,BeagleBone,PandaBoard, and othersingle-board computers, because they are very small, inexpensive, and consume very little power. 
The 32-bit ARM architecture (ARM32), such as ARMv7-A (implementing AArch32; see the section on Armv8-A for more on it), was the most widely used architecture in mobile devices as of 2011[update].[55]

Since 1995, various versions of the ARM Architecture Reference Manual (see § External links) have been the primary source of documentation on the ARM processor architecture and instruction set, distinguishing interfaces that all ARM processors are required to support (such as instruction semantics) from implementation details that may vary. The architecture has evolved over time, and version seven of the architecture, ARMv7, defines three architecture "profiles": the application ("A") profile, the real-time ("R") profile, and the microcontroller ("M") profile. Although the architecture profiles were first defined for ARMv7, ARM subsequently defined the ARMv6-M architecture (used by the Cortex M0/M0+/M1) as a subset of the ARMv7-M profile with fewer instructions.

Except in the M-profile, the 32-bit ARM architecture specifies several CPU modes, depending on the implemented architecture features. At any moment in time, the CPU can be in only one mode, but it can switch modes due to external events (interrupts) or programmatically.[107]

The original (and subsequent) ARM implementation was hardwired without microcode, like the much simpler 8-bit 6502 processor used in prior Acorn microcomputers.

The 32-bit ARM architecture (and the 64-bit architecture for the most part) includes the following RISC features:

To compensate for the simpler design, compared with processors like the Intel 80286 and Motorola 68020, some additional design features were used:

ARM includes integer arithmetic operations for add, subtract, and multiply; some versions of the architecture also support divide operations. ARM supports 32-bit × 32-bit multiplies with either a 32-bit or a 64-bit result, though Cortex-M0/M0+/M1 cores do not support 64-bit results.[112] Some ARM cores also support 16-bit × 16-bit and 32-bit × 16-bit multiplies. The divide instructions are only included in the following ARM architectures:

Registers R0 through R7 are the same across all CPU modes; they are never banked. Registers R8 through R12 are the same across all CPU modes except FIQ mode. FIQ mode has its own distinct R8 through R12 registers. R13 and R14 are banked across all privileged CPU modes except system mode. That is, each mode that can be entered because of an exception has its own R13 and R14. These registers generally contain the stack pointer and the return address from function calls, respectively. Aliases: R13 is also known as the stack pointer (SP), R14 as the link register (LR), and R15 as the program counter (PC).

The Current Program Status Register (CPSR) is a 32-bit register whose fields include the N, Z, C and V condition flags, the Q saturation flag, the GE (SIMD greater-than-or-equal) flags, the E (endianness) bit, the A, I and F interrupt mask bits, the J and T execution state bits, the IT bits for Thumb if-then blocks, and the M[4:0] mode field.[115]

Almost every ARM instruction has a conditional execution feature called predication, which is implemented with a 4-bit condition code selector (the predicate). To allow for unconditional execution, one of the four-bit codes causes the instruction to be always executed. Most other CPU architectures only have condition codes on branch instructions.[116]

Though the predicate takes up four of the 32 bits in an instruction code, and thus cuts down significantly on the encoding bits available for displacements in memory access instructions, it avoids branch instructions when generating code for small if statements. Apart from eliminating the branch instructions themselves, this preserves the fetch/decode/execute pipeline at the cost of only one cycle per skipped instruction.

An algorithm that provides a good example of conditional execution is the subtraction-based Euclidean algorithm for computing the greatest common divisor.
In the C programming language, the algorithm can be written as shown in the first listing below. The same algorithm can be rewritten in a way closer to the target ARM instructions (the second listing) and coded in assembly language (the third listing), which avoids the branches around the then and else clauses. If r0 and r1 are equal, then neither of the SUB instructions will be executed, eliminating the need for a conditional branch to implement the while check at the top of the loop, for example had SUBLE (less than or equal) been used.
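The three listings referred to above are missing from this copy of the text. The following reconstruction is an illustrative sketch consistent with the surrounding prose (which assumes a in r0 and b in r1), not necessarily the article's original listings. First, the plain C version:

    int gcd(int a, int b) {
        while (a != b)       /* loop until the two values converge */
            if (a > b)
                a -= b;      /* a is larger: subtract b from a */
            else
                b -= a;      /* b is larger: subtract a from b */
        return a;
    }

Rewritten in pseudo-C so that each step maps onto one ARM instruction, with the comparison performed once and its results reused:

    loop:
        GT = a > b;            /* one comparison sets all three conditions */
        LT = a < b;
        NE = a != b;
        if (GT) a -= b;        /* subtract only when greater-than */
        if (LT) b -= a;        /* subtract only when less-than */
        if (NE) goto loop;     /* repeat only when the values differ */

And the conditionally executed ARM assembly:

    loop:   CMP   r0, r1      ; sets "GT" if (a > b), "LT" if (a < b), "NE" if (a != b)
            SUBGT r0, r0, r1  ; executed only if "GT": a = a - b
            SUBLT r1, r1, r0  ; executed only if "LT": b = b - a
            BNE   loop        ; branch back only if "NE": the values still differ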
One of the ways that Thumb code provides a more dense encoding is to remove the four-bit condition selector from non-branch instructions.

Another feature of the instruction set is the ability to fold shifts and rotates into the data processing (arithmetic, logical, and register-register move) instructions, so that, for example, a simple C statement can be rendered as a one-word, one-cycle instruction (the original statement and instruction are missing here; an illustrative sketch appears at the end of this passage).[117] This results in the typical ARM program being denser than expected, with fewer memory accesses; thus the pipeline is used more efficiently. The ARM processor also has features rarely seen in other RISC architectures, such as PC-relative addressing (indeed, on the 32-bit[1] ARM the PC is one of its 16 registers) and pre- and post-increment addressing modes.

The ARM instruction set has increased over time. Some early ARM processors (before ARM7TDMI), for example, have no instruction to store a two-byte quantity.

The ARM7 and earlier implementations have a three-stage pipeline, the stages being fetch, decode, and execute. Higher-performance designs, such as the ARM9, have deeper pipelines: Cortex-A8 has thirteen stages. Additional implementation changes for higher performance include a faster adder and more extensive branch prediction logic. The difference between the ARM7DI and ARM7DMI cores, for example, was an improved multiplier; hence the added "M".

The ARM architecture (pre-Armv8) provides a non-intrusive way of extending the instruction set using "coprocessors" that can be addressed using MCR, MRC, MRRC, MCRR, and similar instructions. The coprocessor space is divided logically into 16 coprocessors with numbers from 0 to 15, coprocessor 15 (cp15) being reserved for some typical control functions like managing the caches and MMU operation on processors that have one.

In ARM-based machines, peripheral devices are usually attached to the processor by mapping their physical registers into ARM memory space, into the coprocessor space, or by connecting to another device (a bus) that in turn attaches to the processor. Coprocessor accesses have lower latency, so some peripherals, for example an XScale interrupt controller, are accessible in both ways: through memory and through coprocessors. In other cases, chip designers only integrate hardware using the coprocessor mechanism. For example, an image processing engine might be a small ARM7TDMI core combined with a coprocessor that has specialised operations to support a specific set of HDTV transcoding primitives.

All modern ARM processors include hardware debugging facilities, allowing software debuggers to perform operations such as halting, stepping, and breakpointing of code starting from reset. These facilities are built using JTAG support, though some newer cores optionally support ARM's own two-wire "SWD" protocol. In ARM7TDMI cores, the "D" represented JTAG debug support, and the "I" represented the presence of an "EmbeddedICE" debug module. For the ARM7 and ARM9 core generations, EmbeddedICE over JTAG was a de facto debug standard, though not architecturally guaranteed.

The ARMv7 architecture defines basic debug facilities at an architectural level. These include breakpoints, watchpoints and instruction execution in a "Debug Mode"; similar facilities were also available with EmbeddedICE. Both "halt mode" and "monitor mode" debugging are supported. The actual transport mechanism used to access the debug facilities is not architecturally specified, but implementations generally include JTAG support.

There is a separate ARM "CoreSight" debug architecture, which is not architecturally required by ARMv7 processors. The Debug Access Port (DAP) is an implementation of an ARM Debug Interface.[118] There are two different supported implementations, the Serial Wire JTAG Debug Port (SWJ-DP) and the Serial Wire Debug Port (SW-DP).[119] CMSIS-DAP is a standard interface that describes how various debugging software on a host PC can communicate over USB with firmware running on a hardware debugger, which in turn talks over SWD or JTAG to a CoreSight-enabled ARM Cortex CPU.[120][121][122]

To improve the ARM architecture for digital signal processing and multimedia applications, DSP instructions were added to the instruction set.[123] These are signified by an "E" in the name of the ARMv5TE and ARMv5TEJ architectures. E-variants also imply T, D, M, and I. The new instructions are common in digital signal processor (DSP) architectures. They include variations on signed multiply–accumulate, saturated add and subtract, and count leading zeros; a brief sketch also appears at the end of this passage.

First introduced in 1999, this extension of the core instruction set contrasted with ARM's earlier DSP coprocessor known as Piccolo, which employed a distinct, incompatible instruction set whose execution involved a separate program counter.[124] Piccolo instructions employed a distinct register file of sixteen 32-bit registers, with some instructions combining registers for use as 48-bit accumulators and other instructions addressing 16-bit half-registers. Some instructions were able to operate on two such 16-bit values in parallel. Communication with the Piccolo register file involved load to Piccolo and store from Piccolo coprocessor instructions via two buffers of eight 32-bit entries. Described as reminiscent of other approaches, notably Hitachi's SH-DSP and Motorola's 68356, Piccolo did not employ dedicated local memory and relied on the bandwidth of the ARM core for DSP operand retrieval, impacting concurrent performance.[125] Piccolo's distinct instruction set also proved not to be a "good compiler target".[124]

Introduced in the ARMv6 architecture, SIMD extensions for multimedia were a precursor to Advanced SIMD, also named Neon.[126]

Jazelle DBX (Direct Bytecode eXecution) is a technique that allows Java bytecode to be executed directly in the ARM architecture as a third execution state (and instruction set) alongside the existing ARM and Thumb modes. Support for this state is signified by the "J" in the ARMv5TEJ architecture name, and in the ARM9EJ-S and ARM7EJ-S core names. Support for this state is required starting in ARMv6 (except for the ARMv7-M profile), though newer cores only include a trivial implementation that provides no hardware acceleration.
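Two of the points above lost their examples in extraction; the following sketches are illustrative reconstructions with arbitrary register assignments, not the article's original listings. A shift folded into a data-processing instruction: a C statement such as

    a += (j << 2);

can be rendered as a single ADD whose second operand is shifted on the fly (here with a in r0 and j in r1):

    ADD r0, r0, r1, LSL #2   ; r0 = r0 + (r1 << 2): one word, one cycle

And a small sample of the ARMv5TE "E" DSP operations described above:

    SMLABB r0, r1, r2, r3    ; r0 = (lower 16 bits of r1 * lower 16 bits of r2) + r3
    QADD   r0, r1, r2        ; r0 = saturating add of r1 and r2
    CLZ    r0, r1            ; r0 = number of leading zero bits in r1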
To improve compiled code density, processors since the ARM7TDMI (released in 1994[127]) have featured the Thumb compressed instruction set, which has its own execution state. (The "T" in "TDMI" indicates the Thumb feature.) When in this state, the processor executes the Thumb instruction set, a compact 16-bit encoding for a subset of the ARM instruction set.[128] Most of the Thumb instructions are directly mapped to normal ARM instructions. The space saving comes from making some of the instruction operands implicit and limiting the number of possibilities compared to the ARM instructions executed in the ARM instruction set state.

In Thumb, the 16-bit opcodes have less functionality. For example, only branches can be conditional, and many opcodes are restricted to accessing only half of all of the CPU's general-purpose registers. The shorter opcodes give improved code density overall, even though some operations require extra instructions. In situations where the memory port or bus width is constrained to less than 32 bits, the shorter Thumb opcodes allow increased performance compared with 32-bit ARM code, as less program code may need to be loaded into the processor over the constrained memory bandwidth.

Unlike processor architectures with variable-length (16- or 32-bit) instructions, such as the Cray-1 and Hitachi SuperH, the ARM and Thumb instruction sets exist independently of each other. Embedded hardware, such as the Game Boy Advance, typically has a small amount of RAM accessible with a full 32-bit datapath; the majority is accessed via a 16-bit or narrower secondary datapath. In this situation, it usually makes sense to compile Thumb code and hand-optimise a few of the most CPU-intensive sections using full 32-bit ARM instructions, placing these wider instructions into the memory accessible over the 32-bit bus.

The first processor with a Thumb instruction decoder was the ARM7TDMI. All processors supporting 32-bit instruction sets, starting with the ARM9 and including XScale, have included a Thumb instruction decoder. The Thumb instruction set includes instructions adopted from the Hitachi SuperH (1992), which was licensed by ARM.[129] ARM's smallest processor families (Cortex-M0 and M1) implement only the 16-bit Thumb instruction set for maximum performance in lowest-cost applications. ARM processors that don't support 32-bit addressing also omit Thumb.

Thumb-2 technology was introduced in the ARM1156 core, announced in 2003. Thumb-2 extends the limited 16-bit instruction set of Thumb with additional 32-bit instructions to give the instruction set more breadth, thus producing a variable-length instruction set. A stated aim for Thumb-2 was to achieve code density similar to Thumb with performance similar to the ARM instruction set on 32-bit memory.

Thumb-2 extends the Thumb instruction set with bit-field manipulation, table branches and conditional execution. At the same time, the ARM instruction set was extended to maintain equivalent functionality in both instruction sets. A new "Unified Assembly Language" (UAL) supports generation of either Thumb or ARM instructions from the same source code; versions of Thumb seen on ARMv7 processors are essentially as capable as ARM code (including the ability to write interrupt handlers). This requires a bit of care, and use of a new "IT" (if-then) instruction, which permits up to four successive instructions to execute based on a tested condition, or on its inverse. When compiling into ARM code, this is ignored, but when compiling into Thumb it generates an actual instruction; an example is reconstructed below. All ARMv7 chips support the Thumb instruction set.
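The example referred to above is missing from this copy of the text; the following reconstruction (with illustrative register choices) sets r1 according to whether r0 is zero. Assembled as ARM code, the IT marker produces no instruction, since the conditional moves are directly encodable; assembled as Thumb, it becomes a real 16-bit instruction governing the next two:

    CMP   r0, #0     ; test r0 and set the condition flags
    ITE   EQ         ; if-then-else block: next instruction runs if EQ, the one after if not
    MOVEQ r1, #1     ; executed when r0 == 0
    MOVNE r1, #0     ; executed when r0 != 0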
All chips in the Cortex-A series that support ARMv7, all Cortex-R series chips, and all ARM11 series chips support both the "ARM instruction set state" and the "Thumb instruction set state", while chips in the Cortex-M series support only the Thumb instruction set.[130][131][132]

ThumbEE (erroneously called Thumb-2EE in some ARM documentation), which was marketed as Jazelle RCT[133] (Runtime Compilation Target), was announced in 2005 and deprecated in 2011. It first appeared in the Cortex-A8 processor. ThumbEE is a fourth instruction set state, making small changes to the Thumb-2 extended instruction set. These changes make the instruction set particularly suited to code generated at runtime (e.g. by JIT compilation) in managed execution environments. ThumbEE is a target for languages such as Java, C#, Perl, and Python, and allows JIT compilers to output smaller compiled code without reducing performance.[citation needed]

New features provided by ThumbEE include automatic null-pointer checks on every load and store instruction, an instruction to perform an array bounds check, and special instructions that call a handler. In addition, because it utilises Thumb-2 technology, ThumbEE provides access to registers r8–r15 (where the Jazelle/DBX Java VM state is held).[134] Handlers are small sections of frequently called code, commonly used to implement high-level languages, such as allocating memory for a new object. These changes come from repurposing a handful of opcodes, and from knowing the core is in the new ThumbEE state.

On 23 November 2011, Arm deprecated any use of the ThumbEE instruction set,[135] and Armv8 removes support for ThumbEE.

VFP (Vector Floating Point) technology is a floating-point unit (FPU) coprocessor extension to the ARM architecture[136] (implemented differently in Armv8, where coprocessors are not defined). It provides low-cost single-precision and double-precision floating-point computation fully compliant with the ANSI/IEEE Std 754-1985 Standard for Binary Floating-Point Arithmetic. VFP provides floating-point computation suitable for a wide spectrum of applications such as PDAs, smartphones, voice compression and decompression, three-dimensional graphics and digital audio, printers, set-top boxes, and automotive applications. The VFP architecture was intended to support execution of short "vector mode" instructions, but these operated on each vector element sequentially and thus did not offer the performance of true single instruction, multiple data (SIMD) vector parallelism. This vector mode was therefore removed shortly after its introduction,[137] to be replaced with the much more powerful Advanced SIMD, also named Neon.

Some devices, such as the ARM Cortex-A8, have a cut-down VFPLite module instead of a full VFP module, and require roughly ten times more clock cycles per float operation.[138] Pre-Armv8 architectures implemented floating point/SIMD with the coprocessor interface. Other floating-point and/or SIMD units found in ARM-based processors using the coprocessor interface include FPA, FPE, and iwMMXt, some of which were implemented in software by trapping but could have been implemented in hardware. They provide some of the same functionality as VFP but are not opcode-compatible with it. FPA10 also provides extended precision, but implements correct rounding (required by IEEE 754) only in single precision.[139]

In Debian Linux and derivatives such as Ubuntu and Linux Mint, armhf (ARM hard float) refers to the ARMv7 architecture including the additional VFP3-D16 floating-point hardware extension (and Thumb-2) above.
Software packages and cross-compiler tools use the armhf vs. arm/armel suffixes to differentiate.[141]

The Advanced SIMD extension (also known as Neon or "MPE", Media Processing Engine) is a combined 64- and 128-bit SIMD instruction set that provides standardised acceleration for media and signal processing applications. Neon is included in all Cortex-A8 devices, but is optional in Cortex-A9 devices.[142] Neon can execute MP3 audio decoding on CPUs running at 10 MHz, and can run the GSM adaptive multi-rate (AMR) speech codec at 13 MHz. It features a comprehensive instruction set, separate register files, and independent execution hardware.[143] Neon supports 8-, 16-, 32-, and 64-bit integer and single-precision (32-bit) floating-point data, and SIMD operations for handling audio and video processing as well as graphics and gaming processing. In Neon, the SIMD supports up to 16 operations at the same time (an intrinsics sketch appears at the end of this passage). The Neon hardware shares the same floating-point registers as used in VFP. Devices such as the ARM Cortex-A8 and Cortex-A9 support 128-bit vectors, but execute them 64 bits at a time,[138] whereas newer Cortex-A15 devices can execute 128 bits at a time.[144][145]

A quirk of Neon in Armv7 devices is that it flushes all subnormal numbers to zero, and as a result the GCC compiler will not use it unless -funsafe-math-optimizations, which allows losing denormals, is turned on. "Enhanced" Neon, defined since Armv8, does not have this quirk, but as of GCC 8.2 the same flag is still required to enable Neon instructions.[146] On the other hand, GCC does consider Neon safe on AArch64 for Armv8.

Ne10 is ARM's first project that was open source from its inception (ARM had earlier acquired an existing project, now named Mbed TLS). The Ne10 library is a set of common, useful functions written in both Neon and C (for compatibility). The library was created to allow developers to use Neon optimisations without learning Neon, but it also serves as a set of highly optimised Neon intrinsic and assembly code examples for common DSP, arithmetic, and image processing routines. The source code is available on GitHub.[147]

Helium is the M-Profile Vector Extension (MVE). It adds more than 150 scalar and vector instructions.[148]
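As a minimal sketch of the intrinsics interface (the function name and loop structure below are illustrative assumptions, not from the article), the following C function adds two byte arrays sixteen lanes at a time using the standard arm_neon.h intrinsics; on an ARMv7 target it would typically be built with GCC's -mfpu=neon option:

    #include <stdint.h>
    #include <arm_neon.h>

    /* Add two arrays of bytes, 16 lanes per Neon instruction. */
    void add_bytes(uint8_t *dst, const uint8_t *a, const uint8_t *b, int n)
    {
        int i;
        for (i = 0; i + 16 <= n; i += 16) {
            uint8x16_t va = vld1q_u8(a + i);      /* load 16 bytes */
            uint8x16_t vb = vld1q_u8(b + i);
            vst1q_u8(dst + i, vaddq_u8(va, vb));  /* 16 additions at once */
        }
        for (; i < n; i++)                        /* scalar tail loop */
            dst[i] = a[i] + b[i];
    }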
The Security Extensions, marketed as TrustZone Technology, are in ARMv6KZ and later application profile architectures. They provide a low-cost alternative to adding another dedicated security core to an SoC, by providing two virtual processors backed by hardware-based access control. This lets the application core switch between two states, referred to as worlds (to reduce confusion with other names for capability domains), to prevent information leaking from the more trusted world to the less trusted world.[149] This world switch is generally orthogonal to all other capabilities of the processor; thus each world can operate independently of the other while using the same core. Memory and peripherals are then made aware of the operating world of the core and may use this to provide access control to secrets and code on the device.[150]

Typically, a rich operating system is run in the less trusted world, with smaller security-specialised code in the more trusted world, aiming to reduce the attack surface. Typical applications include DRM functionality for controlling the use of media on ARM-based devices,[151] and preventing any unapproved use of the device.

In practice, since the specific implementation details of proprietary TrustZone implementations have not been publicly disclosed for review, it is unclear what level of assurance is provided for a given threat model, but they are not immune from attack.[152][153] Open Virtualization[154] is an open-source implementation of the trusted-world architecture for TrustZone.

AMD has licensed and incorporated TrustZone technology into its Secure Processor Technology.[155] AMD's APUs include a Cortex-A5 processor for handling secure processing, which is enabled in some, but not all, products.[156][157][158] In fact, the Cortex-A5 TrustZone core had been included in earlier AMD products, but was not enabled due to time constraints.[157] Samsung Knox uses TrustZone for purposes such as detecting modifications to the kernel, storing certificates and attesting keys.[159]

The Security Extension, marketed as TrustZone for Armv8-M Technology, was introduced in the Armv8-M architecture. While containing similar concepts to TrustZone for Armv8-A, it has a different architectural design, as world switching is performed using branch instructions instead of using exceptions.[160] It also supports safe interleaved interrupt handling from either world regardless of the current security state. Together these features provide low-latency calls to the secure world and responsive interrupt handling. ARM provides a reference stack of secure-world code in the form of Trusted Firmware for M and PSA Certified.

As of ARMv6, the ARM architecture supports no-execute page protection, which is referred to as XN, for eXecute Never.[161]

The Large Physical Address Extension (LPAE), which extends the physical address size from 32 bits to 40 bits, was added to the Armv7-A architecture in 2011.[162] The physical address size may be even larger in processors based on the 64-bit (Armv8-A) architecture. For example, it is 44 bits in the Cortex-A75 and Cortex-A65AE.[163]

The Armv8-R and Armv8-M architectures, announced after the Armv8-A architecture, share some features with Armv8-A. However, Armv8-M does not include any 64-bit AArch64 instructions, and Armv8-R originally did not include any AArch64 instructions; those instructions were added to Armv8-R later.

The Armv8.1-M architecture, announced in February 2019, is an enhancement of the Armv8-M architecture. It brings new features including:

Announced in October 2011,[13] Armv8-A (often called simply ARMv8, though Armv8-R also exists) represents a fundamental change to the ARM architecture. It supports two execution states: a 64-bit state named AArch64 and a 32-bit state named AArch32. In the AArch64 state, a new 64-bit A64 instruction set is supported; in the AArch32 state, two instruction sets are supported: the original 32-bit instruction set, named A32, and the 32-bit Thumb-2 instruction set, named T32. AArch32 provides user-space compatibility with Armv7-A.
The processor state can change on an Exception level change; this allows 32-bit applications to be executed in AArch32 state under a 64-bit OS whose kernel executes in AArch64 state, and allows a 32-bit OS to run in AArch32 state under the control of a 64-bit hypervisor running in AArch64 state.[1] ARM announced their Cortex-A53 and Cortex-A57 cores on 30 October 2012.[75] Apple was the first to release an Armv8-A compatible core in a consumer product (the Apple A7 in the iPhone 5S). AppliedMicro, using an FPGA, was the first to demo Armv8-A.[164] The first Armv8-A SoC from Samsung is the Exynos 5433 used in the Galaxy Note 4, which features two clusters of four Cortex-A57 and Cortex-A53 cores in a big.LITTLE configuration; it runs only in AArch32 mode.[165]

To both AArch32 and AArch64, Armv8-A makes VFPv3/v4 and Advanced SIMD (Neon) standard. It also adds cryptography instructions supporting AES, SHA-1/SHA-256 and finite field arithmetic.[166] AArch64 was introduced in Armv8-A and is included in its subsequent revisions; AArch64 is not included in the 32-bit Armv8-R and Armv8-M architectures.

An ARMv8-A processor can support one or both of AArch32 and AArch64; it may support AArch32 and AArch64 at lower Exception levels and only AArch64 at higher Exception levels.[167] For example, the ARM Cortex-A32 supports only AArch32,[168] the ARM Cortex-A34 supports only AArch64,[169] and the ARM Cortex-A72 supports both AArch64 and AArch32.[170] An ARMv9-A processor must support AArch64 at all Exception levels, and may support AArch32 at EL0.[167]

Optional AArch64 support was added to the Armv8-R profile, with the first ARM core implementing it being the Cortex-R82.[171] It adds the A64 instruction set.

Announced in March 2021, the updated Armv9-A architecture places a focus on secure execution and compartmentalisation.[172][173]

Arm SystemReady is a compliance program that helps ensure the interoperability of operating systems on Arm-based hardware, from datacentre servers to industrial edge and IoT devices. The key building blocks of the program are the specifications for minimum hardware and firmware requirements that operating systems and hypervisors can rely upon. These specifications are:[174]

These specifications are co-developed by Arm and its partners in the System Architecture Advisory Committee (SystemArchAC). The Architecture Compliance Suite (ACS) comprises the test tools that help check compliance with these specifications. The Arm SystemReady Requirements Specification documents the requirements of the certifications.[179]

This program was introduced by Arm in 2020 at the first DevSummit event. Its predecessor, Arm ServerReady, was introduced in 2018 at the Arm TechCon event. The program currently includes two bands:

PSA Certified, formerly named Platform Security Architecture, is an architecture-agnostic security framework and evaluation scheme. It is intended to help secure Internet of things (IoT) devices built on system-on-a-chip (SoC) processors.[182] It was introduced to increase security where a full trusted execution environment is too large or complex.[183]

The architecture was introduced by Arm in 2017 at the annual TechCon event.[183][184] Although the scheme is architecture agnostic, it was first implemented on Arm Cortex-M processor cores intended for microcontroller use.
PSA Certified includes freely available threat models and security analyses that demonstrate the process for deciding on security features in common IoT products.[185] It also provides freely downloadable application programming interface (API) packages, architectural specifications, open-source firmware implementations, and related test suites.[186]

Following the development of the architecture security framework in 2017, the PSA Certified assurance scheme launched two years later, at Embedded World in 2019.[187] PSA Certified offers a multi-level security evaluation scheme for chip vendors, OS providers and IoT device makers.[188] The Embedded World presentation introduced chip vendors to Level 1 certification, and a draft of Level 2 protection was presented at the same time.[189] Level 2 certification became a usable standard in February 2020.[190]

The certification was created by the PSA Joint Stakeholders to enable a security-by-design approach for a diverse set of IoT products. PSA Certified specifications are implementation and architecture agnostic; as a result, they can be applied to any chip, software or device.[191][189] The certification also reduces industry fragmentation for IoT product manufacturers and developers.[192]

The first 32-bit ARM-based personal computer, the Acorn Archimedes, was originally intended to run an ambitious operating system called ARX. The machines shipped with RISC OS, which was also used on later ARM-based systems from Acorn and other vendors. Some early Acorn machines were also able to run a Unix port called RISC iX. (Neither is to be confused with RISC/os, a contemporary Unix variant for the MIPS architecture.)

The 32-bit ARM architecture is supported by a large number of embedded and real-time operating systems, including:

The 32-bit ARM architecture was formerly the primary hardware environment for most mobile device operating systems, such as the following; however, as of March 2024, many of these platforms, including Android and Apple iOS, have evolved to the 64-bit ARM architecture:

Formerly, but now discontinued:

The 32-bit ARM architecture is supported by RISC OS and by multiple Unix-like operating systems, including:

Windows applications recompiled for ARM and linked with Winelib, from the Wine project, can run on 32-bit or 64-bit ARM in Linux, FreeBSD, or other compatible operating systems.[222][223] x86 binaries, i.e. those not specially compiled for ARM, have been demonstrated on ARM using QEMU with Wine (on Linux and more),[citation needed] but they do not work at full speed or with the same capability as with Winelib.
https://en.wikipedia.org/wiki/VFP_(instruction_set)
ARM (stylised in lowercase as arm, formerly an acronym for Advanced RISC Machines and originally Acorn RISC Machine) is a family of RISC instruction set architectures (ISAs) for computer processors. Arm Holdings develops the ISAs and licenses them to other companies, who build the physical devices that use the instruction set. It also designs and licenses cores that implement these ISAs.

Due to their low costs, low power consumption, and low heat generation, ARM processors are useful for light, portable, battery-powered devices, including smartphones, laptops, and tablet computers, as well as embedded systems.[3][4][5] However, ARM processors are also used for desktops and servers, including Fugaku, the world's fastest supercomputer from 2020[6] to 2022. With over 230 billion ARM chips produced,[7][8] ARM has been, since at least 2003, the most widely used family of instruction set architectures, and its dominance has increased every year[update].[9][4][10][11][12]

There have been several generations of the ARM design. The original ARM1 used a 32-bit internal structure but had a 26-bit address space that limited it to 64 MB of main memory. This limitation was removed in the ARMv3 series, which has a 32-bit address space, and several additional generations up to ARMv7 remained 32-bit. Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set.[13] Arm Holdings has also released a series of additional instruction sets for different roles: the "Thumb" extensions add both 32- and 16-bit instructions for improved code density, while Jazelle added instructions for directly handling Java bytecode. More recent changes include the addition of simultaneous multithreading (SMT) for improved performance or fault tolerance.[14]

Acorn Computers' first widely successful design was the BBC Micro, introduced in December 1981. This was a relatively conventional machine based on the MOS Technology 6502 CPU, but it ran at roughly double the performance of competing designs like the Apple II due to its use of faster dynamic random-access memory (DRAM). Typical DRAM of the era ran at about 2 MHz; Acorn arranged a deal with Hitachi for a supply of faster 4 MHz parts.[15]

Machines of the era generally shared memory between the processor and the framebuffer, which allowed the processor to quickly update the contents of the screen without having to perform separate input/output (I/O). As the timing of the video display is exacting, the video hardware had to have priority access to that memory. Due to a quirk of the 6502's design, the CPU left the memory untouched for half of the time. Thus, by running the CPU at 1 MHz, the video system could read data during those down times, taking up the total 2 MHz bandwidth of the RAM. In the BBC Micro, the use of 4 MHz RAM allowed the same technique to be used, but running at twice the speed. This allowed it to outperform any similar machine on the market.[16]

1981 was also the year that the IBM Personal Computer was introduced. Using the recently introduced Intel 8088, a 16-bit CPU compared to the 6502's 8-bit design, it offered higher overall performance. Its introduction changed the desktop computer market radically: what had been largely a hobby and gaming market emerging over the prior five years began to change into a must-have business tool, where the earlier 8-bit designs simply could not compete.
Even newer 32-bit designs were also coming to market, such as the Motorola 68000[17] and National Semiconductor NS32016.[18]

Acorn began considering how to compete in this market and produced a new paper design named the Acorn Business Computer. They set themselves the goal of producing a machine with ten times the performance of the BBC Micro, but at the same price.[19] This would outperform and underprice the PC. At the same time, the recent introduction of the Apple Lisa brought the graphical user interface (GUI) concept to a wider audience and suggested the future belonged to machines with a GUI.[20] The Lisa, however, cost $9,995, as it was packed with support chips, large amounts of memory, and a hard disk drive, all very expensive then.[21]

The engineers then began studying all of the CPU designs available. Their conclusion about the existing 16-bit designs was that they were a lot more expensive and were still "a bit crap",[22] offering only slightly higher performance than their BBC Micro design. They also almost always demanded a large number of support chips to operate even at that level, which drove up the cost of the computer as a whole. These systems would simply not hit the design goal.[22] They also considered the new 32-bit designs, but these cost even more and had the same issues with support chips.[23] According to Sophie Wilson, all the processors tested at that time performed about the same, with about a 4 Mbit/s bandwidth.[24][a]

Two key events led Acorn down the path to ARM. One was the publication of a series of reports from the University of California, Berkeley, which suggested that a simple chip design could nevertheless have extremely high performance, much higher than the latest 32-bit designs on the market.[25] The second was a visit by Steve Furber and Sophie Wilson to the Western Design Center, a company run by Bill Mensch and his sister, which had become the logical successor to the MOS team and was offering new versions like the WDC 65C02. The Acorn team saw high school students producing chip layouts on Apple II machines, which suggested that anyone could do it.[26][27] In contrast, a visit to another design firm working on a modern 32-bit CPU revealed a team with over a dozen members who were already on revision H of their design and yet it still contained bugs.[b] This cemented their late 1983 decision to begin their own CPU design, the Acorn RISC Machine.[28]

The original Berkeley RISC designs were in some sense teaching systems, not designed specifically for outright performance. To the RISC's basic register-heavy and load/store concepts, ARM added a number of the well-received design notes of the 6502. Primary among them was the ability to quickly serve interrupts, which allowed the machines to offer reasonable input/output performance with no added external hardware. To offer interrupts with performance similar to the 6502, the ARM design limited its physical address space to 64 MB of total addressable space, requiring 26 bits of address. As instructions were 4 bytes (32 bits) long and required to be aligned on 4-byte boundaries, the lower 2 bits of an instruction address were always zero. This meant the program counter (PC) only needed to be 24 bits, allowing it to be stored along with the eight-bit processor flags in a single 32-bit register. That meant that upon receiving an interrupt, the entire machine state could be saved in a single operation, whereas had the PC been a full 32-bit value, it would have required separate operations to store the PC and the status flags.
This decision halved the interrupt overhead.[29]

Another change, and among the most important in terms of practical real-world performance, was the modification of the instruction set to take advantage of page mode DRAM. Recently introduced, page mode allowed subsequent accesses of memory to run twice as fast if they were roughly in the same location, or "page", in the DRAM chip. Berkeley's design did not consider page mode and treated all memory equally. The ARM design added special vector-like memory access instructions, the "S-cycles", that could be used to fill or save multiple registers in a single page using page mode (a short sketch appears at the end of this passage). This doubled memory performance when they could be used, and was especially important for graphics performance.[30]

The Berkeley RISC designs used register windows to reduce the number of register saves and restores performed in procedure calls; the ARM design did not adopt this.

Wilson developed the instruction set, writing a simulation of the processor in BBC BASIC that ran on a BBC Micro with a second 6502 processor.[31][32] This convinced Acorn engineers they were on the right track. Wilson approached Acorn's CEO, Hermann Hauser, and requested more resources. Hauser gave his approval and assembled a small team to design the actual processor based on Wilson's ISA.[33] The official Acorn RISC Machine project started in October 1983.

Acorn chose VLSI Technology as the "silicon partner", as they were a source of ROMs and custom chips for Acorn. Acorn provided the design and VLSI provided the layout and production. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985.[3] Known as ARM1, these versions ran at 6 MHz.[34]

The first ARM application was as a second processor for the BBC Micro, where it helped in developing simulation software to finish development of the support chips (VIDC, IOC, MEMC) and sped up the CAD software used in ARM2 development. Wilson subsequently rewrote BBC BASIC in ARM assembly language. The in-depth knowledge gained from designing the instruction set enabled the code to be very dense, making ARM BBC BASIC an extremely good test for any ARM emulator.

The result of the simulations on the ARM1 boards led to the late 1986 introduction of the ARM2 design running at 8 MHz, and the early 1987 speed-bumped version at 10 to 12 MHz.[c] A significant change in the underlying architecture was the addition of a Booth multiplier, whereas formerly multiplication had to be carried out in software.[36] Further, a new Fast Interrupt reQuest mode, FIQ for short, allowed registers 8 through 14 to be replaced as part of the interrupt itself. This meant FIQ requests did not have to save out their registers, further speeding interrupts.[37]

The first uses of the ARM2 were in ARM Evaluation Systems, supplied as a second processor for BBC Micro and Master machines from July 1986,[38] in internal Acorn A500 development machines,[39] and in the Acorn Archimedes personal computer models A305, A310, and A440, launched on 6 June 1987.
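As an aside, the multiple-register transfers that generate those sequential "S" cycles can be sketched as follows (register choices are illustrative):

    STMIA r12!, {r0-r3}   ; store r0-r3 to four consecutive words at [r12],
                          ; updating r12; the sequential accesses can be
                          ; served at page-mode speed
    SUB   r12, r12, #16   ; rewind the pointer
    LDMIA r12, {r0-r3}    ; reload all four registers with one instruction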
According to the Dhrystone benchmark, the ARM2 offered roughly seven times the performance of a typical 7 MHz 68000-based system like the Amiga or Macintosh SE. It was twice as fast as an Intel 80386 running at 16 MHz, and about the same speed as a multi-processor VAX-11/784 superminicomputer. The only systems that beat it were the Sun SPARC and MIPS R2000 RISC-based workstations.[40] Further, as the CPU was designed for high-speed I/O, it dispensed with many of the support chips seen in these machines; notably, it lacked any dedicated direct memory access (DMA) controller, which was often found on workstations. The graphics system was also simplified based on the same set of underlying assumptions about memory and timing. The result was a dramatically simplified design, offering performance on par with expensive workstations but at a price point similar to contemporary desktops.[40]

The ARM2 featured a 32-bit data bus, a 26-bit address space and 27 32-bit registers, of which 16 are accessible at any one time (including the PC).[41] The ARM2 had a transistor count of just 30,000,[42] compared to Motorola's six-year-older 68000 model with around 68,000. Much of this simplicity came from the lack of microcode, which represents about one-quarter to one-third of the 68000's transistors, and from the lack (like most CPUs of the day) of a cache. This simplicity enabled the ARM2 to have low power consumption and simpler thermal packaging, by having fewer powered transistors. Nevertheless, ARM2 offered better performance than the contemporary 1987 IBM PS/2 Model 50, which initially utilised an Intel 80286 offering 1.8 MIPS at 10 MHz, and, later in 1987, than the 2 MIPS of the PS/2 70 with its Intel 386DX at 16 MHz.[43][44]

A successor, ARM3, was produced with a 4 KB cache, which further improved performance.[45] The address bus was extended to 32 bits in the ARM6, but program code still had to lie within the first 64 MB of memory in 26-bit compatibility mode, due to the reserved bits for the status flags.[46]

In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the ARM core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd.,[47][48][49] which became ARM Ltd. when its parent company, Arm Holdings plc, floated on the London Stock Exchange and Nasdaq in 1998.[50] The new Apple–ARM work would eventually evolve into the ARM6, first released in early 1992. Apple used the ARM6-based ARM610 as the basis for their Apple Newton PDA.

In 1994, Acorn used the ARM610 as the main central processing unit (CPU) in their RiscPC computers. DEC licensed the ARMv4 architecture and produced the StrongARM.[51] At 233 MHz, this CPU drew only one watt (newer versions draw far less). This work was later passed to Intel as part of a lawsuit settlement, and Intel took the opportunity to supplement their i960 line with the StrongARM. Intel later developed its own high-performance implementation named XScale, which it has since sold to Marvell. Transistor count of the ARM core remained essentially the same throughout these changes; ARM2 had 30,000 transistors,[52] while ARM6 grew only to 35,000.[53]

In 2005, about 98% of all mobile phones sold used at least one ARM processor.[54] In 2010, producers of chips based on ARM architectures reported shipments of 6.1 billion ARM-based processors, representing 95% of smartphones, 35% of digital televisions and set-top boxes, and 10% of mobile computers.
In 2011, the 32-bit ARM architecture was the most widely used architecture in mobile devices and the most popular 32-bit one in embedded systems.[55]In 2013, 10 billion were produced[56]and "ARM-based chips are found in nearly 60 percent of the world's mobile devices".[57] Arm Holdings's primary business is sellingIP cores, which licensees use to createmicrocontrollers(MCUs),CPUs, andsystems-on-chipsbased on those cores. Theoriginal design manufacturercombines the ARM core with other parts to produce a complete device, typically one that can be built in existingsemiconductor fabrication plants(fabs) at low cost and still deliver substantial performance. The most successful implementation has been theARM7TDMIwith hundreds of millions sold.Atmelhas been a precursor design center in the ARM7TDMI-based embedded system. The ARM architectures used in smartphones, PDAs and othermobile devicesrange from ARMv5 toARMv8-A. In 2009, some manufacturers introduced netbooks based on ARM architecture CPUs, in direct competition with netbooks based onIntel Atom.[58] Arm Holdings offers a variety of licensing terms, varying in cost and deliverables. Arm Holdings provides to all licensees an integratable hardware description of the ARM core as well as complete software development toolset (compiler,debugger,software development kit), and the right to sell manufacturedsiliconcontaining the ARM CPU. SoC packages integrating ARM's core designs include Nvidia Tegra's first three generations, CSR plc's Quatro family, ST-Ericsson's Nova and NovaThor, Silicon Labs's Precision32 MCU, Texas Instruments'sOMAPproducts, Samsung's Hummingbird andExynosproducts, Apple'sA4,A5, andA5X, andNXP'si.MX. Fablesslicensees, who wish to integrate an ARM core into their own chip design, are usually only interested in acquiring a ready-to-manufacture verifiedsemiconductor intellectual property core. For these customers, Arm Holdings delivers agate netlistdescription of the chosen ARM core, along with an abstracted simulation model and test programs to aid design integration and verification. More ambitious customers, including integrated device manufacturers (IDM) and foundry operators, choose to acquire the processor IP insynthesizableRTL(Verilog) form. With the synthesizable RTL, the customer has the ability to perform architectural level optimisations and extensions. This allows the designer to achieve exotic design goals not otherwise possible with an unmodified netlist (high clock speed, very low power consumption, instruction set extensions, etc.). While Arm Holdings does not grant the licensee the right to resell the ARM architecture itself, licensees may freely sell manufactured products such as chip devices, evaluation boards and complete systems.Merchant foundriescan be a special case; not only are they allowed to sell finished silicon containing ARM cores, they generally hold the right to re-manufacture ARM cores for other customers. Arm Holdings prices its IP based on perceived value. Lower performing ARM cores typically have lower licence costs than higher performing cores. In implementation terms, a synthesisable core costs more than a hard macro (blackbox) core. Complicating price matters, a merchant foundry that holds an ARM licence, such as Samsung or Fujitsu, can offer fab customers reduced licensing costs. In exchange for acquiring the ARM core through the foundry's in-house design services, the customer can reduce or eliminate payment of ARM's upfront licence fee. 
Compared to dedicated semiconductor foundries (such asTSMCandUMC) without in-house design services, Fujitsu/Samsung charge two- to three-times more per manufacturedwafer.[citation needed]For low to mid volume applications, a design service foundry offers lower overall pricing (through subsidisation of the licence fee). For high volume mass-produced parts, the long term cost reduction achievable through lower wafer pricing reduces the impact of ARM's NRE (non-recurring engineering) costs, making the dedicated foundry a better choice. Companies that have developed chips with cores designed by Arm includeAmazon.com'sAnnapurna Labssubsidiary,[59]Analog Devices,Apple,AppliedMicro(now:MACOM Technology Solutions[60]),Atmel,Broadcom,Cavium,Cypress Semiconductor,Freescale Semiconductor(nowNXP Semiconductors),Huawei,Intel,[dubious–discuss]Maxim Integrated,Nvidia,NXP,Qualcomm,Renesas,Samsung Electronics,ST Microelectronics,Texas Instruments, andXilinx. In February 2016, ARM announced the Built on ARM Cortex Technology licence, often shortened to Built on Cortex (BoC) licence. This licence allows companies to partner with ARM and make modifications to ARM Cortex designs. These design modifications will not be shared with other companies. These semi-custom core designs also have brand freedom, for exampleKryo 280. Companies that are current licensees of Built on ARM Cortex Technology includeQualcomm.[61] Companies can also obtain an ARMarchitectural licencefor designing their own CPU cores using the ARM instruction sets. These cores must comply fully with the ARM architecture. Companies that have designed cores that implement an ARM architecture include Apple, AppliedMicro (now:Ampere Computing), Broadcom,Cavium(now: Marvell),Digital Equipment Corporation, Intel, Nvidia, Qualcomm, Samsung Electronics,Fujitsu, and NUVIA Inc. (acquired by Qualcomm in 2021). On 16 July 2019, ARM announced ARM Flexible Access. ARM Flexible Access provides unlimited access to included ARMintellectual property(IP) for development. Per product licence fees are required once a customer reaches foundry tapeout or prototyping.[62][63] 75% of ARM's most recent IP over the last two years are included in ARM Flexible Access. As of October 2019: Arm provides a list of vendors who implement ARM cores in their design (application specific standard products (ASSP), microprocessor and microcontrollers).[105] ARM cores are used in a number of products, particularlyPDAsandsmartphones. Somecomputingexamples areMicrosoft'sfirst generation Surface,Surface 2andPocket PCdevices (following2002),Apple'siPads, andAsus'sEee Pad Transformertablet computers, and severalChromebooklaptops. Others include Apple'siPhonesmartphonesandiPodportable media players,Canon PowerShotdigital cameras,Nintendo Switchhybrid, theWiisecurity processor and3DShandheld game consoles, andTomTomturn-by-turnnavigation systems. In 2005, Arm took part in the development ofManchester University's computerSpiNNaker, which used ARM cores to simulate thehuman brain.[106] ARM chips are also used inRaspberry Pi,BeagleBoard,BeagleBone,PandaBoard, and othersingle-board computers, because they are very small, inexpensive, and consume very little power. 
The 32-bit ARM architecture (ARM32), such asARMv7-A(implementing AArch32; seesection on Armv8-Afor more on it), was the most widely used architecture in mobile devices as of 2011[update].[55] Since 1995, various versions of theARM Architecture Reference Manual(see§ External links) have been the primary source of documentation on the ARM processor architecture and instruction set, distinguishing interfaces that all ARM processors are required to support (such as instruction semantics) from implementation details that may vary. The architecture has evolved over time, and version seven of the architecture, ARMv7, defines three architecture "profiles": Although the architecture profiles were first defined for ARMv7, ARM subsequently defined the ARMv6-M architecture (used by the CortexM0/M0+/M1) as a subset of the ARMv7-M profile with fewer instructions. Except in the M-profile, the 32-bit ARM architecture specifies several CPU modes, depending on the implemented architecture features. At any moment in time, the CPU can be in only one mode, but it can switch modes due to external events (interrupts) or programmatically.[107] The original (and subsequent) ARM implementation was hardwired withoutmicrocode, like the much simpler8-bit6502processor used in prior Acorn microcomputers. The 32-bit ARM architecture (and the 64-bit architecture for the most part) includes the following RISC features: To compensate for the simpler design, compared with processors like the Intel 80286 andMotorola 68020, some additional design features were used: ARM includes integer arithmetic operations for add, subtract, and multiply; some versions of the architecture also support divide operations. ARM supports 32-bit × 32-bit multiplies with either a 32-bit result or 64-bit result, though Cortex-M0 / M0+ / M1 cores do not support 64-bit results.[112]Some ARM cores also support 16-bit × 16-bit and 32-bit × 16-bit multiplies. The divide instructions are only included in the following ARM architectures: Registers R0 through R7 are the same across all CPU modes; they are never banked. Registers R8 through R12 are the same across all CPU modes except FIQ mode. FIQ mode has its own distinct R8 through R12 registers. R13 and R14 are banked across all privileged CPU modes except system mode. That is, each mode that can be entered because of an exception has its own R13 and R14. These registers generally contain the stack pointer and the return address from function calls, respectively. Aliases: The Current Program Status Register (CPSR) has the following 32 bits.[115] Almost every ARM instruction has a conditional execution feature calledpredication, which is implemented with a 4-bit condition code selector (the predicate). To allow for unconditional execution, one of the four-bit codes causes the instruction to be always executed. Most other CPU architectures only have condition codes on branch instructions.[116] Though the predicate takes up four of the 32 bits in an instruction code, and thus cuts down significantly on the encoding bits available for displacements in memory access instructions, it avoids branch instructions when generating code for smallifstatements. Apart from eliminating the branch instructions themselves, this preserves the fetch/decode/execute pipeline at the cost of only one cycle per skipped instruction. An algorithm that provides a good example of conditional execution is the subtraction-basedEuclidean algorithmfor computing thegreatest common divisor. 
In theC programming language, the algorithm can be written as: The same algorithm can be rewritten in a way closer to target ARMinstructionsas: and coded inassembly languageas: which avoids the branches around thethenandelseclauses. Ifr0andr1are equal then neither of theSUBinstructions will be executed, eliminating the need for a conditional branch to implement thewhilecheck at the top of the loop, for example hadSUBLE(less than or equal) been used. One of the ways that Thumb code provides a more dense encoding is to remove the four-bit selector from non-branch instructions. Another feature of theinstruction setis the ability to fold shifts and rotates into thedata processing(arithmetic, logical, and register-register move) instructions, so that, for example, the statement inClanguage: could be rendered as a one-word, one-cycle instruction:[117] This results in the typical ARM program being denser than expected with fewer memory accesses; thus the pipeline is used more efficiently. The ARM processor also has features rarely seen in other RISC architectures, such asPC-relative addressing (indeed, on the 32-bit[1]ARM thePCis one of its 16 registers) and pre- and post-increment addressing modes. The ARM instruction set has increased over time. Some early ARM processors (before ARM7TDMI), for example, have no instruction to store a two-byte quantity. The ARM7 and earlier implementations have a three-stagepipeline; the stages being fetch, decode, and execute. Higher-performance designs, such as the ARM9, have deeper pipelines: Cortex-A8 has thirteen stages. Additional implementation changes for higher performance include a fasteradderand more extensivebranch predictionlogic. The difference between the ARM7DI and ARM7DMI cores, for example, was an improved multiplier; hence the added "M". The ARM architecture (pre-Armv8) provides a non-intrusive way of extending the instruction set using "coprocessors" that can be addressed using MCR, MRC, MRRC, MCRR, and similar instructions. The coprocessor space is divided logically into 16 coprocessors with numbers from 0 to 15, coprocessor 15 (cp15) being reserved for some typical control functions like managing the caches andMMUoperation on processors that have one. In ARM-based machines, peripheral devices are usually attached to the processor by mapping their physical registers into ARM memory space, into the coprocessor space, or by connecting to another device (a bus) that in turn attaches to the processor. Coprocessor accesses have lower latency, so some peripherals—for example, an XScale interrupt controller—are accessible in both ways: through memory and through coprocessors. In other cases, chip designers only integrate hardware using the coprocessor mechanism. For example, an image processing engine might be a small ARM7TDMI core combined with a coprocessor that has specialised operations to support a specific set of HDTV transcoding primitives. All modern ARM processors include hardware debugging facilities, allowing software debuggers to perform operations such as halting, stepping, and breakpointing of code starting from reset. These facilities are built usingJTAGsupport, though some newer cores optionally support ARM's own two-wire "SWD" protocol. In ARM7TDMI cores, the "D" represented JTAG debug support, and the "I" represented presence of an "EmbeddedICE" debug module. For ARM7 and ARM9 core generations, EmbeddedICE over JTAG was a de facto debug standard, though not architecturally guaranteed. 
The ARMv7 architecture defines basic debug facilities at an architectural level. These include breakpoints, watchpoints and instruction execution in a "Debug Mode"; similar facilities were also available with EmbeddedICE. Both "halt mode" and "monitor" mode debugging are supported. The actual transport mechanism used to access the debug facilities is not architecturally specified, but implementations generally include JTAG support. There is a separate ARM "CoreSight" debug architecture, which is not architecturally required by ARMv7 processors. The Debug Access Port (DAP) is an implementation of an ARM Debug Interface.[118]There are two different supported implementations, the Serial WireJTAGDebug Port (SWJ-DP) and the Serial Wire Debug Port (SW-DP).[119]CMSIS-DAP is a standard interface that describes how various debugging software on a host PC can communicate over USB to firmware running on a hardware debugger, which in turn talks over SWD or JTAG to a CoreSight-enabled ARM Cortex CPU.[120][121][122] To improve the ARM architecture fordigital signal processingand multimedia applications, DSP instructions were added to the instruction set.[123]These are signified by an "E" in the name of the ARMv5TE and ARMv5TEJ architectures. E-variants also imply T, D, M, and I. The new instructions are common indigital signal processor(DSP) architectures. They include variations on signedmultiply–accumulate,saturated add and subtract, andcount leading zeros. First introduced in 1999, this extension of the core instruction set contrasted with ARM's earlier DSP coprocessor known as Piccolo, which employed a distinct, incompatible instruction set whose execution involved a separate program counter.[124]Piccolo instructions employed a distinct register file of sixteen 32-bit registers, with some instructions combining registers for use as 48-bit accumulators and other instructions addressing 16-bit half-registers. Some instructions were able to operate on two such 16-bit values in parallel. Communication with the Piccolo register file involvedload to Piccoloandstore from Piccolocoprocessor instructions via two buffers of eight 32-bit entries. Described as reminiscent of other approaches, notably Hitachi's SH-DSP and Motorola's 68356, Piccolo did not employ dedicated local memory and relied on the bandwidth of the ARM core for DSP operand retrieval, impacting concurrent performance.[125]Piccolo's distinct instruction set also proved not to be a "good compiler target".[124] Introduced in the ARMv6 architecture, this was a precursor to Advanced SIMD, also namedNeon.[126] Jazelle DBX (Direct Bytecode eXecution) is a technique that allowsJava bytecodeto be executed directly in the ARM architecture as a third execution state (and instruction set) alongside the existing ARM and Thumb-mode. Support for this state is signified by the "J" in the ARMv5TEJ architecture, and in ARM9EJ-S and ARM7EJ-S core names. Support for this state is required starting in ARMv6 (except for the ARMv7-M profile), though newer cores only include a trivial implementation that provides no hardware acceleration. To improve compiled code density, processors since the ARM7TDMI (released in 1994[127]) have featured theThumbcompressed instruction set, which have their own state. (The "T" in "TDMI" indicates the Thumb feature.) When in this state, the processor executes the Thumb instruction set, a compact 16-bit encoding for a subset of the ARM instruction set.[128]Most of the Thumb instructions are directly mapped to normal ARM instructions. 
Introduced in the ARMv6 architecture, a set of SIMD instructions was a precursor to Advanced SIMD, also named Neon.[126]

Jazelle DBX (Direct Bytecode eXecution) is a technique that allows Java bytecode to be executed directly in the ARM architecture as a third execution state (and instruction set) alongside the existing ARM and Thumb modes. Support for this state is signified by the "J" in the ARMv5TEJ architecture name, and in the ARM9EJ-S and ARM7EJ-S core names. Support for this state is required starting in ARMv6 (except for the ARMv7-M profile), though newer cores only include a trivial implementation that provides no hardware acceleration.

To improve compiled code density, processors since the ARM7TDMI (released in 1994[127]) have featured the Thumb compressed instruction set, which has its own state. (The "T" in "TDMI" indicates the Thumb feature.) When in this state, the processor executes the Thumb instruction set, a compact 16-bit encoding for a subset of the ARM instruction set.[128] Most of the Thumb instructions are directly mapped to normal ARM instructions. The space saving comes from making some of the instruction operands implicit and limiting the number of possibilities compared to the ARM instructions executed in the ARM instruction set state.

In Thumb, the 16-bit opcodes have less functionality. For example, only branches can be conditional, and many opcodes are restricted to accessing only half of the CPU's general-purpose registers. The shorter opcodes give improved code density overall, even though some operations require extra instructions. In situations where the memory port or bus width is constrained to less than 32 bits, the shorter Thumb opcodes allow increased performance compared with 32-bit ARM code, as less program code may need to be loaded into the processor over the constrained memory bandwidth.

Unlike processor architectures with variable-length (16- or 32-bit) instructions, such as the Cray-1 and Hitachi SuperH, the ARM and Thumb instruction sets exist independently of each other. Embedded hardware, such as the Game Boy Advance, typically has a small amount of RAM accessible with a full 32-bit datapath; the majority is accessed via a 16-bit or narrower secondary datapath. In this situation, it usually makes sense to compile Thumb code and hand-optimise a few of the most CPU-intensive sections using full 32-bit ARM instructions, placing these wider instructions into the memory accessible over the 32-bit bus.

The first processor with a Thumb instruction decoder was the ARM7TDMI. All processors supporting 32-bit instruction sets, starting with the ARM9 and including XScale, have included a Thumb instruction decoder. Thumb includes instructions adopted from the Hitachi SuperH (1992), which was licensed by ARM.[129] ARM's smallest processor families (Cortex-M0 and M1) implement only the 16-bit Thumb instruction set for maximum performance in lowest-cost applications. ARM processors that don't support 32-bit addressing also omit Thumb.

Thumb-2 technology was introduced in the ARM1156 core, announced in 2003. Thumb-2 extends the limited 16-bit instruction set of Thumb with additional 32-bit instructions to give the instruction set more breadth, producing a variable-length instruction set. A stated aim for Thumb-2 was to achieve code density similar to Thumb with performance similar to the ARM instruction set on 32-bit memory. Thumb-2 extends the Thumb instruction set with bit-field manipulation, table branches and conditional execution. At the same time, the ARM instruction set was extended to maintain equivalent functionality in both instruction sets. A new "Unified Assembly Language" (UAL) supports generation of either Thumb or ARM instructions from the same source code; versions of Thumb seen on ARMv7 processors are essentially as capable as ARM code (including the ability to write interrupt handlers). This requires a bit of care, and use of a new "IT" (if-then) instruction, which permits up to four successive instructions to execute based on a tested condition, or on its inverse. When compiling into ARM code the IT instruction is ignored, but when compiling into Thumb it generates an actual instruction. For example, a Thumb-2 compiler may predicate a short conditional sequence, as in the hedged sketch below.
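The C function below contains a short conditional that a Thumb-2 compiler may encode with an IT block instead of a branch; the assembly in the comment is illustrative output under that assumption, not a guaranteed encoding:

    /* Absolute value; small enough that predication can beat branching. */
    int abs_int(int x)
    {
        /* Plausible Thumb-2 encoding:
         *     cmp   r0, #0
         *     it    lt            @ predicate the next instruction on LT
         *     rsblt r0, r0, #0    @ r0 = 0 - r0, executed only if x < 0
         *     bx    lr
         */
        return x < 0 ? -x : x;
    }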
All ARMv7 chips support the Thumb instruction set. All chips in the Cortex-A series that support ARMv7, all Cortex-R series chips, and all ARM11 series chips support both the "ARM instruction set state" and the "Thumb instruction set state", while chips in the Cortex-M series support only the Thumb instruction set.[130][131][132]

ThumbEE (erroneously called Thumb-2EE in some ARM documentation), marketed as Jazelle RCT[133] (Runtime Compilation Target), was announced in 2005 and deprecated in 2011. It first appeared in the Cortex-A8 processor. ThumbEE is a fourth instruction set state, making small changes to the Thumb-2 extended instruction set. These changes make the instruction set particularly suited to code generated at runtime (e.g. by JIT compilation) in managed execution environments. ThumbEE is a target for languages such as Java, C#, Perl, and Python, and allows JIT compilers to output smaller compiled code without reducing performance.[citation needed]

New features provided by ThumbEE include automatic null pointer checks on every load and store instruction, an instruction to perform an array bounds check, and special instructions that call a handler. In addition, because it utilises Thumb-2 technology, ThumbEE provides access to registers r8–r15 (where the Jazelle/DBX Java VM state is held).[134] Handlers are small sections of frequently called code, commonly used to implement high-level languages, such as allocating memory for a new object. These changes come from repurposing a handful of opcodes and from knowing that the core is in the new ThumbEE state. On 23 November 2011, Arm deprecated any use of the ThumbEE instruction set,[135] and Armv8 removes support for ThumbEE.

VFP (Vector Floating Point) technology is a floating-point unit (FPU) coprocessor extension to the ARM architecture[136] (implemented differently in Armv8, where coprocessors are not defined). It provides low-cost single-precision and double-precision floating-point computation fully compliant with the ANSI/IEEE Std 754-1985 Standard for Binary Floating-Point Arithmetic. VFP provides floating-point computation suitable for a wide spectrum of applications such as PDAs, smartphones, voice compression and decompression, three-dimensional graphics and digital audio, printers, set-top boxes, and automotive applications. The VFP architecture was intended to support execution of short "vector mode" instructions, but these operated on each vector element sequentially and thus did not offer the performance of true single instruction, multiple data (SIMD) vector parallelism. This vector mode was therefore removed shortly after its introduction,[137] to be replaced with the much more powerful Advanced SIMD, also named Neon.

Some devices, such as the ARM Cortex-A8, have a cut-down VFPLite module instead of a full VFP module, and require roughly ten times more clock cycles per float operation.[138] Pre-Armv8 architectures implemented floating-point/SIMD with the coprocessor interface. Other floating-point and/or SIMD units found in ARM-based processors using the coprocessor interface include FPA, FPE, and iwMMXt, some of which were implemented in software by trapping but could have been implemented in hardware. They provide some of the same functionality as VFP but are not opcode-compatible with it. FPA10 also provides extended precision, but implements correct rounding (required by IEEE 754) only in single precision.[139]

In Debian Linux and derivatives such as Ubuntu and Linux Mint, armhf (ARM hard float) refers to the ARMv7 architecture including the additional VFP3-D16 floating-point hardware extension (and Thumb-2) described above.
Software packages and cross-compiler tools use the armhf vs. arm/armel suffixes to differentiate.[141]

The Advanced SIMD extension (also known as Neon or "MPE", Media Processing Engine) is a combined 64- and 128-bit SIMD instruction set that provides standardised acceleration for media and signal-processing applications. Neon is included in all Cortex-A8 devices, but is optional in Cortex-A9 devices.[142] Neon can execute MP3 audio decoding on CPUs running at 10 MHz, and can run the GSM adaptive multi-rate (AMR) speech codec at 13 MHz. It features a comprehensive instruction set, separate register files, and independent execution hardware.[143] Neon supports 8-, 16-, 32-, and 64-bit integer and single-precision (32-bit) floating-point data, and SIMD operations for handling audio and video processing as well as graphics and gaming workloads. In Neon, the SIMD supports up to 16 operations at the same time. The Neon hardware shares the same floating-point registers used by VFP. Devices such as the ARM Cortex-A8 and Cortex-A9 support 128-bit vectors but execute them 64 bits at a time,[138] whereas newer Cortex-A15 devices can execute 128 bits at a time.[144][145] (A minimal intrinsics sketch appears at the end of this passage.)

A quirk of Neon in Armv7 devices is that it flushes all subnormal numbers to zero, and as a result the GCC compiler will not use it unless -funsafe-math-optimizations, which allows losing denormals, is turned on. "Enhanced" Neon, defined since Armv8, does not have this quirk, but as of GCC 8.2 the same flag is still required to enable Neon instructions.[146] On the other hand, GCC does consider Neon safe on AArch64 for Armv8.

Project Ne10 is ARM's first open-source project from its inception (ARM also acquired an older project, now named Mbed TLS). The Ne10 library is a set of common, useful functions written in both Neon and C (for compatibility). The library was created to allow developers to use Neon optimisations without learning Neon, but it also serves as a set of highly optimised Neon intrinsic and assembly code examples for common DSP, arithmetic, and image processing routines. The source code is available on GitHub.[147]
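As a concrete illustration of the Neon extension described above, here is a minimal sketch using the arm_neon.h intrinsics (compiled with, e.g., -mfpu=neon on an Armv7 GCC); the function name madd4 is ours, chosen for illustration:

    #include <arm_neon.h>

    /* Element-wise multiply-accumulate on four floats at once:
     * acc[i] += a[i] * b[i] for i = 0..3, in a single Neon operation. */
    void madd4(float *acc, const float *a, const float *b)
    {
        float32x4_t va   = vld1q_f32(a);    /* load 4 floats from a */
        float32x4_t vb   = vld1q_f32(b);    /* load 4 floats from b */
        float32x4_t vacc = vld1q_f32(acc);  /* load the accumulator */
        vacc = vmlaq_f32(vacc, va, vb);     /* vacc += va * vb, per lane */
        vst1q_f32(acc, vacc);               /* store the result back */
    }

On Armv7, the per-lane multiplies above flush subnormal inputs to zero, which is why GCC will not auto-vectorise equivalent scalar code without -funsafe-math-optimizations.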
Helium is the M-Profile Vector Extension (MVE). It adds more than 150 scalar and vector instructions.[148]

The Security Extensions, marketed as TrustZone Technology, appear in ARMv6KZ and later application profile architectures. They provide a low-cost alternative to adding a dedicated security core to an SoC, by providing two virtual processors backed by hardware-based access control. This lets the application core switch between two states, referred to as worlds (to reduce confusion with other names for capability domains), to prevent information leaking from the more trusted world to the less trusted world.[149] This world switch is generally orthogonal to all other capabilities of the processor; thus each world can operate independently of the other while using the same core. Memory and peripherals are made aware of the operating world of the core and may use this to provide access control to secrets and code on the device.[150]

Typically, a rich operating system is run in the less trusted world, with smaller security-specialised code in the more trusted world, aiming to reduce the attack surface. Typical applications include DRM functionality for controlling the use of media on ARM-based devices,[151] and preventing any unapproved use of the device.

In practice, since the specific implementation details of proprietary TrustZone implementations have not been publicly disclosed for review, it is unclear what level of assurance is provided for a given threat model, but they are not immune from attack.[152][153]

Open Virtualization[154] is an open-source implementation of the trusted world architecture for TrustZone.

AMD has licensed and incorporated TrustZone technology into its Secure Processor Technology.[155] AMD's APUs include a Cortex-A5 processor for handling secure processing, which is enabled in some, but not all, products.[156][157][158] In fact, the Cortex-A5 TrustZone core had been included in earlier AMD products, but was not enabled due to time constraints.[157]

Samsung Knox uses TrustZone for purposes such as detecting modifications to the kernel, storing certificates and attesting keys.[159]

The Security Extension, marketed as TrustZone for Armv8-M Technology, was introduced in the Armv8-M architecture. While containing similar concepts to TrustZone for Armv8-A, it has a different architectural design, as world switching is performed using branch instructions instead of exceptions.[160] It also supports safe interleaved interrupt handling from either world regardless of the current security state. Together these features provide low-latency calls to the secure world and responsive interrupt handling. ARM provides a reference stack of secure world code in the form of Trusted Firmware for M and PSA Certified.

As of ARMv6, the ARM architecture supports no-execute page protection, referred to as XN, for eXecute Never.[161]

The Large Physical Address Extension (LPAE), which extends the physical address size from 32 bits to 40 bits (from 4 GiB to 1 TiB of addressable physical memory), was added to the Armv7-A architecture in 2011.[162] The physical address size may be even larger in processors based on the 64-bit (Armv8-A) architecture; for example, it is 44 bits in the Cortex-A75 and Cortex-A65AE.[163]

The Armv8-R and Armv8-M architectures, announced after the Armv8-A architecture, share some features with Armv8-A. However, Armv8-M does not include any 64-bit AArch64 instructions, and Armv8-R originally did not include any AArch64 instructions; those instructions were added to Armv8-R later.

The Armv8.1-M architecture, announced in February 2019, is an enhancement of the Armv8-M architecture. Its new features include the Helium M-Profile Vector Extension described above.

Announced in October 2011,[13] Armv8-A (often called simply ARMv8, even though Armv8-R is also available) represents a fundamental change to the ARM architecture. It supports two execution states: a 64-bit state named AArch64 and a 32-bit state named AArch32. In the AArch64 state, a new 64-bit A64 instruction set is supported; in the AArch32 state, two instruction sets are supported: the original 32-bit instruction set, named A32, and the 32-bit Thumb-2 instruction set, named T32. AArch32 provides user-space compatibility with Armv7-A.
The processor state can change on an Exception level change; this allows 32-bit applications to be executed in AArch32 state under a 64-bit OS whose kernel executes in AArch64 state, and allows a 32-bit OS to run in AArch32 state under the control of a 64-bit hypervisor running in AArch64 state.[1] ARM announced their Cortex-A53 and Cortex-A57 cores on 30 October 2012.[75] Apple was the first to release an Armv8-A compatible core in a consumer product (the Apple A7 in the iPhone 5S). AppliedMicro, using an FPGA, was the first to demo Armv8-A.[164] The first Armv8-A SoC from Samsung is the Exynos 5433 used in the Galaxy Note 4, which features two clusters of four Cortex-A57 and Cortex-A53 cores in a big.LITTLE configuration; but it runs only in AArch32 mode.[165]

In both AArch32 and AArch64, Armv8-A makes VFPv3/v4 and Advanced SIMD (Neon) standard. It also adds cryptography instructions supporting AES, SHA-1/SHA-256 and finite field arithmetic.[166] AArch64 was introduced in Armv8-A and its subsequent revisions. AArch64 is not included in the 32-bit Armv8-R and Armv8-M architectures.

An ARMv8-A processor can support one or both of AArch32 and AArch64; it may support AArch32 and AArch64 at lower Exception levels and only AArch64 at higher Exception levels.[167] For example, the ARM Cortex-A32 supports only AArch32,[168] the ARM Cortex-A34 supports only AArch64,[169] and the ARM Cortex-A72 supports both AArch64 and AArch32.[170] An ARMv9-A processor must support AArch64 at all Exception levels, and may support AArch32 at EL0.[167]

Optional AArch64 support was added to the Armv8-R profile, with the first ARM core implementing it being the Cortex-R82.[171] It adds the A64 instruction set.

Announced in March 2021, the updated Armv9-A architecture places a focus on secure execution and compartmentalisation.[172][173]

Arm SystemReady is a compliance program that helps ensure the interoperability of operating systems on Arm-based hardware, from datacenter servers to industrial edge and IoT devices. The key building blocks of the program are specifications for minimum hardware and firmware requirements that operating systems and hypervisors can rely upon.[174] These specifications are co-developed by Arm and its partners in the System Architecture Advisory Committee (SystemArchAC). The Architecture Compliance Suite (ACS) comprises the test tools that help to check compliance with these specifications. The Arm SystemReady Requirements Specification documents the requirements of the certifications.[179]

This program was introduced by Arm in 2020 at the first DevSummit event. Its predecessor, Arm ServerReady, was introduced in 2018 at the Arm TechCon event. The program currently includes two certification bands.

PSA Certified, formerly named Platform Security Architecture, is an architecture-agnostic security framework and evaluation scheme. It is intended to help secure Internet of things (IoT) devices built on system-on-a-chip (SoC) processors.[182] It was introduced to increase security where a full trusted execution environment is too large or complex.[183]

The architecture was introduced by Arm in 2017 at the annual TechCon event.[183][184] Although the scheme is architecture-agnostic, it was first implemented on Arm Cortex-M processor cores intended for microcontroller use.
PSA Certified includes freely available threat models and security analyses that demonstrate the process for deciding on security features in common IoT products.[185] It also provides freely downloadable application programming interface (API) packages, architectural specifications, open-source firmware implementations, and related test suites.[186]

Following the development of the architecture security framework in 2017, the PSA Certified assurance scheme launched two years later at Embedded World in 2019.[187] PSA Certified offers a multi-level security evaluation scheme for chip vendors, OS providers and IoT device makers.[188] The Embedded World presentation introduced chip vendors to Level 1 certification, and a draft of Level 2 protection was presented at the same time.[189] Level 2 certification became a usable standard in February 2020.[190]

The certification was created by PSA Joint Stakeholders to enable a security-by-design approach for a diverse set of IoT products. PSA Certified specifications are implementation- and architecture-agnostic; as a result, they can be applied to any chip, software or device.[191][189] The certification also reduces industry fragmentation for IoT product manufacturers and developers.[192]

The first 32-bit ARM-based personal computer, the Acorn Archimedes, was originally intended to run an ambitious operating system called ARX. The machines shipped with RISC OS, which was also used on later ARM-based systems from Acorn and other vendors. Some early Acorn machines were also able to run a Unix port called RISC iX. (Neither is to be confused with RISC/os, a contemporary Unix variant for the MIPS architecture.)

The 32-bit ARM architecture is supported by a large number of embedded and real-time operating systems. As of March 2024, the 32-bit ARM architecture had long been the primary hardware environment for most mobile device operating systems, but many of these platforms, such as Android and Apple iOS, have since evolved to the 64-bit ARM architecture. The 32-bit ARM architecture is also supported by RISC OS and by multiple Unix-like operating systems.

Windows applications recompiled for ARM and linked with Winelib, from the Wine project, can run on 32-bit or 64-bit ARM in Linux, FreeBSD, or other compatible operating systems.[222][223] x86 binaries, e.g. when not specially compiled for ARM, have been demonstrated on ARM using QEMU with Wine (on Linux and more),[citation needed] but do not work at full speed or with the same capability as with Winelib.
https://en.wikipedia.org/wiki/NEON_(instruction_set)
The FMA instruction set is an extension to the 128- and 256-bit Streaming SIMD Extensions instructions in the x86 microprocessor instruction set to perform fused multiply–add (FMA) operations.[1] There are two variants: FMA3 and FMA4. The two variants have almost identical functionality, but are not compatible. Both contain fused multiply–add instructions for floating-point scalar and SIMD operations, but FMA3 instructions have three operands, while FMA4 ones have four.

The FMA operation has the form d = round(a·b + c), where the round function performs a rounding to allow the result to fit within the destination register if there are too many significant bits to fit. The four-operand form (FMA4) allows a, b, c and d to be four different registers, while the three-operand form (FMA3) requires that d be the same register as a, b or c. The three-operand form makes the code shorter and the hardware implementation slightly simpler, while the four-operand form provides more programming flexibility. See the XOP instruction set for more discussion of compatibility issues between Intel and AMD.

For FMA3, the explicit order of operands is encoded in the mnemonic using the numbers "132", "213", and "231", together with the operand format (packed or scalar) and size (single or double); this yields mnemonics such as VFMADD132PD and VFMADD231SS.

The incompatibility between Intel's FMA3 and AMD's FMA4 is due to both companies changing plans without coordinating coding details with each other: AMD changed their plans from FMA3 to FMA4 while Intel changed their plans from FMA4 to FMA3 at almost the same time. Different compilers provide different levels of support for FMA.
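As a sketch of what compiler support looks like in practice, the following uses the FMA3 intrinsic _mm_fmadd_ps from immintrin.h (available in GCC and Clang when compiling with -mfma); the wrapper name fma3_ps is ours, chosen for illustration:

    #include <immintrin.h>

    /* d = a*b + c on four packed single-precision floats, with a single
     * rounding at the end (unlike a separate multiply then add). */
    __m128 fma3_ps(__m128 a, __m128 b, __m128 c)
    {
        /* The compiler picks a concrete VFMADD{132,213,231}PS form
         * depending on which source register it can overwrite. */
        return _mm_fmadd_ps(a, b, c);
    }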
https://en.wikipedia.org/wiki/FMA_instruction_set
The XOP (eXtended Operations[1]) instruction set, announced by AMD on May 1, 2009, is an extension to the 128-bit SSE core instructions in the x86 and AMD64 instruction sets for the Bulldozer processor core, which was released on October 12, 2011.[2] However, AMD removed support for XOP from Zen (microarchitecture) onward.[3]

The XOP instruction set contains several different types of vector instructions, since it was originally intended as a major upgrade to SSE. Most of the instructions are integer instructions, but it also contains floating-point permutation and floating-point fraction-extraction instructions.

XOP is a revised subset of what was originally intended as SSE5. It was changed to be similar to, but not overlapping with, AVX; parts that overlapped with AVX were removed or moved to separate standards such as FMA4 (floating-point vector multiply–accumulate) and CVT16 (half-precision floating-point conversion, implemented as F16C by Intel).[1]

All SSE5 instructions that were equivalent or similar to instructions in the AVX and FMA4 instruction sets announced by Intel have been changed to use the coding proposed by Intel. Integer instructions without equivalents in AVX were classified as the XOP extension.[1] The XOP instructions have an opcode byte 8F (hexadecimal), but otherwise an almost identical coding scheme to AVX with the 3-byte VEX prefix. Commentators[4] have seen this as evidence that Intel has not allowed AMD to use any part of the large VEX coding space: AMD may have been forced to use different codes in order to avoid any code combination that Intel might be using in its development pipeline for something else. The XOP coding scheme is as close to the VEX scheme as technically possible without risking that the AMD codes overlap with future Intel codes. This inference is speculative, since no public information is available about negotiations between the two companies on this issue.

The use of the 8F byte requires that the m-bits (see VEX coding scheme) have a value larger than or equal to 8 in order to avoid overlap with existing instructions.[Note 1] The C4 byte used in the VEX scheme has no such restriction. This may prevent the use of the m-bits for other purposes in the future in the XOP scheme, but not in the VEX scheme. Another possible problem is that the pp bits have the value 00 in the XOP scheme, while they have the value 01 in the VEX scheme for instructions that have no legacy equivalent. This may complicate the use of the pp bits for other purposes in the future.

A similar compatibility issue is the difference between the FMA3 and FMA4 instruction sets. Intel initially proposed FMA4 in AVX/FMA specification version 3 to supersede the 3-operand FMA proposed by AMD in SSE5. After AMD adopted FMA4, Intel canceled FMA4 support and reverted to FMA3 in AVX/FMA specification version 5 (see the FMA history).[1][5][6]

In March 2015, AMD explicitly revealed in the description of a patch for the GNU Binutils package that Zen, its third-generation x86-64 architecture in its first iteration (znver1 – Zen, version 1), would not support TBM, FMA4, XOP and LWP instructions developed specifically for the "Bulldozer" family of micro-architectures.[7][8]

The multiply–accumulate instructions are integer versions of the FMA instruction set. They are all four-operand instructions, similar to FMA4, and they all operate on signed integers. Their operation patterns include the following (the same patterns recur at different element widths):

r0 = a0 * b0 + c0, r1 = a1 * b1 + c1, ...
r0 = a0 * b0 + c0, r1 = a2 * b2 + c1, ...[2]
r0 = a0 * b0 + c0, r1 = a1 * b1 + c1, ...
r0 = a0 * b0 + c0, r1 = a2 * b2 + c1
r0 = a1 * b1 + c0, r1 = a3 * b3 + c1
r0 = a0 * b0 + a1 * b1 + c0, r1 = a2 * b2 + a3 * b3 + c1, ...

Horizontal addition instructions add adjacent values in the input vector to each other. The output size describes how wide the performed horizontal addition is. For instance, horizontal byte-to-word addition adds two bytes at a time and returns the result as a vector of words, while byte-to-quadword addition adds eight bytes together at a time and returns the result as a vector of quadwords. Six additional horizontal addition and subtraction instructions can be found in SSSE3, but those operate on two input vectors and perform only pairwise operations. The XOP operation patterns include:

r0 = a0 + a1, r1 = a2 + a3, r2 = a4 + a5, ...
r0 = a0 + a1 + a2 + a3, r1 = a4 + a5 + a6 + a7, ...
r0 = a0 + a1 + a2 + a3 + a4 + a5 + a6 + a7, ...
r0 = a0 + a1, r1 = a2 + a3, r2 = a4 + a5, ...
r0 = a0 + a1 + a2 + a3, r1 = a4 + a5 + a6 + a7
r0 = a0 + a1, r1 = a2 + a3
r0 = a0 - a1, r1 = a2 - a3, r2 = a4 - a5, ...
r0 = a0 - a1, r1 = a2 - a3, r2 = a4 - a5, ...
r0 = a0 - a1, r1 = a2 - a3

The vector compare instructions all take an immediate as an extra argument; the immediate controls which kind of comparison is performed, with eight comparisons possible for each instruction. The vectors are compared, and every comparison that evaluates to true sets all corresponding bits in the destination to 1, while a false comparison sets the same bits to 0. This result can be used directly in the VPCMOV instruction for a vectorised conditional move.

VPCMOV works as a bitwise variant of the blend instructions in SSE4. Like the AVX instruction VPBLENDVB, it is a four-operand instruction with three source operands and a destination. For each bit in the third operand (which acts as a selector), a 1 selects the corresponding bit in the first source, and a 0 selects the corresponding bit in the second source. When used together with the XOP vector comparison instructions above, this can be used to implement a vectorised ternary move or, if the second input is the same as the destination, a conditional move (CMOV).

The shift instructions here differ from those in SSE2 in that they can shift each element by a different amount, using a vector register interpreted as packed signed integers. The sign indicates the direction of the shift or rotate, with positive values causing a left shift and negative values a right shift.[10] Intel has specified a different, incompatible set of variable vector shift instructions in AVX2.[11]

VPPERM is a single instruction that combines the SSSE3 instructions PALIGNR and PSHUFB and adds more to both. Some compare it to the AltiVec instruction VPERM.[12] It takes three registers as input: the first two are source registers and the third is the selector register. Each byte in the selector selects one of the bytes in one of the two input registers for the output. The selector can also apply effects to the selected bytes, such as setting them to 0, reversing the bit order, or repeating the most significant bit. Any of the effects or inputs can additionally be inverted.

The VPERMIL2PD and VPERMIL2PS instructions are two-source versions of the VPERMILPD and VPERMILPS instructions in AVX, which means that, like VPPERM, they can select output from any of the fields of the two inputs. Finally, the fraction-extraction instructions extract the fractional part of floating-point values, that is, the part that would be lost in conversion to an integer.
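Since XOP hardware is scarce today, the easiest way to convey two of the operations above is with plain-C scalar models; the function names are ours, and the real instructions of course operate on whole 128-bit vectors at once:

    #include <stdint.h>

    /* Model of VPCMOV: for each bit position, a 1 in sel picks the bit
     * from a, and a 0 picks the bit from b. */
    uint64_t xop_cmov_bits(uint64_t a, uint64_t b, uint64_t sel)
    {
        return (a & sel) | (b & ~sel);
    }

    /* Model of a horizontal byte-to-word add: each output word is the
     * sum of two adjacent input bytes, widened to 16 bits. */
    void xop_haddbw(int16_t r[8], const int8_t a[16])
    {
        for (int i = 0; i < 8; i++)
            r[i] = (int16_t)a[2 * i] + a[2 * i + 1];
    }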
https://en.wikipedia.org/wiki/XOP_instruction_set
The Linux kernel is a free and open source,[11]: 4 Unix-like kernel that is used in many computer systems worldwide. The kernel was created by Linus Torvalds in 1991 and was soon adopted as the kernel for the GNU operating system (OS), which was created to be a free replacement for Unix. Since the late 1990s, it has been included in many operating system distributions, many of which are called Linux. One such Linux kernel operating system is Android, which is used in many mobile and embedded devices.

Most of the kernel code is written in C, as supported by the GNU Compiler Collection (GCC), which has extensions beyond standard C.[11]: 18[12] The code also contains assembly code for architecture-specific logic such as optimizing memory use and task execution.[11]: 379–380 The kernel has a modular design such that modules can be integrated as software components, including dynamically loaded ones. The kernel is monolithic in an architectural sense, since the entire OS kernel runs in kernel space.

Linux is provided under the GNU General Public License version 2, although it contains files under other compatible licenses.[10]

In 1991, Linus Torvalds was a computer science student enrolled at the University of Helsinki. During his time there, he began to develop an operating system as a side project inspired by UNIX, for a personal computer.[13] He started with a task switcher in Intel 80386 assembly language and a terminal driver.[13] On 25 August 1991, Torvalds posted the following to comp.os.minix, a newsgroup on Usenet:[14]

I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since April, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things). I've currently ported bash (1.08) and gcc (1.40), and things seem to work. This implies that I'll get something practical within a few months [...] Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT protable [sic] (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.

On 17 September 1991, Torvalds prepared version 0.01 of Linux and put it on ftp.funet.fi, the FTP server of the Finnish University and Research Network (FUNET). It was not even executable, since its code still needed Minix to compile and test it.[15]

On 5 October 1991, Torvalds announced the first "official" version of Linux, version 0.02:[16][15]

[As] I mentioned a month ago, I'm working on a free version of a Minix-lookalike for AT-386 computers. It has finally reached the stage where it's even usable (though may not be depending on what you want), and I am willing to put out the sources for wider distribution. It is just version 0.02...but I've successfully run bash, gcc, gnu-make, gnu-sed, compress, etc. under it.

Linux grew rapidly as many developers, including the MINIX community, contributed to the project.[citation needed] At the time, the GNU Project had completed many components for its free UNIX replacement, GNU, but its kernel, the GNU Hurd, was incomplete. The project adopted the Linux kernel for its OS.[17]

Torvalds labeled the kernel with major version 0 to indicate that it was not yet intended for general use.[18] Version 0.11, released in December 1991, was the first version to be self-hosted: compiled on a computer running the Linux kernel.
When Torvalds released version 0.12 in February 1992, he adopted the GNU General Public License version 2 (GPLv2) over his previous self-drafted license, which had not permitted commercial redistribution.[19] In contrast to Unix, all source files of Linux are freely available, including device drivers.[20]

The initial success of Linux was driven by programmers and testers across the world. Through its support of the POSIX APIs, via the C library (which, where needed, acts as an entry point to the kernel address space), Linux could run software and applications that had been developed for Unix.[21]

On 19 January 1992, the first post to the new newsgroup alt.os.linux was submitted.[22] On 31 March 1992, the newsgroup was renamed comp.os.linux.[23]

The fact that Linux is a monolithic kernel rather than a microkernel was the topic of a debate between Andrew S. Tanenbaum, the creator of MINIX, and Torvalds.[24] The Tanenbaum–Torvalds debate started in 1992 on the Usenet group comp.os.minix as a general discussion about kernel architectures.[25][26]

Version 0.96, released in May 1992, was the first capable of running the X Window System.[27][28] In March 1994, Linux 1.0.0 was released, with 176,250 lines of code.[29] As indicated by the version number, it was the first version considered suitable for a production environment.[18] In June 1996, after release 1.3, Torvalds decided that Linux had evolved enough to warrant a new major number, and so labeled the next release as version 2.0.0.[30][31] Significant features of 2.0 included symmetric multiprocessing (SMP), support for more processor types, and support for selecting specific hardware targets and for enabling architecture-specific features and optimizations.[21] The make *config family of commands of kbuild enable and configure options for building ad hoc kernel executables (vmlinux) and loadable modules.[32][33]

Version 2.2, released on 20 January 1999,[34] improved locking granularity and SMP management, and added support for m68k, PowerPC, Sparc64, Alpha, and other 64-bit platforms.[35] Furthermore, it added new file systems, including read-only support for Microsoft's NTFS.[35] In 1999, IBM published its patches to the Linux 2.2.13 code for the support of the S/390 architecture.[36]

Version 2.4.0, released on 4 January 2001,[37] contained support for ISA Plug and Play, USB, and PC Cards. Linux 2.4 added support for the Pentium 4 and Itanium (the latter introduced the ia64 ISA, jointly developed by Intel and Hewlett-Packard to supersede the older PA-RISC), and for the newer 64-bit MIPS processor.[38] Development for 2.4.x changed somewhat in that more features were made available throughout the series, including support for Bluetooth, Logical Volume Manager (LVM) version 1, RAID, and the InterMezzo and ext3 file systems.

Version 2.6.0 was released on 17 December 2003.[39] Development for 2.6.x changed further towards including new features throughout the series.
Among the changes made in the 2.6 series are: integration of μClinux into the mainline kernel sources, PAE support, support for several new lines of CPUs, integration of the Advanced Linux Sound Architecture (ALSA) into the mainline kernel sources, support for up to 2^32 users (up from 2^16), support for up to 2^29 process IDs (64-bit only; 32-bit architectures remained limited to 2^15),[40] a substantial increase in the number of device types and the number of devices of each type, improved 64-bit support, support for file systems with file sizes of up to 16 terabytes, in-kernel preemption, support for the Native POSIX Thread Library (NPTL), User-mode Linux integration into the mainline kernel sources, SELinux integration into the mainline kernel sources, InfiniBand support, and considerably more. Starting with the 2.6.x releases, the kernel supported a large number of file systems: some designed for Linux, like ext3, ext4, FUSE, and Btrfs,[41] and others native to other operating systems, like JFS, XFS, Minix, Xenix, Irix, Solaris, System V, Windows and MS-DOS.[42]

Though development had not used a version control system thus far, in 2002 Linux developers adopted BitKeeper, which was made freely available to them even though it was not free software. In 2005, because of efforts to reverse-engineer it, the company which owned the software revoked its support of the Linux community. In response, Torvalds and others wrote Git. The new system was written within weeks, and in two months the first official kernel made using it was released.[43]

In 2005, the stable team was formed as a response to the lack of a kernel tree where people could work on bug fixes, and it would keep updating stable versions.[44] In February 2008, the linux-next tree was created to serve as a place to gather patches aimed to be merged during the next development cycle.[45][46] Several subsystem maintainers also adopted the suffix -next for trees containing code which they intend to submit for inclusion in the next release cycle. As of January 2014, the in-development version of Linux is held in an unstable branch named linux-next.[47]

The 20th anniversary of Linux was celebrated by Torvalds in July 2011 with the release of version 3.0.0.[30] As 2.6 had been the major version number for 8 years, a new uname26 personality that reports 3.x as 2.6.40+x had to be added to the kernel so that old programs would work.[48]

Version 3.0 was released on 22 July 2011.[49] On 30 May 2011, Torvalds announced that the big change was "NOTHING. Absolutely nothing." and asked, "...let's make sure we really make the next release not just an all new shiny number, but a good kernel too."[50] After the expected 6–7 weeks of the development process, it was released near the 20th anniversary of Linux.

On 11 December 2012, Torvalds decided to reduce kernel complexity by removing support for i386 processors, specifically by no longer emulating[51] the atomic CMPXCHG instruction introduced with the i486 to allow reliable mutexes, making the 3.7 kernel series the last one still supporting the original processor.[52][53] The same series unified support for the ARM processor.[54]

The numbering changes from 2.6.39 to 3.0, and from 3.19 to 4.0, involved no meaningful technical differentiation; the major version number was increased simply to avoid large minor numbers.[49][55] Stable 3.x.y kernels were released until 3.19 in February 2015.
Version 3.11, released on 2 September 2013,[56] added many new features, such as the new O_TMPFILE flag for open(2) to reduce temporary file vulnerabilities, experimental AMD Radeon dynamic power management, low-latency network polling, and zswap (a compressed swap cache).[57]

In April 2015, Torvalds released kernel version 4.0.[30] By February 2015, Linux had received contributions from nearly 12,000 programmers from more than 1,200 companies, including some of the world's largest software and hardware vendors.[58] Version 4.1 of Linux, released in June 2015, contains over 19.5 million lines of code contributed by almost 14,000 programmers.[59]

In March 2019, Linus Torvalds announced that what would have been kernel version 4.21 would instead be numbered 5.0, stating that "'5.0' doesn't mean anything more than that the 4.x numbers started getting big enough that I ran out of fingers and toes."[60] It featured many major additions, such as support for AMD Radeon FreeSync and the Nvidia Xavier display, fixes for F2FS, EXT4 and XFS, restored support for swap files on the Btrfs file system, and continued work on Intel Ice Lake Gen11 graphics and on the NXP i.MX8 SoCs.[61][62] This release was noticeably larger than the rest, Torvalds mentioning that "The overall changes for all of the 5.0 release are much bigger."[60]

A total of 1,991 developers, of whom 334 were first-time contributors, added more than 553,000 lines of code to version 5.8, breaking the record previously held by version 4.9.[63]

According to Stack Overflow's annual Developer Survey of 2019, more than 53% of all respondents had developed software for Linux and about 27% for Android,[64] although only about 25% develop with Linux-based operating systems.[65]

Most websites run on Linux-based operating systems,[66][67] and all of the world's 500 most powerful supercomputers run on Linux.[68]

Linux distributions bundle the kernel with system software (e.g., the GNU C Library, systemd, and other Unix utilities and daemons) and a wide selection of application software, but their usage share on desktops is low in comparison to other operating systems. Since Android, which is Linux-based, accounts for the majority of mobile device operating systems,[69][70][71] and due to its rising use in embedded devices, Android is significantly responsible for the rising use of Linux overall.[21]

The cost to redevelop version 2.6.0 of the Linux kernel in a traditional proprietary development setting has been estimated to be US$612 million (€467M, £394M) in 2004 prices, using the COCOMO person-month estimation model.[72] In 2006, a study funded by the European Union put the redevelopment cost of kernel version 2.6.8 higher, at €882M ($1.14bn, £744M).[73]

This topic was revisited in October 2008 by Amanda McPherson, Brian Proffitt, and Ron Hale-Evans. Using David A. Wheeler's methodology, they estimated that redevelopment of the 2.6.25 kernel would cost $1.3bn (part of a total $10.8bn to redevelop Fedora 9).[74] Garcia-Garcia and Alonso de Magdaleno of the University of Oviedo (Spain) estimated that the value annually added to the kernel was about €100M between 2005 and 2007 and €225M in 2008; it would also cost more than €1bn (about $1.4bn as of February 2010) to develop in the European Union.[75]

As of 7 March 2011, using the then-current lines of code (LOC) of a 2.6.x Linux kernel and wage numbers with David A. Wheeler's calculations, it would cost approximately $3bn (about €2.2bn) to redevelop the Linux kernel as it keeps getting bigger.
An updated calculation as of 26 September 2018, using the then-current 20,088,609 LOC (lines of code) for the 4.14.14 Linux kernel and the then-current US national average programmer salary of $75,506, shows that it would cost approximately $14,725,449,000 (£11,191,341,000) to rewrite the existing code.[76]

Most who use Linux do so via a Linux distribution. Some distributions ship the vanilla or stable kernel. However, several vendors (such as Red Hat and Debian) maintain a customized source tree. These are usually updated at a slower pace than the vanilla branch, and they usually include all fixes from the relevant stable branch, but at the same time they can also add support for drivers or features that had not been released in the vanilla version on which the distribution vendor based its branch.

The community of Linux kernel developers comprises about 5000–6000 members. According to the "2017 State of Linux Kernel Development", a study issued by the Linux Foundation covering the commits for releases 4.8 to 4.13, about 1500 developers were contributing, from about 200–250 companies on average. The top 30 developers contributed a little more than 16% of the code. Among companies, the top contributors were Intel (13.1%) and Red Hat (7.2%), followed by Linaro (5.6%) and IBM (4.1%); the second and fifth places were held by the 'none' (8.2%) and 'unknown' (4.1%) categories.[78]

"Instead of a roadmap, there are technical guidelines. Instead of a central resource allocation, there are persons and companies who all have a stake in the further development of the Linux kernel, quite independently from one another: People like Linus Torvalds and I don’t plan the kernel evolution. We don’t sit there and think up the roadmap for the next two years, then assign resources to the various new features. That's because we don’t have any resources. The resources are all owned by the various corporations who use and contribute to Linux, as well as by the various independent contributors out there. It's those people who own the resources who decide..."

There have been several notable conflicts among Linux kernel developers, and prominent developers have long been aware of the importance of avoiding them.[91] For a long time there was no code of conduct for kernel developers, due to opposition by Torvalds.[92] However, a Linux Kernel Code of Conflict was introduced on 8 March 2015.[93] It was replaced on 16 September 2018 by a new Code of Conduct based on the Contributor Covenant. This coincided with a public apology by Torvalds and a brief break from kernel development.[94][95] On 30 November 2018, complying with the Code of Conduct, Jarkko Sakkinen of Intel sent out patches replacing instances of "fuck" appearing in source code comments with suitable versions focused on the word 'hug'.[96]

Developers who feel treated unfairly can report this to the Linux Foundation Technical Advisory Board.[97] In July 2013, the maintainer of the USB 3.0 driver, Sage Sharp, asked Torvalds to address the abusive commentary in the kernel development community. In 2014, Sharp backed out of Linux kernel development, saying that "The focus on technical excellence, in combination with overloaded maintainers, and people with different cultural and social norms, means that Linux kernel maintainers are often blunt, rude, or brutal to get their job done".[98] At the linux.conf.au (LCA) conference in 2018, developers expressed the view that the culture of the community had gotten much better in the past few years.
Daniel Vetter, the maintainer of the Intel drm/i915 graphics kernel driver, commented that the "rather violent language and discussion" in the kernel community has decreased or disappeared.[99]

Laurent Pinchart asked developers for feedback on their experiences with the kernel community at the 2017 Embedded Linux Conference Europe. The issues brought up were discussed a few days later at the Maintainers Summit. Concerns over the lack of consistency in how maintainers responded to patches submitted by developers were echoed by Shuah Khan, the maintainer of the kernel self-test framework. Torvalds contended that there would never be consistency in the handling of patches, because different kernel subsystems have, over time, adopted different development processes. Therefore, it was agreed that each kernel subsystem maintainer would document the rules for patch acceptance.[100]

Linux is evolution, not intelligent design!

The kernel source code, also known as the source tree, is managed in the Git version control system, which was also created by Torvalds.[104]

As of 2021, the 5.11 release of the Linux kernel had around 30.34 million lines of code. Roughly 14% of the code is part of the "core" (including architecture-specific code, kernel code, and memory management code), while 60% is drivers.

Contributions are submitted as patches, in the form of text messages on the Linux kernel mailing list (LKML), and often also on other mailing lists dedicated to particular subsystems. The patches must conform to a set of rules and to a formal language that, among other things, describes which lines of code are to be deleted and which others are to be added to the specified files. These patches can be automatically processed so that system administrators can apply them in order to make just some changes to the code or to incrementally upgrade to the next version.[105] Linux is also distributed in GNU zip (gzip) and bzip2 formats.

A developer who wants to change the Linux kernel writes and tests a code change. Depending on how significant the change is and how many subsystems it modifies, the change will be submitted either as a single patch or as multiple patches of source code. In the case of a single subsystem that is maintained by a single maintainer, these patches are sent as e-mails to the maintainer of the subsystem, with the appropriate mailing list in Cc. The maintainer and the readers of the mailing list will review the patches and provide feedback. Once the review process has finished, the subsystem maintainer accepts the patches in the relevant Git kernel tree. If the changes to the Linux kernel are bug fixes that are considered important enough, a pull request for the patches will be sent to Torvalds within a few days. Otherwise, a pull request will be sent to Torvalds during the next merge window.
The merge window usually lasts two weeks and starts immediately after the release of the previous kernel version.[106] The Git kernel source tree names all developers who have contributed to the Linux kernel in the Credits directory, and all subsystem maintainers are listed in Maintainers.[107]

As with many large open-source software projects, developers are required to adhere to the Contributor Covenant, a code of conduct intended to address harassment of minority contributors.[108][109] Additionally, to prevent offense, the use of inclusive terminology within the source code is mandated.[110]

Linux is written in a dialect of the C programming language supported by GCC, a compiler that extends the C standard in many ways, for example by allowing inline sections of code written in the assembly language (in GCC's "AT&T-style" syntax) of the target architecture. In September 2021, the GCC version requirement for compiling and building the Linux kernel increased from GCC 4.9 to 5.1, opening the way for the kernel to move from C code based on the C89 standard to code written with the C11 standard;[111] the migration took place in March 2022, with the release of Linux 5.18.[112]

Initial support for the Rust programming language was added in Linux 6.1,[113] which was released in December 2022,[114] with later kernel versions, such as Linux 6.2 and Linux 6.3, further improving the support.[115][116]

Since 2002, code must adhere to the 21 rules comprising the Linux Kernel Coding Style.[117][118]

As with most software, the kernel is versioned as a series of dot-separated numbers. For early versions, the version consisted of three or four dot-separated numbers called the major release, minor release and revision.[11]: 9 At that time, odd-numbered minor releases were for development and testing, while even-numbered minor releases were for production. The optional fourth digit indicated a patch level.[18] Development releases were indicated with a release candidate suffix (-rc).

The current versioning conventions are different. The odd/even convention implying development/production has been dropped, and a major version is indicated by the first two numbers together. While the time frame is open for the development of the next major version, the -rcN suffix is used to identify the nth release candidate for the next version.[119] For example, the release of version 4.16 was preceded by seven 4.16-rcN releases (from -rc1 to -rc7). Once a stable version is released, its maintenance is passed to the stable team. Updates to a stable release are identified by a three-number scheme (e.g., 4.16.1, 4.16.2, ...).[119]

The kernel is usually built with the GNU toolchain. The GNU C compiler (GNU cc), part of the GNU Compiler Collection (GCC), is the default compiler for mainline Linux. Sequencing is handled by GNU make. The GNU Assembler (often called GAS or GNU as) outputs the object files from the GCC-generated assembly code. Finally, the GNU Linker (GNU ld) produces a statically linked executable kernel file called vmlinux. Both as and ld are part of GNU Binary Utilities (binutils). GNU cc was for a long time the only compiler capable of correctly building Linux.
In 2004, Intel claimed to have modified the kernel so that its C compiler was also capable of compiling it.[120] There was another such reported success in 2009, with a modified 2.6.22 version.[121][122] Support for the Intel compiler was dropped in 2023.[123]

Since 2010, effort has been underway to build Linux with Clang, an alternative compiler for the C language;[124] as of 12 April 2014, the official kernel could almost be compiled by Clang.[125][126] The project dedicated to this effort was named LLVMLinux, after the LLVM compiler infrastructure upon which Clang is built.[127] LLVMLinux did not aim to fork either Linux or LLVM; it was a meta-project composed of patches that were eventually submitted to the upstream projects. By enabling Linux to be compiled by Clang, developers may benefit from shorter compilation times.[128]

In 2017, developers completed upstreaming patches to support building the Linux kernel with Clang in the 4.15 release, having backported support for x86-64 and AArch64 to the 4.4, 4.9, and 4.14 branches of the stable kernel tree. Google's Pixel 2 shipped with the first Clang-built Linux kernel,[129] though patches for the Pixel (1st generation) did exist.[130] In 2018, ChromeOS moved to building kernels with Clang by default,[131] while Android made Clang[132] and LLVM's linker LLD[133] required for kernel builds in 2019. Google moved its production kernel, used throughout its datacenters, to being built with Clang in 2020.[134] Today, the ClangBuiltLinux group coordinates fixes to both Linux and LLVM to ensure compatibility; it is composed of members from LLVMLinux and has upstreamed patches from LLVMLinux.

As with any software, problems with the Linux kernel can be difficult to troubleshoot. Common challenges relate to userspace vs. kernel space access, misuse of synchronization primitives, and incorrect hardware management.[11]: 364

An oops is a non-fatal error in the kernel; after such an error, operations continue with suspect reliability.[135] A panic (generated by panic()) is a fatal error; after such an error, the kernel prints a message and halts the computer.[11]: 371

The kernel provides for debugging by printing via printk(), which stores messages in a circular buffer (overwriting older entries with newer ones). The syslog(2) system call provides for reading and clearing the message buffer and for setting the maximum log level of the messages to be sent to the console.[136] Kernel messages are also exported to userland through the /dev/kmsg interface.[137] (A minimal module sketch using printk() follows at the end of this passage.)

The ftrace mechanism allows for debugging by tracing. It is used for monitoring and debugging Linux at runtime, and it can analyze user-space latencies due to kernel misbehavior.[138][139][140][141] Furthermore, ftrace allows users to trace Linux at boot time.[142]

kprobes and kretprobes can break into kernel execution (like debuggers in userspace) and collect information non-disruptively.[143] kprobes can be inserted into code at (almost) any address, while kretprobes work at function return. uprobes have similar purposes, but they differ somewhat in usage and implementation.[144]

With KGDB, Linux can be debugged in much the same way as userspace programs. KGDB requires an additional machine that runs GDB and that is connected to the target to be debugged using a serial cable or Ethernet.[145]
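To illustrate the printk() facility described above, here is a minimal sketch of a loadable module that logs to the kernel ring buffer; it assumes the usual out-of-tree kbuild setup (a Makefile containing obj-m += hello.o), and the module name is ours, chosen for illustration:

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: loaded\n");   /* lands in the ring buffer */
        return 0;                              /* nonzero aborts the load */
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");   /* GPL-compatible tag; avoids tainting */

Once loaded with insmod, the message can be read back through the syslog(2)-backed dmesg utility or directly from /dev/kmsg.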
The Linux kernel project integrates new code on a rolling basis. Standard operating procedure is that software checked into the project must work and compile without error. Each kernel subsystem is assigned a maintainer who is responsible for reviewing patches against the kernel code standards and for keeping a queue of patches that can be submitted to Torvalds within a merge window of usually several weeks. Patches are merged by Torvalds into the source code of the prior stable Linux kernel release, creating the release candidate (-rc) for the next stable release. Once the merge window is closed, only fixes to the new code in the development release are accepted. The -rc development release of the kernel goes through regression testing, and once it is considered stable by Torvalds and the subsystem maintainers, a new version is released and the development process starts over again.[146]

The Git tree that contains the Linux kernel source code is referred to as mainline Linux. Every stable kernel release originates from the mainline tree,[147] and is frequently published on kernel.org. Mainline Linux has solid support for only a small subset of the many devices that run Linux. Non-mainline support is provided by independent projects, such as Yocto or Linaro, but in many cases the kernel from the device vendor is needed.[148] Using a vendor kernel likely requires a board support package. Maintaining a kernel tree outside of mainline Linux has proven to be difficult.[149]

Mainlining refers to the effort of adding support for a device to the mainline kernel,[150] where there was formerly only support in a fork or no support at all. This usually includes adding drivers or device tree files. When this is finished, the feature or security fix is considered mainlined.[151]

The maintainer of the stable branch, Greg Kroah-Hartman, has applied the term Linux-like to downstream kernel forks by vendors that add millions of lines of code to the mainline kernel.[152] In 2019, Google stated that it wanted to use the mainline Linux kernel in Android so that the number of kernel forks would be reduced.[153] The term Linux-like has also been applied to the Embeddable Linux Kernel Subset, which does not include the full mainline Linux kernel but a small, modified subset of the code.[154]

There are certain communities that develop kernels based on the official Linux. Some interesting bits of code from these forks (including Linux-libre, Compute Node Linux, INK, L4Linux, RTLinux, and User-Mode Linux (UML)) have been merged into the mainline.[155] Some operating systems developed for mobile phones initially used heavily modified versions of Linux, including Google Android, Firefox OS, HP webOS, Nokia Maemo and Jolla Sailfish OS. In 2010, the Linux community criticised Google for effectively starting its own kernel tree:[156][157]

This means that any drivers written for Android hardware platforms, can not get merged into the main kernel tree because they have dependencies on code that only lives in Google's kernel tree, causing it to fail to build in the kernel.org tree. Because of this, Google has now prevented a large chunk of hardware drivers and platform code from ever getting merged into the main kernel tree. Effectively creating a kernel branch that a number of different vendors are now relying on.[158]

Today Android uses a customized Linux kernel[159] in which major changes are implemented in device drivers, although some changes to the core kernel code are required. Android developers also submit patches to the official Linux that can finally boot the Android operating system.
For example, a Nexus 7 can boot and run the mainline Linux.[159]

At a 2001 presentation at the Computer History Museum, Torvalds had this to say in response to a question about distributions of Linux using precisely the same kernel sources or not:

They're not... well they are, and they're not. There is no single kernel. Every single distribution has their own changes. That's been going on since pretty much day one. I don't know if you may remember Yggdrasil was known for having quite extreme changes to the kernel and even today all of the major vendors have their own tweaks because they have some portion of the market they're interested in and quite frankly that's how it should be. Because if everybody expects one person, me, to be able to track everything that's not the point of GPL. That's not the point of having an open system. So actually the fact that a distribution decides that something is so important to them that they will add patches for even when it's not in the standard kernel, that's a really good sign for me. So that's for example how something like ReiserFS got added. And the reason why ReiserFS is the first journaling filesystem that was integrated in the standard kernel was not because I love Hans Reiser. It was because SUSE actually started shipping with ReiserFS as their standard kernel, which told me "ok." This is actually in production use. Normal People are doing this. They must know something I don't know. So in a very real sense what a lot of distribution houses do, they are part of this "let's make our own branch" and "let's make our changes to this." And because of the GPL, I can take the best portions of them.[160]

The latest version and older versions are maintained separately. Most of the latest kernel releases were supervised by Torvalds.[161]

The Linux kernel developer community maintains a stable kernel by applying fixes for software bugs that have been discovered during the development of the subsequent stable kernel. Therefore, www.kernel.org always lists two stable kernels. The next stable Linux kernel is released about 8 to 12 weeks later. Some releases are designated as longterm releases, receiving long-term support with bug-fix releases for two or more years.[162]

Some projects have attempted to reduce the size of the Linux kernel. One of them is TinyLinux. In 2014, Josh Triplett started the -tiny source tree for a reduced-size version.[163][164][165][166]

Even though it may seem contradictory, the Linux kernel is both monolithic and modular. The kernel is classified as a monolithic kernel architecturally, since the entire OS runs in kernel space. The design is modular, since the kernel can be assembled from modules that in some cases are loaded and unloaded at runtime.[11]: 338[167] It supports features once only available in the closed source kernels of non-free operating systems.

The rest of the article makes use of the UNIX and Unix-like operating systems convention of the manual pages. The number that follows the name of a command, interface, or other feature specifies the section (i.e., the type of the OS component or feature) it belongs to. For example, execve(2) refers to a system call, and exec(3) refers to a userspace library wrapper. The following is an overview of the architectural design and of noteworthy features.

Most device drivers and kernel extensions run in kernel space (ring 0 in many CPU architectures), with full access to the hardware.
Some exceptions run in user space; notable examples are filesystems based on FUSE/CUSE, and parts of UIO.[191][192] Furthermore, the X Window System and Wayland, the windowing system and display server protocols that most people use with Linux, do not run within the kernel. In contrast, the actual interfacing with the GPUs of graphics cards is an in-kernel subsystem called the Direct Rendering Manager (DRM).

Unlike standard monolithic kernels, device drivers are easily configured as modules and loaded or unloaded while the system is running; they can also be pre-empted under certain conditions in order to handle hardware interrupts correctly and to better support symmetric multiprocessing.[174] By choice, Linux has no stable device driver application binary interface.[193]

Linux typically makes use of memory protection and virtual memory and can also handle non-uniform memory access;[194] however, the project has absorbed μClinux, which also makes it possible to run Linux on microcontrollers without virtual memory.[195]

The hardware is represented in the file hierarchy. User applications interact with device drivers via entries in the /dev or /sys directories.[196] Process information is mapped into the /proc directory.[196]

Linux started as a clone of UNIX, and aims toward POSIX and Single UNIX Specification compliance.[198] The kernel also provides system calls and other interfaces that are Linux-specific. In order to be included in the official kernel, code must comply with a set of licensing rules.[5][10]

The Linux application binary interface (ABI) between the kernel and the user space has four degrees of stability (stable, testing, obsolete, removed);[199] the system calls are expected to never change, in order to preserve compatibility for the userspace programs that rely on them.[200]

Loadable kernel modules (LKMs), by design, cannot rely on a stable ABI.[193] Therefore, they must always be recompiled whenever a new kernel executable is installed in a system; otherwise, they will not be loaded. In-tree drivers that are configured to become an integral part of the kernel executable (vmlinux) are statically linked by the build process.

There is also no guarantee of stability for the source-level in-kernel API,[193] and, because of this, device driver code, as well as the code of any other kernel subsystem, must be kept updated with kernel evolution. Any developer who makes an API change is required to fix any code that breaks as a result of their change.[201]

The set of the Linux kernel API that regards the interfaces exposed to user applications is fundamentally composed of UNIX- and Linux-specific system calls.[202] A system call is an entry point into the Linux kernel.[203] For example, among the Linux-specific ones there is the family of the clone(2) system calls.[204] Most extensions must be enabled by defining the _GNU_SOURCE macro in a header file or when the user-land code is being compiled.[205]

System calls can only be invoked via assembly instructions that enable the transition from unprivileged user space to privileged kernel space in ring 0.
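As a hedged illustration of this mechanism, the following minimal sketch invokes write(2) directly, without any library wrapper, using the x86-64 register convention detailed further below; the message text is arbitrary.

```c
/* Minimal sketch of a raw x86-64 Linux system call: write(2) to stdout.
 * Real programs normally use the libC wrapper or syscall(2) instead. */
static const char msg[] = "hello\n";

int main(void)
{
    long ret;
    asm volatile ("syscall"
                  : "=a" (ret)                     /* return value in rax */
                  : "0" (1L),                      /* rax: __NR_write == 1 */
                    "D" (1L),                      /* rdi: fd 1 (stdout) */
                    "S" (msg),                     /* rsi: buffer */
                    "d" ((long)(sizeof msg - 1))   /* rdx: byte count */
                  : "rcx", "r11", "memory");       /* clobbered by syscall */
    return ret < 0 ? 1 : 0;
}
```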
For this reason, the C standard library (libC) acts as a wrapper to most Linux system calls, exposing C functions that, if needed,[206] transparently enter the kernel, which then executes on behalf of the calling process.[202] For system calls not exposed by libC, such as the fast userspace mutex,[207] the library provides a function called syscall(2) which can be used to invoke them explicitly.[208]

Pseudo filesystems (e.g., the sysfs and procfs filesystems) and special files (e.g., /dev/random, /dev/sda, /dev/tty, and many others) constitute another layer of interface to kernel data structures representing hardware or logical (software) devices.[209][210]

Because of the differences existing between the hundreds of various implementations of the Linux OS, executable objects, even though they are compiled, assembled, and linked for running on a specific hardware architecture (that is, they use the ISA of the target hardware), often cannot run on different Linux distributions. This issue is mainly due to distribution-specific configurations and sets of patches applied to the code of the Linux kernel, differences in system libraries, services (daemons), filesystem hierarchies, and environment variables.

The main standard concerning application and binary compatibility of Linux distributions is the Linux Standard Base (LSB).[211][212] However, the LSB goes beyond what concerns the Linux kernel, because it also defines the desktop specifications, the X libraries, and Qt, which have little to do with it.[213] The LSB version 5 is built upon several standards and drafts (POSIX, SUS, X/Open, File System Hierarchy (FHS), and others).[214]

The parts of the LSB most relevant to the kernel are the General ABI (gABI),[215] especially the System V ABI[216][217] and the Executable and Linking Format (ELF),[218][219] and the Processor Specific ABI (psABI), for example the Core Specification for X86-64.[220][221]

The standard ABI for how x86_64 user programs invoke system calls is to load the syscall number into the rax register, the other parameters into rdi, rsi, rdx, r10, r8, and r9, and finally to place the syscall assembly instruction in the code,[222][223][224] as in the sketch above.

There are several internal kernel APIs between kernel subsystems. Some are available only within the kernel subsystems, while a somewhat limited set of in-kernel symbols (i.e., variables, data structures, and functions) is exposed to dynamically loadable modules (e.g., device drivers loaded on demand), provided they are exported with the EXPORT_SYMBOL() and EXPORT_SYMBOL_GPL() macros[226][227] (the latter reserved to modules released under a GPL-compatible license).[228]

Linux provides in-kernel APIs that manipulate data structures (e.g., linked lists, radix trees,[229] red-black trees,[230] queues) or perform common routines (e.g., copying data from and to user space, allocating memory, printing lines to the system log, and so on) that have remained stable at least since Linux version 2.6.[231][232][233]

In-kernel APIs include libraries of low-level common services used by device drivers. The Linux developers chose not to maintain a stable in-kernel ABI: modules compiled for a specific version of the kernel cannot be loaded into another version without being recompiled.[193]

Linux, like other kernels, has the ability to manage processes, including creating, suspending, resuming, and terminating them. Unlike other operating systems, the Linux kernel implements processes as groups of threads called tasks. If two tasks share the same TGID, then in kernel terminology they are called a task group.
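The familiar userspace view of this machinery is process creation via fork. A minimal, hedged sketch (the messages are illustrative):

```c
/* Creating and reaping a child process with fork(2); clone(2) offers
 * finer-grained control over what the new task shares with its parent. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                /* duplicate the calling task */

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                    /* child branch */
        printf("child: pid %d\n", getpid());
        _exit(0);                      /* terminate the child */
    }
    waitpid(pid, NULL, 0);             /* parent waits for the child */
    printf("parent: child %d finished\n", pid);
    return 0;
}
```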
Each task is represented by a task_struct data structure. When a process is created, it is assigned a globally unique identifier called a PID, which cannot be shared.[243][244] A new process can be created by calling the clone[245] family of system calls or the fork system call. Processes can be suspended and resumed by the kernel by sending signals such as SIGSTOP and SIGCONT. A process can terminate itself by calling the exit system call, or be terminated by another process sending signals such as SIGKILL, SIGABRT or SIGINT.

If the executable is dynamically linked to shared libraries, a dynamic linker is used to find and load the needed objects, prepare the program to run, and then run it.[246]

The Native POSIX Thread Library (NPTL)[247] provides the POSIX standard thread interface (pthreads) to userspace. The kernel is aware of neither processes nor threads as such, only of tasks; the thread abstraction is thus provided in userspace, on top of tasks. Threads in Linux are implemented as tasks that share resources, while tasks that share none are independent processes.

The kernel provides the futex(7) (fast user-space mutex) mechanisms for user-space locking and synchronization.[248] The majority of the operations are performed in userspace, but it may be necessary to communicate with the kernel using the futex(2) system call.[207]

As opposed to the userspace threads described above, kernel threads run in kernel space.[249] They are threads created by the kernel itself for specialized tasks; they are privileged like the rest of the kernel and are not bound to any process or application.

The Linux process scheduler is modular, in the sense that it enables different scheduling classes and policies.[250][251] Scheduler classes are pluggable scheduler algorithms that can be registered with the base scheduler code. Each class schedules different types of processes. The core code of the scheduler iterates over each class in order of priority and chooses the highest-priority class that has a schedulable entity of type struct sched_entity ready to run.[11]: 46–47 Entities may be threads, groups of threads, and even all the processes of a specific user.

Linux provides both user preemption and full kernel preemption.[11]: 62–63 Preemption reduces latency, increases responsiveness,[252] and makes Linux more suitable for desktop and real-time applications.

For normal tasks, by default, the kernel uses the Completely Fair Scheduler (CFS) class, introduced in version 2.6.23.[176] Internally, this default policy is identified in a C header by the macro SCHED_NORMAL. In other POSIX kernels, a similar policy known as SCHED_OTHER allocates CPU timeslices (i.e., it assigns absolute slices of the processor time depending on either a predetermined or a dynamically computed priority of each process). The Linux CFS does away with absolute timeslices and assigns a fair proportion of CPU time, as a function of parameters like the total number of runnable processes and the time they have already run; this function also takes into account a kind of weight that depends on their relative priorities (nice values).[11]: 46–50
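Userspace selects among these scheduling policies through interfaces such as sched_setscheduler(2). A minimal, hedged sketch follows; the priority value is chosen arbitrarily, and moving a thread into a real-time class normally requires CAP_SYS_NICE or root.

```c
/* Moving the calling thread from the default policy into the
 * real-time SCHED_FIFO class discussed below. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param param = { .sched_priority = 10 }; /* 1..99 for FIFO */

    /* A pid of 0 means "the calling thread". Returns 0 on success. */
    if (sched_setscheduler(0, SCHED_FIFO, &param)) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now scheduled under SCHED_FIFO, priority %d\n",
           param.sched_priority);
    return 0;
}
```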
With user preemption, the kernel scheduler can replace the current process, performing a context switch to a different one that thereby acquires the computing resources for running (CPU, memory, and more). It does so according to the CFS algorithm (in particular, it uses a variable called vruntime for sorting entities, and then chooses the one that has the smallest vruntime, i.e., the schedulable entity that has had the least share of CPU time), to the active scheduler policy, and to the relative priorities.[253] With kernel preemption, the kernel can preempt itself when an interrupt handler returns, when kernel tasks block, and whenever a subsystem explicitly calls the schedule() function.

The kernel also contains two POSIX-compliant[254] real-time scheduling classes named SCHED_FIFO (realtime first-in-first-out) and SCHED_RR (realtime round-robin), both of which take precedence over the default class.[250] An additional scheduling policy known as SCHED_DEADLINE, implementing the earliest deadline first algorithm (EDF), was added in kernel version 3.14, released on 30 March 2014.[255][256] SCHED_DEADLINE takes precedence over all the other scheduling classes.

Real-time PREEMPT_RT patches, included in mainline Linux since version 2.6, provide a deterministic scheduler, the removal of preemption and interrupt disabling (where possible), PI mutexes (i.e., locking primitives that avoid priority inversion),[257][258] support for High Precision Event Timers (HPET), preemptive read-copy-update (RCU), (forced) IRQ threads, and other minor features.[259][260][261]

In 2023, Peter Zijlstra proposed replacing CFS with an earliest eligible virtual deadline first (EEVDF) scheduler,[262][263] to prevent the need for CFS "latency nice" patches.[264] The EEVDF scheduler replaced CFS in version 6.6 of the Linux kernel.[175]

The kernel has different causes of concurrency (e.g., interrupts, bottom halves, preemption of kernel and user tasks, symmetric multiprocessing).[11]: 167 For protecting critical regions (sections of code that must be executed atomically), shared memory locations (like global variables and other data structures with global scope), and regions of memory that are asynchronously modifiable by hardware (e.g., having the C volatile type qualifier), Linux provides a large set of tools. They consist of atomic types (which can only be manipulated by a set of specific operators), spinlocks, semaphores, mutexes,[265][11]: 176–198[266] and lockless algorithms (e.g., RCUs).[267][268][269] Most lock-less algorithms are built on top of memory barriers for the purpose of enforcing memory ordering and preventing undesired side effects due to compiler optimization.[270][271][272][273]

PREEMPT_RT code included in mainline Linux provides RT-mutexes, a special kind of mutex which does not disable preemption and has support for priority inheritance.[274][275] Almost all locks are changed into sleeping locks when the kernel is configured for realtime operation.[276][261][275] Priority inheritance avoids priority inversion by granting a low-priority task which holds a contended lock the priority of a higher-priority waiter until that lock is released.[277][278]

Linux includes a kernel lock validator called Lockdep.[279][280]

Although the management of interrupts could be seen as a single job, it is divided in two. This split is due to the different time constraints and synchronization needs of the two parts the work is composed of. The first part is made up of an asynchronous interrupt service routine that in Linux is known as the top half, while the second part is carried out by one of three types of the so-called bottom halves (softirq, tasklets, and work queues).[11]: 133–137
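As a hypothetical sketch of how kernel code might use one of the locking primitives listed above, the following protects a shared counter with a spinlock; the names are illustrative.

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(counter_lock);
static unsigned long counter;

void counter_increment(void)
{
        unsigned long flags;

        /* Saving and disabling local interrupts while the lock is held
         * prevents an interrupt handler on the same CPU from deadlocking
         * by trying to take the lock again. */
        spin_lock_irqsave(&counter_lock, flags);
        counter++;
        spin_unlock_irqrestore(&counter_lock, flags);
}
```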
Linux interrupt service routines can be nested: a new IRQ can trap into a high-priority ISR, which preempts any other lower-priority ISR.

The Linux kernel manages both physical and virtual memory. The kernel divides physical memory into zones,[281] each of which has a specific purpose; the most common zones, and the others that exist, are described in the official documentation.[281]

As for virtual memory, Linux implements it with four- or five-level page tables.[282] The kernel is not pageable (meaning it is always resident in physical memory and cannot be swapped to disk) and there is no memory protection (no SIGSEGV signals, unlike in user space); therefore, memory violations lead to instability and system crashes.[11]: 20 User memory is pageable by default, although paging for specific memory areas can be disabled with the mlock() system call family.

Page frame information is maintained in apposite data structures (of type struct page) that are populated immediately after boot and kept until shutdown, regardless of whether they are associated with virtual pages. The physical address space is divided into different zones, according to architectural constraints and intended use. NUMA systems with multiple memory banks are also supported.[283]

Small chunks of memory can be dynamically allocated in kernel space via the family of kmalloc() APIs and freed with the appropriate variant of kfree(). vmalloc() and kvfree() are used for large virtually contiguous chunks. alloc_pages() allocates the desired number of entire pages.

The kernel used to include the SLAB, SLUB and SLOB allocators as configurable alternatives.[285][286] The SLOB allocator was removed in Linux 6.4[287] and the SLAB allocator was removed in Linux 6.8.[288] The sole remaining allocator is SLUB, which aims for simplicity and efficiency,[286] is PREEMPT_RT compatible,[289] and was introduced in Linux 2.6.

The virtual filesystem (VFS) is the subsystem that implements filesystem support and everything related to filesystems. Linux supports numerous filesystems with differing features and functionality, so it was necessary to implement a generic layer that is independent of the underlying filesystems. The virtual filesystem exposes, to other Linux subsystems and to userspace, APIs that abstract away the differing implementations of the underlying filesystems.

VFS implements system calls like creat, open, read, write and close. VFS also implements a generic superblock[290] and inode that are independent of those of the underlying filesystem. In this subsystem, directories and files are represented by a data structure (struct file). When userspace requests access to a file, it is returned a file descriptor (a non-negative integer value), while on the kernel side this corresponds to a struct file structure, which stores everything the kernel needs to know about the file or directory.

sysfs and procfs are virtual filesystems that expose runtime and hardware information to userspace programs. These filesystems are not present on disk; instead, the kernel implements them as callbacks, routines that are invoked when the files are accessed by userspace.
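For example, a userspace program can read such kernel-generated information with ordinary file operations; a minimal sketch follows (the choice of /proc/version is arbitrary).

```c
/* Reading runtime information through the procfs virtual filesystem.
 * The "file" is not on disk; the read triggers a kernel callback that
 * generates its content on the fly. */
#include <stdio.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/version", "r");

    if (!f)
        return 1;
    if (fgets(line, sizeof line, f))
        fputs(line, stdout);    /* e.g., the running kernel's version string */
    fclose(f);
    return 0;
}
```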
While not originally designed to be portable,[14][291] Linux is now one of the most widely ported operating system kernels, running on a diverse range of systems from the ARM architecture to IBM z/Architecture mainframe computers. The first port was performed on the Motorola 68000 platform. The modifications to the kernel were so fundamental that Torvalds viewed the Motorola version as a fork and a "Linux-like operating system".[291] However, that experience moved Torvalds to lead a major restructure of the code to facilitate porting to more computing architectures. The first Linux that, in a single source tree, had code for more than i386 alone supported the DEC Alpha AXP 64-bit platform.[292][293][291]

Linux runs as the main operating system on IBM's Summit; as of October 2019, all of the world's 500 fastest supercomputers run some operating system based on the Linux kernel,[294] a big change from 1998, when the first Linux supercomputer was added to the list.[295]

Linux has also been ported to various handheld devices such as Apple's iPhone 3G and iPod.[296]

In 2007, the LKDDb project was started to build a comprehensive database of hardware and protocols known to Linux kernels.[297] The database is built automatically by static analysis of the kernel sources. Later, in 2014, the Linux Hardware project was launched to automatically collect a database of all tested hardware configurations with the help of users of various Linux distributions.[298]

Rebootless updates can even be applied to the kernel by using live patching technologies such as Ksplice, kpatch and kGraft. Minimalistic foundations for live kernel patching were merged into the Linux kernel mainline in kernel version 4.0, which was released on 12 April 2015. Those foundations, known as livepatch and based primarily on the kernel's ftrace functionality, form a common core capable of supporting hot patching by both kGraft and kpatch, by providing an application programming interface (API) for kernel modules that contain hot patches and an application binary interface (ABI) for the userspace management utilities. However, the common core included in Linux kernel 4.0 supports only the x86 architecture and does not provide any mechanisms for ensuring function-level consistency while the hot patches are applied.

Kernel bugs present potential security issues. For example, they may allow for privilege escalation or create denial-of-service attack vectors. Over the years, numerous bugs affecting system security were found and fixed.[299] New features are frequently implemented to improve the kernel's security.[300][301]

Capabilities(7) have already been introduced in the section about processes and threads. Android makes use of them, and systemd gives administrators detailed control over the capabilities of processes.[302]

Linux offers a wealth of mechanisms to reduce kernel attack surface and improve security, which are collectively known as the Linux Security Modules (LSM).[303] They comprise the Security-Enhanced Linux (SELinux) module, whose code was originally developed and then released to the public by the NSA,[304] and AppArmor,[190] among others. SELinux is now actively developed and maintained on GitHub.[189] SELinux and AppArmor provide support for access control security policies, including mandatory access control (MAC), though they profoundly differ in complexity and scope.

Another security feature is Seccomp BPF (SECure COMPuting with Berkeley Packet Filters), which works by filtering parameters and reducing the set of system calls available to user-land applications.[305]

Critics have accused kernel developers of covering up security flaws, or at least not announcing them; in 2008, Torvalds responded to this with the following:[306][307]

I personally consider security bugs to be just "normal bugs". I don't cover them up, but I also don't have any reason what-so-ever to think it's a good idea to track them and announce them as something special...one reason I refuse to bother with the whole security circus is that I think it glorifies—and thus encourages—the wrong behavior.
It makes "heroes" out of security people, as if the people who don't just fix normal bugs aren't as important. In fact, all the boring normal bugs arewaymore important, just because there's[sic] a lot more of them. I don't think some spectacular security hole should be glorified or cared about as being any more "special" than a random spectacular crash due to bad locking. Linux distributions typically release security updates to fix vulnerabilities in the Linux kernel. Many offerlong-term supportreleases that receive security updates for a certain Linux kernel version for an extended period of time. Initially, Torvalds released Linux under a license which forbade any commercial use.[308]This was changed in version 0.12 by a switch to theGNU General Public Licenseversion 2 (GPLv2).[19]This license allows distribution and sale of possibly modified and unmodified versions of Linux but requires that all those copies be released under the same license and be accompanied by - or that, on request, free access is given to - the complete corresponding source code.[309]Torvalds has described licensing Linux under the GPLv2 as the "best thing I ever did".[308] The Linux kernel is licensed explicitly underGNU General Public Licenseversion 2 only (GPL-2.0-only) with an explicit syscall exception (Linux-syscall-note),[5][8][9]without offering the licensee the option to choose any later version, which is a common GPL extension. Contributed code must be available underGPL-compatible license.[10][201] There was considerable debate about how easily the license could be changed to use later GPL versions (including version 3), and whether this change is even desirable.[310]Torvalds himself specifically indicated upon the release of version 2.4.0 that his own code is released only under version 2.[311]However, the terms of the GPL state that if no version is specified, then any version may be used,[312]andAlan Coxpointed out that very few other Linux contributors had specified a particular version of the GPL.[313] In September 2006, a survey of 29 key kernel programmers indicated that 28 preferred GPLv2 to the then-current GPLv3 draft. Torvalds commented, "I think a number of outsiders... believed that I personally was just the odd man out because I've been so publicly not a huge fan of the GPLv3."[314]This group of high-profile kernel developers, including Torvalds,Greg Kroah-HartmanandAndrew Morton, commented on mass media about their objections to the GPLv3.[315]They referred to clauses regardingDRM/tivoization, patents, "additional restrictions" and warned aBalkanisationof the "Open Source Universe" by the GPLv3.[315][316]Torvalds, who decided not to adopt the GPLv3 for the Linux kernel, reiterated his criticism even years later.[317] It is debated whether someloadable kernel modules(LKMs) are to be consideredderivative worksunder copyright law, and thereby whether or not they fall under the terms of the GPL. In accordance with the license rules, LKMs using only a public subset of the kernel interfaces[226][227]are non-derived works, thus Linux gives system administrators the mechanisms to load out-of-tree binary objects into the kernel address space.[10] There are some out-of-tree loadable modules that make legitimate use of thedma_bufkernel feature.[318]GPL compliant code can certainly use it. However, a different possible use case would beNvidia Optimusthat pairs a fast GPU with an Intel integrated GPU, where the Nvidia GPU writes into theIntelframebuffer when it is active. 
However, Nvidia cannot use this infrastructure, because doing so would necessitate bypassing a rule that the feature can be used only by LKMs that are also GPL.[228] Alan Cox replied on LKML, rejecting a request from one of Nvidia's engineers to remove this technical enforcement from the API.[319] Torvalds clearly stated on the LKML that "[I] claim that binary-only kernel modules ARE derivative 'by default'".[320]

On the other hand, Torvalds has also said that "[one] gray area in particular is something like a driver that was originally written for another operating system (i.e., clearly not a derived work of Linux in origin). THAT is a gray area, and _that_ is the area where I personally believe that some modules may be considered to not be derived works simply because they weren't designed for Linux and don't depend on any special Linux behaviour".[321] Proprietary graphics drivers, in particular, are heavily discussed. Whenever proprietary modules are loaded into Linux, the kernel marks itself as being "tainted",[322] and therefore bug reports from tainted kernels will often be ignored by developers.

The official kernel, that is, the Linus git branch at the kernel.org repository, contains binary blobs released under the terms of the GNU GPLv2 license.[5][10] Linux can also search filesystems to locate binary blobs, proprietary firmware, drivers, or other executable modules; it can then load and link them into kernel space.[323]

When it is needed (e.g., for accessing boot devices or for speed), firmware can be built into the kernel; this means building the firmware into vmlinux. However, this is not always a viable option, for technical or legal reasons (e.g., it is not permitted to do this with firmware that is non-GPL-compatible, although this is quite common nonetheless).[324]

Linux is a registered trademark of Linus Torvalds in the United States, the European Union, and some other countries.[325][326] A legal battle over the trademark began in 1996, when William Della Croce, a lawyer who was never involved in the development of Linux, started requesting licensing fees for the use of the word Linux. After it was proven that the word was in common use long before Della Croce's claimed first use, the trademark was awarded to Torvalds.[327][328][329]

In October 2024, kernel developer Greg Kroah-Hartman removed some kernel developers whose email addresses suggested a connection to Russia from their roles as maintainers.[330][331] Linus Torvalds responded that he did not support Russian aggression and would not revert the patch, insinuating that opponents of the patch were Russian trolls.[332] James Bottomley, a kernel developer, issued an apology for the handling of the situation and clarified that the action was a consequence of U.S. sanctions against Russia.[333]
https://en.wikipedia.org/wiki/Linux_kernel
Xen (pronounced /ˈzɛn/) is a free and open-source type-1 hypervisor, providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently. It was originally developed by the University of Cambridge Computer Laboratory and is now being developed by the Linux Foundation with support from Intel, Citrix, Arm Ltd, Huawei, AWS, Alibaba Cloud, AMD, Bitdefender and EPAM Systems.

The Xen Project community develops and maintains Xen Project as free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2. Xen Project is currently available for the IA-32, x86-64 and ARM instruction sets.[4]

Xen Project runs in a more privileged CPU state than any other software on the machine, except for firmware. Responsibilities of the hypervisor include memory management and CPU scheduling of all virtual machines ("domains"), and launching the most privileged domain ("dom0") - the only virtual machine which by default has direct access to hardware. From the dom0 the hypervisor can be managed and unprivileged domains ("domU") can be launched.[5]

The dom0 domain is typically a version of Linux or BSD. User domains may either be traditional operating systems, such as Microsoft Windows, for which privileged instructions are supported by hardware virtualization instructions (if the host processor supports x86 virtualization, e.g., Intel VT-x and AMD-V),[6] or paravirtualized operating systems, whereby the operating system is aware that it is running inside a virtual machine, and so makes hypercalls directly, rather than issuing privileged instructions. Xen Project boots from a bootloader such as GNU GRUB, and then usually loads a paravirtualized host operating system into the host domain (dom0).

Xen originated as a research project at the University of Cambridge led by Ian Pratt, a senior lecturer in the Computer Laboratory, and his PhD student Keir Fraser. According to Anil Madhavapeddy, an early contributor, Xen started as a bet on whether Fraser could make multiple Linux kernels boot on the same hardware in a weekend.[7] The first public release of Xen was made in 2003, with v1.0 following in 2004. Soon after, Pratt and Fraser, along with other Cambridge alumni including Simon Crosby and founding CEO Nick Gault, created XenSource Inc. to turn Xen into a competitive enterprise product.

To support embedded systems such as smartphones and IoT devices with relatively scarce hardware computing resources, the Secure Xen ARM architecture on an ARM CPU was exhibited at the Xen Summit held at IBM T.J. Watson on April 17, 2007.[8][9] The first public release of the Secure Xen ARM source code was made at the Xen Summit on June 24, 2008,[10][11] by Sang-bum Suh,[12] a Cambridge alumnus, at Samsung Electronics.

On October 22, 2007, Citrix Systems completed its acquisition of XenSource,[13] and the Xen Project moved to the xen.org domain. This move had started some time previously, and made public the existence of the Xen Project Advisory Board (Xen AB), which had members from Citrix, IBM, Intel, Hewlett-Packard, Novell, Red Hat, Sun Microsystems and Oracle. The Xen Advisory Board advises the Xen Project leader and is responsible for the Xen trademark,[14] which Citrix has freely licensed to all vendors and projects that implement the Xen hypervisor.[15] Citrix also used the Xen brand itself for some proprietary products unrelated to Xen, including XenApp and XenDesktop.
On April 15, 2013, it was announced that the Xen Project was moved under the auspices of the Linux Foundation as a Collaborative Project.[16] The Linux Foundation launched a new trademark for "Xen Project" to differentiate the project from any commercial use of the older "Xen" trademark. A new community website was launched at xenproject.org[17] as part of the transfer. Project members at the time of the announcement included Amazon, AMD, Bromium, CA Technologies, Calxeda, Cisco, Citrix, Google, Intel, Oracle, Samsung, and Verizon.[18] The Xen project itself is self-governing.[19]

Since version 3.0 of the Linux kernel, Xen support for dom0 and domU exists in the mainline kernel.[20]

Internet hosting service companies use hypervisors to provide virtual private servers. Amazon EC2 (from August 2006 to November 2017),[49] IBM SoftLayer,[50] Liquid Web, Fujitsu Global Cloud Platform,[51] Linode, OrionVM[52] and Rackspace Cloud use Xen as the primary VM hypervisor for their product offerings.[53]

Virtual machine monitors (also known as hypervisors) also often operate on mainframes and large servers running IBM, HP, and other systems. Server virtualization can provide a variety of benefits. Xen's support for virtual machine live migration from one host to another allows load balancing and the avoidance of downtime. Virtualization also has benefits when working on development (including the development of operating systems): running the new system as a guest avoids the need to reboot the physical computer whenever a bug occurs. Sandboxed guest systems can also help in computer-security research, allowing study of the effects of some virus or worm without the possibility of compromising the host system. Finally, hardware appliance vendors may decide to ship their appliance running several guest systems, so as to be able to execute various pieces of software that require different operating systems.

Xen offers five approaches to running the guest operating system.[54][55][56]

Xen provides a form of virtualization known as paravirtualization, in which guests run a modified operating system. The guests are modified to use a special hypercall ABI, instead of certain architectural features. Through paravirtualization, Xen can achieve high performance even on its host architecture (x86), which has a reputation for non-cooperation with traditional virtualization techniques.[57][58] Xen can run paravirtualized guests ("PV guests" in Xen terminology) even on CPUs without any explicit support for virtualization.

Paravirtualization avoids the need to emulate a full set of hardware and firmware services, which makes a PV system simpler to manage and reduces the attack surface exposed to potentially malicious guests. On 32-bit x86, the Xen host kernel code runs in Ring 0, while the hosted domains run in Ring 1 (kernel) and Ring 3 (applications).

CPUs that support virtualization make it possible to run unmodified guests, including proprietary operating systems (such as Microsoft Windows). This is generally known as hardware-assisted virtualization; in Xen, however, it is known as hardware virtual machine (HVM).
HVM extensions provide additional execution modes, with an explicit distinction between the most-privileged modes used by the hypervisor, with access to the real hardware (called "root mode" in x86), and the less-privileged modes used by guest kernels and applications, with "hardware" accesses under complete control of the hypervisor (in x86, known as "non-root mode"; both root and non-root mode have Rings 0–3). Both Intel and AMD have contributed modifications to Xen to exploit their respective Intel VT-x and AMD-V architecture extensions.[59] Use of the ARMv7A and v8A virtualization extensions came with Xen 4.3.[60] HVM extensions also often offer new instructions to allow direct calls by a paravirtualized guest/driver into the hypervisor, typically used for I/O or other operations needing high performance. These allow HVM guests with suitable minor modifications to gain many of the performance benefits of paravirtualized I/O.

In current versions of Xen (up to 4.2), only fully virtualized HVM guests can make use of hardware facilities for multiple independent levels of memory protection and paging. As a result, for some workloads, HVM guests with PV drivers (also known as PV-on-HVM, or PVH) provide better performance than pure PV guests.

Xen HVM has device emulation based on the QEMU project to provide I/O virtualization to the virtual machines. The system emulates hardware via a patched QEMU "device manager" (qemu-dm) daemon running as a backend in dom0. This means that the virtualized machines see an emulated version of a fairly basic PC. In a performance-critical environment, PV-on-HVM disk and network drivers are used during normal guest operation, so that the emulated PC hardware is mostly used for booting.

Administrators can "live migrate" Xen virtual machines between physical hosts across a LAN without loss of availability. During this procedure, Xen iteratively copies the memory of the virtual machine to the destination over the LAN without stopping its execution. The process requires a stoppage of around 60–300 ms to perform final synchronization before the virtual machine begins executing at its final destination, providing an illusion of seamless migration. Similar technology can serve to suspend running virtual machines to disk, "freezing" their running state for resumption at a later date.

Xen can scale to 4095 physical CPUs, 256 VCPUs per HVM guest, 512 VCPUs per PV guest, 16 TB of RAM per host, and up to 1 TB of RAM per HVM guest or 512 GB of RAM per PV guest.[61]

The Xen hypervisor has been ported to a number of processor families.

Xen can be shipped in a dedicated virtualization platform, such as XCP-ng or XenServer (formerly Citrix Hypervisor, and before that Citrix XenServer, and before that XenSource's XenEnterprise). Alternatively, Xen is distributed as an optional configuration of many standard operating systems.

Guest systems can run fully virtualized (which requires hardware support), paravirtualized (which requires a modified guest operating system), or fully virtualized with paravirtualized drivers (PVHVM).[74][75] Most operating systems which can run on PCs can run as a Xen HVM guest, and a number of systems can operate as paravirtualized Xen guests.

Xen version 3.0 introduced the capability to run Microsoft Windows as a guest operating system unmodified, if the host machine's processor supports hardware virtualization provided by Intel VT-x (formerly codenamed Vanderpool) or AMD-V (formerly codenamed Pacifica).
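Whether a processor offers the hardware virtualization support that such HVM guests require can be checked from userspace; the following is a hedged sketch using the x86 CPUID instruction through the compiler-provided <cpuid.h> helper, with feature-bit positions as documented by Intel and AMD.

```c
/* Detecting Intel VT-x ("VMX") and AMD-V ("SVM") support via CPUID. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 1, ECX bit 5 reports Intel VT-x. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("Intel VT-x: %s\n", (ecx & (1u << 5)) ? "yes" : "no");

    /* Extended leaf 0x80000001, ECX bit 2 reports AMD-V. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        printf("AMD-V:      %s\n", (ecx & (1u << 2)) ? "yes" : "no");

    return 0;
}
```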
During the development of Xen 1.x, Microsoft Research, along with the University of Cambridge Operating System group, developed a port of Windows XP to Xen, made possible by Microsoft's Academic Licensing Program. The terms of this license do not allow the publication of this port, although documentation of the experience appears in the original Xen SOSP paper.[79]

James Harper and the Xen open-source community started developing free software paravirtualization drivers for Windows. These provide front-end drivers for the Xen block and network devices, and allow much higher disk and network performance for Windows systems running in HVM mode. Without these drivers, all disk and network traffic has to be processed through QEMU-DM.[80] Subsequently, Citrix released PV drivers for Windows under a BSD license, and continues to maintain them.[81]

Third-party developers have built a number of tools (known as Xen Management Consoles) to facilitate the common tasks of administering a Xen host, such as configuring, starting, monitoring and stopping Xen guests.

The Xen hypervisor is covered by the GNU General Public Licence, so all of these versions contain a core of free software with source code. However, many of them contain proprietary additions.
https://en.wikipedia.org/wiki/Xen_hypervisor
CPU-Z is a freeware system profiling and monitoring application for Microsoft Windows and Android that detects the central processing unit, RAM, motherboard chipset, and other hardware features of a modern personal computer or Android device.

CPU-Z is more comprehensive in virtually all areas than the tools provided in Windows for identifying hardware components, and thus assists in identifying certain components without the need to open the case; notable examples are the core revision and the RAM clock rate. It also provides information on the system's GPU.
https://en.wikipedia.org/wiki/CPU-Z
Spectre is one of the speculative execution CPU vulnerabilities which involve side-channel attacks. These affect modern microprocessors that perform branch prediction and other forms of speculative execution.[1][2][3] On most processors, the speculative execution resulting from a branch misprediction may leave observable side effects that can reveal private data to attackers. For example, if the pattern of memory accesses performed by such speculative execution depends on private data, the resulting state of the data cache constitutes a side channel through which an attacker may be able to extract information about the private data using a timing attack.[4][5][6]

In addition to vulnerabilities associated with installed applications, JIT engines used for JavaScript were found to be vulnerable. A website can read data stored in the browser for another website, or the browser's memory itself.[7]

Two Common Vulnerabilities and Exposures records related to Spectre, CVE-2017-5753 (bounds check bypass, Spectre-V1, Spectre 1.0) and CVE-2017-5715 (branch target injection, Spectre-V2), have been issued.[8]

In early 2018, Intel reported that it would redesign its CPUs to help protect against the Spectre and related Meltdown vulnerabilities (especially Spectre variant 2 and Meltdown, but not Spectre variant 1).[9][10][11][12] On 8 October 2018, Intel was reported to have added hardware and firmware mitigations regarding the Spectre and Meltdown vulnerabilities to its latest processors.[13]

In 2002 and 2003, Yukiyasu Tsunoo and colleagues from NEC showed how to attack the MISTY and DES symmetric key ciphers, respectively. In 2005, Daniel Bernstein from the University of Illinois, Chicago reported an extraction of an OpenSSL AES key via a cache timing attack, and Colin Percival had a working attack on the OpenSSL RSA key using the Intel processor's cache. In 2013, Yuval Yarom and Katrina Falkner from the University of Adelaide showed how measuring the access time to data lets a nefarious application determine whether the information was read from the cache or not. If it was read from the cache, the access time would be very short, meaning the data read could include the private key of an encryption algorithm. This technique was used to successfully attack GnuPG, AES and other cryptographic implementations.[14][15][16][17][18][19] In January 2017, Anders Fogh gave a presentation at the Ruhr University Bochum about automatically finding covert channels, especially on processors with a pipeline used by more than one processor core.[20]

Spectre proper was discovered independently by Jann Horn from Google's Project Zero and Paul Kocher in collaboration with Daniel Genkin, Mike Hamburg, Moritz Lipp, and Yuval Yarom.[4][21] It was made public in conjunction with another vulnerability, Meltdown, on 3 January 2018, after the affected hardware vendors had already been made aware of the issue on 1 June 2017.[22] The vulnerability was called Spectre because it was "based on the root cause, speculative execution. As it is not easy to fix, it will haunt us for quite some time."[23]

On 28 January 2018, it was reported that Intel had shared news of the Meltdown and Spectre security vulnerabilities with Chinese technology companies before notifying the U.S. government of the flaws.[24]
On 29 January 2018, Microsoft was reported to have released a Windows update that disabled the problematic Intel microcode fix (which had, in some cases, caused reboots, system instability, and data loss or corruption) issued earlier by Intel for the Spectre variant 2 attack.[25][26] Woody Leonhard of ComputerWorld expressed concern about installing the new Microsoft patch.[27]

Since the disclosure of Spectre and Meltdown in January 2018, much research has been done on vulnerabilities related to speculative execution. On 3 May 2018, eight additional Spectre-class flaws, provisionally named Spectre-NG by c't (a German computer magazine), were reported, affecting Intel and possibly AMD and ARM processors. Intel reported that it was preparing new patches to mitigate these flaws.[28][29][30][31] Affected are all Core i Series processors and Xeon derivatives since Nehalem (2010), and Atom-based processors since 2013.[32] Intel postponed its release of microcode updates to 10 July 2018.[33][32]

On 21 May 2018, Intel published information on the first two Spectre-NG class side-channel vulnerabilities, CVE-2018-3640 (Rogue System Register Read, Variant 3a) and CVE-2018-3639 (Speculative Store Bypass, Variant 4),[34][35] also referred to as Intel SA-00115 and HP PSR-2018-0074, respectively.

According to Amazon Germany, Cyberus Technology, SYSGO, and Colin Percival (FreeBSD), Intel revealed details on the third Spectre-NG variant, CVE-2018-3665 (Lazy FP State Restore, Intel SA-00145), on 13 June 2018.[36][37][38][39] It is also known as Lazy FPU state leak (abbreviated "LazyFP") and "Spectre-NG 3".[38]

On 10 July 2018, Intel revealed details on another Spectre-NG class vulnerability called "Bounds Check Bypass Store" (BCBS), or "Spectre 1.1" (CVE-2018-3693), which was able to write as well as read out of bounds.[40][41][42][43] Another variant, named "Spectre 1.2", was mentioned as well.[43]

In late July 2018, researchers at the universities of Saarland and California revealed ret2spec (aka "Spectre v5") and SpectreRSB, new types of code execution vulnerabilities using the return stack buffer (RSB).[44][45][46]

At the end of July 2018, researchers at the Graz University of Technology revealed "NetSpectre", a new type of remote attack similar to Spectre v1, but which does not need attacker-controlled code to be run on the target device at all.[47][48]

In November 2018, five new variants of the attacks were revealed. Researchers attempted to compromise CPU protection mechanisms using code to exploit the CPU pattern history table, branch target buffer, return stack buffer, and branch history table.[49]

In August 2019, a related speculative execution CPU vulnerability, Spectre SWAPGS (CVE-2019-1125), was reported.[50][51][52]

In July 2020, a team of researchers from TU Kaiserslautern, Germany, published a new Spectre variant called "Spectre-STC" (single-threaded contention). This variant makes use of port contention in shared resources and can be applied even in single-threaded cores.[53]

In late April 2021, a related vulnerability was discovered that breaks through the security systems designed to mitigate Spectre through use of the micro-op cache.
The vulnerability is known to affect Skylake and later processors from Intel, and Zen-based processors from AMD.[54]

In February 2023, a team of researchers at North Carolina State University uncovered a new code execution vulnerability called Spectre-HD, also known as "Spectre SRV" or "Spectre v6". This vulnerability leverages the speculative vectorization with selective replay (SRV) technique, showing "Leakage from Higher Dimensional Speculation".[55][56]

Instead of a single easy-to-fix vulnerability, the Spectre white paper[1] describes a whole class[57] of potential vulnerabilities. They are all based on exploiting side effects of speculative execution, a common means of hiding memory latency and so speeding up execution in modern microprocessors. In particular, Spectre centers on branch prediction, which is a special case of speculative execution. Unlike the related Meltdown vulnerability disclosed at the same time, Spectre does not rely on a specific feature of a single processor's memory management and protection system, but is instead a more generalized idea.

The starting point of the white paper is that of a side-channel timing attack[58] applied to the branch prediction machinery of modern microprocessors with speculative execution. While at the architectural level, documented in processor data books, any results of misprediction are specified to be discarded after the fact, the resulting speculative execution may still leave side effects, like loaded cache lines. These can then affect the so-called non-functional aspects of the computing environment later on. If such side effects, including but not limited to memory access timing, are visible to a malicious program, and can be engineered to depend on sensitive data held by the victim process, then these side effects can result in such data becoming discernible. This can happen despite the formal architecture-level security arrangements working as designed; in this case, lower-level, microarchitectural optimizations to code execution can leak information not essential to the correctness of normal program execution.

The Spectre paper explains the attack in four essential steps.

Spectre Variant 1, also called Bounds Check Bypass, is an exploit of CPU speculative execution in conditional branches related to memory access bounds. This occurs because the CPU speculatively accesses memory that is subject to bounds checks, such as arrays, leading to a bounds bypass (an out-of-bounds index access); the speculative execution happens before the CPU has validated the bounds check, and is reverted only after the misprediction is discovered, resulting in side-channel leakage.[59] This attack is the result of conditional branch misprediction, which causes a vulnerable processor to speculatively access out-of-bounds data before the access is validated and before any exception arises (a representative gadget is sketched below).

Spectre Variant 2, also called Branch Target Injection, is an exploitation of the CPU's speculative execution of indirect branches, unlike Spectre Variant 1, which is related to conditional branches. This vulnerability arises due to misprediction by the indirect branch predictor. It differs from Variant 1 because indirect branches are branches whose targets are unknown at compile time and need to be resolved dynamically. An attacker can poison the Branch Target Buffer (a buffer that stores the history of previously taken branches), causing the indirect branch predictor to mispredict and redirect execution to locations that the program's control flow would never legitimately reach.
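The following sketch shows the kind of conditional-branch gadget that Variant 1 targets; it is modeled on the widely circulated example from the Spectre white paper, and the array names follow that example.

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 512];
unsigned int array1_size = 16;

void victim_function(size_t x)
{
    if (x < array1_size) {
        /* Trained with in-bounds values of x, the branch predictor lets
         * this load run speculatively even when x is out of bounds. The
         * secret byte array1[x] then selects which line of array2 gets
         * cached; the 512-byte stride keeps each of the 256 possible
         * byte values in a distinct cache line for a later timing probe. */
        volatile uint8_t temp = array2[array1[x] * 512];
        (void)temp;
    }
}
```

An attacker first calls the function with valid indices to train the predictor, evicts array1_size and array2 from the cache, then calls it with an out-of-bounds x; timing reads of array2 afterwards reveal the secret byte.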
While Spectre is simpler to exploit with a compiled language such as C or C++ by locally executing machine code, it can also be remotely exploited by code hosted on remote malicious web pages, for example in interpreted languages like JavaScript, which run locally using a web browser. The scripted malware would then have access to all the memory mapped to the address space of the running browser.[60]

The exploit using remote JavaScript follows a similar flow to that of a local machine code exploit: flush cache → mistrain branch predictor → timed reads (tracking hit / miss).

The clflush instruction (cache-line flush) cannot be used directly from JavaScript, so achieving the same effect requires another approach. There are several automatic cache eviction policies which the CPU may choose from, and the attack relies on being able to force that eviction for the exploit to work. It was found that using a second index on the large array, which was kept several iterations behind the first index, would cause the least recently used (LRU) policy to be used. This allows the exploit to effectively clear the cache just by doing incremental reads on a large dataset.

The branch predictor would then be mistrained by iterating over a very large dataset using bitwise operations for setting the index to in-range values, and then using an out-of-bounds address for the final iteration. A high-precision timer would then be required in order to determine whether a set of reads led to a cache hit or a cache miss. While browsers like Chrome, Firefox, and Tor Browser (based on Firefox) have placed restrictions on the resolution of timers (which a Spectre exploit requires to distinguish cache hits from misses), at the time the white paper was written, the Spectre authors were able to create a high-precision timer using the web worker feature of HTML5. Careful coding and analysis of the machine code executed by the just-in-time (JIT) compiler was required to ensure that the cache-clearing and exploitative reads were not optimized out.

As of 2018, almost every computer system is affected by Spectre, including desktops, laptops, and mobile devices. Specifically, Spectre has been shown to work on Intel, AMD, ARM-based, and IBM processors.[61][62][63] Intel responded to the reported security vulnerabilities with an official statement.[64] AMD originally acknowledged vulnerability to one of the Spectre variants (GPZ variant 1), but stated that vulnerability to another (GPZ variant 2) had not been demonstrated on AMD processors, claiming it posed a "near zero risk of exploitation" due to differences in AMD architecture. In an update nine days later, AMD said that "GPZ Variant 2 ... is applicable to AMD processors" and defined upcoming steps to mitigate the threat.
Several sources took AMD's news of the vulnerability to GPZ variant 2 as a change from AMD's prior claim, though AMD maintained that its position had not changed.[65][66][67]

Researchers have indicated that the Spectre vulnerability can possibly affect some Intel, AMD, and ARM processors.[68][69][70][71] Specifically, processors with speculative execution are affected by these vulnerabilities.[72]

ARM has reported that the majority of its processors are not vulnerable, and has published a list of the specific processors that are affected by the Spectre vulnerability: the Cortex-R7, Cortex-R8, Cortex-A8, Cortex-A9, Cortex-A15, Cortex-A17, Cortex-A57, Cortex-A72, Cortex-A73 and ARM Cortex-A75 cores.[73] Other manufacturers' custom CPU cores implementing the ARM instruction set, such as those found in newer members of the Apple A series processors, have also been reported to be vulnerable.[74] In general, higher-performance CPUs tend to have more intensive speculative execution, making them vulnerable to Spectre.[75]

Spectre has the potential to have a greater impact on cloud providers than Meltdown. Whereas Meltdown allows unauthorized applications to read from privileged memory to obtain sensitive data from processes running on the same cloud server, Spectre can allow malicious programs to induce a hypervisor to transmit the data to a guest system running on top of it.[76]

Since Spectre represents a whole class of attacks, there most likely cannot be a single patch for it.[3] While work is already being done to address special cases of the vulnerability, the original website devoted to Spectre and Meltdown states: "As [Spectre] is not easy to fix, it will haunt us for a long time."[4] At the same time, according to Dell: "No 'real-world' exploits of these vulnerabilities [i.e., Meltdown and Spectre] have been reported to date [7 February 2018], though researchers have produced proof-of-concepts."[77][78]

Several procedures to help protect home computers and related devices from the vulnerability have been published.[79][80][81][82] Spectre patches have been reported to significantly slow down performance, especially on older computers; on the eighth-generation Core platforms, benchmark performance drops of 2–14 percent have been measured.[83][5][84][85][86] On 18 January 2018, unwanted reboots due to the Meltdown and Spectre patches were reported, even for newer Intel chips. In early January 2018, Chris Hoffman of the website HowToGeek suggested that the fix would require "a complete hardware redesign for CPUs across the board" and noted how, once software fixes were released, benchmarks showed, and vendors claimed, that some users may notice slowdowns on their computers once patched.[87]

As early as 2018, machine learning was employed to detect attacks in real time.[88] This has led to an arms race in which attackers also employ machine learning to thwart machine-learning-based detectors, and detectors in turn employ generative adversarial networks to adapt detection techniques.[89]

On 4 January 2018, Google detailed a new technique on its security blog called "Retpoline" (a portmanteau of return and trampoline)[90] which can overcome the Spectre vulnerability with a negligible amount of processor overhead.
It involves compiler-level steering of indirect branches towards a different target that does not result in a vulnerable speculative out-of-order execution taking place.[91][92] While it was developed for the x86 instruction set, Google engineers believe the technique is transferable to other processors as well.[93] On 25 January 2018, the current status and possible future considerations in solving the Meltdown and Spectre vulnerabilities were presented.[94] In March 2018, Intel announced that it had developed hardware fixes for Meltdown and Spectre-V2 only, but not Spectre-V1.[9][10][11] The vulnerabilities were mitigated by a new partitioning system that improves process and privilege-level separation.[12] On 8 October 2018, Intel was reported to have added hardware and firmware mitigations for the Spectre and Meltdown vulnerabilities to its Coffee Lake-R processors and onwards.[13] On 18 October 2018, MIT researchers suggested a new mitigation approach, called DAWG (Dynamically Allocated Way Guard), which may promise better security without compromising performance.[95] On 16 April 2019, researchers from UC San Diego and the University of Virginia proposed Context-Sensitive Fencing, a microcode-based defense mechanism that surgically injects fences into the dynamic execution stream, protecting against a number of Spectre variants at just 8% performance degradation.[96] On 26 November 2021, researchers from Texas A&M University and Intel showed that Spectre attacks (and the wider family of transient-execution attacks) cannot be detected, before they leak data, by the typical antivirus or anti-malware software currently available. In particular, they showed that it is easy to build malware from evasive versions of these attacks, rather than from their generic gadgets, in order to bypass current antivirus applications. This is because these attacks can leak data using transient instructions that never get committed during a very short transient window, and so are not visible at the architectural (software) layer before the leakage, although they are visible at the microarchitectural (hardware) layer. Additionally, software is limited to monitoring four hardware performance counters (HPCs) every 100 ns, which makes it difficult, and in practice almost impossible, for antivirus applications to collect information about malicious activity correlated with these attacks before they can leak data.[88] On 20 October 2022, researchers from North Carolina State University, UC San Diego, and Intel announced that they had designed the first detection technology able to detect transient-execution attacks at the microarchitectural (hardware) layer before leakage. This was accomplished by building the first machine-learning accelerator for security, designed to be built into Intel chips. The technology samples transient-instruction activity every 1 ns and makes predictions every 10 ns, allowing detection of transient attacks such as Spectre and Meltdown before data leakage occurs, and it automatically enables countermeasures in the chip.
This technology is also equipped with adversarial training, making it immune to a large category of adversarial and evasive versions of the Spectre attack.[89] When Intel announced that Spectre mitigation could be switched on as a "security feature" instead of being an always-on bugfix, Linux creator Linus Torvalds called the patches "complete and utter garbage".[97][98] Ingo Molnár then suggested the use of function tracing machinery in the Linux kernel to fix Spectre without Indirect Branch Restricted Speculation (IBRS) microcode support. This would, as a result, only have a performance impact on processors based on Intel Skylake and newer architectures.[99][100][101] This ftrace- and retpoline-based machinery was incorporated into Linux 4.15 of January 2018.[102] The Linux kernel provides a sysfs interface to enumerate the current status of the system regarding Spectre in /sys/devices/system/cpu/vulnerabilities/.[75] On 2 March 2019, Microsoft was reported to have released an important Windows 10 (v1809) software mitigation for the Spectre v2 CPU vulnerability.[103] Initial mitigation efforts were not entirely without incident; as noted above, early patches caused measurable slowdowns and, on some systems, unwanted reboots.[83][99] Since exploitation of Spectre through JavaScript embedded in websites is possible,[1] it was planned to include mitigations against the attack by default in Chrome 64. Chrome 63 users could manually mitigate the attack by enabling the site isolation feature (chrome://flags#enable-site-per-process).[106] As of Firefox 57.0.4, Mozilla was reducing the resolution of JavaScript timers to help prevent timing attacks, with additional work on time-fuzzing techniques planned for future releases.[21][107] On January 15, 2018, Microsoft introduced mitigation for Spectre in Visual Studio, which can be applied by using the /Qspectre compiler switch. A developer would need to download and install the appropriate libraries using the Visual Studio installer.[108]
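As a usage note for the sysfs interface mentioned above, a minimal Python sketch (Linux-only; the directory may be absent or differently populated on older kernels) can enumerate what the kernel reports:

```python
# List the kernel's reported status for each known CPU vulnerability,
# e.g. "spectre_v2: Mitigation: Retpolines; ..." (exact text varies by system).
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

for name in sorted(os.listdir(VULN_DIR)):
    with open(os.path.join(VULN_DIR, name)) as f:
        print(f"{name}: {f.read().strip()}")
```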
https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)
Speculative Store Bypass (SSB) (CVE-2018-3639) is the name given to a hardware security vulnerability and its exploitation that takes advantage of speculative execution in a similar way to the Meltdown and Spectre security vulnerabilities.[1] It affects the ARM, AMD, and Intel families of processors. It was discovered by researchers at the Microsoft Security Response Center and Google Project Zero (GPZ).[2] After being leaked on 3 May 2018 as part of a group of eight additional Spectre-class flaws provisionally named Spectre-NG,[3][4][5][6] it was first disclosed to the public as "Variant 4" on 21 May 2018, alongside a related speculative execution vulnerability designated "Variant 3a".[7][1] Speculative execution exploit Variant 4[8] is referred to as Speculative Store Bypass (SSB)[1][9] and has been assigned CVE-2018-3639.[7] SSB is named Variant 4, but it is the fifth variant in the Spectre-Meltdown class of vulnerabilities.[7] The steps involved in the exploit are enumerated in the original disclosure.[1] Intel claims that web browsers that are already patched to mitigate Spectre Variants 1 and 2 are partially protected against Variant 4.[7] Intel said in a statement that the likelihood of end users being affected was "low" and that not all protections would be on by default due to some impact on performance.[10] The Chrome JavaScript team confirmed that effective mitigation of Variant 4 in software is infeasible, in part due to the performance impact.[11] Intel planned to address Variant 4 by releasing a microcode patch that creates a new hardware flag named Speculative Store Bypass Disable (SSBD).[7][2][12] A stable microcode patch had yet to be delivered at the time of disclosure, with Intel suggesting that the patch would be ready "in the coming weeks".[7] Many operating system vendors will release software updates to assist with mitigating Variant 4;[13][2][14] however, microcode/firmware updates are required for the software updates to have an effect.[13]
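The enumerated exploit steps are not reproduced above, but the core store-bypass pattern they rely on can be sketched as follows. This is a hedged, illustrative Python rendering of the mechanism as publicly described, not a working exploit; the names are invented, and Python performs no exploitable CPU-level speculation.

```python
# Illustration of the speculative store bypass pattern: on affected hardware,
# a younger load may execute speculatively before an older store to the same
# address has resolved, returning the stale (pre-store) value, which can then
# be leaked through a cache side channel as in Spectre.

probe = [0] * (256 * 512)    # probe array for the cache side channel

def ssb_pattern(buf: list, secret: int) -> None:
    buf[0] = secret          # older value held at the location
    buf[0] = 0               # store that architecturally overwrites the secret
    value = buf[0]           # load: on affected CPUs it may speculatively
                             # bypass the pending store and read the stale secret
    _ = probe[value * 512]   # transient use leaves a cache footprint to be timed

ssb_pattern([0], 42)         # architecturally, value is always 0 here
```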
https://en.wikipedia.org/wiki/Speculative_Store_Bypass
The proc filesystem (procfs) is a special filesystem in Unix-like operating systems that presents information about processes and other system information in a hierarchical file-like structure, providing a more convenient and standardized method for dynamically accessing process data held in the kernel than traditional tracing methods or direct access to kernel memory. Typically, it is mapped to a mount point named /proc at boot time. The proc filesystem acts as an interface to internal data structures about running processes in the kernel. In Linux, it can also be used to obtain information about the kernel and to change certain kernel parameters at runtime (sysctl). Many Unix-like operating systems support the proc filesystem, including System V, Solaris, IRIX, Tru64 UNIX, BSD, Linux, IBM AIX,[1] QNX, and Plan 9 from Bell Labs. OpenBSD dropped support in version 5.7, released in May 2015. It is absent from HP-UX[1] and macOS.[2] The Linux kernel extends it to non-process-related data. The proc filesystem provides a method of communication between kernel space and user space. For example, the GNU version of the process reporting utility ps uses the proc filesystem to obtain its data, without using any specialized system calls. Tom J. Killian implemented the UNIX 8th Edition (V8) version of /proc: he presented a paper titled "Processes as Files" at USENIX in June 1984. The design of procfs aimed to replace the ptrace system call used for process tracing. Detailed documentation can be found in the proc(4) manual page. The original AT&T System V Release 3 (SVR3) operating system (available internally to AT&T in 1986 and generally in 1987) did not come with the /proc filesystem, but a subsequent incremental version of it did. It only contained files representing the processes, rather than the now-common subdirectories. Roger Faulkner and Ron Gomes ported V8 /proc to SVR4, and published a paper called "The Process File System and Process Model in UNIX System V" at USENIX in January 1991. This kind of procfs supported the creation of ps, but the files could only be accessed with the functions read(), write(), and ioctl(). Between 1995 and 1996, Roger Faulkner created the procfs-2 interface for Solaris 2.6 that offers a structured /proc filesystem with subdirectories. Plan 9 implemented a process file system, but went further than V8. V8's process file system implemented a single file per process; Plan 9 created a hierarchy of separate files to provide those functions, and made /proc a real part of the file system. 4.4BSD cloned its implementation of /proc from Plan 9. As of February 2011, procfs was gradually being phased out in FreeBSD,[3] which has turned to the sysctl interface instead for process-related information. To provide binary compatibility with Linux user-space programs, the FreeBSD kernel also provides linprocfs, which is similar to the Linux procfs.[4] procfs was removed from OpenBSD in version 5.7, released in May 2015, because it "always suffered from race conditions and is now unused".[5] macOS did not implement procfs, and user-space programs have to use the sysctl interface to retrieve process data.[2] /proc in Solaris was available from the beginning (June 1992); Solaris 2.6 in 1996 introduced procfs2 from Roger Faulkner. Linux first added a /proc filesystem in v0.97.3, September 1992, and first began expanding it to non-process-related data in v0.98.6, December 1992.
As of 2020, the Linux implementation includes a directory for each running process, including kernel processes, in directories named /proc/PID, where PID is the process number. Each directory contains information about one process. (Users may obtain the PID with a utility such as pgrep, pidof, or ps.) /proc also includes non-process-related system information, although in the 2.6 kernel much of that information moved to a separate pseudo-filesystem, sysfs, mounted under /sys. On multi-core CPUs, /proc/cpuinfo contains the fields "siblings" and "cpu cores", to which the following calculation applies.[7] A CPU package is a physical CPU, which can have multiple cores (single-core for one, dual-core for two, quad-core for four). This allows a distinction between hyper-threading and dual-core: the number of hyper-threads per CPU package can be calculated as siblings / cpu cores. If both values for a CPU package are the same, then hyper-threading is not supported.[8] For instance, a CPU package with siblings=2 and "cpu cores"=2 is a dual-core CPU that does not support hyper-threading. The basic utilities that use /proc under Linux come in the procps (/proc processes) package, and only function in conjunction with a mounted /proc. Cygwin implemented a procfs that is basically the same as the Linux procfs.
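A minimal Python sketch of the siblings / cpu cores calculation described above (Linux-only; it assumes an SMP x86 system whose /proc/cpuinfo exposes both fields):

```python
# Derive hyper-threads per CPU package from the "siblings" and
# "cpu cores" fields of /proc/cpuinfo.
def hyperthreads_per_package(path: str = "/proc/cpuinfo") -> int:
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
            if "siblings" in fields and "cpu cores" in fields:
                break
    siblings = int(fields["siblings"])
    cores = int(fields["cpu cores"])
    return siblings // cores   # 1 means hyper-threading is not in use

print(hyperthreads_per_package())
```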
https://en.wikipedia.org/wiki/Cpuinfo
Linear hashing (LH) is a dynamic data structure which implements a hash table and grows or shrinks one bucket at a time. It was invented by Witold Litwin in 1980.[1][2] It has been analyzed by Baeza-Yates and Soza-Pollman.[3] It is the first in a number of schemes known as dynamic hashing,[3][4] such as Larson's Linear Hashing with Partial Extensions,[5] Linear Hashing with Priority Splitting,[6] Linear Hashing with Partial Expansions and Priority Splitting,[7] and Recursive Linear Hashing.[8] The file structure of a dynamic hashing data structure adapts itself to changes in the size of the file, so expensive periodic file reorganization is avoided.[4] A Linear Hashing file expands by splitting a predetermined bucket into two and shrinks by merging two predetermined buckets into one. The trigger for a reconstruction depends on the flavor of the scheme; it could be an overflow at a bucket or the load factor (i.e., the number of records divided by the number of buckets) moving outside of a predetermined range.[1] In Linear Hashing there are two types of buckets: those that are to be split and those already split. While extendible hashing splits only overflowing buckets, spiral hashing (a.k.a. spiral storage) distributes records unevenly over the buckets such that buckets with high costs of insertion, deletion, or retrieval are earliest in line for a split.[5] Linear Hashing has also been made into a scalable distributed data structure, LH*. In LH*, each bucket resides at a different server.[9] LH* itself has been expanded to provide data availability in the presence of failed buckets.[10] Key-based operations (inserts, deletes, updates, reads) in LH and LH* take maximum constant time independent of the number of buckets and hence of records.[1][10] Records in LH or LH* consist of a key and a content, the latter being essentially all the other attributes of the record.[1][10] They are stored in buckets. For example, in Ellis' implementation, a bucket is a linked list of records.[2] The file allows the key-based CRUD operations create/insert, read, update, and delete, as well as a scan operation that scans all records, for example to perform a database select operation on a non-key attribute.[10] Records are stored in buckets whose numbering starts with 0.[10] The key distinction from schemes such as Fagin's extendible hashing is that as the file expands due to insertions, only one bucket is split at a time, and the order in which buckets are split is already predetermined.[11] The hash function h_i(c) returns the 0-based index of the bucket that contains the record with key c. When a bucket which uses the hash function h_i is split into two new buckets, the hash function h_i is replaced with h_{i+1} for both of those new buckets. At any time, at most two hash functions h_l and h_{l+1} are used, where l corresponds to the current level. The family of hash functions h_i(c) is also referred to as the dynamic hash function. Typically, the value of i in h_i corresponds to the number of rightmost binary digits of the key c that are used to segregate the buckets. This dynamic hash function can be expressed arithmetically as h_i(c) = c mod 2^i. Note that when the total number of buckets is equal to one, i = 0.
The hash function to apply to a given key c is determined as follows: compute a = h_l(c); if a lies in front of the split pointer (a < s), the corresponding bucket has already been split, so use a = h_{l+1}(c) instead.[10] Linear hashing algorithms may use only controlled splits, or both controlled and uncontrolled splits. Controlled splitting occurs if a split is performed whenever the load factor, which is monitored by the file, exceeds a predetermined threshold.[10] If the hash index uses controlled splitting, the buckets are allowed to overflow by using linked overflow blocks. When the load factor surpasses a set threshold, the split pointer's designated bucket is split. Instead of using the load factor, this threshold can also be expressed as an occupancy percentage, in which case the maximum number of records in the hash index equals (occupancy percentage) × (max records per non-overflowed bucket) × (number of buckets).[12] An uncontrolled split occurs when a split is performed whenever a bucket overflows, in which case that bucket is split into two separate buckets. File contraction occurs in some LH algorithm implementations if a controlled split causes the load factor to sink below a threshold. In this case, a merge operation is triggered which undoes the last split and resets the file state.[10] The index of the next bucket to be split is part of the file state and is called the split pointer s. The split pointer corresponds to the first bucket that uses the hash function h_l instead of h_{l+1}.[10] For example, if numerical records are inserted into the hash index according to their rightmost binary digits, the bucket corresponding to the appended bucket will be split. Thus, given buckets labelled 000, 001, 10, 11, 100, 101, we would split bucket 10, because we are appending and creating the next sequential bucket, 110. This would give us the buckets 000, 001, 010, 11, 100, 101, 110.[12] When a bucket is split, the split pointer and possibly the level are updated as follows, such that the level is 0 when the linear hashing index has only one bucket: s is incremented, and if s reaches 2^l, then l is incremented and s is reset to 0.[10] The main contribution of LH* is to allow a client of an LH* file to find the bucket where the record resides even if the client does not know the file state. Clients in fact store their own version of the file state, which is initially just the knowledge of the first bucket, namely Bucket 0. Based on their file state, a client calculates the address of a key and sends a request to that bucket. At the bucket, the request is checked, and if the record is not at the bucket, it is forwarded. In a reasonably stable system, that is, if there is only one split or merge going on while the request is processed, it can be shown that there are at most two forwards. After a forward, the final bucket sends an Image Adjustment Message to the client, whose state is now closer to the state of the distributed file.[10] While forwards are reasonably rare for active clients, their number can be reduced even further by additional information exchange between servers and clients.[13] The file state consists of the split pointer s and the level l. If the original file started with N = 1 buckets, then the number of buckets n and the file state are related via n = 2^l + s.[13] Griswold and Townsend[14] discussed the adoption of linear hashing in the Icon language.
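A minimal Python sketch of the address calculation and split-pointer update described above, with h_i(c) = c mod 2^i (bucket contents and record movement are omitted):

```python
class LHState:
    def __init__(self) -> None:
        self.level = 0   # l: current level
        self.split = 0   # s: split pointer, the next bucket to split

    def bucket_for(self, key: int) -> int:
        # Address calculation: buckets in front of the split pointer have
        # already been split and therefore use h_{l+1}.
        addr = key % (2 ** self.level)            # h_l(key)
        if addr < self.split:
            addr = key % (2 ** (self.level + 1))  # h_{l+1}(key)
        return addr

    def record_split(self) -> None:
        # After splitting bucket s, advance the pointer; once every bucket
        # of the current level has been split, start a new level.
        self.split += 1
        if self.split == 2 ** self.level:
            self.level += 1
            self.split = 0

# The number of buckets n relates to the file state via n = 2**l + s.
```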
Griswold and Townsend's paper discusses the implementation alternatives of the dynamic array algorithm used in linear hashing, and presents performance comparisons using a list of Icon benchmark applications. Linear hashing is used in the Berkeley database system (BDB), which in turn is used by many software systems, via a C implementation derived from the CACM article and first published on Usenet in 1988 by Esmond Pitt.
https://en.wikipedia.org/wiki/Linear_hashing
In computer science, a Judy array is a data structure implementing a type of associative array with high performance and low memory usage.[1] Unlike most other key-value stores, Judy arrays use no hashing, leverage compression on their keys (which may be integers or strings), and can efficiently represent sparse data; that is, they may have large ranges of unassigned indices without greatly increasing memory usage or processing time. They are designed to remain efficient even on structures with sizes in the peta-element range, with performance scaling on the order of O(log n).[2] Roughly speaking, Judy arrays are highly optimized 256-ary radix trees.[3] Judy trees are usually faster than AVL trees, B-trees, hash tables and skip lists because they are highly optimized to maximize usage of the CPU cache. In addition, they require no tree balancing, and no hashing algorithm is used.[4] The Judy array was invented by Douglas Baskins and named after his sister.[5] Judy arrays are dynamic and can grow or shrink as elements are added to, or removed from, the array. The memory used by Judy arrays is nearly proportional to the number of elements in the Judy array. Judy arrays are designed to minimize the number of expensive cache-line fills from RAM, and so the algorithm contains much complex logic to avoid cache misses as often as possible. Due to these cache optimizations, Judy arrays are fast, especially for very large datasets. On data sets that are sequential or nearly sequential, Judy arrays can even outperform hash tables, since, unlike hash tables, the internal tree structure of Judy arrays maintains the ordering of the keys.[6] Judy arrays are extremely complicated. The smallest implementations are thousands of lines of code.[5] In addition, Judy arrays are optimized for machines with 64-byte cache lines, making them essentially unportable without a significant rewrite.[6]
https://en.wikipedia.org/wiki/Judy_array
In computer science, a radix tree (also radix trie or compact prefix tree or compressed trie) is a data structure that represents a space-optimized trie (prefix tree) in which each node that is the only child is merged with its parent. The result is that the number of children of every internal node is at most the radix r of the radix tree, where r = 2^x for some integer x ≥ 1. Unlike regular trees, edges can be labeled with sequences of elements as well as single elements. This makes radix trees much more efficient for small sets (especially if the strings are long) and for sets of strings that share long prefixes. Unlike regular trees (where whole keys are compared en masse from their beginning up to the point of inequality), the key at each node is compared chunk-of-bits by chunk-of-bits, where the quantity of bits in that chunk at that node is the radix r of the radix trie. When r is 2, the radix trie is binary (i.e., compare that node's 1-bit portion of the key), which minimizes sparseness at the expense of maximizing trie depth, i.e., maximizing up to conflation of nondiverging bit-strings in the key. When r ≥ 4 is a power of 2, the radix trie is an r-ary trie, which lessens the depth of the radix trie at the expense of potential sparseness. As an optimization, edge labels can be stored in constant size by using two pointers to a string (for the first and last elements).[1] Note that although the examples in this article show strings as sequences of characters, the type of the string elements can be chosen arbitrarily; for example, as a bit or byte of the string representation when using multibyte character encodings or Unicode. Radix trees are useful for constructing associative arrays with keys that can be expressed as strings. They find particular application in the area of IP routing,[2][3][4] where the ability to contain large ranges of values with a few exceptions is particularly suited to the hierarchical organization of IP addresses.[5] They are also used for inverted indexes of text documents in information retrieval. Radix trees support insertion, deletion, and searching operations. Insertion adds a new string to the trie while trying to minimize the amount of data stored. Deletion removes a string from the trie. Searching operations include (but are not necessarily limited to) exact lookup, find predecessor, find successor, and find all strings with a prefix. All of these operations are O(k), where k is the maximum length of all strings in the set and length is measured in the quantity of bits equal to the radix of the radix trie. The lookup operation determines if a string exists in a trie. Most operations modify this approach in some way to handle their specific tasks. For instance, the node where a string terminates may be of importance. This operation is similar to tries except that some edges consume multiple elements. Pseudocode for these operations assumes that suitable Edge and Node types with the expected members exist; a small sketch of lookup and insertion is given after the discussion of deletion below. To insert a string, we search the tree until we can make no further progress. At this point we either add a new outgoing edge labeled with all remaining elements in the input string, or, if there is already an outgoing edge sharing a prefix with the remaining input string, we split it into two edges (the first labeled with the common prefix) and proceed. This splitting step ensures that no node has more children than there are possible string elements. Several cases of insertion exist, though more may occur; in the illustrations, r simply represents the root.
It is assumed that edges can be labelled with empty strings to terminate strings where necessary and that the root has no incoming edge. (The lookup algorithm described above will not work when using empty-string edges.) To delete a string x from a tree, we first locate the leaf representing x. Then, assuming x exists, we remove the corresponding leaf node. If the parent of our leaf node has only one other child, then that child's incoming label is appended to the parent's incoming label and the child is removed. The data structure was invented in 1968 by Donald R. Morrison,[6] with whom it is primarily associated, and by Gernot Gwehenberger.[7] Donald Knuth, on pages 498–500 in Volume III of The Art of Computer Programming, calls these "Patricia's trees", presumably after the acronym in the title of Morrison's paper: "PATRICIA - Practical Algorithm to Retrieve Information Coded in Alphanumeric". Today, Patricia trees are seen as radix trees with radix equal to 2, which means that each bit of the key is compared individually and each node is a two-way (i.e., left versus right) branch. (In the following comparisons, it is assumed that the keys are of length k and the data structure contains n members.) Unlike balanced trees, radix trees permit lookup, insertion, and deletion in O(k) time rather than O(log n). This does not seem like an advantage, since normally k ≥ log n, but in a balanced tree every comparison is a string comparison requiring O(k) worst-case time, many of which are slow in practice due to long common prefixes (in the case where comparisons begin at the start of the string). In a trie, all comparisons require constant time, but it takes m comparisons to look up a string of length m. Radix trees can perform these operations with fewer comparisons, and require many fewer nodes. Radix trees also share the disadvantages of tries, however: as they can only be applied to strings of elements or elements with an efficiently reversible mapping to strings, they lack the full generality of balanced search trees, which apply to any data type with a total ordering. A reversible mapping to strings can be used to produce the required total ordering for balanced search trees, but not the other way around. This can also be problematic if a data type only provides a comparison operation, but not a (de)serialization operation. Hash tables are commonly said to have expected O(1) insertion and deletion times, but this is only true when considering computation of the hash of the key to be a constant-time operation. When hashing the key is taken into account, hash tables have expected O(k) insertion and deletion times, but may take longer in the worst case depending on how collisions are handled. Radix trees have worst-case O(k) insertion and deletion. The successor/predecessor operations of radix trees are also not implemented by hash tables. A common extension of radix trees uses two colors of nodes, 'black' and 'white'. To check if a given string is stored in the tree, the search starts from the top and follows the edges of the input string until no further progress can be made. If the search string is consumed and the final node is a black node, the search has failed; if it is white, the search has succeeded. This enables us to add a large range of strings with a common prefix to the tree using white nodes, then remove a small set of "exceptions" in a space-efficient manner by inserting them using black nodes.
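The sketch promised above: a minimal Python rendering of lookup and insertion, under simplifying assumptions that are mine rather than the article's (children are scanned linearly instead of being indexed by a chunk of bits, and keys are character strings).

```python
class Node:
    def __init__(self, is_leaf: bool = False) -> None:
        self.edges: dict[str, "Node"] = {}  # edge label -> child node
        self.is_leaf = is_leaf              # True if a key ends at this node

def common_prefix_len(a: str, b: str) -> int:
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def lookup(node: Node, key: str) -> bool:
    if not key:
        return node.is_leaf
    for label, child in node.edges.items():
        if key.startswith(label):           # an edge may consume several elements
            return lookup(child, key[len(label):])
    return False

def insert(node: Node, key: str) -> None:
    if not key:
        node.is_leaf = True
        return
    for label in list(node.edges):
        n = common_prefix_len(label, key)
        if n == 0:
            continue
        if n < len(label):
            # Split the edge: a new intermediate node takes the shared prefix.
            mid = Node()
            mid.edges[label[n:]] = node.edges.pop(label)
            node.edges[label[:n]] = mid
            insert(mid, key[n:])
        else:
            insert(node.edges[label], key[n:])
        return
    node.edges[key] = Node(is_leaf=True)    # no shared prefix: add a new edge

root = Node()
for word in ["test", "team", "toast"]:
    insert(root, word)
print(lookup(root, "team"), lookup(root, "tea"))  # True False
```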
The HAT-trie is a cache-conscious data structure based on radix trees that offers efficient string storage and retrieval, and ordered iteration. Performance, with respect to both time and space, is comparable to the cache-conscious hash table.[8][9] A PATRICIA trie is a special variant of the radix 2 (binary) trie, in which, rather than explicitly storing every bit of every key, the nodes store only the position of the first bit which differentiates two sub-trees. During traversal the algorithm examines the indexed bit of the search key and chooses the left or right sub-tree as appropriate. Notable features of the PATRICIA trie include that the trie requires only one node to be inserted for every unique key stored, making PATRICIA much more compact than a standard binary trie. Also, since the actual keys are no longer explicitly stored, it is necessary to perform one full key comparison on the indexed record in order to confirm a match. In this respect PATRICIA bears a certain resemblance to indexing using a hash table.[6] The adaptive radix tree is a radix tree variant that integrates adaptive node sizes into the radix tree. One major drawback of usual radix trees is their use of space, because they use a constant node size at every level. The major difference between the radix tree and the adaptive radix tree is the latter's variable node size, based on the number of child elements, which grows as new entries are added. Hence, the adaptive radix tree leads to a better use of space without reducing speed.[10][11][12] A common practice is to relax the criterion of disallowing parents with only one child in situations where the parent represents a valid key in the data set. This variant of radix tree achieves a higher space efficiency than the one which only allows internal nodes with at least two children.[13]
https://en.wikipedia.org/wiki/Radix_tree
This is a list of hash functions, including cyclic redundancy checks, checksum functions, and cryptographic hash functions. Adler-32 is often mistaken for a CRC, but it is not: it is a checksum.
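The distinction can be seen directly with Python's zlib module, which exposes both functions side by side (the sample input is arbitrary):

```python
# Adler-32 is a checksum built from two running sums modulo 65521;
# CRC-32 is a cyclic redundancy check based on polynomial division over GF(2).
import zlib

data = b"hello, world"
print(f"adler32: {zlib.adler32(data):#010x}")
print(f"crc32:   {zlib.crc32(data):#010x}")
```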
https://en.wikipedia.org/wiki/Non-cryptographic_hash_functions
Perceptual hashing is the use of a fingerprinting algorithm that produces a snippet, hash, or fingerprint of various forms of multimedia.[1][2] A perceptual hash is a type of locality-sensitive hash, which is analogous if features of the multimedia are similar. This is in contrast to cryptographic hashing, which relies on the avalanche effect of a small change in input value creating a drastic change in output value. Perceptual hash functions are widely used in finding cases of online copyright infringement as well as in digital forensics because of the ability to have a correlation between hashes, so that similar data can be found (for instance with a differing watermark). The 1980 work of Marr and Hildreth is a seminal paper in this field.[3] In 2009, Microsoft Corporation developed PhotoDNA in collaboration with Hany Farid, professor at Dartmouth College. PhotoDNA is a perceptual hashing capability developed to combat the distribution of child sexual abuse material (CSAM) online. Provided by Microsoft at no cost, PhotoDNA remains a critical tool used by major software companies, NGOs, and law enforcement agencies around the world.[4] The July 2010 thesis of Christoph Zauner is a well-written introduction to the topic.[5] In June 2016, Azadeh Amir Asgari published work on robust image hash spoofing. Asgari notes that perceptual hash functions, like any other algorithms, are prone to errors.[6] Researchers remarked in December 2017 that Google image search is based on a perceptual hash.[7] In research published in November 2021, investigators focused on a manipulated image of Stacey Abrams which was published to the internet prior to her loss in the 2018 Georgia gubernatorial election. They found that the pHash algorithm was vulnerable to nefarious actors.[8] Research reported in January 2019 at Northumbria University has shown that, for video, perceptual hashing can be used to simultaneously identify similar content for video copy detection and detect malicious manipulations for video authentication. The proposed system performs better than current video hashing techniques in terms of both identification and authentication.[9] Research reported in May 2020 by the University of Houston on deep-learning-based perceptual hashing for audio has shown better performance than traditional audio fingerprinting methods for detecting similar or copied audio subjected to transformations.[10] In addition to its uses in digital forensics, research by a Russian group reported in 2019 has shown that perceptual hashing can be applied to a wide variety of situations. Similar to comparing images for copyright infringement, the group found that it could be used to compare and match images in a database. Their proposed algorithm proved to be not only effective, but more efficient than the standard means of database image searching.[11] A Chinese team reported in July 2019 that they had discovered a perceptual hash for speech encryption which proved to be effective. They were able to create a system in which the encryption was not only more accurate, but more compact as well.[12] Apple Inc. reported as early as August 2021 a child sexual abuse material (CSAM) system known as NeuralHash. A technical summary document, which explains the system with copious diagrams and example photographs, offers that "Instead of scanning images [on corporate] iCloud [servers], the system performs on-device matching using a database of known CSAM image hashes provided by [the National Center for Missing and Exploited Children] (NCMEC) and other child-safety organizations.
Apple further transforms this database into an unreadable set of hashes, which is securely stored on users' devices."[13] In an essay entitled "The Problem With Perceptual Hashes", Oliver Kuederle produces a startling collision generated by a piece of commercial neural-net software of the NeuralHash type. A photographic portrait of a real woman (Adobe Stock #221271979) reduces, through the test algorithm, to a hash similar to that of a photograph of a butterfly painted in watercolor (from the "deposit photos" database). Both sample images are in commercial databases. Kuederle is concerned with collisions like this: "These cases will be manually reviewed. That is, according to Apple, an Apple employee will then look at your (flagged) pictures... Perceptual hashes are messy. When such algorithms are used to detect criminal activities, especially at Apple scale, many innocent people can potentially face serious problems... Needless to say, I'm quite worried about this."[14] Researchers have since published a comprehensive analysis entitled "Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash", in which they investigate the vulnerability of NeuralHash, as a representative of deep perceptual hashing algorithms, to various attacks. Their results show that hash collisions between different images can be achieved with minor changes applied to the images. According to the authors, these results demonstrate that such attacks are a real possibility and could enable the flagging and possible prosecution of innocent users. They also state that the detection of illegal material can easily be avoided, and the system outsmarted, by simple image transformations such as those provided by free-to-use image editors. The authors assume their results apply to other deep perceptual hashing algorithms as well, questioning their overall effectiveness and functionality in applications such as client-side scanning and chat controls.[15]
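To make the idea concrete, here is a minimal "average hash" (aHash) sketch in Python using the Pillow imaging library. It is a far simpler relative of pHash, PhotoDNA, or NeuralHash, but it shows the defining property: perceptually similar images yield hashes at a small Hamming distance, whereas a cryptographic hash would change completely after any edit. The similarity threshold in the comment is an illustrative assumption.

```python
# Average hash: downscale, grayscale, and record which pixels exceed the mean.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits  # a 64-bit fingerprint for the default 8x8 size

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# e.g. hamming(average_hash("a.jpg"), average_hash("b.jpg")) <= 10
# suggests perceptually similar images (the threshold is an assumption).
```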
https://en.wikipedia.org/wiki/Perceptual_hashing
On websites that allow users to create content, content moderation is the process of detecting contributions that are irrelevant, obscene, illegal, harmful, or insulting. The purpose of content moderation is to remove or apply a warning label to problematic content, or to allow users to block and filter content themselves.[1] It is part of the wider discipline of trust and safety. Various types of Internet sites permit user-generated content such as posts, comments, and videos, including Internet forums, blogs, and news sites powered by scripts such as phpBB, a wiki, or PHP-Nuke. Depending on the site's content and intended audience, the site's administrators will decide what kinds of user comments are appropriate, then delegate the responsibility of sifting through comments to lesser moderators. Most often, they will attempt to eliminate trolling, spamming, or flaming, although this varies widely from site to site. Major platforms use a combination of algorithmic tools, user reporting, and human review.[1] Social media sites may also employ content moderators to manually flag or remove content flagged for hate speech or other objectionable content. Other content issues include revenge porn, graphic content, child abuse material, and propaganda.[1] Some websites must also make their content hospitable to advertisements.[1] In the United States, content moderation is governed by Section 230 of the Communications Decency Act, and several cases concerning the issue have reached the United States Supreme Court, such as Moody v. NetChoice, LLC. Also known as unilateral moderation, this kind of moderation system is often seen on Internet forums. A group of people are chosen by the site's administrators (usually on a long-term basis) to act as delegates, enforcing the community rules on their behalf. These moderators are given special privileges to delete or edit others' contributions and/or exclude people based on their e-mail address or IP address, and generally attempt to remove negative contributions throughout the community.[2] Commercial content moderation is a term coined by Sarah T. Roberts to describe the practice of "monitoring and vetting user-generated content (UGC) for social media platforms of all types, in order to ensure that the content complies with legal and regulatory exigencies, site/community guidelines, user agreements, and that it falls within norms of taste and acceptability for that site and its cultural context".[3] The content moderation industry is estimated to be worth US$9 billion. While no official numbers are provided, there were an estimated 10,000 content moderators for TikTok, 15,000 for Facebook, and 1,500 for Twitter as of 2022.[4] The global value chain of content moderation typically includes social media platforms, large MNE firms, and the content moderation suppliers. The social media platforms (e.g. Facebook, Google) are largely based in the United States, Europe, and China. The MNEs (e.g. Accenture, Foiwe) are usually headquartered in the global north or India, while suppliers of content moderation are largely located in global southern countries like India and the Philippines.[5]: 79–81 While at one time this work may have been done by volunteers within the online community, for commercial websites this is largely achieved through outsourcing the task to specialized companies, often in low-wage areas such as India and the Philippines. Outsourcing of content moderation jobs grew as a result of the social media boom.
With the overwhelming growth of users and UGC, companies needed many more employees to moderate the content. In the late 1980s and early 1990s, tech companies began to outsource jobs to foreign countries that had an educated workforce but were willing to work for cheap.[6] Employees work by viewing, assessing, and deleting disturbing content.[7] Wired reported in 2014 that they may suffer psychological damage.[8][9][10][2][11] In 2017, The Guardian reported that secondary trauma may arise, with symptoms similar to PTSD.[12] Some large companies such as Facebook offer psychological support[12] and increasingly rely on the use of artificial intelligence to sort out the most graphic and inappropriate content, but critics claim that this is insufficient.[13] In 2019, NPR called it a job hazard.[14] Non-disclosure agreements are the norm when content moderators are hired. This makes moderators more hesitant to speak up about working conditions or to organize.[4] Psychological hazards, including stress and post-traumatic stress disorder, combined with the precarity of algorithmic management and low wages, make content moderation extremely challenging.[15]: 123 The number of tasks completed, for example labeling content as a copyright violation, deleting a post containing hate speech, or reviewing graphic content, is quantified for performance and quality assurance.[4] In February 2019, an investigative report by The Verge described poor working conditions at Cognizant's office in Phoenix, Arizona.[16] Cognizant employees tasked with content moderation for Facebook developed mental health issues, including post-traumatic stress disorder, as a result of exposure to graphic violence, hate speech, and conspiracy theories in the videos they were instructed to evaluate.[16][17] Moderators at the Phoenix office reported drug abuse, alcohol abuse, and sexual intercourse in the workplace, and feared retaliation from terminated workers who threatened to harm them.[16][18] In response, a Cognizant representative stated the company would examine the issues in the report.[16] The Verge published a follow-up investigation of Cognizant's Tampa, Florida, office in June 2019.[19][20] Employees in the Tampa location described working conditions that were worse than the conditions in the Phoenix office.[19][21][22] Similarly, workers at Meta's outsourced moderation companies in Kenya and Ghana reported mental illness, self-harm, attempted suicide, poor working conditions, low pay, and retaliation for advocating for better working conditions.[23] Moderators were required to sign non-disclosure agreements with Cognizant to obtain the job, although three former workers broke the agreements to provide information to The Verge.[19][24] In the Tampa office, workers reported inadequate mental health resources.[19][25] As a result of exposure to videos depicting graphic violence, animal abuse, and child sexual abuse, some employees developed psychological trauma and post-traumatic stress disorder.[19][26] In response to negative coverage related to its content moderation contracts, a Facebook director indicated that Facebook was in the process of developing a "global resiliency team" that would assist its contractors.[19] Facebook had increased the number of content moderators from 4,500 to 7,500 in 2017 due to legal requirements and other controversies.
In Germany, Facebook was responsible for removing hate speech within 24 hours of its posting.[27] In late 2018, Facebook created an oversight board, an internal "Supreme Court", to decide what content remains and what content is removed.[14] According to Frances Haugen, the number of Facebook employees responsible for content moderation was much smaller as of 2021.[28] The social media site Twitter has a suspension policy. Between August 2015 and December 2017, it suspended over 1.2 million accounts for terrorist content in order to reduce the number of followers and the amount of content associated with the Islamic State.[29] Following the acquisition of Twitter by Elon Musk in October 2022, content rules were weakened across the platform in an attempt to prioritize free speech.[30] However, the effects of this campaign have been called into question.[31][32] User moderation allows any user to moderate any other user's contributions. Billions of people currently make decisions on what to share, forward, or give visibility to on a daily basis.[33] On a large site with a sufficiently large active population, this usually works well, since relatively small numbers of troublemakers are screened out by the votes of the rest of the community. User moderation can also be characterized as reactive moderation. This type of moderation depends on the users of a platform or site to report content that is inappropriate and breaches community standards. In this process, when users are faced with an image or video they deem unfit, they can click the report button. The complaint is filed and queued for moderators to review.[34] 150 content moderators, who contracted for Meta, ByteDance, and OpenAI, gathered in Nairobi, Kenya, to launch the first African Content Moderators Union on 1 May 2023. This union was launched four years after Daniel Motaung was fired and retaliated against for organizing a union at Sama, which contracts for Facebook.[35]
https://en.wikipedia.org/wiki/Content_moderation
An Internet filter is software that restricts or controls the content an Internet user is capable of accessing, especially when utilized to restrict material delivered over the Internet via the Web, e-mail, or other means. Such restrictions can be applied at various levels: a government can attempt to apply them nationwide (see Internet censorship), or they can, for example, be applied by an Internet service provider to its clients, by an employer to its personnel, by a school to its students, by a library to its visitors, by a parent to a child's computer, or by an individual user to their own computer. The motive is often to prevent access to content which the computer's owner(s) or other authorities may consider objectionable. When imposed without the consent of the user, content control can be characterised as a form of Internet censorship. Some filter software includes time-control functions that empower parents to set the amount of time that a child may spend accessing the Internet or playing games or other computer activities. The term "content control" is used on occasion by CNN,[1] Playboy magazine,[2] the San Francisco Chronicle,[3] and The New York Times.[4] However, several other terms, including "content filtering software", "web content filter", "filtering proxy servers", "secure web gateways", "censorware", "content security and control", "web filtering software", "content-censoring software", and "content-blocking software", are often used. "Nannyware" has also been used in both product marketing and by the media. The industry research company Gartner uses "secure web gateway" (SWG) to describe the market segment.[5] Companies that make products that selectively block Web sites do not refer to these products as censorware, and prefer terms such as "Internet filter" or "URL filter"; in the specialized case of software specifically designed to allow parents to monitor and restrict the access of their children, "parental control software" is also used. Some products log all the sites that a user accesses and rate them based on content type for reporting to an "accountability partner" of the person's choosing, and the term accountability software is used. Internet filters, parental control software, and/or accountability software may also be combined into one product. Those critical of such software, however, use the term "censorware" freely: consider the Censorware Project, for example.[6] The use of the term "censorware" in editorials criticizing makers of such software is widespread and covers many different varieties and applications: Xeni Jardin used the term in a 9 March 2006 editorial in The New York Times when discussing the use of American-made filtering software to suppress content in China; in the same month a high school student used the term to discuss the deployment of such software in his school district.[7][8] In general, outside of editorial pages as described above, traditional newspapers do not use the term "censorware" in their reporting, preferring instead less overtly controversial terms such as "content filter", "content control", or "web filtering"; The New York Times and The Wall Street Journal both appear to follow this practice. On the other hand, Web-based newspapers such as CNET use the term in both editorial and journalistic contexts, for example "Windows Live to Get Censorware".[9] Filters can be implemented in many different ways: by software on a personal computer, or via network infrastructure such as proxy servers, DNS servers, or firewalls that provide Internet access.
No solution provides complete coverage, so most companies deploy a mix of technologies to achieve the proper content control in line with their policies. The Internet does not intrinsically provide content blocking, and therefore much content on the Internet is considered unsuitable for children, given that much content is certified as suitable for adults only, e.g. 18-rated games and movies. Internet service providers (ISPs) that block material containing pornography, or controversial religious, political, or news-related content en route, are often utilized by parents who do not permit their children to access content not conforming to their personal beliefs. Content filtering software can, however, also be used to block malware and other content that is or contains hostile, intrusive, or annoying material, including adware, spam, computer viruses, worms, trojan horses, and spyware. Most content control software is marketed to organizations or parents. It is, however, also marketed on occasion to facilitate self-censorship, for example by people struggling with addictions to online pornography, gambling, chat rooms, etc. Self-censorship software may also be utilised by some in order to avoid viewing content they consider immoral, inappropriate, or simply distracting. A number of accountability software products are marketed as self-censorship or accountability software. These are often promoted by religious media and at religious gatherings.[17] Utilizing a filter that is overly zealous, or that mislabels content not intended to be censored, can result in over-blocking, or over-censoring. Over-blocking can filter out material that should be acceptable under the filtering policy in effect; for example, health-related information may unintentionally be filtered along with porn-related material because of the Scunthorpe problem (a sketch of this failure mode follows below). Filter administrators may prefer to err on the side of caution by accepting over-blocking to prevent any risk of access to sites that they determine to be undesirable. Content-control software was mentioned as blocking access to Beaver College before its name change to Arcadia University.[18] Another example was the filtering of the Horniman Museum.[19] As well, over-blocking may encourage users to bypass the filter entirely. Whenever new information is uploaded to the Internet, filters can under-block, or under-censor, content if the parties responsible for maintaining the filters do not update them quickly and accurately, and a blacklisting rather than a whitelisting filtering policy is in place.[20] Many[21] would not be satisfied with government filtering of viewpoints on moral or political issues, agreeing that this could become support for propaganda. Many[22] would also find it unacceptable that an ISP, whether by law or by the ISP's own choice, should deploy such software without allowing the users to disable the filtering for their own connections. In the United States, the First Amendment to the United States Constitution has been cited in calls to criminalise forced Internet censorship. In 1998, a United States federal district court in Virginia ruled (Loudoun v. Board of Trustees of the Loudoun County Library) that the imposition of mandatory filtering in a public library violates the First Amendment.[23] In 1996 the US Congress passed the Communications Decency Act, banning indecency on the Internet.
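The over-blocking sketch referenced above: a naive substring blacklist flags innocent text that happens to contain an unlucky letter sequence. The word list and sample strings are illustrative assumptions, not any vendor's actual rules.

```python
# The Scunthorpe problem in miniature: substring matching over-blocks.
BLOCKED_SUBSTRINGS = {"sex", "breast"}

def naive_filter_blocks(text: str) -> bool:
    t = text.lower()
    return any(word in t for word in BLOCKED_SUBSTRINGS)

print(naive_filter_blocks("Travel guide for Essex"))               # True: over-blocked
print(naive_filter_blocks("Breast cancer screening information"))  # True: over-blocked
print(naive_filter_blocks("Completely innocuous text"))            # False
# Matching on word boundaries (e.g. regular expressions with \b) reduces,
# but does not eliminate, such false positives.
```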
Civil liberties groups challenged the Communications Decency Act under the First Amendment, and in 1997 the Supreme Court ruled in their favor.[24] Part of the civil liberties argument, especially from groups like the Electronic Frontier Foundation,[25] was that parents who wanted to block sites could use their own content-filtering software, making government involvement unnecessary.[26] In the late 1990s, groups such as the Censorware Project began reverse-engineering content-control software and decrypting the blacklists to determine what kinds of sites the software blocked. This led to legal action alleging violation of the "Cyber Patrol" license agreement.[27] They discovered that such tools routinely blocked unobjectionable sites while also failing to block intended targets. Some content-control software companies responded by claiming that their filtering criteria were backed by intensive manual checking. The companies' opponents argued, on the other hand, that performing the necessary checking would require resources greater than the companies possessed and that therefore their claims were not valid.[28] The Motion Picture Association successfully obtained a UK ruling requiring ISPs to use content-control software to prevent copyright infringement by their subscribers.[29] Many types of content-control software have been shown to block sites based on the religious and political leanings of the company owners. Examples include blocking several religious sites[30][31] (including the Web site of the Vatican), many political sites, and homosexuality-related sites.[32] X-Stop was shown to block sites such as the Quaker web site, the National Journal of Sexual Orientation Law, The Heritage Foundation, and parts of The Ethical Spectacle.[33] CYBERsitter blocks out sites like the National Organization for Women.[34] Nancy Willard, an academic researcher and attorney, pointed out that many U.S. public schools and libraries use the same filtering software that many Christian organizations use.[35] Cyber Patrol, a product developed by the Anti-Defamation League and Mattel's The Learning Company,[36] has been found to block not only political sites it deems to be engaging in 'hate speech' but also human rights web sites, such as Amnesty International's web page about Israel, and gay-rights web sites, such as glaad.org.[37] Content labeling may be considered another form of content-control software. In 1994, the Internet Content Rating Association (ICRA), now part of the Family Online Safety Institute, developed a content rating system for online content providers. Using an online questionnaire, a webmaster describes the nature of their web content. A small file is generated that contains a condensed, computer-readable digest of this description that can then be used by content filtering software to block or allow that site. ICRA labels come in a variety of formats.[38] These include the World Wide Web Consortium's Resource Description Framework (RDF) as well as Platform for Internet Content Selection (PICS) labels used by Microsoft's Internet Explorer Content Advisor.[39] ICRA labels are an example of self-labeling. Similarly, in 2006 the Association of Sites Advocating Child Protection (ASACP) initiated the Restricted to Adults (RTA) self-labeling initiative. ASACP members were concerned that various forms of legislation being proposed in the United States were going to have the effect of forcing adult companies to label their content.[40] The RTA label, unlike ICRA labels, does not require a webmaster to fill out a questionnaire or sign up to use it.
Like ICRA, the RTA label is free. Both labels are recognized by a wide variety of content-control software. The Voluntary Content Rating (VCR) system was devised by Solid Oak Software for their CYBERsitter filtering software, as an alternative to the PICS system, which some critics deemed too complex. It employs HTML metadata tags embedded within web page documents to specify the type of content contained in the document. Only two levels are specified, mature and adult, making the specification extremely simple. The Australian Internet Safety Advisory Body has information about "practical advice on Internet safety, parental control and filters for the protection of children, students and families" that also covers public libraries.[41] NetAlert, the software made available free of charge by the Australian government, was allegedly cracked by a 16-year-old student, Tom Wood, less than a week after its release in August 2007. Wood supposedly bypassed the $84 million filter in about half an hour to highlight problems with the government's approach to Internet content filtering.[42] The Australian Government introduced legislation that requires ISPs to "restrict access to age restricted content (commercial MA15+ content and R18+ content) either hosted in Australia or provided from Australia", due to commence from 20 January 2008, known as Cleanfeed.[43] Cleanfeed is a proposed mandatory ISP-level content filtration system. It was proposed by the Beazley-led Australian Labor Party opposition in a 2006 press release, with the intention of protecting children who were vulnerable due to claimed parental computer illiteracy. It was announced on 31 December 2007 as a policy to be implemented by the Rudd ALP government, and initial tests in Tasmania produced a 2008 report. Cleanfeed is funded in the current budget, and is moving towards an Expression of Interest for live testing with ISPs in 2008. Public opposition and criticism have emerged, led by the EFA and gaining irregular mainstream media attention, with a majority of Australians reportedly "strongly against" its implementation.[44] Criticisms include its expense, its inaccuracy (it will be impossible to ensure that only illegal sites are blocked), and the fact that it will be compulsory, which can be seen as an intrusion on free speech rights.[44] Another major criticism has been that although the filter is claimed to stop certain materials, the underground rings dealing in such materials will not be affected. The filter might also provide a false sense of security for parents, who might supervise children less while they use the Internet, achieving the exact opposite effect. Cleanfeed is a responsibility of Senator Conroy's portfolio. In Denmark it is stated policy that the government will "prevent inappropriate Internet sites from being accessed from children's libraries across Denmark".[45] "'It is important that every library in the country has the opportunity to protect children against pornographic material when they are using library computers. It is a main priority for me as Culture Minister to make sure children can surf the net safely at libraries,' states Brian Mikkelsen in a press release of the Danish Ministry of Culture."[46] Many libraries in the UK, such as the British Library[47] and local authority public libraries,[48] apply filters to Internet access.
According to research conducted by the Radical Librarians Collective, at least 98% of public libraries apply filters, including categories such as "LGBT interest", "abortion" and "questionable".[49] Some public libraries block payday loan websites.[50]

The use of Internet filters or content-control software varies widely in public libraries in the United States, since Internet use policies are established by the local library board. Many libraries adopted Internet filters after Congress conditioned the receipt of universal service discounts on the use of Internet filters through the Children's Internet Protection Act (CIPA). Other libraries do not install content-control software, believing that acceptable use policies and educational efforts address the issue of children accessing age-inappropriate content while preserving adult users' right to freely access information. Some libraries use Internet filters on computers used by children only. Some libraries that employ content-control software allow the software to be deactivated on a case-by-case basis on application to a librarian; libraries that are subject to CIPA are required to have a policy that allows adults to request that the filter be disabled without having to explain the reason for their request.

Many legal scholars believe that a number of legal cases, in particular Reno v. American Civil Liberties Union, established that the use of content-control software in libraries is a violation of the First Amendment.[51] However, in the June 2003 case United States v. American Library Association, the Supreme Court found CIPA constitutional as a condition placed on the receipt of federal funding, stating that First Amendment concerns were dispelled by the law's provision that allowed adult library users to have the filtering software disabled, without having to explain the reasons for their request. The plurality decision left open a future "as-applied" constitutional challenge, however.

In November 2006, a lawsuit was filed against the North Central Regional Library District (NCRL) in Washington State for its policy of refusing to disable restrictions upon requests of adult patrons, but CIPA was not challenged in that matter.[52] In May 2010, the Washington State Supreme Court issued an opinion after being asked to answer a question certified by the United States District Court for the Eastern District of Washington: "Whether a public library, consistent with Article I, § 5 of the Washington Constitution, may filter Internet access for all patrons without disabling Web sites containing constitutionally-protected speech upon the request of an adult library patron." The Washington State Supreme Court ruled that NCRL's internet filtering policy did not violate Article I, Section 5 of the Washington State Constitution. The Court said: "It appears to us that NCRL's filtering policy is reasonable and accords with its mission and these policies and is viewpoint neutral. It appears that no article I, section 5 content-based violation exists in this case. NCRL's essential mission is to promote reading and lifelong learning. As NCRL maintains, it is reasonable to impose restrictions on Internet access in order to maintain an environment that is conducive to study and contemplative thought." The case returned to federal court.

In March 2007, Virginia passed a law similar to CIPA that requires public libraries receiving state funds to use content-control software.
Like CIPA, the law requires libraries to disable filters for an adult library user when requested to do so by the user.[53]

Content filtering in general can "be bypassed entirely by tech-savvy individuals." Blocking content on a device "[will not]…guarantee that users won't eventually be able to find a way around the filter."[54] Content providers may change URLs or IP addresses to circumvent filtering. Individuals with technical expertise may use a different method by employing multiple domains or URLs that direct to a shared IP address where restricted content is present. This strategy does not circumvent IP packet filtering, but it can evade DNS poisoning and web proxies. Additionally, perpetrators may use mirrored websites that avoid filters.[55]

Some software may be bypassed successfully by using alternative protocols such as FTP, telnet or HTTPS, conducting searches in a different language, or using a proxy server or a circumventor such as Psiphon. Cached web pages returned by Google or other searches could bypass some controls as well. Web syndication services may provide alternate paths for content. Some of the more poorly designed programs can be shut down by killing their processes: for example, in Microsoft Windows through the Windows Task Manager, or in Mac OS X using Force Quit or Activity Monitor. Numerous workarounds and counters to workarounds from content-control software creators exist. Google services are often blocked by filters, but these may most often be bypassed by using https:// in place of http://, since content-filtering software is not able to interpret content under secure connections (in this case SSL).

An encrypted VPN can be used as a means of bypassing content-control software, especially if the software is installed on an Internet gateway or firewall. Other ways to bypass a content-control filter include translation sites and establishing a remote connection with an uncensored device.[56]

Some ISPs offer parental control options. Some offer security software which includes parental controls. Mac OS X v10.4 offers parental controls for several applications (Mail, Finder, iChat, Safari & Dictionary). Microsoft's Windows Vista operating system also includes content-control software.

Content filtering technology exists in two major forms: application gateway or packet inspection. For HTTP access the application gateway is called a web proxy or just a proxy. Such web proxies can inspect both the initial request and the returned web page using arbitrarily complex rules, and will not return any part of the page to the requester until a decision is made. In addition, they can make substitutions, in whole or for any part, of the returned result. Packet inspection filters do not initially interfere with the connection to the server, but inspect the data in the connection as it goes past; at some point the filter may decide that the connection is to be filtered, and it will then disconnect it by injecting a TCP reset or similar faked packet. The two techniques can be used together: the packet filter monitors a link until it sees an HTTP connection starting to an IP address that has content that needs filtering, then redirects the connection to the web proxy, which can perform detailed filtering on the website without having to pass through all unfiltered connections. This combination is quite popular because it can significantly reduce the cost of the system.
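To make the two approaches concrete, the following sketch shows the decision logic of a toy application-gateway filter; the blocklists and function names are invented for illustration and are not taken from any real product.

```python
# Toy illustration of application-gateway (web-proxy) filtering: the proxy
# sees the full request and the returned page, and releases nothing to the
# requester until it has made a decision.
BLOCKED_HOSTS = {"blocked-site.example"}      # hypothetical URL blocklist
BLOCKED_WORDS = {"forbidden-keyword"}         # hypothetical keyword list

def proxy_allows(host: str, page_body: str) -> bool:
    """Return True if the proxy may relay the page to the requester."""
    if host in BLOCKED_HOSTS:                 # rule on the request itself
        return False
    if any(word in page_body for word in BLOCKED_WORDS):
        return False                          # rule on the returned content
    return True

# A packet-inspection filter, by contrast, lets the connection proceed and
# watches the traffic as it passes; once it classifies the connection as
# filterable, it tears the connection down by injecting a TCP reset.
```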
There are constraints to IP-level packet filtering, as it may render all web content associated with a particular IP address inaccessible. This may result in the unintentional blocking of legitimate sites that share the same IP address or domain. For instance, university websites commonly employ multiple domains under one IP address. Moreover, IP-level packet filtering can be bypassed by using a distinct IP address for certain content while still being linked to the same domain or server.[57]

Gateway-based content-control software may be more difficult to bypass than desktop software, as the user does not have physical access to the filtering device. However, many of the techniques in the Bypassing filters section still work.
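The shared-IP problem described above is easy to demonstrate: resolving a list of hostnames and grouping them by address shows exactly which unrelated sites an IP-level block would take down together. The hostnames below are placeholders, not real sites.

```python
import socket
from collections import defaultdict

# Placeholder hostnames; on a shared host, many names map to one address.
hostnames = ["site-one.example", "site-two.example", "site-three.example"]

sites_by_ip = defaultdict(list)
for name in hostnames:
    try:
        sites_by_ip[socket.gethostbyname(name)].append(name)
    except socket.gaierror:
        pass  # placeholder names will not resolve; real ones would

for ip, names in sites_by_ip.items():
    if len(names) > 1:
        # an IP-level block on this address would hit every name listed
        print(f"blocking {ip} would also block: {', '.join(names)}")
```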
https://en.wikipedia.org/wiki/Internet_filter
In computer science, a skip list (or skiplist) is a probabilistic data structure that allows $O(\log n)$ average complexity for search as well as $O(\log n)$ average complexity for insertion within an ordered sequence of $n$ elements. Thus it can get the best features of a sorted array (for searching) while maintaining a linked-list-like structure that allows insertion, which is not possible with a static array. Fast search is made possible by maintaining a linked hierarchy of subsequences, with each successive subsequence skipping over fewer elements than the previous one. Searching starts in the sparsest subsequence until two consecutive elements have been found, one smaller and one larger than or equal to the element searched for. Via the linked hierarchy, these two elements link to elements of the next sparsest subsequence, where searching is continued until finally searching in the full sequence. The elements that are skipped over may be chosen probabilistically[2] or deterministically,[3] with the former being more common.

A skip list is built in layers. The bottom layer (layer 1) is an ordinary ordered linked list. Each higher layer acts as an "express lane" for the lists below, where an element in layer $i$ appears in layer $i+1$ with some fixed probability $p$ (two commonly used values for $p$ are $1/2$ and $1/4$). On average, each element appears in $1/(1-p)$ lists, and the tallest element (usually a special head element at the front of the skip list) appears in all the lists. The skip list contains $\log_{1/p} n$ (i.e. logarithm base $1/p$ of $n$) lists.

A search for a target element begins at the head element in the top list, and proceeds horizontally until the current element is greater than or equal to the target. If the current element is equal to the target, it has been found. If the current element is greater than the target, or the search reaches the end of the linked list, the procedure is repeated after returning to the previous element and dropping down vertically to the next lower list. The expected number of steps in each linked list is at most $1/p$, which can be seen by tracing the search path backwards from the target until reaching an element that appears in the next higher list or reaching the beginning of the current list. Therefore, the total expected cost of a search is $\tfrac{1}{p}\log_{1/p} n$, which is $O(\log n)$ when $p$ is a constant. By choosing different values of $p$, it is possible to trade search costs against storage costs.

The elements used for a skip list can contain more than one pointer since they can participate in more than one list. Insertions and deletions are implemented much like the corresponding linked-list operations, except that "tall" elements must be inserted into or deleted from more than one linked list.

$O(n)$ operations, which force us to visit every node in ascending order (such as printing the entire list), provide the opportunity to perform a behind-the-scenes derandomization of the level structure of the skip list in an optimal way, bringing the skip list to $O(\log n)$ search time.
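The structure, search, and insertion just described can be sketched in a few lines of Python. This is a minimal illustration with $p = 1/2$, not a tuned implementation; all names are chosen for the example.

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level      # forward[i]: next node on level i

class SkipList:
    """A sketch of the probabilistic skip list described above (p = 1/2)."""
    MAX_LEVEL = 32
    P = 0.5

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)  # head appears in every list
        self.level = 1

    def _random_level(self):
        lvl = 1
        while random.random() < self.P and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        node = self.head
        for i in reversed(range(self.level)):   # start in the sparsest list
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]          # move right on this level
            # then drop down one level and continue
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        update = [self.head] * self.MAX_LEVEL   # rightmost node seen per level
        node = self.head
        for i in reversed(range(self.level)):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(key, lvl)
        for i in range(lvl):                    # splice into each list reached
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

sl = SkipList()
for k in [3, 7, 1, 9]:
    sl.insert(k)
assert sl.search(7) and not sl.search(4)
```

Insertion first runs the same descent as search, remembering the rightmost node visited on each level, then splices the new node into the bottom list and into each higher list its randomly chosen level reaches.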
(For the derandomized version, choose the level of the i-th finite node to be 1 plus the number of times it is possible to repeatedly divide i by 2 before it becomes odd; equivalently, 1 plus the number of trailing zeros in the binary representation of i. Also, i = 0 for the negative-infinity header, as there is the usual special case of choosing the highest possible level for negative and/or positive infinite nodes.) However, this also allows someone to know where all of the higher-than-level-1 nodes are and delete them.

Alternatively, the level structure could be made quasi-random. Like the derandomized version, quasi-randomization is only done when there is some other reason to be running an $O(n)$ operation (which visits every node). The advantage of this quasi-randomness is that it doesn't give away nearly as much level-structure-related information to an adversarial user as the derandomized one. This is desirable because an adversarial user who is able to tell which nodes are not at the lowest level can pessimize performance by simply deleting higher-level nodes. (Bethea and Reiter argue, however, that an adversary can nonetheless use probabilistic and timing methods to force performance degradation.[4]) The search performance is still guaranteed to be logarithmic.

It would be tempting to make the following "optimization": instead of doing a coin-flip for each even-odd pair in the quasi-random procedure, just flip a coin once to decide whether to promote only the even ones or only the odd ones. Instead of $O(n \log n)$ coin flips, there would only be $O(\log n)$ of them. Unfortunately, this gives the adversarial user a 50/50 chance of being correct upon guessing that all of the even-numbered nodes (among the ones at level 1 or higher) are higher than level one. This is despite the property that there is a very low probability of guessing that a particular node is at level $N$ for some integer $N$.

A skip list does not provide the same absolute worst-case performance guarantees as more traditional balanced tree data structures, because it is always possible (though with very low probability[5]) that the coin-flips used to build the skip list will produce a badly balanced structure. However, they work well in practice, and the randomized balancing scheme has been argued to be easier to implement than the deterministic balancing schemes used in balanced binary search trees. Skip lists are also useful in parallel computing, where insertions can be done in different parts of the skip list in parallel without any global rebalancing of the data structure. Such parallelism can be especially advantageous for resource discovery in an ad-hoc wireless network, because a randomized skip list can be made robust to the loss of any single node.[6]

As described above, a skip list is capable of fast $O(\log n)$ insertion and removal of values from a sorted sequence, but it has only slow $O(n)$ lookups of values at a given position in the sequence (i.e. return the 500th value); however, with a minor modification the speed of random-access indexed lookups can be improved to $O(\log n)$.

For every link, also store the width of the link. The width is defined as the number of bottom-layer links being traversed by each of the higher-layer "express lane" links. For example, notice that the width of a higher-level link is the sum of the component links below it (i.e.
the width-10 link spans the links of widths 3, 2 and 5 immediately below it). Consequently, the sum of all widths is the same on every level (10 + 1 = 1 + 3 + 2 + 5 = 1 + 2 + 1 + 2 + 3 + 2).

To index the skip list and find the i-th value, traverse the skip list while counting down the widths of each traversed link. Descend a level whenever the upcoming width would be too large. For example, to find the node in the fifth position (Node 5), traverse a link of width 1 at the top level. Now four more steps are needed, but the next width on this level is ten, which is too large, so drop one level. Traverse one link of width 3. Since another step of width 2 would be too far, drop down to the bottom level. Now traverse the final link of width 1 to reach the target running total of 5 (1 + 3 + 1). This method of implementing indexing is detailed in "A skip list cookbook" by William Pugh.[7]

Skip lists were first described in 1989 by William Pugh.[8] To quote the author:

Skip lists are a probabilistic data structure that seem likely to supplant balanced trees as the implementation method of choice for many applications. Skip list algorithms have the same asymptotic expected time bounds as balanced trees and are simpler, faster and use less space.

A number of applications and frameworks use skip lists. Skip lists are also used in distributed applications (where the nodes represent physical computers, and pointers represent network connections) and for implementing highly scalable concurrent priority queues with less lock contention,[17] or even without locking,[18][19][20] as well as lock-free concurrent dictionaries.[21] There are also several US patents for using skip lists to implement (lockless) priority queues and concurrent dictionaries.[22]
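Returning to the width-based indexing described above, the traversal can be sketched as follows; it assumes each node stores, per level, a forward pointer and the width of its outgoing link, as in the description (the field names are assumptions of the sketch).

```python
def nth(skiplist, i):
    """Return the key at 1-based position i of an indexable skip list.

    Assumes node.forward[level] is the next node on that level and
    node.width[level] is the number of bottom-level links that the
    outgoing link on that level spans.
    """
    node = skiplist.head
    remaining = i                     # bottom-level steps still to take
    for level in reversed(range(skiplist.level)):
        # follow express-lane links for as long as they do not overshoot
        while (node.forward[level] is not None
               and node.width[level] <= remaining):
            remaining -= node.width[level]
            node = node.forward[level]
        # otherwise drop down one level and continue
    if remaining != 0:
        raise IndexError("position out of range")
    return node.key
```

Running the worked example from the text, finding position 5 consumes widths 1, 3 and 1 on successively lower levels, leaving `remaining` at zero exactly at the target node.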
https://en.wikipedia.org/wiki/Skip_list
In computer science, a perfect hash function $h$ for a set $S$ is a hash function that maps distinct elements in $S$ to a set of $m$ integers, with no collisions. In mathematical terms, it is an injective function.

Perfect hash functions may be used to implement a lookup table with constant worst-case access time. A perfect hash function can, as any hash function, be used to implement hash tables, with the advantage that no collision resolution has to be implemented. In addition, if the keys are not in the data and if it is known that queried keys will be valid, then the keys do not need to be stored in the lookup table, saving space.

Disadvantages of perfect hash functions are that $S$ needs to be known for the construction of the perfect hash function. Non-dynamic perfect hash functions need to be re-constructed if $S$ changes. For frequently changing $S$, dynamic perfect hash functions may be used at the cost of additional space.[1] The space requirement to store the perfect hash function is in $O(n)$, where $n$ is the number of keys in the structure.

The important performance parameters for perfect hash functions are the evaluation time, which should be constant, the construction time, and the representation size.

A perfect hash function with values in a limited range can be used for efficient lookup operations, by placing keys from $S$ (or other associated values) in a lookup table indexed by the output of the function. One can then test whether a key is present in $S$, or look up a value associated with that key, by looking for it at its cell of the table. Each such lookup takes constant time in the worst case.[2] With perfect hashing, the associated data can be read or written with a single access to the table.[3]

The important performance parameters for perfect hashing are the representation size, the evaluation time, the construction time, and additionally the range requirement $\frac{m}{n}$ (average number of buckets per key in the hash table).[4] The evaluation time can be as fast as $O(1)$, which is optimal.[2][4] The construction time needs to be at least $O(n)$, because each element in $S$ needs to be considered, and $S$ contains $n$ elements. This lower bound can be achieved in practice.[4]

The lower bound for the representation size depends on $m$ and $n$. Let $m = (1+\varepsilon)n$ and $h$ a perfect hash function. A good approximation for the lower bound is $\log e - \varepsilon \log \frac{1+\varepsilon}{\varepsilon}$ bits per element. For minimal perfect hashing, $\varepsilon = 0$, the lower bound is $\log e \approx 1.44$ bits per element.[4]

A perfect hash function for a specific set $S$ that can be evaluated in constant time, and with values in a small range, can be found by a randomized algorithm in a number of operations that is proportional to the size of $S$. The original construction of Fredman, Komlós & Szemerédi (1984) uses a two-level scheme to map a set $S$ of $n$ elements to a range of $O(n)$ indices, and then map each index to a range of hash values. The first level of their construction chooses a large prime $p$ (larger than the size of the universe from which $S$ is drawn) and a parameter $k$, and maps each element $x$ of $S$ to the index

$$g(x) = (kx \bmod p) \bmod n.$$

If $k$ is chosen randomly, this step is likely to have collisions, but the number of elements $n_i$ that are simultaneously mapped to the same index $i$ is likely to be small. The second level of their construction assigns disjoint ranges of $O(n_i^2)$ integers to each index $i$.
It uses a second set of linear modular functions, one for each index $i$, to map each member $x$ of $S$ into the range associated with $g(x)$.[2]

As Fredman, Komlós & Szemerédi (1984) show, there exists a choice of the parameter $k$ such that the sum of the lengths of the ranges for the $n$ different values of $g(x)$ is $O(n)$. Additionally, for each value of $g(x)$, there exists a linear modular function that maps the corresponding subset of $S$ into the range associated with that value. Both $k$ and the second-level functions for each value of $g(x)$ can be found in polynomial time by choosing values randomly until finding one that works.[2]

The hash function itself requires storage space $O(n)$ to store $k$, $p$, and all of the second-level linear modular functions. Computing the hash value of a given key $x$ may be performed in constant time by computing $g(x)$, looking up the second-level function associated with $g(x)$, and applying this function to $x$. A modified version of this two-level scheme with a larger number of values at the top level can be used to construct a perfect hash function that maps $S$ into a smaller range of length $n + o(n)$.[2]

A more recent method for constructing a perfect hash function is described by Belazzougui, Botelho & Dietzfelbinger (2009) as "hash, displace, and compress". Here a first-level hash function $g$ is also used to map elements onto a range of $r$ integers. An element $x \in S$ is stored in the bucket $B_{g(x)}$.[4]

Then, in descending order of size, each bucket's elements are hashed by a hash function of a sequence of independent fully random hash functions $(\Phi_1, \Phi_2, \Phi_3, \ldots)$, starting with $\Phi_1$. If the hash function does not produce any collisions for the bucket, and the resulting values are not yet occupied by other elements from other buckets, the function is chosen for that bucket. If not, the next hash function in the sequence is tested.[4]

To evaluate the perfect hash function $h(x)$ one only has to save the mapping $\sigma$ of the bucket index $g(x)$ onto the correct hash function in the sequence, resulting in $h(x) = \Phi_{\sigma(g(x))}$.[4]

Finally, to reduce the representation size, the $(\sigma(i))_{0 \le i < r}$ are compressed into a form that still allows evaluation in $O(1)$.[4]

This approach needs linear time in $n$ for construction, and constant evaluation time. The representation size is in $O(n)$, and depends on the achieved range. For example, with $m = 1.23n$, Belazzougui, Botelho & Dietzfelbinger (2009) achieved a representation size between 3.03 bits/key and 1.40 bits/key for their given example set of 10 million entries, with lower values needing a higher computation time. The space lower bound in this scenario is 0.88 bits/key.[4]

The use of $O(n)$ words of information to store the function of Fredman, Komlós & Szemerédi (1984) is near-optimal: any perfect hash function that can be calculated in constant time requires at least a number of bits that is proportional to the size of $S$.[5]

For minimal perfect hash functions the information-theoretic space lower bound is $\log_2 e \approx 1.44$ bits/key.[4]

For perfect hash functions, it is first assumed that the range of $h$ is bounded by $n$ as $m = (1+\varepsilon)n$. With the formula given by Belazzougui, Botelho & Dietzfelbinger (2009) and for a universe $U \supseteq S$ whose size $|U| = u$ tends towards infinity, the space lower bound is $\log e - \varepsilon \log \frac{1+\varepsilon}{\varepsilon}$ bits/key, minus $\log(n)$ bits overall.[4]

Using a perfect hash function is best in situations where there is a frequently queried large set, $S$, which is seldom updated. This is because any modification of the set $S$ may cause the hash function to no longer be perfect for the modified set.
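Before turning to dynamic variants, here is a compact sketch of the two-level FKS construction described above. It assumes integer keys and uses an illustrative prime and retry bound; it favours clarity over the space guarantees of the original analysis.

```python
import random

def build_fks(keys):
    """Two-level FKS-style perfect hashing (a sketch, assuming integer keys).

    Level one: g(x) = (k*x % p) % n scatters the keys into n buckets.
    Level two: a bucket of size b gets a table of size b*b and its own
    randomly chosen function, retried until it is collision-free.
    """
    n = len(keys)
    p = 2**61 - 1                     # a prime larger than any key (assumed)
    while True:
        k = random.randrange(1, p)
        buckets = [[] for _ in range(n)]
        for x in keys:
            buckets[(k * x % p) % n].append(x)
        # FKS show a k exists keeping the squared bucket sizes in O(n);
        # retry until the total second-level space is modest.
        if sum(len(b) ** 2 for b in buckets) <= 4 * n:
            break
    tables = []
    for b in buckets:
        m = max(len(b) ** 2, 1)
        while True:
            k2 = random.randrange(1, p)
            slots = {}
            for x in b:
                i = (k2 * x % p) % m
                if i in slots:        # collision: try another k2
                    break
                slots[i] = x
            else:                     # no collision: keep this function
                tables.append((k2, m, slots))
                break
    return p, k, n, tables

def contains(fks, x):
    """Constant-time membership test using the two-level structure."""
    p, k, n, tables = fks
    k2, m, slots = tables[(k * x % p) % n]
    return slots.get((k2 * x % p) % m) == x
```

`contains` answers membership with two hash evaluations and one table access; storing values alongside the keys in `slots` turns the same structure into a constant-time lookup table.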
Solutions which update the hash function any time the set is modified are known as dynamic perfect hashing,[1] but these methods are relatively complicated to implement.

A minimal perfect hash function is a perfect hash function that maps $n$ keys to $n$ consecutive integers, usually the numbers from $0$ to $n-1$ or from $1$ to $n$. A more formal way of expressing this is: let $j$ and $k$ be elements of some finite set $S$. Then $h$ is a minimal perfect hash function if and only if $h(j) = h(k)$ implies $j = k$ (injectivity) and there exists an integer $a$ such that the range of $h$ is $a..a + |S| - 1$. It has been proven that a general-purpose minimal perfect hash scheme requires at least $\log_2 e \approx 1.44$ bits/key.[4] Assuming that $S$ is a set of size $n$ containing integers in the range $[1, 2^{o(n)}]$, it is known how to efficiently construct an explicit minimal perfect hash function from $S$ to $\{1, 2, \ldots, n\}$ that uses space $n \log_2 e + o(n)$ bits and that supports constant evaluation time.[6] In practice, there are minimal perfect hashing schemes that use roughly 1.56 bits/key if given enough time.[7]

A hash function is $k$-perfect if at most $k$ elements from $S$ are mapped onto the same value in the range. The "hash, displace, and compress" algorithm can be used to construct $k$-perfect hash functions by allowing up to $k$ collisions. The changes necessary to accomplish this are minimal.

A minimal perfect hash function $F$ is order preserving if keys are given in some order $a_1, a_2, \ldots, a_n$ and for any keys $a_j$ and $a_k$, $j < k$ implies $F(a_j) < F(a_k)$.[8] In this case, the function value is just the position of each key in the sorted ordering of all of the keys. A simple implementation of order-preserving minimal perfect hash functions with constant access time is to use an (ordinary) perfect hash function to store a lookup table of the positions of each key. This solution uses $O(n \log n)$ bits, which is optimal in the setting where the comparison function for the keys may be arbitrary.[9] However, if the keys $a_1, a_2, \ldots, a_n$ are integers drawn from a universe $\{1, 2, \ldots, U\}$, then it is possible to construct an order-preserving hash function using only $O(n \log\log\log U)$ bits of space.[10] Moreover, this bound is known to be optimal.[11]

While well-dimensioned hash tables have amortized average $O(1)$ time (amortized average constant time) for lookups, insertions, and deletions, most hash table algorithms suffer from possible worst-case times that take much longer. A worst-case $O(1)$ time (constant time even in the worst case) would be better for many applications (including network routers and memory caches).[12]: 41  Few hash table algorithms support worst-case $O(1)$ lookup time (constant lookup time even in the worst case). The few that do include: perfect hashing; dynamic perfect hashing; cuckoo hashing; hopscotch hashing; and extendible hashing.[12]: 42–69 

A simple alternative to perfect hashing, which also allows dynamic updates, is cuckoo hashing. This scheme maps keys to two or more locations within a range (unlike perfect hashing, which maps each key to a single location) but does so in such a way that the keys can be assigned one-to-one to locations to which they have been mapped. Lookups with this scheme are slower, because multiple locations must be checked, but nevertheless take constant worst-case time.[13]
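A minimal cuckoo hash set illustrates the constant worst-case lookup: every membership test probes exactly two slots. The eviction bound and the resize-and-rehash fallback below are simplifications chosen for the sketch, not the scheme's canonical parameters.

```python
import random

class CuckooTable:
    """Minimal cuckoo hash set: two tables, two hash functions (a sketch)."""

    def __init__(self, size=16):
        self.size = size
        self.t = [[None] * size, [None] * size]
        self._reseed()

    def _reseed(self):
        self.seeds = (random.random(), random.random())

    def _h(self, which, key):
        return hash((self.seeds[which], key)) % self.size

    def __contains__(self, key):
        # Exactly two probes, so worst-case constant-time lookup.
        return (self.t[0][self._h(0, key)] == key or
                self.t[1][self._h(1, key)] == key)

    def insert(self, key, max_kicks=32):
        if key in self:
            return
        for _ in range(max_kicks):
            for which in (0, 1):
                i = self._h(which, key)
                if self.t[which][i] is None:
                    self.t[which][i] = key
                    return
                # evict the resident key and find it a new home instead
                self.t[which][i], key = key, self.t[which][i]
        self._rehash_with(key)        # probable cycle: rebuild everything

    def _rehash_with(self, key):
        old = [k for row in self.t for k in row if k is not None] + [key]
        self.size *= 2
        self.t = [[None] * self.size, [None] * self.size]
        self._reseed()
        for k in old:
            self.insert(k)
```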
https://en.wikipedia.org/wiki/Minimal_perfect_hash_function
In cryptography, a zero-knowledge proof (also known as a ZK proof or ZKP) is a protocol in which one party (the prover) can convince another party (the verifier) that some given statement is true, without conveying to the verifier any information beyond the mere fact of that statement's truth.[1] The intuition underlying zero-knowledge proofs is that it is trivial to prove possession of the relevant information simply by revealing it; the hard part is to prove this possession without revealing this information (or any aspect of it whatsoever).[2]

In light of the fact that one should be able to generate a proof of some statement only when in possession of certain secret information connected to the statement, the verifier, even after having become convinced of the statement's truth, should nonetheless remain unable to prove the statement to further third parties.

Zero-knowledge proofs can be interactive, meaning that the prover and verifier exchange messages according to some protocol, or noninteractive, meaning that the verifier is convinced by a single prover message and no other communication is needed. In the standard model, interaction is required, except for trivial proofs of BPP problems.[3] In the common random string and random oracle models, non-interactive zero-knowledge proofs exist. The Fiat–Shamir heuristic can be used to transform certain interactive zero-knowledge proofs into noninteractive ones.[4][5][6]

There is a well-known story presenting the fundamental ideas of zero-knowledge proofs, first published in 1990 by Jean-Jacques Quisquater and others in their paper "How to Explain Zero-Knowledge Protocols to Your Children".[7] The two parties in the zero-knowledge proof story are Peggy as the prover of the statement, and Victor, the verifier of the statement. In this story, Peggy has uncovered the secret word used to open a magic door in a cave. The cave is shaped like a ring, with the entrance on one side and the magic door blocking the opposite side. Victor wants to know whether Peggy knows the secret word; but Peggy, being a very private person, does not want to reveal her knowledge (the secret word) to Victor or to reveal the fact of her knowledge to the world in general.

They label the left and right paths from the entrance A and B. First, Victor waits outside the cave as Peggy goes in. Peggy takes either path A or B; Victor is not allowed to see which path she takes. Then, Victor enters the cave and shouts the name of the path he wants her to use to return, either A or B, chosen at random. Providing she really does know the magic word, this is easy: she opens the door, if necessary, and returns along the desired path. However, suppose she did not know the word. Then, she would only be able to return by the named path if Victor were to give the name of the same path by which she had entered. Since Victor would choose A or B at random, she would have a 50% chance of guessing correctly. If they were to repeat this trick many times, say 20 times in a row, her chance of successfully anticipating all of Victor's requests would be reduced to 1 in $2^{20}$, or about $9.54 \times 10^{-7}$. Thus, if Peggy repeatedly appears at the exit Victor names, then he can conclude that it is extremely probable that Peggy does, in fact, know the secret word.

One side note with respect to third-party observers: even if Victor is wearing a hidden camera that records the whole transaction, the only thing the camera will record is in one case Victor shouting "A!" and Peggy appearing at A, or in the other case Victor shouting "B!"
and Peggy appearing at B. A recording of this type would be trivial for any two people to fake (requiring only that Peggy and Victor agree beforehand on the sequence of As and Bs that Victor will shout). Such a recording will certainly never be convincing to anyone but the original participants. In fact, even a person who was present as an observer at the original experiment should be unconvinced, since Victor and Peggy could have orchestrated the whole "experiment" from start to finish.

Further, if Victor chooses his As and Bs by flipping a coin on-camera, this protocol loses its zero-knowledge property; the on-camera coin flip would probably be convincing to any person watching the recording later. Thus, although this does not reveal the secret word to Victor, it does make it possible for Victor to convince the world in general that Peggy has that knowledge, counter to Peggy's stated wishes. However, digital cryptography generally "flips coins" by relying on a pseudo-random number generator, which is akin to a coin with a fixed pattern of heads and tails known only to the coin's owner. If Victor's coin behaved this way, then again it would be possible for Victor and Peggy to have faked the experiment, so using a pseudo-random number generator would not reveal Peggy's knowledge to the world in the same way that using a flipped coin would.

Peggy could prove to Victor that she knows the magic word, without revealing it to him, in a single trial. If both Victor and Peggy go together to the mouth of the cave, Victor can watch Peggy go in through A and come out through B. This would prove with certainty that Peggy knows the magic word, without revealing the magic word to Victor. However, such a proof could be observed by a third party, or recorded by Victor, and such a proof would be convincing to anybody. In other words, Peggy could not refute such a proof by claiming she colluded with Victor, and she is therefore no longer in control of who is aware of her knowledge.

Imagine your friend "Victor" is red-green colour-blind (while you are not) and you have two balls: one red and one green, but otherwise identical. To Victor, the balls seem completely identical. Victor is skeptical that the balls are actually distinguishable. You want to prove to Victor that the balls are in fact differently coloured, but nothing else. In particular, you do not want to reveal which ball is the red one and which is the green.

Here is the proof system: You give the two balls to Victor and he puts them behind his back. Next, he takes one of the balls and brings it out from behind his back and displays it. He then places it behind his back again and then chooses to reveal just one of the two balls, picking one of the two at random with equal probability. He will ask you, "Did I switch the ball?" This whole procedure is then repeated as often as necessary.

By looking at the balls' colours, you can, of course, say with certainty whether or not he switched them. On the other hand, if the balls were the same colour and hence indistinguishable, your ability to determine whether a switch occurred would be no better than random guessing. Since the probability that you would have randomly succeeded at identifying each switch/non-switch is 50%, the probability of having randomly succeeded at all switch/non-switches approaches zero. Over multiple trials, the success rate would statistically converge to 50%, and you could not achieve a performance significantly better than chance.
If you and your friend repeat this "proof" multiple times (e.g. 20 times), your friend should become convinced that the balls are indeed differently coloured. The above proof is zero-knowledge because your friend never learns which ball is green and which is red; indeed, he gains no knowledge about how to distinguish the balls.[8]

One well-known example of a zero-knowledge proof is the "Where's Wally" example. In this example, the prover wants to prove to the verifier that they know where Wally is on a page in a Where's Wally? book, without revealing his location to the verifier.[9]

The prover starts by taking a large black board with a small hole in it, the size of Wally. The board is twice the size of the book in both directions, so the verifier cannot see where on the page the prover is placing it. The prover then places the board over the page so that Wally is in the hole.[9]

The verifier can now look through the hole and see Wally, but cannot see any other part of the page. Therefore, the prover has proven to the verifier that they know where Wally is, without revealing any other information about his location.[9]

This example is not a perfect zero-knowledge proof, because the prover does reveal some information about Wally's location, such as his body position. However, it is a decent illustration of the basic concept of a zero-knowledge proof.

A zero-knowledge proof of some statement must satisfy three properties: completeness (an honest prover can convince an honest verifier of a true statement), soundness (a cheating prover cannot, except with small probability, convince the verifier of a false statement), and zero-knowledge (the verifier learns nothing beyond the truth of the statement). The first two of these are properties of more general interactive proof systems. The third is what makes the proof zero-knowledge.[10]

Zero-knowledge proofs are not proofs in the mathematical sense of the term because there is some small probability, the soundness error, that a cheating prover will be able to convince the verifier of a false statement. In other words, zero-knowledge proofs are probabilistic "proofs" rather than deterministic proofs. However, there are techniques to decrease the soundness error to negligibly small values (for example, guessing correctly on a hundred or thousand binary decisions has a $1/2^{100}$ or $1/2^{1000}$ soundness error, respectively. As the number of bits increases, the soundness error decreases toward zero).

A formal definition of zero-knowledge must use some computational model, the most common one being that of a Turing machine. Let $P$, $V$, and $S$ be Turing machines. An interactive proof system $(P, V)$ for a language $L$ is zero-knowledge if for any probabilistic polynomial time (PPT) verifier $\hat{V}$ there exists a PPT simulator $S$ such that, for every $x \in L$ and every auxiliary string $z$, the view $\mathrm{View}_{\hat{V}}[P(x) \leftrightarrow \hat{V}(x, z)]$ is distributed identically to $S(x, z)$, where $\mathrm{View}_{\hat{V}}[P(x) \leftrightarrow \hat{V}(x, z)]$ is a record of the interactions between $P(x)$ and $\hat{V}(x, z)$. The prover $P$ is modeled as having unlimited computation power (in practice, $P$ usually is a probabilistic Turing machine).

Intuitively, the definition states that an interactive proof system $(P, V)$ is zero-knowledge if for any verifier $\hat{V}$ there exists an efficient simulator $S$ (depending on $\hat{V}$) that can reproduce the conversation between $P$ and $\hat{V}$ on any given input. The auxiliary string $z$ in the definition plays the role of "prior knowledge" (including the random coins of $\hat{V}$).
The definition implies that $\hat{V}$ cannot use any prior-knowledge string $z$ to mine information out of its conversation with $P$, because if $S$ is also given this prior knowledge then it can reproduce the conversation between $\hat{V}$ and $P$ just as before.

The definition given is that of perfect zero-knowledge. Computational zero-knowledge is obtained by requiring that the views of the verifier $\hat{V}$ and the simulator are only computationally indistinguishable, given the auxiliary string.

These ideas can be applied to a more realistic cryptography application. Peggy wants to prove to Victor that she knows the discrete logarithm of a given value in a given group.[11]

For example, given a value $y$, a large prime $p$, and a generator $g$, she wants to prove that she knows a value $x$ such that $g^x \equiv y \pmod{p}$, without revealing $x$. Indeed, knowledge of $x$ could be used as a proof of identity, in that Peggy could have such knowledge because she chose a random value $x$ that she did not reveal to anyone, computed $y = g^x \bmod p$, and distributed the value of $y$ to all potential verifiers, such that at a later time, proving knowledge of $x$ is equivalent to proving identity as Peggy.

The protocol proceeds as follows: in each round, Peggy generates a random number $r$, computes $C = g^r \bmod p$ and discloses this to Victor. After receiving $C$, Victor randomly issues one of the following two requests: he either requests that Peggy disclose the value of $r$, or the value of $(x + r) \bmod (p - 1)$. Victor can verify either answer; if he requested $r$, he can then compute $g^r \bmod p$ and verify that it matches $C$. If he requested $(x + r) \bmod (p - 1)$, then he can verify that $C$ is consistent with this, by computing $g^{(x+r) \bmod (p-1)} \bmod p$ and verifying that it matches $(C \cdot y) \bmod p$. If Peggy indeed knows the value of $x$, then she can respond to either one of Victor's possible challenges.

If Peggy knew or could guess which challenge Victor is going to issue, then she could easily cheat and convince Victor that she knows $x$ when she does not: if she knows that Victor is going to request $r$, then she proceeds normally: she picks $r$, computes $C = g^r \bmod p$, and discloses $C$ to Victor; she will be able to respond to Victor's challenge. On the other hand, if she knows that Victor will request $(x + r) \bmod (p - 1)$, then she picks a random value $r'$, computes $C' \equiv g^{r'} \cdot (g^x)^{-1} \pmod{p}$, and discloses $C'$ to Victor as the value of $C$ that he is expecting. When Victor challenges her to reveal $(x + r) \bmod (p - 1)$, she reveals $r'$, for which Victor will verify consistency, since he will in turn compute $g^{r'} \bmod p$, which matches $C' \cdot y$, since Peggy multiplied by the modular multiplicative inverse of $y$.

However, if in either one of the above scenarios Victor issues a challenge other than the one she was expecting and for which she manufactured the result, then she will be unable to respond to the challenge, under the assumption of infeasibility of solving the discrete log for this group. If she picked $r$ and disclosed $C = g^r \bmod p$, then she will be unable to produce a valid $(x + r) \bmod (p - 1)$ that would pass Victor's verification, given that she does not know $x$. And if she picked a value $r'$ that poses as $(x + r) \bmod (p - 1)$, then she would have to respond with the discrete log of the value that she disclosed; but Peggy does not know this discrete log, since the value $C$ she disclosed was obtained through arithmetic with known values, and not by computing a power with a known exponent. Thus, a cheating prover has a 0.5 probability of successfully cheating in one round.
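The round structure is easy to simulate. The sketch below uses a small Mersenne prime and toy parameters purely for illustration; a real deployment would use a properly generated group. It plays honest rounds, and cheating rounds in which Peggy guesses the challenge in advance.

```python
import random

p = 2**61 - 1                      # a known prime, far too small for real use
g = 5
x = random.randrange(1, p - 1)     # Peggy's secret
y = pow(g, x, p)                   # public: y = g^x mod p

def honest_round() -> bool:
    """Peggy knows x, so she can answer either of Victor's challenges."""
    r = random.randrange(0, p - 1)
    C = pow(g, r, p)                       # commitment
    if random.random() < 0.5:              # challenge: reveal r
        return pow(g, r, p) == C
    s = (x + r) % (p - 1)                  # challenge: reveal (x + r) mod (p-1)
    return pow(g, s, p) == (C * y) % p

def cheating_round() -> bool:
    """Peggy does not know x; she guesses the challenge and preps for it."""
    expects_r = random.random() < 0.5
    if expects_r:
        r = random.randrange(0, p - 1)
        C = pow(g, r, p)                        # can answer only the r challenge
    else:
        r2 = random.randrange(0, p - 1)
        C = pow(g, r2, p) * pow(y, -1, p) % p   # C' = g^r' * y^-1 mod p
    challenge_is_r = random.random() < 0.5
    return challenge_is_r == expects_r          # a wrong guess means she is caught

assert all(honest_round() for _ in range(50))
caught = sum(not cheating_round() for _ in range(1000))
print(f"cheater caught in {caught}/1000 rounds (about half, as expected)")
```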
By executing a large-enough number of rounds, the probability of a cheating prover succeeding can be made arbitrarily low.

To show that the above interactive proof gives zero knowledge other than the fact that Peggy knows $x$, one can use similar arguments as used in the above proof of completeness and soundness. Specifically, a simulator, say Simon, who does not know $x$, can simulate the exchange between Peggy and Victor by the following procedure. Firstly, Simon randomly flips a fair coin. If the result is "heads", then he picks a random value $r$, computes $C = g^r \bmod p$, and discloses $C$ as if it is a message from Peggy to Victor. Then Simon also outputs a message "request the value of $r$" as if it is sent from Victor to Peggy, and immediately outputs the value of $r$ as if it is sent from Peggy to Victor. A single round is complete. On the other hand, if the coin-flipping result is "tails", then Simon picks a random number $r'$, computes $C' = g^{r'} \cdot y^{-1} \bmod p$, and discloses $C'$ as if it is a message from Peggy to Victor. Then Simon outputs "request the value of $(x + r) \bmod (p - 1)$" as if it is a message from Victor to Peggy. Finally, Simon outputs the value of $r'$ as if it is the response from Peggy back to Victor. A single round is complete. By the previous arguments when proving the completeness and soundness, the interactive communication simulated by Simon is indistinguishable from the true correspondence between Peggy and Victor. The zero-knowledge property is thus guaranteed.

The following scheme is due to Manuel Blum.[12]

In this scenario, Peggy knows a Hamiltonian cycle for a large graph $G$. Victor knows $G$ but not the cycle (e.g., Peggy has generated $G$ and revealed it to him). Finding a Hamiltonian cycle given a large graph is believed to be computationally infeasible, since its corresponding decision version is known to be NP-complete. Peggy will prove that she knows the cycle without simply revealing it (perhaps Victor is interested in buying it but wants verification first, or maybe Peggy is the only one who knows this information and is proving her identity to Victor).

To show that Peggy knows this Hamiltonian cycle, she and Victor play several rounds of a game. In each round, Peggy creates an isomorphic copy $H$ of $G$ by randomly relabeling its vertices and commits to $H$; Victor then asks her, at random, either to reveal the isomorphism between $G$ and $H$, or to reveal a Hamiltonian cycle in $H$.

It is important that the commitment to the graph be such that Victor can verify, in the second case, that the cycle is really made of edges from $H$. This can be done by, for example, committing to every edge (or lack thereof) separately.

If Peggy does know a Hamiltonian cycle in $G$, then she can easily satisfy Victor's demand for either the graph isomorphism producing $H$ from $G$ (which she had committed to in the first step) or a Hamiltonian cycle in $H$ (which she can construct by applying the isomorphism to the cycle in $G$).

Peggy's answers do not reveal the original Hamiltonian cycle in $G$. In each round, Victor will learn only $H$'s isomorphism to $G$ or a Hamiltonian cycle in $H$. He would need both answers for a single $H$ to discover the cycle in $G$, so the information remains unknown as long as Peggy can generate a distinct $H$ every round. If Peggy does not know of a Hamiltonian cycle in $G$, but somehow knew in advance what Victor would ask to see each round, then she could cheat. For example, if Peggy knew ahead of time that Victor would ask to see the Hamiltonian cycle in $H$, then she could generate a Hamiltonian cycle for an unrelated graph. Similarly, if Peggy knew in advance that Victor would ask to see the isomorphism, then she could simply generate an isomorphic graph $H$ (in which she also does not know a Hamiltonian cycle).
Victor could simulate the protocol by himself (without Peggy) because he knows what he will ask to see. Therefore, Victor gains no information about the Hamiltonian cycle in $G$ from the information revealed in each round.

If Peggy does not know the information, then she can guess which question Victor will ask and generate either a graph isomorphic to $G$ or a Hamiltonian cycle for an unrelated graph, but since she does not know a Hamiltonian cycle for $G$, she cannot do both. With this guesswork, her chance of fooling Victor is $2^{-n}$, where $n$ is the number of rounds. For all realistic purposes, it is infeasibly difficult to defeat a zero-knowledge proof with a reasonable number of rounds in this way.

Different variants of zero-knowledge can be defined by formalizing the intuitive concept of what is meant by the output of the simulator "looking like" the execution of the real proof protocol: in perfect zero-knowledge the two distributions are identical, in statistical zero-knowledge they are statistically close, and in computational zero-knowledge they are computationally indistinguishable. There are also various types of zero-knowledge proofs.

Zero-knowledge proof schemes can be constructed from various cryptographic primitives, such as hash-based cryptography, pairing-based cryptography, multi-party computation, or lattice-based cryptography.

Research in zero-knowledge proofs has been motivated by authentication systems where one party wants to prove its identity to a second party via some secret information (such as a password) but does not want the second party to learn anything about this secret. This is called a "zero-knowledge proof of knowledge". However, a password is typically too small or insufficiently random to be used in many schemes for zero-knowledge proofs of knowledge. A zero-knowledge password proof is a special kind of zero-knowledge proof of knowledge that addresses the limited size of passwords.

In April 2015, the one-out-of-many proofs protocol (a Sigma protocol) was introduced.[14] In August 2021, Cloudflare, an American web infrastructure and security company, decided to use the one-out-of-many proofs mechanism for private web verification using vendor hardware.[15]

One of the uses of zero-knowledge proofs within cryptographic protocols is to enforce honest behavior while maintaining privacy. Roughly, the idea is to force a user to prove, using a zero-knowledge proof, that its behavior is correct according to the protocol.[16][17] Because of soundness, we know that the user must really act honestly in order to be able to provide a valid proof. Because of zero knowledge, we know that the user does not compromise the privacy of its secrets in the process of providing the proof.

In 2016, the Princeton Plasma Physics Laboratory and Princeton University demonstrated a technique that may have applicability to future nuclear disarmament talks. It would allow inspectors to confirm whether or not an object is indeed a nuclear weapon without recording, sharing, or revealing the internal workings, which might be secret.[18]

Zero-knowledge proofs were applied in the Zerocoin and Zerocash protocols, which culminated in the birth of Zcoin[19] (later rebranded as Firo in 2020)[20] and Zcash cryptocurrencies in 2016. Zerocoin has a built-in mixing model that does not trust any peers or centralised mixing providers to ensure anonymity.[19] Users can transact in a base currency and can cycle the currency into and out of Zerocoins.[21] The Zerocash protocol uses a similar model (a variant known as a non-interactive zero-knowledge proof)[22] except that it can obscure the transaction amount, while Zerocoin cannot.
Given significant restrictions of transaction data on the Zerocash network, Zerocash is less prone to privacy timing attacks when compared to Zerocoin. However, this additional layer of privacy can cause potentially undetected hyperinflation of the Zerocash supply, because fraudulent coins cannot be tracked.[19][23]

In 2018, Bulletproofs were introduced. Bulletproofs are an improvement on non-interactive zero-knowledge proofs in which a trusted setup is not needed.[24] They were later implemented in the Mimblewimble protocol (on which the Grin and Beam cryptocurrencies are based) and the Monero cryptocurrency.[25] In 2019, Firo implemented the Sigma protocol, which is an improvement on the Zerocoin protocol without trusted setup.[26][14] In the same year, Firo introduced the Lelantus protocol, an improvement on the Sigma protocol that hides the origin and amount of a transaction.[27]

Zero-knowledge proofs by their nature can enhance privacy in identity-sharing systems, which are vulnerable to data breaches and identity theft. When integrated into a decentralized identifier system, ZKPs add an extra layer of encryption on DID documents.[28]

Zero-knowledge proofs were first conceived in 1985 by Shafi Goldwasser, Silvio Micali, and Charles Rackoff in their paper "The Knowledge Complexity of Interactive Proof-Systems".[16] This paper introduced the IP hierarchy of interactive proof systems (see interactive proof system) and conceived the concept of knowledge complexity, a measurement of the amount of knowledge about the proof transferred from the prover to the verifier. They also gave the first zero-knowledge proof for a concrete problem, that of deciding quadratic nonresidues mod $m$. Together with a paper by László Babai and Shlomo Moran, this landmark paper invented interactive proof systems, for which all five authors won the first Gödel Prize in 1993.

In their own words, Goldwasser, Micali, and Rackoff say:

Of particular interest is the case where this additional knowledge is essentially 0 and we show that [it] is possible to interactively prove that a number is quadratic non residue mod m releasing 0 additional knowledge. This is surprising as no efficient algorithm for deciding quadratic residuosity mod m is known when m's factorization is not given. Moreover, all known NP proofs for this problem exhibit the prime factorization of m. This indicates that adding interaction to the proving process, may decrease the amount of knowledge that must be communicated in order to prove a theorem.

The quadratic nonresidue problem has both an NP and a co-NP algorithm, and so lies in the intersection of NP and co-NP. This was also true of several other problems for which zero-knowledge proofs were subsequently discovered, such as an unpublished proof system by Oded Goldreich verifying that a two-prime modulus is not a Blum integer.[29]

Oded Goldreich, Silvio Micali, and Avi Wigderson took this one step further, showing that, assuming the existence of unbreakable encryption, one can create a zero-knowledge proof system for the NP-complete graph coloring problem with three colors. Since every problem in NP can be efficiently reduced to this problem, this means that, under this assumption, all problems in NP have zero-knowledge proofs.[30] The reason for the assumption is that, as in the above example, their protocols require encryption. A commonly cited sufficient condition for the existence of unbreakable encryption is the existence of one-way functions, but it is conceivable that some physical means might also achieve it.
On top of this, they also showed that the graph nonisomorphism problem, the complement of the graph isomorphism problem, has a zero-knowledge proof. This problem is in co-NP, but is not currently known to be in either NP or any practical class. More generally, Russell Impagliazzo and Moti Yung as well as Ben-Or et al. would go on to show that, also assuming one-way functions or unbreakable encryption, there are zero-knowledge proofs for all problems in IP = PSPACE, or in other words, anything that can be proved by an interactive proof system can be proved with zero knowledge.[31][32]

Not liking to make unnecessary assumptions, many theorists sought a way to eliminate the necessity of one-way functions. One way this was done was with multi-prover interactive proof systems (see interactive proof system), which have multiple independent provers instead of only one, allowing the verifier to "cross-examine" the provers in isolation to avoid being misled. It can be shown that, without any intractability assumptions, all languages in NP have zero-knowledge proofs in such a system.[33]

It turns out that, in an Internet-like setting, where multiple protocols may be executed concurrently, building zero-knowledge proofs is more challenging. The line of research investigating concurrent zero-knowledge proofs was initiated by the work of Dwork, Naor, and Sahai.[34] One particular development along these lines has been the development of witness-indistinguishable proof protocols. The property of witness-indistinguishability is related to that of zero-knowledge, yet witness-indistinguishable protocols do not suffer from the same problems of concurrent execution.[35]

Another variant of zero-knowledge proofs are non-interactive zero-knowledge proofs. Blum, Feldman, and Micali showed that a common random string shared between the prover and the verifier is enough to achieve computational zero-knowledge without requiring interaction.[5][6]

The most popular interactive or non-interactive zero-knowledge proof (e.g., zk-SNARK) protocols can be broadly categorized in the following four categories: Succinct Non-Interactive ARguments of Knowledge (SNARK), Scalable Transparent ARgument of Knowledge (STARK), Verifiable Polynomial Delegation (VPD), and Succinct Non-interactive ARGuments (SNARG). A list of zero-knowledge proof protocols and libraries is provided below, along with comparisons based on transparency, universality, plausible post-quantum security, and programming paradigm.[36] A transparent protocol is one that does not require any trusted setup and uses public randomness. A universal protocol is one that does not require a separate trusted setup for each circuit. Finally, a plausibly post-quantum protocol is one that is not susceptible to known attacks involving quantum algorithms.

While zero-knowledge proofs offer a secure way to verify information, the arithmetic circuits that implement them must be carefully designed. If these circuits lack sufficient constraints, they may introduce subtle yet critical security vulnerabilities. One of the most common classes of vulnerabilities in these systems is under-constrained logic, where insufficient constraints allow a malicious prover to produce a proof for an incorrect statement that still passes verification.
A 2024 systematization of known attacks found that approximately 96% of documented circuit-layer bugs in SNARK-based systems were due to under-constrained circuits.[56] These vulnerabilities often arise during the translation of high-level logic into low-level constraint systems, particularly when using domain-specific languages such as Circom or Gnark. Recent research has demonstrated that formally proving determinism – ensuring that a circuit's outputs are uniquely determined by its inputs – can eliminate entire classes of these vulnerabilities.[57]
https://en.wikipedia.org/wiki/Zero-knowledge_proof#Zero-Knowledge_Proof_protocols
VMEbus (Versa Module Eurocard[1] bus) is a computer bus standard physically based on Eurocard sizes.

In 1979, during development of the Motorola 68000 CPU, one of their engineers, Jack Kister, decided to set about creating a standardized bus system for 68000-based systems.[2] The Motorola team brainstormed for days to select the name VERSAbus. VERSAbus cards were large, 370 by 230 mm (14 1/2 by 9 1/4 in), and used edge connectors.[3] Only a few products adopted it, including the IBM System 9000 instrument controller and the Automatix robot and machine vision systems.

Kister was later joined by John Black, who refined the specifications and created the VERSAmodule product concept. A young engineer working for Black, Julie Keahey, designed the first VERSAmodule card, the VERSAbus Adaptor Module, used to run existing cards on the new VERSAbus. Sven Rau and Max Loesel of Motorola-Europe added a mechanical specification to the system, basing it on the Eurocard standard that was then late in the standardization process. The result was first known as VERSAbus-E but was later renamed VMEbus, for VERSAmodule Eurocard bus (although some refer to it as Versa Module Europa).[3]

At this point, a number of other companies involved in the 68000's ecosystem agreed to use the standard, including Signetics, Philips, Thomson, and Mostek. Soon it was officially standardized by the IEC as the IEC 821 VMEbus and by ANSI and IEEE as ANSI/IEEE 1014-1987.

The original standard was a 16-bit bus, designed to fit within the existing Eurocard DIN connectors. However, there have been several updates to the system to allow wider bus widths. The current VME64 includes a full 64-bit bus in 6U-sized cards and 32-bit in 3U cards. The VME64 protocol has a typical performance of 40 MB/s.[3] Other associated standards have added hot-swapping (plug-and-play) in VME64x, smaller 'IP' cards that plug into a single VMEbus card, and various interconnect standards for linking VME systems together.

In the late 1990s, synchronous protocols proved to be favourable. The research project was called VME320. The VITA Standards Organization called for a new standard for unmodified VME32/64 backplanes.[3] The new 2eSST protocol was approved in ANSI/VITA 1.5 in 1999.

Over the years, many extensions have been added to the VME interface, providing 'sideband' channels of communication in parallel to VME itself. Some examples are IP Module, RACEway Interlink, SCSA, Gigabit Ethernet on VME64x Backplanes, PCI Express, RapidIO, StarFabric and InfiniBand. VMEbus was also used to develop closely related standards, VXIbus and VPX. The VMEbus had a strong influence on many later computer buses such as STEbus.

The architectural concepts of the VMEbus are based on VERSAbus,[3] developed in the late 1970s by Motorola. This was later renamed "VME", short for Versa Module European, by Lyman (Lym) Hevle, then a VP with the Motorola Microsystems Operation. (He was later the founder of the VME Marketing Group, itself subsequently renamed the VME International Trade Association, or VITA.) John Black of Motorola, Craig MacKenna of Mostek and Cecil Kaplinsky of Signetics developed the first draft of the VMEbus specification. In October 1981, at the System '81 trade show in Munich, West Germany, Motorola, Mostek, Signetics/Philips, and Thomson CSF announced their joint support of the VMEbus. They also placed Revision A of the specification in the public domain. In 1985, Aitech developed, under contract for US Army TACOM, the first conduction-cooled 6U VMEbus board.
Although electrically providing a compliant VMEbus protocol interface, mechanically this board was not interchangeable for use in air-cooled lab VMEbus development chassis. In late 1987, a technical committee was formed under VITA, under the direction of IEEE, to create the first military, conduction-cooled 6U × 160 mm, fully electrically and mechanically compatible VMEbus board, co-chaired by Dale Young (DY4 Systems) and Doug Patterson (Plessey Microsystems, then Radstone Technology). ANSI/IEEE-1101.2-1992 was ratified and released in 1992 and remains in place as the conduction-cooled international standard for all 6U VMEbus products. In 1989, John Peters of Performance Technologies Inc. developed the initial concept of VME64: multiplexing address and data lines (A64/D64) on the VMEbus. The concept was demonstrated the same year and placed in the VITA Technical Committee in 1990 as a performance enhancement to the VMEbus specification. In 1993, new activities began on the base-VME architecture, involving the implementation of high-speed serial and parallel sub-buses for use as I/O interconnections and data mover subsystems. These architectures can be used as message switches, routers and small multiprocessor parallel architectures. VITA's application for recognition as an accredited standards developer organization of ANSI was granted in June 1993. Numerous other documents (including mezzanine, P2 and serial bus standards) have been placed with VITA as the Public Domain Administrator of these technologies. In many ways the VMEbus is equivalent or analogous to the pins of the 68000 run out onto a backplane. However, one of the key features of the 68000 is a flat 32-bit memory model, free of memory segmentation and other "anti-features". The result is that, while VME is very 68000-like, the 68000 is generic enough to make this not an issue in most cases. Like the 68000, VME uses separate 32-bit data and address buses. The 68000 address bus is actually 24-bit and the data bus 16-bit (although it is 32/32 internally), but the designers were already looking towards a full 32-bit implementation. In order to allow both bus widths, VME uses two different Eurocard connectors, P1 and P2. P1 contains three rows of 32 pins each, implementing the first 24 address bits, 16 data bits and all of the control signals. P2 contains one more row, which includes the remaining 8 address bits and 16 data bits. A block transfer protocol allows several bus transfers to occur with a single address cycle. In block transfer mode, the first transfer includes an address cycle and subsequent transfers require only data cycles. The slave is responsible for ensuring that these transfers use successive addresses. Bus masters can release the bus in two ways. With Release When Done (RWD), the master releases the bus when it completes a transfer and must re-arbitrate for the bus before every subsequent transfer. With Release On Request (ROR), the master retains the bus by continuing to assert BBSY* between transfers. ROR allows the master to retain control of the bus until a Bus Clear (BCLR*) is asserted by another master that wishes to arbitrate for the bus. Thus a master that generates bursts of traffic can optimize its performance by arbitrating for the bus on only the first transfer of each burst. This reduction in arbitration overhead for the bursting master comes at the cost of somewhat higher transfer latency for the other masters. Address modifiers are used to divide the VME bus address space into several distinct sub-spaces.
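To make the block-transfer protocol concrete, here is a minimal C sketch, assuming a simplified word-addressed slave model: the slave latches the address presented in the single address cycle and then serves each subsequent data cycle from successive addresses, exactly the responsibility the text assigns to it. The handshake signals of real VME cycles (AS*, DS0*/DS1*, DTACK*) are deliberately not modeled, and all names are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal model of a VME block-transfer slave: it latches the address
 * presented in the single address cycle, then supplies data for each
 * subsequent data cycle from successive addresses. Names and the
 * word-addressed memory model are illustrative. */
typedef struct {
    uint32_t memory[256];   /* slave's local memory (hypothetical) */
    uint32_t next_addr;     /* address for the next data cycle */
} vme_slave;

void slave_address_cycle(vme_slave *s, uint32_t addr) {
    s->next_addr = addr;            /* one address cycle per block */
}

uint32_t slave_data_cycle(vme_slave *s) {
    uint32_t data = s->memory[s->next_addr / 4];
    s->next_addr += 4;              /* slave steps to the next address */
    return data;
}

int main(void) {
    vme_slave s = {0};
    for (int i = 0; i < 256; i++) s.memory[i] = i;

    slave_address_cycle(&s, 0x40);  /* master drives the address once */
    for (int n = 0; n < 4; n++)     /* then reads four successive words */
        printf("data cycle %d -> 0x%08x\n", n, slave_data_cycle(&s));
    return 0;
}
```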
The address modifier is a 6-bit-wide set of signals on the backplane. Address modifiers specify the number of significant address bits, the privilege mode (to allow processors to distinguish between bus accesses by user-level or system-level software), and whether or not the transfer is a block transfer. (The original article includes a partial table of address modifier codes.) On the VME bus, all transfers are DMA and every card is a master or slave. In most bus standards, there is a considerable amount of complexity added in order to support various transfer types and master/slave selection. For instance, with the ISA bus, both of these features had to be added alongside the existing "channels" model, whereby all communication was handled by the host CPU. This makes VME considerably simpler at a conceptual level while being more powerful, though it requires more complex controllers on each card. When developing or troubleshooting the VME bus, examination of hardware signals can be very important. Logic analyzers and bus analyzers are tools that collect, analyze, decode, and store signals so people can view the high-speed waveforms at their leisure. VITA offers a comprehensive FAQ to assist with the front-end design and development of VME systems. Computers using VMEbus are listed in the original article. [Pinout tables for the P1 and P2 connectors, shown as seen looking into the backplane socket, appear in the source.[5][6]] P2 rows a and c can be used by a secondary bus, for example the STEbus.
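A sketch of how a slave's decoder might interpret the 6-bit address modifier follows. The codes used are the classic VME data-access values (e.g. 0x09 for A32 and 0x39 for A24 non-privileged data access); treat them as illustrative assumptions and defer to the specification's full table for authoritative assignments.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of the 6-bit VME address modifier. Only a few of
 * the classic data-access codes are handled; the specification's table
 * is authoritative and much larger. */
const char *decode_am(uint8_t am) {
    switch (am & 0x3F) {            /* address modifier is 6 bits wide */
        case 0x09: return "A32 non-privileged data access";
        case 0x0D: return "A32 supervisory data access";
        case 0x29: return "A16 non-privileged access";
        case 0x39: return "A24 non-privileged data access";
        case 0x3D: return "A24 supervisory data access";
        default:   return "other/undefined in this sketch";
    }
}

int main(void) {
    uint8_t codes[] = {0x09, 0x29, 0x39, 0x3D, 0x2A};
    for (unsigned i = 0; i < sizeof codes; i++)
        printf("AM 0x%02X: %s\n", codes[i], decode_am(codes[i]));
    return 0;
}
```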
https://en.wikipedia.org/wiki/VMEbus
In computer architecture, a bus (historically also called a data highway[1] or databus) is a communication system that transfers data between components inside a computer or between computers.[2] It encompasses both hardware (e.g., wires, optical fiber) and software, including communication protocols.[3] At its core, a bus is a shared physical pathway, typically composed of wires, traces on a circuit board, or busbars, that allows multiple devices to communicate. To prevent conflicts and ensure orderly data exchange, buses rely on a communication protocol to manage which device can transmit data at a given time. Buses are categorized based on their role, such as system buses (also known as internal buses, internal data buses, or memory buses) connecting the CPU and memory. Expansion buses, also called peripheral buses, extend the system to connect additional devices, including peripherals. Examples of widely used buses include PCI Express (PCIe) for high-speed internal connections and Universal Serial Bus (USB) for connecting external devices. Modern buses utilize both parallel and serial communication, employing advanced encoding methods to maximize speed and efficiency. Features such as direct memory access (DMA) further enhance performance by allowing data transfers directly between devices and memory without requiring CPU intervention. An address bus is a bus that is used to specify a physical address. When a processor or DMA-enabled device needs to read or write to a memory location, it specifies that memory location on the address bus (the value to be read or written is sent on the data bus). The width of the address bus determines the amount of memory a system can address. For example, a system with a 32-bit address bus can address 2^32 (4,294,967,296) memory locations. If each memory location holds one byte, the addressable memory space is about 4 GB. Early processors used a wire for each bit of the address width. For example, a 16-bit address bus had 16 physical wires making up the bus. As buses became wider and lengthier, this approach became expensive in terms of the number of chip pins and board traces. Beginning with the Mostek 4096 DRAM, address multiplexing implemented with multiplexers became common. In a multiplexed address scheme, the address is sent in two equal parts on alternate bus cycles. This halves the number of address bus signals required to connect to the memory. For example, a 32-bit address bus can be implemented by using 16 lines and sending the first half of the memory address, immediately followed by the second half of the address. Typically two additional pins in the control bus, row-address strobe (RAS) and column-address strobe (CAS), are used to tell the DRAM whether the address bus is currently sending the first half of the memory address or the second half. Accessing an individual byte frequently requires reading or writing the full bus width (a word) at once. In these instances the least significant bits of the address bus may not even be implemented; it is instead the responsibility of the controlling device to isolate the individual byte required from the complete word transmitted. This is the case, for instance, with the VESA Local Bus, which lacks the two least significant bits, limiting this bus to aligned 32-bit transfers. Historically, there were also some examples of computers that were only able to address words: word machines. The memory bus is the bus that connects the main memory to the memory controller in computer systems.
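The RAS/CAS multiplexing scheme just described can be sketched in C. The widths are assumptions for illustration (a 22-bit address sent as two 11-bit halves); real DRAMs and controllers differ, and the strobe callback merely stands in for the control-bus signals.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of address multiplexing on DRAM address pins: a full address is
 * sent in two halves, qualified by RAS (row) and then CAS (column).
 * Widths are illustrative; real parts differ. */
#define HALF_BITS 11u  /* hypothetical: 22-bit address as two 11-bit halves */

void send_multiplexed(uint32_t addr,
                      void (*strobe)(const char *phase, uint32_t bits)) {
    uint32_t row = (addr >> HALF_BITS) & ((1u << HALF_BITS) - 1);
    uint32_t col = addr & ((1u << HALF_BITS) - 1);
    strobe("RAS (first half)", row);   /* row-address strobe phase */
    strobe("CAS (second half)", col);  /* column-address strobe phase */
}

static void print_strobe(const char *phase, uint32_t bits) {
    printf("%-18s -> 0x%03x\n", phase, bits);
}

int main(void) {
    send_multiplexed(0x2ABCDE & 0x3FFFFF, print_strobe); /* 22-bit address */
    return 0;
}
```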
Originally, general-purpose buses like VMEbus and the S-100 bus were used, but to reduce latency, modern memory buses are designed to connect directly to DRAM chips, and thus are defined by chip standards bodies such as JEDEC. Examples are the various generations of SDRAM, and serial point-to-point buses like SLDRAM and RDRAM. Buses can be parallel buses, which carry data words in parallel on multiple wires, or serial buses, which carry data in bit-serial form. The addition of extra power and control connections, differential drivers, and data connections in each direction usually means that most serial buses have more conductors than the minimum of one used in 1-Wire and UNI/O. As data rates increase, the problems of timing skew, power consumption, electromagnetic interference and crosstalk across parallel buses become more and more difficult to circumvent. One partial solution to this problem has been to double pump the bus. Often, a serial bus can be operated at higher overall data rates than a parallel bus, despite having fewer electrical connections, because a serial bus inherently has no timing skew or crosstalk. USB, FireWire, and Serial ATA are examples of this. Multidrop connections do not work well for fast serial buses, so most modern serial buses use daisy-chain or hub designs. The transition from parallel to serial buses was enabled by Moore's law, which allowed the incorporation of SerDes blocks in the integrated circuits used in computers.[4] Network connections such as Ethernet are not generally regarded as buses, although the difference is largely conceptual rather than practical. An attribute generally used to characterize a bus is that power is provided by the bus for the connected hardware. This emphasizes the busbar origins of bus architecture as supplying switched or distributed power. This excludes, as buses, schemes such as serial RS-232, parallel Centronics, IEEE 1284 interfaces and Ethernet, since these devices also needed separate power supplies. Universal Serial Bus devices may use the bus-supplied power, but often use a separate power source. This distinction is exemplified by a telephone system with a connected modem, where the RJ11 connection and associated modulated signalling scheme is not considered a bus, and is analogous to an Ethernet connection. A phone line connection scheme is not considered to be a bus with respect to signals, but the central office uses buses with cross-bar switches for connections between phones. However, this distinction, that power is provided by the bus, is not the case in many avionic systems, where data connections such as ARINC 429, ARINC 629, MIL-STD-1553B (STANAG 3838), and EFABus (STANAG 3910) are commonly referred to as data buses or, sometimes, databuses. Such avionic data buses are usually characterized by having several Line Replaceable Items/Units (LRI/LRUs) connected to a common, shared media. They may, as with ARINC 429, be simplex, i.e. have a single source LRI/LRU, or, as with ARINC 629, MIL-STD-1553B, and STANAG 3910, be duplex, allowing all the connected LRI/LRUs to act, at different times (half duplex), as transmitters and receivers of data.[5] The frequency or speed of a bus is measured in Hz (e.g., MHz) and determines how many clock cycles there are per second; there can be one or more data transfers per clock cycle. If there is a single transfer per clock cycle it is known as Single Data Rate (SDR), and if there are two transfers per clock cycle it is known as Double Data Rate (DDR), although the use of signalling other than SDR is uncommon outside of RAM.
An example of this is PCIe which uses SDR.[6]Within each data transfer there can be multiple bits of data. This is described as the width of a bus which is the number of bits the bus can transfer per clock cycle and can be synonymous with the number of physical electrical conductors the bus has if each conductor transfers one bit at a time.[7][8][9]The data rate in bits per second can be obtained by multiplying the number of bits per clock cycle times the frequency times the number of transfers per clock cycle.[10][11]Alternatively a bus such asPCIecan use modulation or encoding such asPAM4[12][13][14]which groups 2 bits into symbols which are then transferred instead of the bits themselves, and allows for an increase in data transfer speed without increasing the frequency of the bus. The effective or real data transfer speed/rate may be lower due to the use of encoding that also allows for error correction such as 128/130b (b for bit) encoding.[15][16][17]The data transfer speed is also known as the bandwidth.[18][19] The simplestsystem bushas completely separate input data lines, output data lines, and address lines. To reduce cost, most microcomputers have a bidirectional data bus, re-using the same wires for input and output at different times.[20] Some processors use a dedicated wire for each bit of the address bus, data bus, and the control bus. For example, the 64-pinSTEbusis composed of 8 physical wires dedicated to the 8-bit data bus, 20 physical wires dedicated to the 20-bit address bus, 21 physical wires dedicated to the control bus, and 15 physical wires dedicated to various power buses. Bus multiplexing requires fewer wires, which reduces costs in many early microprocessors and DRAM chips. One common multiplexing scheme,address multiplexing, has already been mentioned. Another multiplexing scheme re-uses the address bus pins as the data bus pins,[20]an approach used byconventional PCIand the8086. The variousserial busescan be seen as the ultimate limit of multiplexing, sending each of the address bits and each of the data bits, one at a time, through a single pin (or a single differential pair). Over time, several groups of people worked on various computer bus standards, including the IEEE Bus Architecture Standards Committee (BASC), the IEEE Superbus study group, the open microprocessor initiative (OMI), the open microsystems initiative (OMI), the Gang of Nine that developedEISA, etc.[citation needed] Earlycomputerbuses were bundles of wire that attachedcomputer memoryand peripherals. Anecdotally termed thedigit trunkin the early AustralianCSIRACcomputer,[21]they were named after electrical power buses, orbusbars. Almost always, there was one bus for memory, and one or more separate buses for peripherals. These were accessed by separate instructions, with completely different timings and protocols. One of the first complications was the use ofinterrupts. Early computer programs performedI/Obywaiting in a loopfor the peripheral to become ready. This was a waste of time for programs that had other tasks to do. Also, if the program attempted to perform those other tasks, it might take too long for the program to check again, resulting in loss of data. Engineers thus arranged for the peripherals to interrupt the CPU. The interrupts had to be prioritized, because the CPU can only execute code for one peripheral at a time, and some devices are more time-critical than others. 
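Putting the earlier data-rate rule into code: effective bandwidth is width times transfers per clock times frequency, scaled by the line encoding's payload fraction. The following C sketch checks two illustrative cases; the 8 GT/s lane with 128b/130b coding reproduces the commonly quoted figure of roughly 1 GB/s per PCIe 3.0 lane in each direction.

```c
#include <stdio.h>

/* Effective bus bandwidth, per the rule in the text: bits per transfer
 * x transfers per clock x clock frequency, scaled by the encoding's
 * payload fraction, then converted to bytes. */
static double effective_bw(double width_bits, double transfers_per_clk,
                           double freq_hz, double payload_bits,
                           double coded_bits) {
    return width_bits * transfers_per_clk * freq_hz
           * (payload_bits / coded_bits) / 8.0;  /* bytes per second */
}

int main(void) {
    /* Illustrative DDR memory bus: 64 bits wide, 2 transfers per clock,
     * 1 GHz clock, no line coding. */
    printf("64-bit DDR @ 1 GHz      : %6.3f GB/s\n",
           effective_bw(64, 2, 1e9, 1, 1) / 1e9);
    /* One serial lane at 8 GT/s with 128b/130b coding (as in PCIe 3.0). */
    printf("1 lane, 8 GT/s, 128/130 : %6.3f GB/s\n",
           effective_bw(1, 1, 8e9, 128, 130) / 1e9);
    return 0;
}
```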
High-end systems introduced the idea ofchannel controllers, which were essentially small computers dedicated to handling the input and output of a given bus.IBMintroduced these on theIBM 709in 1958, and they became a common feature of their platforms. Other high-performance vendors likeControl Data Corporationimplemented similar designs. Generally, the channel controllers would do their best to run all of the bus operations internally, moving data when the CPU was known to be busy elsewhere if possible, and only using interrupts when necessary. This greatly reduced CPU load, and provided better overall system performance. To provide modularity, memory and I/O buses can be combined into a unifiedsystem bus.[22]In this case, a single mechanical and electrical system can be used to connect together many of the system components, or in some cases, all of them. Later computer programs began to share memory common to several CPUs. Access to this memory bus had to be prioritized, as well. The simple way to prioritize interrupts or bus access was with adaisy chain. In this case signals will naturally flow through the bus in physical or logical order, eliminating the need for complex scheduling. Digital Equipment Corporation(DEC) further reduced cost for mass-producedminicomputers, andmapped peripheralsinto the memory bus, so that the input and output devices appeared to be memory locations. This was implemented in theUnibusof thePDP-11around 1969.[23] Earlymicrocomputerbus systems were essentially a passivebackplaneconnected directly or through buffer amplifiers to the pins of theCPU. Memory and other devices would be added to the bus using the same address and data pins as the CPU itself used, connected in parallel. Communication was controlled by the CPU, which read and wrote data from the devices as if they are blocks of memory, using the same instructions, all timed by a central clock controlling the speed of the CPU. Still, devicesinterruptedthe CPU by signaling on separate CPU pins. For instance, adisk drivecontroller would signal the CPU that new data was ready to be read, at which point the CPU would move the data by reading the memory location that corresponded to the disk drive. Almost all early microcomputers were built in this fashion, starting with theS-100 busin theAltair 8800computer system. In some instances, most notably in theIBM PC, although similar physical architecture can be employed, instructions to access peripherals (inandout) and memory (movand others) have not been made uniform at all, and still generate distinct CPU signals, that could be used to implement a separate I/O bus. These simple bus systems had a serious drawback when used for general-purpose computers. All the equipment on the bus had to talk at the same speed, as it shared a single clock. Increasing the speed of the CPU becomes harder, because the speed of all the devices must increase as well. When it is not practical or economical to have all devices as fast as the CPU, the CPU must either enter await state, or work at a slower clock frequency temporarily,[24]to talk to other devices in the computer. While acceptable inembedded systems, this problem was not tolerated for long in general-purpose, user-expandable computers. Such bus systems are also difficult to configure when constructed from common off-the-shelf equipment. Typically each addedexpansion cardrequires manyjumpersin order to set memory addresses, I/O addresses, interrupt priorities, and interrupt numbers. 
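The memory-mapped style pioneered on the Unibus, where a device's registers simply appear at fixed addresses and ordinary loads and stores move the data (in contrast to the IBM PC's separate in/out instructions), still underlies most embedded programming. A compile-only C sketch follows; the addresses, register layout and ready bit are hypothetical, and volatile keeps the compiler from caching or reordering the device accesses.

```c
#include <stdint.h>

/* Memory-mapped I/O in the Unibus style: the disk controller's registers
 * occupy fixed memory addresses (hypothetical here), so reading a data
 * word is just a load from the right location. */
#define DISK_STATUS ((volatile uint16_t *)0x00FF4000u) /* hypothetical */
#define DISK_DATA   ((volatile uint16_t *)0x00FF4002u) /* hypothetical */
#define STATUS_READY 0x0001u

/* Poll until the controller signals readiness, then fetch one word.
 * An interrupt-driven design would avoid the busy-wait, as the text
 * explains. */
uint16_t disk_read_word(void) {
    while ((*DISK_STATUS & STATUS_READY) == 0)
        ;                  /* spin until the device raises its ready bit */
    return *DISK_DATA;     /* the load itself performs the bus transfer */
}
```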
Second-generation bus systems likeNuBusaddressed some of these problems. They typically separated the computer into twoaddress spaces, the CPU and memory on one side, and the various peripheral devices on the other. Abus controlleraccepted data from the CPU side to be moved to the peripherals side, thus shifting the communications protocol burden from the CPU itself. This allowed the CPU and memory side to evolve separately from the peripheral bus. Devices on the bus could talk to each other with no CPU intervention. This led to much better performance but also required the cards to be much more complex. These buses also often addressed speed issues by being bigger in terms of the size of the data path, moving from 8-bitparallel busesin the first generation, to 16 or 32-bit in the second, as well as adding software setup (later standardized asPlug-n-play) to supplant or replace the jumpers. However, these newer systems shared one quality with their earlier cousins, in that everyone on the bus had to talk at the same speed. While the CPU was now isolated and could increase speed, CPUs and memory continued to increase in speed much faster than the buses they talked to. The result was that the bus speeds were now much slower than what a modern system needed, and the machines were left starved for data. A particularly common example of this problem was thatvideo cardsquickly outran even the newer bus systems likePCI, and computers began to includeAGPjust to drive the video card. By 2004 AGP was outgrown again by high-end video cards and other peripherals and has been replaced by the newPCI Expressbus. An increasing number of external devices started employing their own bus systems as well. When disk drives were first introduced, they would be added to the machine with a card plugged into the bus, which is why computers have so many slots on the bus. But through the 1980s and 1990s, new systems likeSCSIandIDEwere introduced to serve this need, leaving most slots in modern systems empty. Today there are likely to be about five different buses in the typical machine, supporting various devices.[citation needed] Third-generation buses have been emerging into the market since about 2001, includingHyperTransportandInfiniBand. They also tend to be very flexible in terms of their physical connections, allowing them to be used both as internal buses, as well as connecting different machines together. This can lead to complex problems when trying to service different requests, so much of the work on these systems concerns software design, as opposed to the hardware itself. In general, these third-generation buses tend to look more like anetworkthan the original concept of a bus, with a higher protocol overhead needed than early systems, while also allowing multiple devices to use the bus at once. Buses such asWishbonehave been developed by theopen source hardwaremovement in an attempt to further remove legal and patent constraints from computer design. TheCompute Express Link(CXL) is anopen standardinterconnectfor high-speedCPU-to-device and CPU-to-memory, designed to accelerate next-generationdata centerperformance.[25] Manyfield busesare serial data buses (not to be confused with the parallel data bus section of asystem busorexpansion card), several of which use theRS-485electrical characteristics and then specify their own protocol and connector: Other serial buses include:
https://en.wikipedia.org/wiki/Computer_bus
Common Hardware Reference Platform (CHRP) is a standard system architecture for PowerPC-based computer systems published jointly by IBM and Apple in 1995. Like its predecessor PReP, it was conceptualized as a design to allow various operating systems to run on an industry-standard hardware platform, and specified the use of Open Firmware and RTAS for machine abstraction purposes. Unlike PReP, CHRP incorporated elements of the Power Macintosh architecture and was intended to support the classic Mac OS and NetWare, in addition to the four operating systems that had been ported to PReP at the time (Windows NT, OS/2, Solaris, and AIX). CHRP did not receive industry-wide adoption, however. The only systems to ship with actual CHRP hardware are certain members of IBM's RS/6000 series running AIX, and a small number of Motorola PowerStack workstations.[1] Mac OS 8 contains support for CHRP,[2] and New World Power Macintosh computers are partially based on CHRP and PReP. Power.org has a newer Power Architecture Platform Reference (PAPR) that provides the foundation for development of Power ISA-based computers running the Linux operating system. The PAPR was released in the fourth quarter of 2006.
https://en.wikipedia.org/wiki/Common_Hardware_Reference_Platform
The OpenPOWER Foundation is a collaboration around Power ISA-based products initiated by IBM and announced as the "OpenPOWER Consortium" on August 6, 2013.[5] IBM's focus is to open up technology surrounding their Power Architecture offerings, such as processor specifications, firmware, and software with a liberal license, using a collaborative development model with their partners.[6][7] The goal is to enable the server vendor ecosystem to build its own customized server, networking, and storage hardware for future data centers and cloud computing.[8] The governing body around the Power ISA instruction set is now the OpenPOWER Foundation: IBM allows its patents to be used royalty-free for compliant implementations.[9] Processors based on IBM's IP can now be fabricated on any foundry and mixed with other hardware products of the integrator's choice. On August 20, 2019, IBM announced that the OpenPOWER Foundation would become part of the Linux Foundation.[10] IBM is using the word "open" to describe this project in three ways:[7] The OpenPOWER Foundation also releases documentation on the Power Architecture.[11] Some relevant documents are the Power ISA and the Power Architecture Platform Reference. IBM is looking to offer the POWER8 chip technology and other future iterations under the OpenPOWER initiative,[6] but it is also making previous designs available for licensing.[12] Partners are required to contribute intellectual property to the OpenPOWER Foundation to be able to gain high-level status. The POWER8 processor architecture incorporates facilities to integrate it more easily into custom designs. The generic memory controllers are designed to evolve with future technologies, and the new CAPI (Coherent Accelerator Processor Interface) expansion bus is built to integrate easily with external coprocessors like GPUs, ASICs and FPGAs. Nvidia is contributing its fast interconnect technology, NVLink, which enables tight coupling of Nvidia's Pascal-based graphics processors to future POWER processors.[13] In August 2019, IBM released the tiny Microwatt processor core, implementing the Power ISA v.3.0 and intended as a reference design for OpenPOWER. It is entirely open source and published on GitHub.[14] Later, Chiselwatt joined in as a second open-source implementation.[15] In June 2020, IBM released the high-performance A2I core under a similar open-source license,[16] and followed up with the A2O core in September 2020.[17] Libre-SOC is the third implementation of the Power ISA v.3.0, built from scratch, and the first libre/open Power ISA core outside of IBM. The OpenPOWER initiative includes firmware, the KVM hypervisor, and a little-endian Linux operating system.[6] The foundation has a site on GitHub for the software it is releasing as open source. As of July 2014, it has released firmware to boot Linux.[18] SUSE included support for POWER8 in their enterprise Linux distribution SUSE Linux Enterprise Server version 12 (released 27 October 2014).[19] Canonical Ltd. supports the architecture in Ubuntu Server from version 16.04 LTS.[20] FreeBSD has also been reported to have preliminary support for the architecture.[21][22] Collabora Online is an enterprise-ready edition of LibreOffice with web-based office suite real-time collaboration; support for the OpenPOWER ppc64le architecture was announced in October 2022.[23] It comes with Ubuntu 20.04 packages and Docker images, and is delivered as a part of Nextcloud Enterprise, which specialises in sharing files, writing emails, conducting chats and video conferences.
Google,Tyan,Nvidia, andMellanoxare founding members of the OpenPOWER Foundation.[6]Nvidia is looking to merge its graphics cores and Mellanox to integrate its high performance interconnects with Power cores. Tyan is said to be working on servers using POWER8[24]and Google sees usingPower processorsin its data centers as a future possibility.[25]Alteraannounced support for OpenPOWER in November 2013 with theirFPGAofferings andOpenCLsoftware.[26] On January 19, 2014, the Suzhou PowerCore Technology Company and the Research Institute of Jiangsu Industrial Technology announced that they will join the OpenPOWER Foundation and license POWER8 technologies to promote and help build systems around and design custom made processors for use inbig dataandcloud computingapplications.[27][28]On February 12, 2014,Samsung Electronicsjoined.[29][30]As of March 2014, additional members areAltera,Fusion-io,Hynix,Micron, Servergy, andXilinx. As of April 2014,Canonical, Chuanghe Mobile,Emulex,Hitachi,Inspur,Jülich Research Centre,Oregon State University, Teamsun, Unisource Technology Inc, andZTEare listed as members at various levels.[31]As of December 2014,Rackspace,Avnet,Lawrence Livermore National Laboratory,Sandia National Laboratories,Tsinghua University,Nallatech,Bull,QLogic, and Bloombase have joined, totaling about 80 members.[32] At the first annual OpenPOWER Summit 2015, the organization announced that there were 113 members, includingWistron,Cirrascale, andPMC-Sierra. As of late 2016, the OpenPOWER foundation has more than 250 members. As of July 2020, the OpenPOWER Foundation reported that it had 350-plus members.[1]
https://en.wikipedia.org/wiki/OpenPOWER_Foundation
Power ISA is a reduced instruction set computer (RISC) instruction set architecture (ISA) currently developed by the OpenPOWER Foundation, led by IBM. It was originally developed by IBM and the now-defunct Power.org industry group. Power ISA is an evolution of the PowerPC ISA, created by the merger of the core PowerPC ISA and the optional Book E for embedded applications. The merger of these two components in 2006 was led by Power.org founders IBM and Freescale Semiconductor. Prior to version 3.0, the ISA is divided into several categories. Processors implement a set of these categories as required for their task. Different classes of processors are required to implement certain categories; for example, a server-class processor includes the categories Base, Server, Floating-Point, 64-Bit, etc. All processors implement the Base category. Power ISA is a RISC load/store architecture. It has multiple sets of registers. Instructions up to version 3.0 have a length of 32 bits, with the exception of the VLE (variable-length encoding) subset that provides for higher code density for low-end embedded applications, and version 3.1, which introduced prefixing to create 64-bit instructions. Most instructions are triadic, i.e. have two source operands and one destination. Single- and double-precision IEEE 754 compliant floating-point operations are supported, including additional fused multiply-add (FMA) and decimal floating-point instructions. There are provisions for single instruction, multiple data (SIMD) operations on integer and floating-point data on up to 16 elements in one instruction. Power ISA has support for Harvard cache, i.e. split data and instruction caches, as well as for unified caches. Memory operations are strictly load/store, but allow for out-of-order execution. There is also support for both big- and little-endian addressing, with separate categories for moded and per-page endianness, and support for both 32-bit and 64-bit addressing. Different modes of operation include user, supervisor and hypervisor. The Power ISA specification is divided into five parts, called "books". New in version 3 of the Power ISA is that an implementation need not support the entire specification to be compliant. The sprawl of instructions and technologies has made the complete specification unwieldy, so the OpenPOWER Foundation has decided to enable tiered compliancy. These levels include optional and mandatory requirements; contrary to a common misunderstanding, nothing stops an implementation from being compliant at a lower level while also offering selected functions from higher levels and custom extensions. It is, however, recommended that an option be provided to disable any added functions beyond the design's declared subset level. A design must be compliant at its declared subset level to make use of the Foundation's protection regarding use of intellectual property, be it patents or trademarks. This is explained in the OpenPOWER EULA.[1] A compliant design must:[2] If an extension is general-purpose enough, the OpenPOWER Foundation asks that implementors submit it as a Request for Comments (RFC) to the OpenPOWER ISA Workgroup. Note that it is not strictly necessary to join the OpenPOWER Foundation to submit RFCs.[3] The EABI specifications predate the announcement and creation of the Compliancy subsets.
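As an illustration of the fixed 32-bit encoding and the triadic register form described above, the following C sketch decodes a Power instruction word. The primary opcode occupies the six most-significant bits; the example word 0x7C221A14 is the standard X-form encoding of add r1,r2,r3 (primary opcode 31), though the field extraction here is schematic rather than a full disassembler.

```c
#include <stdint.h>
#include <stdio.h>

/* Power ISA instructions (before v3.1 prefixing) are 32 bits, with the
 * primary opcode in the six most-significant bits (IBM bits 0-5). This
 * sketch pulls out the primary opcode and the three register fields of
 * a typical triadic instruction (two sources, one destination) in the
 * common RT,RA,RB layout. */
static unsigned field(uint32_t insn, int shift, int bits) {
    return (insn >> shift) & ((1u << bits) - 1);
}

int main(void) {
    uint32_t insn = 0x7C221A14;   /* encoding of "add r1,r2,r3" */
    printf("primary opcode: %u\n", field(insn, 26, 6)); /* prints 31 */
    printf("RT=r%u RA=r%u RB=r%u\n",
           field(insn, 21, 5), field(insn, 16, 5), field(insn, 11, 5));
    return 0;
}
```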
Regarding the Linux Compliancy subset having VSX (SIMD) optional: in 2003–4, 64-bit EABI v1.9 made SIMD optional,[4]but in July 2015, to improve performance for IBM POWER9 systems, SIMD was made mandatory in EABI v2.0.[5]This discrepancy between SIMD being optional in the Linux Compliancy level but mandatory in EABI v2.0 cannot be rectified without considerable effort: backwards incompatibility forLinux distributionsis not a viable option. At present this leaves new OpenPOWER implementors wishing to run standard Linux distributions having to implement a massive 962 instructions. By contrast, RISC-V RV64GC, the minimum to run Linux, requires only 165.[6] The specification for Power ISA v.2.03[7]is based on the former PowerPC ISA v.2.02[8]inPOWER5+ and the Book E[9]extension of thePowerPCspecification. The Book I included five new chapters regarding auxiliary processing units likeDSPsand theAltiVecextension. The specification for Power ISA v.2.04[10]was finalized in June 2007. It is based on Power ISA v.2.03 and includes changes primarily to theBook III-Spart regardingvirtualization,hypervisorfunctions,logical partitioningandvirtual pagehandling. The specification for Power ISA v.2.05[11]was released in December 2007. It is based on Power ISA v.2.04 and includes changes primarily toBook IandBook III-S, including significant enhancements such as decimal arithmetic (Category: Decimal Floating-Point inBook I) and server hypervisor improvements. The specification for Power ISA v.2.06[12]was released in February 2009, and revised in July 2010.[13]It is based on Power ISA v.2.05 and includes extensions for the POWER7 processor ande500-mc core. One significant new feature is vector-scalar floating-point instructions (VSX).[14]Book III-Ealso includes significant enhancement for the embedded specification regarding hypervisor and virtualisation on single and multi core implementations. The spec was revised in November 2010 to the Power ISA v.2.06 revision B spec, enhancing virtualization features.[13][15] The specification for Power ISA v.2.07[16]was released in May 2013. It is based on Power ISA v.2.06 and includes major enhancements tological partition functions,transactional memory, expanded performance monitoring, new storage control features, additions to the VMX and VSX vector facilities (VSX-2), along withAES[16]: 257[17]andGalois Counter Mode(GCM), SHA-224, SHA-256,[16]: 258SHA-384 and SHA-512[16]: 258(SHA-2) cryptographic extensions andcyclic redundancy check(CRC)algorithms.[18] The spec was revised in April 2015 to the Power ISA v.2.07 B spec.[19][20] The specification for Power ISA v.3.0[21][22]was released in November 2015. It is the first to come out after the founding of the OpenPOWER Foundation and includes enhancements for a broad spectrum of workloads and removes the server and embedded categories while retaining backwards compatibility and adds support for VSX-3 instructions. New functions include 128-bit quad-precision floating-point operations, arandom number generator, hardware-assistedgarbage collectionand hardware-enforced trusted computing. The spec was revised in March 2017 to the Power ISA v.3.0 B spec,[19][23]and revised again to v3.0C in May 2020.[19][24][25]One major change from v3.0 to v3.0B is the removal of support for hardware assisted garbage collection. The key difference between v3.0B and v3.0C is that the Compliancy Levels listed in v3.1 were also added to v3.0C. The specification for Power ISA v.3.1[19][27]was released in May 2020. 
It mainly adds support for new functions introduced in Power10, and it also introduces the notion of optionality to the Power ISA specification. Instructions can now be eight bytes long ("prefixed instructions"), compared to the usual four-byte "word instructions". Many new SIMD and VSX functions are also added. VSX and the SVP64 extension provide hardware support for 16-bit half-precision floats.[28][29] One key benefit of the new 64-bit prefixed instructions is the extension of immediates in branches to 34 bits. The spec was revised in September 2021 to the Power ISA v.3.1B spec.[19][30] The spec was revised in May 2024 to the Power ISA v.3.1C spec.[19][31]
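A back-of-the-envelope C sketch of the widened immediates: assuming, as with the prefixed paddi instruction, that 18 immediate bits ride in the prefix word and 16 in the suffix word, the concatenated, sign-extended value spans 34 bits, i.e. roughly plus or minus 8 GiB of reach. Field names and extraction are schematic, not a decoder.

```c
#include <stdint.h>
#include <stdio.h>

/* A 64-bit "prefixed" instruction widens immediates by carrying part of
 * the value in the 32-bit prefix word and part in the 32-bit suffix
 * word; concatenating an assumed 18-bit high field with a 16-bit low
 * field yields a 34-bit signed immediate. */
int64_t imm34(uint32_t hi18, uint32_t lo16) {
    uint64_t raw = ((uint64_t)(hi18 & 0x3FFFF) << 16) | (lo16 & 0xFFFF);
    if (raw & (1ull << 33))              /* sign-extend from bit 33 */
        raw |= ~((1ull << 34) - 1);
    return (int64_t)raw;
}

int main(void) {
    /* Range check: +2^33 - 1 and -2^33, about +/- 8 GiB. */
    printf("max positive: %lld\n", (long long)imm34(0x1FFFF, 0xFFFF));
    printf("min negative: %lld\n", (long long)imm34(0x20000, 0x0000));
    return 0;
}
```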
https://en.wikipedia.org/wiki/Power_ISA
Power Architecture Platform Reference (PAPR) is an initiative from Power.org to make a new open computing platform based on Power ISA processors. It follows two previous attempts made in the 1990s, PReP and CHRP. The PAPR specification provides the foundation for development of standard server computers. Various operating systems like Linux and IBM AIX rely on the PAPR interface to run on Power-based hardware. PAPR is Power.org's move toward what IBM did originally with PReP, in that it defines a common hardware definition and software/firmware platform under a set of requirements. In practice, the PAPR is an extension to the Open Firmware specification. Since 2013, extensions have been made by the OpenPOWER Foundation, which released a slightly reduced public version of the PAPR standard for running Linux on Power hardware (called LoPAPR).[1] In 2020, LoPAPR was renamed Linux on Power Architecture Reference (LoPAR) with the release of a new version.[2] In July 2020, the document sources of LoPAR[3] were released under the terms of the Apache License 2.0 in the OpenPOWER Foundation GitHub account, and are accepting pull requests from the community. Wind River led the Power.org sub-committee working on an embedded specification known as ePAPR,[4] which was ratified in July 2008. In October 2011, an updated specification was released, ePAPR v1.1, to clarify and add a new chapter on virtualization. Apart from basic concepts like using a device tree, the ePAPR specification has nothing in common with the variant for servers; for example, it defines a completely different set of hypercalls.
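The device tree mentioned above is commonly handed to the operating system as a flattened device tree (FDT) blob. A minimal C sketch of validating such a blob follows; the 0xd00dfeed magic number and big-endian header fields come from the FDT format that ePAPR adopts, while the truncated struct and the fabricated test blob in main are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* First fields of a flattened device tree (FDT) header; all values are
 * stored big-endian. Later fields are omitted in this sketch. */
struct fdt_header {
    uint32_t magic;          /* 0xd00dfeed */
    uint32_t totalsize;      /* size of the whole blob in bytes */
    uint32_t off_dt_struct;  /* offset of the structure block */
    uint32_t off_dt_strings; /* offset of the strings block */
};

static uint32_t be32(uint32_t v) {       /* big-endian field -> host order */
    const uint8_t *b = (const uint8_t *)&v;
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
         | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

static int fdt_check(const struct fdt_header *h) {
    if (be32(h->magic) != 0xd00dfeedu)
        return -1;                       /* not an FDT blob */
    printf("FDT blob, %u bytes\n", be32(h->totalsize));
    return 0;
}

int main(void) {
    struct fdt_header h = {0};
    uint8_t *p = (uint8_t *)&h;          /* fabricate a tiny test header */
    p[0] = 0xd0; p[1] = 0x0d; p[2] = 0xfe; p[3] = 0xed; /* magic */
    p[7] = 0x40;                         /* totalsize = 64, big-endian */
    return fdt_check(&h);
}
```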
https://en.wikipedia.org/wiki/Power_Architecture_Platform_Reference
The PowerOpen Environment (POE), created in 1991 from the AIM alliance, is an open standard for running a Unix-based operating system on the PowerPC computer architecture. The AIM alliance was announced on October 2, 1991, yielding the historic first technology partnership between Apple and IBM. One of its many lofty goals was to somehow eventually merge Apple's user-friendly graphical interface and desktop applications market with IBM's highly scalable Unix server market, allowing the two companies to enter what Apple believed to be an emerging "general desktop open systems market". This was touched upon by Apple's November 1991 announcement of A/UX 3.0. The upcoming A/UX 4.0 (never actually released) would target the PowerOpen Environment ABI, merge features of IBM's AIX variant of Unix into A/UX, and use the OSF/1 kernel from the Open Software Foundation. A/UX 3.0 would serve as an "important migration path" to this new system, making Unix and System 7 applications compliant with PowerOpen.[1] A/UX 4.0 and AIX were intended to run on a variety of IBM's POWER and PowerPC hardware, and on Apple's PowerPC-based hardware.[2] One contemporary account put it this way: "PowerOpen will be the operating system for PowerPC Mac owners who need to run Unix-based applications. ... Apple agreed to provide IBM with the technology needed to allow standard Macintosh applications—starting with the Finder—to run under the new AIX, much as they do under A/UX today. Apple will apply the PowerOpen label to the new version of A/UX that results from the deal; IBM will do likewise with the new AIX." The need for the POE diminished due to the increasing availability of Unix-like operating systems on PowerPC, such as Linux distributions and AIX. The PowerOpen Association was formed to promote the POE and test for conformance, and disbanded in 1995. That year, other AIM elements disbanded as well. The POE contains API and ABI specifications.[4] The presence of the ABI specification in the POE distinguishes it from other open systems such as POSIX and XPG4, since it allows platform-independent binary compatibility, which is otherwise typically limited to particular hardware. Derived from AIX, the POE conforms to industry open standards including POSIX, XPG4, and Motif. The POE is hardware bus independent. System implementations can range from laptop computers to supercomputers. It requires a multi-user, multitasking operating system. It provides networking support, an X Window System extension, a Macintosh Application Services extension, and Motif. Macintosh Application Services (MAS) was an Apple software product intended to run existing Mac applications alongside other applications in the X environment, including those written for the 680x0 architecture.[5] Also supporting Mac applications that had been ported to PowerPC, MAS was described as "Apple's key contribution to the PowerOpen alliance" and was demonstrated running Mac applications including a QuickTime movie on three different workstation platforms. It was an optional component in the PowerOpen architecture.[6]
https://en.wikipedia.org/wiki/PowerOpen_Environment
PowerPC Reference Platform (PReP) was a standard system architecture for PowerPC-based computer systems (as well as a reference implementation) developed at the same time as the PowerPC processor architecture. Published by IBM in 1994, it allowed hardware vendors to build a machine that could run various operating systems, including Windows NT, OS/2, Solaris, Taligent and AIX. One of the stated goals of the PReP specification was to leverage standard PC hardware. Apple, wishing to seamlessly transition its Macintosh computers to PowerPC, found this to be particularly problematic. As it appeared no one was particularly happy with PReP, a new standard, the Common Hardware Reference Platform (CHRP), was developed and published in late 1995, incorporating the elements of both PReP and the Power Macintosh architecture. Key to CHRP was the requirement for Open Firmware (also required in PReP-compliant systems delivered after June 1, 1995), which gave vendors greatly improved support during the boot process, allowing the hardware to be far more varied. PReP systems were never popular.[clarification needed] Finding current, readily available operating systems for old PReP hardware can be difficult. Debian and NetBSD still maintain their respective ports to this architecture, although developer and user activity is extremely low.[clarification needed] The RTEMS real-time operating system provides a board support package for PReP which can be run utilizing the QEMU PReP emulator. This provides a convenient development environment for PowerPC-based real-time, embedded systems. Power.org has a Power Architecture Platform Reference (PAPR) that provides the foundation for development of Power ISA-based computers running the Linux operating system. PAPR was released in the fourth quarter of 2006.
https://en.wikipedia.org/wiki/PowerPC_Reference_Platform
The following is a list of PowerPC processors. 32-bit and 64-bit PowerPC processors have been a favorite of embedded computer designers. To keep costs low on high-volume competitive products, the CPU core is usually bundled into a system-on-chip (SoC) integrated circuit. SoCs contain the processor core, cache and the processor's local data on-chip, along with clocking, timers, memory (SDRAM), peripheral (network, serial I/O), and bus (PCI, PCI-X, ROM/Flash bus, I2C) controllers. IBM also offers an open bus architecture (called CoreConnect) to facilitate connection of the processor core to memory and peripherals in a SoC design. IBM and Motorola have competed along parallel development lines in overlapping markets. A later development was the Book E PowerPC Specification, implemented by both IBM and Freescale Semiconductor, which defines embedded extensions to the PowerPC programming model. A northbridge or host bridge for a PowerPC CPU is an integrated circuit (IC) that interfaces the CPU with memory and with the southbridge IC. Some northbridges also provide an interface for an Accelerated Graphics Port (AGP), Peripheral Component Interconnect (PCI), PCI-X, PCI Express, or HyperTransport bus. A northbridge designed specifically for the PowerPC must be used; a northbridge intended for an Intel or AMD x86 CPU cannot be paired with a PowerPC CPU. However, certain types of x86 southbridge can be used on PowerPC-based motherboards, for example the VIA 686B and AMD Geode CS5536. Apple used its own northbridges, custom ASICs manufactured by VLSI (later Philips), Texas Instruments and Lucent (later Agere Systems). A list of northbridges for PowerPC appears in the source article.
https://en.wikipedia.org/wiki/List_of_PowerPC_processors
There are several ways in whichgame consolescan be categorized. One is by itsconsole generation, and another is by itscomputer architecture. Game consoles have long used specialized and customizedcomputerhardwarewith the base in some standardizedprocessorinstruction set architecture. In this case, it isPowerPCandPower ISA, processor architectures initially developed in the early 1990s bythe AIM alliance, i.e.Apple,IBM, andMotorola. Even though these consoles share much in regard toinstruction set architecture, game consoles are still highly specialized computers so it is not common for games to be readilyportableor compatible between devices. OnlyNintendohas kept a level of portability between their consoles, and even there it is not universal. The first devices used standard processors, but later consoles usedbespokeprocessors with special features, primarily developed by or in cooperation withIBMfor the explicit purpose of being in a game console. In this regard, these computers can be considered "embedded". All three major consoles of theseventh generationwere PowerPC based. As of 2019, no PowerPC-based game consoles are currently in production. The most recent release, Nintendo'sWii U, has since been discontinued and succeeded by theNintendo Switch(which uses aNvidiaTegraARMprocessor). TheWii Mini, the last PowerPC-based game console to remain in production, was discontinued in 2017.[citation needed]
https://en.wikipedia.org/wiki/List_of_PowerPC-based_game_consoles
[Flattened table residue: the source's table of AMD processors listed, for each microarchitecture generation, the instruction-set and platform features added or removed (3DNow!, SSE through SSE4a, AVX/AVX2, PowerNow!/Cool'n'Quiet, AMD64, NX bit, AMD-V, AES, FMA, BMI, SHA, and others), base and boost clock ranges, fabrication processes (GloFo 14LP/12LP, TSMC N7/N6/N5), and socket and memory support (Socket AM4, dual-channel DDR4).][3]
https://en.wikipedia.org/wiki/Table_of_AMD_processors
This is an overview of chipsets sold under the AMD brand: chipsets manufactured before May 2004 by the company itself, before the adoption of the open platform approach, as well as chipsets manufactured by ATI Technologies after October 2006, following the completion of the ATI acquisition. [Tables of early AMD chipsets, covering supported CPU buses (Cyrix 6x86 FSB, Slot A/Socket A, Alpha 21264 FSB), paired southbridges (VIA VT82C686A/B, AMD-8132) and HyperTransport 1.x links, appear in the source.] A-Link Express and A-Link Express II are essentially PCIe 1.1 x4 lanes. See Comparison of ATI Chipsets for the comparison of chipsets sold under the ATI brand for AMD processors, before AMD's acquisition of ATI. [Further table residue listed HyperTransport 2.0 and 3.0 chipsets, the Puma platform with PowerXpress, mobile chipsets for the Tigris, Nile and Danube platforms, and AM3+ socket support.] A-Link Express III is essentially PCIe 2.0 x4 lanes. Parallel ATA, also known as Enhanced IDE, supports up to 2 devices per channel. [A table of Fusion Controller Hubs for AMD APU models from 2011 until 2016 appears in the source.] AMD marketed their chipsets as Fusion Controller Hubs (FCH), implementing this across their product range in 2017 alongside the release of the Zen architecture. Before then, only APUs used FCHs, while their other CPUs still used a northbridge and southbridge. The Fusion Controller Hubs are similar in function to Intel's Platform Controller Hub. AMD's FCH has been discontinued since the release of the Carrizo series of CPUs, as it has been integrated into the same die as the rest of the CPU.[9] However, since the release of the Zen architecture, there is still a component called a chipset which only handles relatively low-speed I/O such as USB and SATA ports and connects to the CPU with a PCIe connection. In these systems all PCIe connections are routed directly to the CPU.[10] The UMI interface previously used by AMD for communicating with the FCH is replaced with a PCIe connection. Technically the processor can operate without a chipset; it only continues to be present for interfacing with low-speed I/O. AMD server CPUs instead adopt a self-contained system-on-chip design which does not require a chipset.[11][12][13][14] There are currently 3 generations of AM4-based chipsets on the market. Models beginning with the numeral "3" are representatives of the first generation, those with "4" the second generation, etc. In addition to their traditional chipsets, AMD offers chipsets with "processor-direct access", exclusively through OEM partners.[18] Enthusiast publication igor'sLAB obtained leaked documents about an AMD "Knoll Activator" that enables "activating... processor I/O and processor features in the absence of an alternative AMD chipset." It is concluded that motherboards with the Knoll Activator would be built with I/O from the processor and low-cost I/O chips.[19] Individual chipset models differ in the number of PCI Express lanes, USB ports, and SATA connectors, as well as supported technologies; the source's table shows these differences.[20][21] The 300 series, 400 series, and the B550 chipsets are designed in collaboration with ASMedia, and the family is codenamed Promontory.[41] The X570 is designed by AMD with IP licensed from ASMedia and other companies and is codenamed Bixby.[42] Network interface controller, Wi-Fi, and Bluetooth are provided by external chips connected to the chipset through PCIe or USB.
All 300 series chipsets are made using 55 nm lithography.[43] The X570 chipset is a repurposed Matisse/Vermeer I/O die made using a 14 nm process.[44]

The X399 chipset supports both 1st and 2nd generation AMD Ryzen Threadripper processors.[45] The TRX40 chipset supports 3rd generation AMD Ryzen Threadripper (3960X to 3990X) processors.[48] Although the X399, TRX40 and WRX80 motherboards' CPU sockets use the same number of pins, the sockets are incompatible with each other due to ID pins and no-connects of some pins.[47] Twelve TRX40 motherboards were released at launch in November 2019. The TRX40 chipset does not support the HD Audio interface on its own, so motherboard vendors must include a USB audio device or a PCIe audio device on TRX40 motherboards to integrate audio codecs.[47]

The WRX80 chipset supports 3rd (3900WX) and 4th generation (5900WX) AMD Ryzen Threadripper Pro processors.[49] Three WRX80 motherboards were released at launch in March 2021. Like TRX40, the WRX80 chipset does not support the HD Audio interface on its own, so motherboard vendors must include a USB audio device or a PCIe audio device on WRX80 motherboards to integrate audio codecs.

On Socket AM5, AMD uses a single Promontory 21 chipset design for all configurations that include a chipset. A single Promontory 21 chip provides four SATA III ports and twelve PCIe 4.0 lanes. Four lanes are reserved for the chipset uplink to the CPU, while another four are used to connect to a second Promontory 21 chip in a daisy-chained topology for the X670, X670E and X870E chipsets[50] (see the lane-budget sketch below).

The sTR5 socket has two chipset options available, TRX50 and WRX90; HD audio support is provided by the CPU, rather than by the chipset.[60]
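As a worked illustration of the daisy-chained Promontory 21 topology described above, the following sketch tallies the downstream budget left once uplink and inter-chip lanes are reserved. It uses only the figures given in the text; the chipset names in the comments are assumptions, and real boards additionally draw lanes directly from the CPU, which this sketch ignores:

# Illustrative lane budget per the text above: each Promontory 21 chip
# has 12 PCIe 4.0 lanes and 4 SATA III ports; 4 lanes form the CPU
# uplink, and in dual-chip configs 4 more on each chip link the pair.
LANES_PER_CHIP = 12
SATA_PER_CHIP = 4
UPLINK_LANES = 4
DAISY_CHAIN_LANES = 4

def downstream_budget(chips: int) -> dict:
    lanes = chips * LANES_PER_CHIP - UPLINK_LANES      # one uplink to the CPU
    if chips == 2:
        lanes -= 2 * DAISY_CHAIN_LANES                 # both ends of the inter-chip link
    return {"pcie4_lanes": lanes, "sata_ports": chips * SATA_PER_CHIP}

print("single chip (e.g. B650):", downstream_budget(1))   # 8 lanes, 4 SATA
print("dual chip (e.g. X670): ", downstream_budget(2))    # 12 lanes, 8 SATA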
https://en.wikipedia.org/wiki/List_of_AMD_chipsets
APU features table

Launched in 2003, the initial platform for mobile AMD processors consisted of processors supporting, depending on the model, combinations of MMX, SSE, SSE2, SSE3, Enhanced 3DNow!, the NX bit, AMD64 (AMD's x86-64 implementation), and PowerNow!.

Introduced in 2006, the Kite platform consisted of processors supporting MMX, SSE, SSE2, SSE3, Enhanced 3DNow!, NX bit, AMD64, and PowerNow!, with AMD-V on most models.

AMD used Kite Refresh as the codename for the second-generation AMD mobile platform, introduced in February 2007. Its processors support MMX, SSE, SSE2, SSE3, Enhanced 3DNow!, NX bit, and AMD64, with PowerNow! and AMD-V on most models.

The Puma platform, introduced in 2008 with June 2008 availability as the third-generation AMD mobile platform, consisted of processors supporting MMX, SSE, SSE2, SSE3, Enhanced 3DNow!, NX bit, AMD64, and PowerNow!, with AMD-V on most models.

The Yukon platform was introduced on January 8, 2009, with expected April availability, as the first AMD Ultrathin Platform targeting the ultra-portable notebook market. Its processors support MMX, SSE, SSE2, SSE3, Enhanced 3DNow!, NX bit, AMD64 (AMD's x86-64 implementation), and PowerNow!, with AMD-V on some models.[4]

The Congo platform[5] was introduced in September 2009 as the second AMD Ultrathin Platform targeting the ultra-portable notebook market. Its processors support MMX, SSE, SSE2, SSE3, Enhanced 3DNow!, NX bit, AMD64, and AMD-V, with PowerNow! on some models.
The Tigris platform,[6] introduced in September 2009 for the AMD Mainstream Notebook Platform, consists of single- and dual-core mobile processors supporting MMX, SSE, SSE2, SSE3, SSE4a, ABM, Enhanced 3DNow!, NX bit, AMD64, PowerNow!, and AMD-V.

The Nile platform,[7][8] introduced on May 12, 2010, as the third AMD Ultrathin Platform, consists of single- and dual-core mobile processors.

The Danube platform,[7][10] introduced on May 12, 2010, for the AMD Mainstream Notebook Platform, consists of single-, dual-, triple-, and quad-core mobile processors.

The AMD Ultrathin Platform introduced on January 5, 2011, as the fourth AMD mobile platform targeting the ultra-portable notebook market, features the 40 nm AMD Ontario (a 9-watt AMD APU for netbooks and small-form-factor desktops and devices) and Zacate (an 18-watt TDP APU for ultrathin, mainstream, and value notebooks as well as desktops and all-in-ones) APUs. Both low-power APU versions feature two Bobcat x86 cores and fully support DirectX 11, DirectCompute (Microsoft's programming interface for GPU computing) and OpenCL (a cross-platform programming interface standard for multi-core x86 and accelerated GPU computing). Both also include UVD dedicated hardware acceleration for HD video, including 1080p resolutions.[11][12][13][14]

The Sabine platform[15] was introduced on June 30, 2011, for the AMD Mainstream Notebook Platform.

A further AMD Ultrathin Platform, introduced on June 6, 2012, and targeting the ultra-portable notebook market, features the 40 nm Zacate (an 18-watt TDP APU for ultrathin, mainstream, and value notebooks as well as desktops and all-in-ones) APUs.

The Comal platform, introduced on May 15, 2012, for the AMD Mainstream Notebook Platform, spans Elite Mobility and Mainstream APU lines.

Note 1: In the accompanying tables, GPU configurations are given as Unified shaders : Texture mapping units : Render output units.

Common features are tabulated per family for the Ryzen 3000, 5000, 6000, 7020, 7030, 7035, and 7040 notebook APUs and the Ryzen 7045 notebook CPUs.
https://en.wikipedia.org/wiki/List_of_AMD_mobile_processors
Athlon is a family of CPUs designed by AMD, targeted mostly at the desktop market. The plain "Athlon" name was largely unused after 2001, when AMD started naming its processors Athlon XP, but in 2008 AMD began applying it to single-core 64-bit processors from the AMD Athlon X2 and AMD Phenom product lines. Later the name was also used for some APUs.

APU features table

[Tables: ordering part numbers (A0750APT3B, A0900DMT3B, A0950APT3B, A0950DMT3B, A1000DMT3B, A1000DMT3C, A1100AMT3B, A1300APS3B), cores/threads, clock rates (GHz/MHz), memory support, TDP (W), cache (MB), and graphics (Vega) configurations per model.]

Common features:

Note 1: Athlons use a double data rate (DDR) front-side bus (EV-6), meaning that the actual data transfer rate of the bus is twice its physical clock rate. The FSB's true data rate, 200 or 266 MT/s, is used in the tables, and the corresponding physical clock rates are 100 and 133 MHz, respectively. The multipliers in the tables apply to the physical clock rate, not the data transfer rate; a worked example is sketched below.
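As a minimal worked example of Note 1, with illustrative values (the 10.5× multiplier and 133 MHz clock are chosen for demonstration, not taken from the tables):

# Worked example for Note 1 (illustrative values): the EV-6 bus is DDR,
# so data rate = 2 x physical clock, while the CPU multiplier applies
# to the physical clock, not the data rate.
physical_fsb_mhz = 133      # physical FSB clock behind the "266 MT/s" bus
multiplier = 10.5           # example CPU clock multiplier

cpu_clock_mhz = multiplier * physical_fsb_mhz   # 1396.5, i.e. a ~1400 MHz part
fsb_data_rate = 2 * physical_fsb_mhz            # 266 MT/s effective data rate

print(f"CPU clock: {cpu_clock_mhz:.0f} MHz")
print(f"FSB data rate: {fsb_data_rate} MT/s")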
https://en.wikipedia.org/wiki/List_of_AMD_Athlon_processors
The Athlon XP microprocessor from AMD is a seventh-generation 32-bit CPU targeted at the consumer market.

[Tables: ordering part numbers per model, e.g. AHL1200DHT3B.]
https://en.wikipedia.org/wiki/List_of_AMD_Athlon_XP_processors
The Athlon 64 microprocessor from Advanced Micro Devices (AMD) is an eighth-generation central processing unit (CPU). Athlon 64 was targeted at the consumer market. Some features of Athlon 64 processors include:[1]

[Tables: code name (steppings, process), ordering part number (e.g. ADA3400DAA4BY, E6 stepping), and launch price (USD) per model.]
https://en.wikipedia.org/wiki/List_of_AMD_Athlon_64_processors
The AMD Athlon X2 processor family consists of processors based on both the Athlon 64 X2 and the Phenom processor families. The original Athlon X2 processors were low-power Athlon 64 X2 Brisbane processors, while newer processors released in Q2 2008 are based on the K10 Kuma processor.
https://en.wikipedia.org/wiki/List_of_AMD_Athlon_X2_processors
The Sempron is a name used for AMD's low-end CPUs, replacing the Duron processor. The name was introduced in 2004, and processors with this name continued to be available for the FM2/FM2+ socket in 2015.

[Tables: ordering part numbers per model, e.g. SMS3100BQX3LF.]
https://en.wikipedia.org/wiki/List_of_AMD_Sempron_processors
Turion 64 is a family of CPUs designed by AMD for the mobile computing market.
https://en.wikipedia.org/wiki/List_of_AMD_Turion_processors
Opteron is a central processing unit (CPU) family within the AMD64 line. Designed by Advanced Micro Devices (AMD) for the server market, Opteron competed with Intel's Xeon. The Opteron family is succeeded by the Zen-based Epyc, and by the Ryzen Threadripper and Threadripper Pro series.

For Socket 940 and Socket 939 Opterons, each chip has a three-digit model number, in the form Opteron XYY. For Socket F and Socket AM2 Opterons, each chip has a four-digit model number, in the form Opteron XZYY. For all Opterons, the first digit (the X) specifies the number of CPUs on the target machine: 1 for uniprocessor systems, 2 for up to two-way systems, and 8 for up to eight-way systems. For Socket F and Socket AM2 Opterons, the second digit (the Z) represents the processor generation; presently, only 2 (dual-core, DDR2), 3 (quad-core) and 4 (six-core) are used. For all Opterons, the last two digits in the model number (the YY) indicate the clock rate (frequency) of a CPU, a higher number indicating a higher clock rate. This speed indication is comparable between processors of the same generation only if they have the same number of cores; single-core and dual-core parts use different indications, despite sometimes having the same clock rate.

Model number methodology for the AMD Opteron 4000 and 6000 Series processors: these processors are identified by a four-digit model number, ZYXX, where Z denotes the product series, Y denotes the series generation, and XX communicates a change in product specifications within the series and is not a relative measure of performance. The suffix HE or EE denotes a high-efficiency or energy-efficiency model with a lower thermal design power (TDP) than a standard Opteron; the suffix SE denotes a top-of-the-line model with a higher TDP than a standard Opteron. A decoding sketch follows below.

APU features table

[Tables: model number, cores, clock rates (GHz), memory support, TDP (W), cache (MB), launch date, and launch price (USD) per model.]

The AMD Opteron A1100 is an enterprise-class ARM Cortex-A57-based SoC.
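The classic X[Z]YY scheme described above lends itself to a small decoder. The following is a hypothetical sketch (the function and field names are not AMD nomenclature), covering the three-digit Socket 940/939 form and the four-digit Socket F/AM2 form with the HE/EE/SE suffixes:

# Hypothetical decoder (illustrative only) for classic Opteron model
# numbers: X[Z]YY, where X is the supported CPU count, Z (when present)
# the generation, YY the relative speed grade; HE/EE/SE adjust TDP class.
def decode_opteron(model: str) -> dict:
    digits = "".join(ch for ch in model if ch.isdigit())
    suffix = model[len(digits):] or None        # "HE", "EE", "SE", or None
    info = {
        "max_cpus": int(digits[0]),             # 1-, 2-, or 8-way target
        "speed_grade": int(digits[-2:]),        # higher = faster within a generation
        "tdp_class": {"HE": "low", "EE": "low",
                      "SE": "high", None: "standard"}[suffix],
    }
    if len(digits) == 4:                        # Socket F / Socket AM2 parts
        info["generation"] = int(digits[1])     # 2 (dual), 3 (quad), 4 (six-core)
    return info

print(decode_opteron("285"))      # Socket 940: 2-way, speed grade 85
print(decode_opteron("2352"))     # Socket F: 2-way, generation 3 (quad-core)
print(decode_opteron("8431SE"))   # 8-way, generation 4 (six-core), higher TDP

Note that the speed grade is only meaningful relative to parts of the same generation and core count, exactly as the text above cautions.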
https://en.wikipedia.org/wiki/List_of_AMD_Opteron_processors