This generational list of Intel processors attempts to present all of Intel's processors, from the 4-bit 4004 (1971) to the present high-end offerings. Concise technical data is given for each product.
An iterative refresh of Raptor Lake-S desktop processors, called the 14th generation of Intel Core, was launched on October 17, 2023.[1][2]
CPUs in bold below feature ECC memory support when paired with a motherboard based on the W680 chipset, according to each respective Intel Ark product page.
An iterative refresh of Raptor Lake-HX mobile processors, called the 14th generation of Intel Core, was launched on January 9, 2024.[3]
All processors are listed in chronological order.
First commercially available microprocessor (single-chip IC processor)
MCS-4 family:
They are ICs with CPU, RAM, ROM (or PROM or EPROM), I/O ports, timers and interrupts.
MCS-48 family:
MCS-51 family:
MCS-151 family:
MCS-251 family:
Introduced in the third quarter of 1974, these bit-slicing components used bipolar Schottky transistors. Each component implemented two bits of a processor function; packages could be interconnected to build a processor with any desired word length.
Members of the 3000 family:
Bus width 2n bits data/address (depending on the number n of slices used)
Pentium II Xeon (chronological entry)
XScale (chronological entry – non-x86 architecture)
Pentium 4 (not 4EE, 4E, 4F), Itanium, P4-based Xeon, Itanium 2 (chronological entries)
Itanium (chronological entry – new non-x86 architecture)
Itanium 2 (chronological entry – new non-x86 architecture)
Westmere
Not listed (yet) are several Broadwell-based CPU models:[20]
Note: this list does not say that all processors that match these patterns are Broadwell-based or fit into this scheme. The model numbers may have suffixes that are not shown here.
Many Skylake-based processors are not yet listed in this section: mobile i3/i5/i7 processors (U, H, and M suffixes), embedded i3/i5/i7 processors (E suffix), and certain i7-67nn/i7-68nn/i7-69nn models.[21] Skylake-based "Core X-series" processors (certain i7-78nn and i9-79nn models) can be found under current models.
Intel discontinued the use of part numbers such as 80486 in the marketing of mainstream x86-architecture processors with the introduction of the Pentium brand in 1993. However, numerical codes, in the 805xx range, continued to be assigned to these processors for internal and part numbering uses. The following is a list of such product codes in numerical order:
|
https://en.wikipedia.org/wiki/List_of_Intel_processors
|
The following is a partial list of Intel CPU microarchitectures. The list is incomplete; additional details can be found in Intel's tick–tock model, process–architecture–optimization model and Template:Intel processor roadmap.
|
https://en.wikipedia.org/wiki/List_of_Intel_CPU_microarchitectures
|
This article lists x86-compliant microprocessors sold by VIA Technologies, grouped by technical merits: cores within the same group have much in common.
|
https://en.wikipedia.org/wiki/List_of_VIA_microprocessor_cores
|
x86-compatible processors have been designed, manufactured and sold by a number of companies, including:
In the past:
Early Intel x86 CPU designs (up to the 80286) have in the past been second-sourced by the following manufacturers under licence from Intel:[24][25]
Manufacturers that have served as second sources for other x86 CPUs include:
|
https://en.wikipedia.org/wiki/List_of_x86_manufacturers
|
In a computer, an interrupt request (or IRQ) is a hardware signal sent to the processor that temporarily stops a running program and allows a special program, an interrupt handler, to run instead. Hardware interrupts are used to handle events such as receiving data from a modem or network card, key presses, or mouse movements.
Interrupt lines are often identified by an index with the format of IRQ followed by a number. For example, on the Intel 8259 family of programmable interrupt controllers (PICs) there are eight interrupt inputs commonly referred to as IRQ0 through IRQ7. In x86-based computer systems that use two of these PICs, the combined set of lines is referred to as IRQ0 through IRQ15. Technically these lines are named IR0 through IR7, and the lines on the ISA bus to which they were historically attached are named IRQ0 through IRQ15 (although historically, as the number of hardware devices increased, the total possible number of interrupts was increased by cascading requests: one IRQ number cascades to another set of numbered IRQs, handled by one or more subsequent controllers).
Newer x86 systems integrate an Advanced Programmable Interrupt Controller (APIC) that conforms to the Intel APIC Architecture. Each local APIC typically supports up to 255 IRQ lines, and each I/O APIC typically supports up to 24 IRQ lines.[1]
During the early years of personal computing, IRQ management was often a user concern. With the introduction of plug and play devices, this has been alleviated through automatic configuration.[2]
When installing and removing personal computer hardware, the system relies on interrupt requests. There are default settings configured in the system BIOS and recognized by the operating system; these defaults can be altered by advanced users. Modern plug and play technology has not only reduced the need for concern about these settings, but has also virtually eliminated manual configuration.
Early PCs using the Intel 8086/8088 processors had only a single PIC, and were therefore limited to eight interrupts. This was expanded to two PICs with the introduction of 286-based PCs.
Typically, on systems using the Intel 8259 PIC, 16 IRQs are used. IRQs 0 to 7 are managed by one Intel 8259 PIC, and IRQs 8 to 15 by a second Intel 8259 PIC. The first PIC, the master, is the only one that directly signals the CPU. The second PIC, the slave, instead signals the master on its IRQ 2 line, and the master passes the signal on to the CPU. There are therefore only 15 interrupt request lines available for hardware.
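The cascade layout described above can be modelled as a small routing function; this is only an illustrative sketch of the master/slave arrangement, not real interrupt-controller code:

```python
# Model of the cascaded dual-8259 layout: IRQs 0-7 on the master PIC,
# IRQs 8-15 on the slave, with the slave cascaded into the master's
# IRQ 2 input -- so IRQ 2 itself is unavailable to devices.

def pic_routing(irq):
    """Map a legacy IRQ number (0-15) to (controller, input line, role)."""
    if not 0 <= irq <= 15:
        raise ValueError("legacy PICs only provide IRQ 0-15")
    if irq == 2:
        return ("master", 2, "cascade input from slave PIC")
    if irq < 8:
        return ("master", irq, "device interrupt")
    return ("slave", irq - 8, "device interrupt, signalled via master IRQ 2")

# Only 15 of the 16 lines are usable by hardware, since IRQ 2 carries
# the cascade signal.
usable = [irq for irq in range(16)
          if "cascade" not in pic_routing(irq)[2]]
print(pic_routing(4))
print(pic_routing(9))
print(len(usable))
```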
On systems with an APIC and IOAPIC, typically 24 IRQs are available, and the extra 8 IRQs are used to route PCI interrupts, avoiding conflicts between dynamically configured PCI interrupts and statically configured ISA interrupts. On early APIC systems with only 16 IRQs, or with only Intel 8259 interrupt controllers, PCI interrupt lines were routed to the 16 IRQs using a PIR (PCI interrupt routing) table integrated into the BIOS. Operating systems such as Windows 95 OSR2 may use the PIR table to process PCI IRQ steering;[3][4] the PIR table was later superseded by the ACPI _PRT (PCI routing table) protocol. On systems with an APIC and MSI, typically 224 interrupts are available.[5]
The easiest way of viewing this information on Windows is to use Device Manager or System Information (msinfo32.exe). On Linux, IRQ mappings can be viewed by executing cat /proc/interrupts or using the procinfo utility.
In early IBM-compatible personal computers, an IRQ conflict was a once-common hardware error that occurred when two devices tried to use the same interrupt request (IRQ) to signal an interrupt to the programmable interrupt controller (PIC). The PIC expects interrupt requests from only one device per line; more than one device sending IRQ signals along the same line will generally cause an IRQ conflict that can freeze a computer.
For example, if a modem expansion card is added into a system and assigned to IRQ 4, which is traditionally assigned to serial port 1, it will likely cause an IRQ conflict. Initially, IRQ 7 was a common choice for a sound card, but later IRQ 5 was used when it was found that IRQ 7 would interfere with the printer port (LPT1). The serial ports are frequently disabled to free an IRQ line for another device. IRQ 2/9 is the traditional interrupt line for an MPU-401 MIDI port, but this conflicts with the ACPI system control interrupt (SCI is hardwired to IRQ 9 on Intel chipsets);[6] this means ISA MPU-401 cards with a hardwired IRQ 2/9, and MPU-401 device drivers with a hardcoded IRQ 2/9, cannot be used in interrupt-driven mode on a system with ACPI enabled.
In some conditions, two ISA devices could share the same IRQ as long as they were not used simultaneously. To solve this problem, the later PCI bus allows IRQ sharing. PCI Express does not have physical interrupt lines, and uses Message Signaled Interrupts (MSI), if available, to signal interrupts to the operating system.
|
https://en.wikipedia.org/wiki/Interrupt_request
|
Transient execution CPU vulnerabilities are vulnerabilities in which instructions, most often optimized using speculative execution, are executed temporarily by a microprocessor, without committing their results due to a misprediction or error, resulting in leaking secret data to an unauthorized party. The archetype is Spectre, and transient execution attacks like Spectre belong to the cache-attack category, one of several categories of side-channel attacks. Since January 2018 many different cache-attack vulnerabilities have been identified.
Modern computers are highly parallel devices, composed of components with very different performance characteristics. If an operation (such as a branch) cannot yet be performed because some earlier slow operation (such as a memory read) has not yet completed, a microprocessor may attempt to predict the result of the earlier operation and execute the later operation speculatively, acting as if the prediction were correct. The prediction may be based on recent behavior of the system. When the earlier, slower operation completes, the microprocessor determines whether the prediction was correct or incorrect. If it was correct, execution proceeds uninterrupted; if it was incorrect, the microprocessor rolls back the speculatively executed operations and repeats the original instruction with the real result of the slow operation. Specifically, a transient instruction[1] refers to an instruction processed in error by the processor (implicating the branch predictor in the case of Spectre) which can affect the micro-architectural state of the processor while leaving the architectural state without any trace of its execution.
In terms of the directly visible behavior of the computer it is as if the speculatively executed code "never happened". However, this speculative execution may affect the state of certain components of the microprocessor, such as the cache, and this effect may be discovered by careful monitoring of the timing of subsequent operations.
If an attacker can arrange that the speculatively executed code (which may be directly written by the attacker, or may be a suitable gadget that they have found in the targeted system) operates on secret data that they are unauthorized to access, and has a different effect on the cache for different values of the secret data, they may be able to discover the value of the secret data.
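The cache-footprint mechanism can be modelled in a few lines of Python. This is purely a simulation: set membership stands in for the access-latency measurement a real attack would perform, and all names (the victim/attacker functions, the probed value) are illustrative:

```python
# Simulated cache side channel. Speculative execution is modelled as a
# secret-dependent access that leaves a line index in a "cache" (a set);
# the attacker then recovers the secret by probing which line is cached.
# In a real attack the membership test would be a timing measurement.

def victim_speculative_access(secret, cache):
    # A transient instruction touches a probe-array line selected by the
    # secret; architecturally the result is discarded, but the cache
    # line it loaded remains resident.
    cache.add(secret)

def attacker_recover(cache, n_values=256):
    # Probe each candidate line; a "fast" (cached) access reveals which
    # line the transient instruction touched, and hence the secret byte.
    for guess in range(n_values):
        if guess in cache:      # stands in for a below-threshold access time
            return guess
    return None

cache = set()
victim_speculative_access(ord("K"), cache)
print(attacker_recover(cache))
```

The point of the model is that the secret never flows through an architecturally visible result; it is reconstructed entirely from the micro-architectural side effect.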
In early January 2018, it was reported that all Intel processors made since 1995[2][3] (besides Intel Itanium and pre-2013 Intel Atom) have been subject to two security flaws dubbed Meltdown and Spectre.[4][5]
The impact on performance resulting from software patches is "workload-dependent". Several procedures to help protect home computers and related devices from the Spectre and Meltdown security vulnerabilities have been published.[6][7][8][9] Spectre patches have been reported to significantly slow down performance, especially on older computers; on the newer 8th-generation Core platforms, benchmark performance drops of 2–14% have been measured.[10] Meltdown patches may also produce performance loss.[11][12][13] It is believed that "hundreds of millions" of systems could be affected by these flaws.[3][14] More security flaws were disclosed on May 3, 2018,[15] on August 14, 2018, on January 18, 2019, and on March 5, 2020.[16][17][18][19]
At the time, Intel was not commenting on this issue.[20][21]
On March 15, 2018, Intel reported that it would redesign its CPUs (performance losses to be determined) to protect against the Spectre security vulnerability, and expected to release the newly redesigned processors later in 2018.[22][23]
On May 3, 2018, eight additional Spectre-class flaws were reported. Intel reported that they are preparing new patches to mitigate these flaws.[24]
On August 14, 2018, Intel disclosed three additional chip flaws referred to as L1 Terminal Fault (L1TF). They reported that previously released microcode updates, along with new, pre-release microcode updates can be used to mitigate these flaws.[25][26]
On January 18, 2019, Intel disclosed three new vulnerabilities affecting all Intel CPUs, named "Fallout", "RIDL", and "ZombieLoad", allowing a program to read information recently written, read data in the line-fill buffers and load ports, and leak information from other processes and virtual machines.[27][28][29] Coffee Lake-series CPUs are even more vulnerable, due to hardware mitigations for Spectre.[citation needed][30]
On March 5, 2020, computer security experts reported another Intel chip security flaw, besides the Meltdown and Spectre flaws, with the systematic name CVE-2019-0090 (or "Intel CSME Bug").[16] This newly found flaw is not fixable with a firmware update, and affects nearly "all Intel chips released in the past five years".[17][18][19]
In March 2021, AMD security researchers discovered that the Predictive Store Forwarding algorithm in Zen 3 CPUs could be used by malicious applications to access data they should not be accessing.[31] According to Phoronix, there is little performance impact in disabling the feature.[32]
In June 2021, two new vulnerabilities, Speculative Code Store Bypass (SCSB, CVE-2021-0086) and Floating Point Value Injection (FPVI, CVE-2021-0089), affecting all modern x86-64 CPUs from both Intel and AMD, were discovered.[33] To mitigate them, software has to be rewritten and recompiled. ARM CPUs are not affected by SCSB, but certain ARM architectures are affected by FPVI.[34]
Also in June 2021, MIT researchers revealed the PACMAN attack on Pointer Authentication Codes (PAC) in ARMv8.3A.[35][36][37]
In August 2021, a vulnerability called "Transient Execution of Non-canonical Accesses", affecting certain AMD CPUs, was disclosed.[38][39][40] It requires the same mitigations as the MDS vulnerability affecting certain Intel CPUs.[41] It was assigned CVE-2020-12965. Since most x86 software is already patched against MDS and this vulnerability has the exact same mitigations, software vendors do not have to address it separately.
In October 2021, for the first time, a vulnerability similar to Meltdown was disclosed[42][43] as affecting all AMD CPUs; however, the company does not think any new mitigations are needed and considers the existing ones sufficient.[44]
In March 2022, a new variant of the Spectre vulnerability called Branch History Injection was disclosed.[45][46] It affects certain ARM64 CPUs[47] and the following Intel CPU families: Cascade Lake, Ice Lake, Tiger Lake and Alder Lake. According to Linux kernel developers, AMD CPUs are also affected.[48]
In March 2022, a vulnerability affecting a wide range of AMD CPUs was disclosed under CVE-2021-26341.[49][50]
In June 2022, multiple MMIO-related Intel CPU vulnerabilities, concerning execution in virtual environments, were announced.[51] The following CVEs were designated: CVE-2022-21123, CVE-2022-21125, CVE-2022-21166.
In July 2022, the Retbleed vulnerability was disclosed, affecting Intel Core 6th to 8th generation CPUs and AMD Zen 1, Zen 1+ and Zen 2 CPUs. Newer Intel microarchitectures, as well as AMD starting with Zen 3, are not affected. The mitigations decrease the performance of the affected Intel CPUs by up to 39%, while AMD CPUs lose up to 14%.
In August 2022, the SQUIP vulnerability was disclosed, affecting Ryzen 2000–5000 series CPUs.[52] According to AMD, the existing mitigations are enough to protect from it.[53]
According to a Phoronix review released in October 2022, Zen 4/Ryzen 7000 CPUs are not slowed down by mitigations; in fact, disabling them leads to a performance loss.[54][55]
In February 2023 a vulnerability affecting a wide range of AMD CPU architectures called "Cross-Thread Return Address Predictions" was disclosed.[56][57][58]
In July 2023, a critical vulnerability in the Zen 2 AMD microarchitecture called Zenbleed was made public.[59][1] AMD released a microcode update to fix it.[60]
In August 2023, a vulnerability in AMD's Zen 1, Zen 2, Zen 3, and Zen 4 microarchitectures called Inception[61][62] was revealed and assigned CVE-2023-20569. According to AMD, the attack is not practical, but the company will release a microcode update for the affected products.
Also in August 2023, a new vulnerability called Downfall or Gather Data Sampling was disclosed,[63][64][65] affecting the Intel Skylake, Cascade Lake, Cooper Lake, Ice Lake, Tiger Lake, Amber Lake, Kaby Lake, Coffee Lake, Whiskey Lake, Comet Lake and Rocket Lake CPU families. Intel will release a microcode update for affected products.
The SLAM[66][67][68][69] vulnerability (Spectre based on Linear Address Masking), reported in 2023, has neither received a corresponding CVE nor been confirmed or mitigated against.
In March 2024, a variant of the Spectre-V1 attack called GhostRace was published.[70] It was claimed to affect all major microarchitectures and vendors, including Intel, AMD and ARM. It was assigned CVE-2024-2193. AMD dismissed the vulnerability (calling it "Speculative Race Conditions (SRCs)"), claiming that existing mitigations were enough.[71] Linux kernel developers chose not to add mitigations, citing performance concerns.[72] The Xen hypervisor project released patches to mitigate the vulnerability, but they are not enabled by default.[73]
Also in March 2024, a vulnerability in Intel Atom processors called Register File Data Sampling (RFDS) was revealed.[74] It was assigned CVE-2023-28746. Its mitigations incur a slight performance degradation.[75]
In April 2024, it was revealed that the BHI vulnerability in certain Intel CPU families could still be exploited in Linux entirely in user space, without using any kernel features or root access, despite existing mitigations.[76][77][78] Intel recommended "additional software hardening".[79] The attack was assigned CVE-2024-2201.
In June 2024, Samsung Research and Seoul National University researchers revealed the TikTag attack against the Memory Tagging Extension in ARMv8.5A CPUs. The researchers created PoCs for Google Chrome and the Linux kernel.[80][81][82][83] Researchers from VUSec previously revealed that ARM's Memory Tagging Extension is vulnerable to speculative probing.[84][85]
In July 2024, UC San Diego researchers revealed the Indirector attack against Intel Alder Lake and Raptor Lake CPUs, leveraging high-precision Branch Target Injection (BTI).[86][87][88] Intel downplayed the severity of the vulnerability and claimed the existing mitigations are enough to tackle the issue.[89] No CVE was assigned.
In January 2025, Georgia Institute of Technology researchers published two whitepapers on Data Speculation Attacks via Load Address Prediction on Apple Silicon (SLAP) and Breaking the Apple M3 CPU via False Load Output Predictions (FLOP).[90][91][92]
Also in January 2025, Arm disclosed a vulnerability (CVE-2024-7881) in which an unprivileged context can trigger a data memory-dependent prefetch engine to fetch data from a privileged location, potentially leading to unauthorized access. To mitigate the issue, Arm recommends disabling the affected prefetcher by setting CPUACTLR6_EL1[41].[93][94]
In May 2025, VUSec released three vulnerabilities extending Spectre-v2 in various Intel and ARM architectures under the moniker Training Solo.[95][96][97] Mitigations require a microcode update for Intel CPUs and changes in the Linux kernel.
Also in May 2025, the ETH Zurich Computer Security Group "COMSEC" disclosed the Branch Privilege Injection vulnerability, affecting all Intel x86 architectures starting from the 9th generation (Coffee Lake Refresh), under CVE-2024-45332.[98][99][100] A microcode update is required to mitigate it, and it comes with a performance cost of up to 8%.
Spectre-class vulnerabilities will remain unfixed, because fixing them outright would require CPU designers to disable speculative execution, which would entail a massive performance loss.[citation needed] Despite this, AMD has managed to design Zen 4 in such a way that its performance is not affected by mitigations.[54][55]
*Various CPU microarchitectures not included above are also affected, among them ARM, IBM Power, MIPS and others.[149][150][151][152]
**The 8th generation Coffee Lake architecture in this table also applies to a wide range of previously released Intel CPUs, not limited to the architectures based on Intel Core, Pentium 4 and Intel Atom starting with Silvermont.[153][154]
|
https://en.wikipedia.org/wiki/Speculative_execution_CPU_vulnerabilities
|
Tick–tock was a production model adopted in 2007 by chip manufacturer Intel. Under this model, every new process technology was first used to manufacture a die shrink of a proven microarchitecture (tick), followed by a new microarchitecture on the now-proven process (tock). It was replaced by the process–architecture–optimization model, which was announced in 2016 and is like a tick–tock cycle followed by an optimization phase. More generally, tick–tock is an engineering model which refreshes one half of a binary system each release cycle.
Every "tick" represented ashrinkingof the process technology of the previous microarchitecture (with minor changes, commonly to the caches, and rarely introducing new instructions, as withBroadwellin late 2014) and every "tock" designated a new microarchitecture.[1]These occurred roughly every year to 18 months.[1]
Due to the slowing rate of process improvements, in 2014 Intel created a "tock refresh" of a tock in the form of a smaller update to the microarchitecture[2] not considered a new generation in and of itself. In March 2016, Intel announced in a Form 10-K report that it would always do this in future, deprecating the tick–tock cycle in favor of a three-step process–architecture–optimization model, under which three generations of processors are produced under a single manufacturing process, with the third generation out of three focusing on optimization.[3]
After Intel introduced the Skylake architecture on a 14 nm process in 2015, its first optimization was Kaby Lake in 2016. Intel then announced a second optimization, Coffee Lake, in 2017,[4] making a total of four generations at 14 nm[5] before the Palm Cove die shrink to 10 nm in 2018.
With Silvermont, Intel tried to start tick–tock in the Atom architecture, but problems with the 10 nm process did not allow this. In the table below, process–architecture–optimization steps are used instead of tick–tock steps. There is no official confirmation that Intel uses process–architecture–optimization for Atom, but it helps in understanding what changed in each generation.
Note: there is also the Xeon Phi. It has so far undergone four development steps, with the most recent top model code-named Knights Landing (KNL;[12] the predecessor code names all had the leading term Knights in their name), which is derived from the Silvermont architecture as used for the Intel Atom series but realized in a shrunk 14 nm (FinFET) technology.[70] In 2018, Intel announced that Knights Landing and all further Xeon Phi CPU models were discontinued.[71] However, Intel's Sierra Forest and subsequent Atom-based Xeon CPUs are likely a spiritual successor to Xeon Phi.
|
https://en.wikipedia.org/wiki/Tick%E2%80%93tock_model
|
Virtual legacy wires (VLW) are transactions over the Intel QuickPath Interconnect and Intel Ultra Path Interconnect fabrics that replace a particular set of physical legacy pins on Intel microprocessors. The legacy signals replaced include INTR, A20M, and SMI.[1]
|
https://en.wikipedia.org/wiki/Virtual_legacy_wires
|
Joss may refer to:
|
https://en.wikipedia.org/wiki/Joss_(disambiguation)
|
Joos may refer to:
|
https://en.wikipedia.org/wiki/Joos_(disambiguation)
|
In computing, autonomous peripheral operation is a hardware feature found in some microcontroller architectures to off-load certain tasks into embedded autonomous peripherals in order to minimize latencies and improve throughput in hard real-time applications, as well as to save energy in ultra-low-power designs.
Forms of autonomous peripherals in microcontrollers were first introduced in the 1990s. Allowing embedded peripherals to work independently of the CPU, and even interact with each other in certain pre-configurable ways, off-loads event-driven communication into the peripherals. This helps improve real-time performance due to lower latency and allows potentially higher data throughput due to the added parallelism. Since 2009, the scheme has been improved in newer implementations to continue functioning in sleep modes as well, thereby allowing the CPU (and other unaffected peripheral blocks) to remain dormant for longer periods of time in order to save energy. This is partially driven by the emerging IoT market.[1]
Conceptually, autonomous peripheral operation can be seen as a generalization of, and mixture between, direct memory access (DMA) and hardware interrupts. Peripherals that issue event signals are called event generators or producers, whereas target peripherals are called event users or consumers. In some implementations, peripherals can be configured to pre-process the incoming data and perform various peripheral-specific functions like comparing, windowing, filtering or averaging in hardware, without having to pass the data through the CPU for processing.
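A toy software model of this producer/consumer routing may make the idea concrete. The peripheral names and the API below are invented for illustration; in real microcontrollers the routing table is wired in silicon and events flow with no CPU involvement at all:

```python
# Toy model of an autonomous peripheral event system: producers emit
# events that are routed to consumer callbacks through a configurable
# routing table, standing in for hardware event channels.

class EventSystem:
    def __init__(self):
        self.routes = {}    # producer event name -> list of consumer callbacks

    def connect(self, event, consumer):
        """Configure a route from a producer event to a consumer peripheral."""
        self.routes.setdefault(event, []).append(consumer)

    def emit(self, event, payload):
        """Deliver a producer event to every connected consumer."""
        for consumer in self.routes.get(event, []):
            consumer(payload)

log = []
evsys = EventSystem()
# e.g. a timer overflow directly triggers an ADC conversion, and a
# finished conversion directly triggers a DMA copy of the sample
evsys.connect("timer_overflow", lambda _: log.append("adc_start"))
evsys.connect("adc_done", lambda sample: log.append(("dma_copy", sample)))
evsys.emit("timer_overflow", None)
evsys.emit("adc_done", 512)
print(log)
```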
Known implementations include:
|
https://en.wikipedia.org/wiki/Autonomous_peripheral_operation
|
A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat to control a domestic boiler, to large industrial control systems used for controlling processes or machines. Control systems are designed via the control engineering process.
For continuously modulated control, a feedback controller is used to automatically control a process or operation. The control system compares the value or status of the process variable (PV) being controlled with the desired value or setpoint (SP), and applies the difference as a control signal to bring the process variable output of the plant to the same value as the setpoint.
For sequential and combinational logic, software logic, such as in a programmable logic controller, is used.[clarification needed]
Fundamentally, there are two types of control loop: open-loop control (feedforward), and closed-loop control (feedback).
The definition of a closed loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."[2]
A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller.
A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.[4]
In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver can alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimal way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine.
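The feedback behaviour just described can be illustrated with a small discrete-time PID simulation. The gains, time step, setpoint and first-order plant model below are illustrative assumptions, not values from any real controller:

```python
# Discrete PID controller driving a toy first-order plant.
# All gains, the setpoint and the plant model are illustrative.

def pid_step(sp, pv, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """One PID update: returns (control signal, new controller state)."""
    err = sp - pv                              # deviation from setpoint
    integral = state["integral"] + err * dt    # accumulated error (I term)
    deriv = (err - state["prev_err"]) / dt     # rate of change of error (D term)
    u = kp * err + ki * integral + kd * deriv
    return u, {"integral": integral, "prev_err": err}

def simulate(setpoint=50.0, steps=600, dt=0.1):
    pv = 0.0                                   # process variable, e.g. speed
    state = {"integral": 0.0, "prev_err": 0.0}
    for _ in range(steps):
        u, state = pid_step(setpoint, pv, state, dt=dt)
        pv += (u - pv) * dt                    # first-order plant response
    return pv

print(round(simulate(), 2))
```

The integral term is what drives the steady-state error to zero here; with proportional action alone, the simulated plant would settle below the setpoint.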
Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.
Closed-loop controllers have the following advantages over open-loop controllers:
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termedfeedforwardand serves to further improve reference tracking performance.
A common closed-loop controller architecture is thePID controller.
Logic control systems for industrial and commercial machinery were historically implemented by interconnected electrical relays and cam timers using ladder logic. Today, most such systems are constructed with microcontrollers or more specialized programmable logic controllers (PLCs). The notation of ladder logic is still in use as a programming method for PLCs.[6]
Logic controllers may respond to switches and sensors and can cause the machinery to start and stop various operations through the use of actuators. Logic controllers are used to sequence mechanical operations in many applications. Examples include elevators, washing machines and other systems with interrelated operations. An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with the product and then seal it in an automatic packaging machine.
PLC software can be written in many different ways – ladder diagrams, SFC (sequential function charts) or statement lists.[7]
On–off control uses a feedback controller that switches abruptly between two states. A simple bi-metallic domestic thermostat can be described as an on–off controller: when the temperature in the room (PV) goes below the user setting (SP), the heater is switched on. Another example is a pressure switch on an air compressor: when the pressure (PV) drops below the setpoint (SP), the compressor is powered. Refrigerators and vacuum pumps contain similar mechanisms. Simple on–off control systems like these can be cheap and effective.
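A minimal sketch of such an on–off controller follows, with a hysteresis band added so the heater does not chatter around the setpoint. The setpoint, band width and crude room model are invented for illustration:

```python
# On-off (bang-bang) thermostat with hysteresis. The setpoint,
# hysteresis band and thermal model are illustrative assumptions.

def thermostat(pv, sp, heater_on, hysteresis=1.0):
    """Switch the heater based on temperature relative to the setpoint."""
    if pv < sp - hysteresis:
        return True          # too cold: switch heater on
    if pv > sp + hysteresis:
        return False         # too warm: switch heater off
    return heater_on         # inside the band: keep the current state

temp, heater = 15.0, False
history = []
for _ in range(200):
    heater = thermostat(temp, sp=20.0, heater_on=heater)
    temp += 0.3 if heater else -0.2   # crude room heating/cooling model
    history.append(round(temp, 1))

print(min(history[50:]), max(history[50:]))
```

After the initial warm-up, the temperature oscillates within the hysteresis band around the 20-degree setpoint rather than settling exactly on it, which is the characteristic behaviour of on–off control.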
Fuzzy logic is an attempt to apply the easy design of logic controllers to the control of complex continuously varying systems. Basically, a measurement in a fuzzy logic system can be partly true.
The rules of the system are written in natural language and translated into fuzzy logic. For example, the design for a furnace would start with: "If the temperature is too high, reduce the fuel to the furnace. If the temperature is too low, increase the fuel to the furnace."
Measurements from the real world (such as the temperature of a furnace) are fuzzified, the logic is calculated arithmetically (as opposed to Boolean logic), and the outputs are de-fuzzified to control equipment.
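The furnace rules quoted above can be sketched as a tiny fuzzy controller. The triangular membership functions, the 10-degree scaling and the zero-order (Sugeno-style) output combination are illustrative assumptions:

```python
# Toy fuzzy furnace controller: fuzzify the temperature error, apply the
# two natural-language rules, and combine the rule outputs into a fuel
# adjustment. Membership shapes and scaling are illustrative.

def mu_too_low(err):
    """Degree (0..1) to which the temperature is too low; err = SP - PV."""
    return max(0.0, min(1.0, err / 10.0))

def mu_too_high(err):
    """Degree (0..1) to which the temperature is too high."""
    return max(0.0, min(1.0, -err / 10.0))

def fuel_adjustment(setpoint, temp):
    err = setpoint - temp
    increase = mu_too_low(err)    # rule 1: too low  -> increase fuel
    decrease = mu_too_high(err)   # rule 2: too high -> reduce fuel
    # Zero-order Sugeno-style combination:
    # +1.0 means "increase fuel fully", -1.0 means "reduce fuel fully".
    return increase - decrease

print(fuel_adjustment(200.0, 180.0))   # well below setpoint
print(fuel_adjustment(200.0, 205.0))   # slightly above setpoint
```

Because a measurement can be "partly" too high, a temperature a few degrees over the setpoint yields only a partial fuel reduction rather than an abrupt on–off switch.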
When a robust fuzzy design is reduced to a single, quick calculation, it begins to resemble a conventional feedback loop solution and it might appear that the fuzzy design was unnecessary. However, the fuzzy logic paradigm may provide scalability for large control systems where conventional methods become unwieldy or costly to derive.[citation needed]
Fuzzy electronics is an electronic technology that uses fuzzy logic instead of the two-value logic more commonly used in digital electronics.
The range of control system implementation spans from compact controllers, often with dedicated software for a particular machine or device, to distributed control systems for industrial process control of a large physical plant.
Logic systems and feedback controllers are usually implemented with programmable logic controllers. The Broadly Reconfigurable and Expandable Automation Device (BREAD) is a recent framework that provides many open-source hardware devices which can be connected to create more complex data acquisition and control systems.[8]
|
https://en.wikipedia.org/wiki/Control_system
|
In a distributed computing system, a failure detector is a computer application or a subsystem that is responsible for the detection of node failures or crashes.[1] Failure detectors were first introduced in 1996 by Chandra and Toueg in their paper Unreliable Failure Detectors for Reliable Distributed Systems. The paper depicts the failure detector as a tool to improve consensus (the achievement of reliability) and atomic broadcast (the same sequence of messages) in the distributed system. In other words, failure detectors seek errors in the process, and the system will maintain a level of reliability. In practice, after failure detectors spot crashes, the system will ban the processes that are making mistakes to prevent any further serious crashes or errors.[2][3]
In the 21st century, failure detectors are widely used in distributed computing systems to detect application errors, such as when a software application stops functioning properly. As distributed computing projects (see List of distributed computing projects) become more and more popular, the use of failure detectors also becomes important and critical.[4][5]
Chandra and Toueg, the co-authors of Unreliable Failure Detectors for Reliable Distributed Systems (1996), approached the concept of detecting failed nodes by introducing the unreliable failure detector.[6] They describe the behavior of an unreliable failure detector in a distributed computing system as follows: each process in the system is equipped with a local failure detector component, and each local component examines a portion of all processes within the system.[5] In addition, each process must also contain programs that are currently suspected by failure detectors.[5]
Chandra and Toueg claimed that an unreliable failure detector can still be reliable in detecting the errors made by the system.[6] They generalize unreliable failure detectors to all forms of failure detectors because unreliable failure detectors and failure detectors share the same properties. Furthermore, Chandra and Toueg point out the important fact that a failure detector does not prevent any crashes in the system, even if the crashed program has been suspected previously. The construction of a failure detector is an essential, but very difficult, problem in the development of the fault-tolerant components of a distributed computer system. As a result, the failure detector was invented because of the need to detect errors in the massive information transactions of distributed computing systems.[1][3][5]
The classes of failure detectors are distinguished by two important properties: completeness and accuracy. Completeness means that the failure detector eventually finds the programs that crashed in a process, whereas accuracy refers to the correctness of the decisions that the failure detector makes in a process.[5]
The degrees of completeness depend on the number of crashed processes suspected by a failure detector in a certain period.[5]
The degrees of accuracy depend on the number of mistakes that a failure detector made in a certain period.[5]
Failure detectors can be categorized in the following eight types:[1][7]
The properties of these failure detectors are described below:[1]
In a nutshell, the properties of failure detectors depend on how fast the failure detector detects actual failures and how well it avoids false detection. A perfect failure detector will find all errors without any mistakes, whereas a weak failure detector will not find any errors and will make numerous mistakes.[3][8]
Different types of failure detectors can be obtained by changing the properties of failure detectors.[3][6] The first example shows how to increase the completeness of a failure detector, and the second example shows how to change one type of failure detector into another.
The following is an example abstracted from the Department of Computer Science at Yale University. It functions by boosting the completeness of a failure detector.[6]
From the example above, if p crashes, then the weak detector will eventually suspect it. All failure detectors in the system will eventually suspect p because of the infinite loop created by the failure detectors. This example also shows that a failure detector with weak completeness can eventually suspect all crashes.[6] The inspection of crashed programs does not depend on completeness.[5]
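The completeness-boosting idea above can be illustrated with a small round-based simulation: every process repeatedly broadcasts its suspect set, and receivers take the union. If any correct process's weak local detector suspects a crashed process p, the suspicion spreads to all correct processes. This is a simplified synchronous sketch, not Chandra and Toueg's algorithm; real systems exchange asynchronous messages, and the process names are hypothetical.

```python
# Sketch: boosting weak completeness by gossiping suspect sets.
# Each round, every process merges the suspect sets it receives.

def gossip_round(suspects):
    """One broadcast round: every process merges all broadcast suspect sets."""
    merged = set().union(*suspects.values())
    return {proc: merged for proc in suspects}

# Three correct processes; only q's weak detector has suspected the crashed p.
suspects = {"q": {"p"}, "r": set(), "s": set()}
suspects = gossip_round(suspects)
# After the round, every process suspects p.
```

The repeated broadcast is what turns weak completeness (some correct process suspects p) into strong completeness (every correct process eventually suspects p).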
The following are correctness arguments that satisfy the algorithm for transforming a failure detector W into a failure detector S.[1] The failure detector W is weak in completeness, and the failure detector S is strong in completeness. They are both weak in accuracy.[6]
If all the arguments above are satisfied, the reduction of a weak failure detector W to a strong failure detector S will agree with the algorithm within the distributed computing system.[1]
|
https://en.wikipedia.org/wiki/Failure_detector
|
The ICL Series 39 was a range of mainframe and minicomputer systems released by the UK manufacturer ICL in 1985. The original Series 39 introduced the "S3L" (whose corrupt pronunciation resulted in the name "Estriel"[1]: 341) processors and microcodes, and a nodal architecture, which is a form of Non-Uniform Memory Access.
The Series 39 range was based upon the New Range concept and the VME operating system from the company's ICL 2900 line, and was introduced as two ranges:
The original Series 39 introduced the "S3L" processors and microcodes, and a nodal architecture (see ICL VME), which is a form of Non-Uniform Memory Access that allowed nodes to be up to 1,000 metres (3,300 ft) apart.
The Series 39 range introduced Nodal Architecture, a novel implementation of distributed shared memory that can be seen as a hybrid of a multiprocessor system and a cluster design. Each machine consists of a number of nodes, and each node contains its own order-code processor and main memory. Virtual machines are typically located (at any one time) on one node, but have the capability to run on any node and to be relocated from one node to another. Discs and other peripherals are shared between nodes. Nodes are connected using a high-speed optical bus (Macrolan) using multiple fibre optic cables, which is used to provide applications with a virtual shared memory. Memory segments that are marked as shared (public or global segments) are replicated to each node, with updates being broadcast over the inter-node network. Processes which use unshared memory segments (nodal or local) run in complete isolation from other nodes and processes.[2]
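The replicated "public" segments described above can be modelled abstractly: a write to a shared segment on one node is broadcast and applied to every node's replica, while "nodal" segments stay private. This is a toy illustration of the replication idea only; the class and segment names are assumptions, and the real hardware broadcast over Macrolan is far more involved.

```python
# Toy model of replicated (public) vs. private (nodal) memory segments.

class Node:
    def __init__(self):
        self.shared = {}   # replicated public/global segments
        self.local = {}    # unshared nodal/local segments

class Cluster:
    def __init__(self, n):
        self.nodes = [Node() for _ in range(n)]

    def write_shared(self, key, value):
        # Broadcast the update over the inter-node network (modelled
        # here as a simple loop) so every replica stays consistent.
        for node in self.nodes:
            node.shared[key] = value

cluster = Cluster(3)
cluster.write_shared("segment_42", b"payload")
# Every node now sees the same value in its replica of the shared segment.
```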
The semaphore instructions prove their worth by controlling access to the shared writable memory segments while allowing their contents to be moved around efficiently.
Overall, a well-configured Series 39 with VME had an architecture which could provide a significant degree of proofing against disasters, a nod to the abortive VME/T ideas of the previous decade.
All Series 39 machines were supported by a set of waist-height peripheral 'Cabinets' (connected via fibre optic cables through one or more Multi Port Switch Units, or MPSUs) providing disk storage capabilities:
Cabinet 1 was the name given to the DM1 Series 39 Level 30 (and 20/15/25/35 variants) core system.
All Series 39 machines also featured a Node Support Computer (NSC) hosted on their Storage Motherboards. This was x86 architecture and acted much like today's iLO or DRAC cards on HP/Dell servers, allowing support staff to manage the nodes remotely, including the ability to completely stop and restart the main nodes.
In the mid-1980s the Series 39 Level 30 was supplemented by a Level 20 variant, which was a forcibly underclocked Level 30 (using wire links on a daughterboard). In the late 1980s these were both replaced by Level 15, 25 and 35 variants, which also carried various levels of clocking state but featured more memory than their predecessors and could also be fitted with dual OCP and IOC motherboards for even more computing and I/O capability.
The early 1990s saw upgrades to the Series 39 range. DX System products were introduced to replace the DM1 systems, appearing in product line-ups as early as late 1991.[3]: 84 The Essex project led to the introduction of the SX System products in 1990 to replace the Estriel ("S3L") systems.[4] These machines featured a new "very sophisticated pipelined processor" design that provided support for the ICL 2900 order code by employing a low-level "implementation order code" known as Picode. Picode is comparable to microcode but operates at a much higher level than the microcode of earlier machines and at a slightly lower level than ICL 2900 instructions, operating within similar constraints to those applying to conventional machine instructions. Picode instruction sequences are fed into instruction pipelines and provide atomic results, being uninterruptable.[5]
The Series 39 SX and DX products were replaced by the SY and DY products respectively, these comprising the Trimetra range along with the LY products. The SY node architecture abandoned ECL in favour of CMOS technology, introduced support for symmetric multiprocessing involving up to four instruction processors per node, refined the instruction processing architecture, and provided cheaper multi-node connectivity.[6]
In contrast, the Trimetra DY system sought to use commodity hardware to provide OpenVME support through the use of emulation techniques. ICL's Millennium vision, as realised by Trimetra, entailed the provision of OpenVME in the form of an OpenVME Subsystem (OVS) alongside Microsoft Windows NT or SCO UnixWare running in a UnixWare/NT Subsystem (UNS). Whereas Trimetra SY and LY (a reduced-footprint product based on SY) employed dedicated hardware to provide OVS functionality, alongside a Fujitsu-supplied Intel processor module providing UNS functionality, Trimetra DY offered an approach that supported either OVS or UNS functionality running entirely on an Intel processor system. To provide OVS, an emulator for the SY instruction set, together with input/output functionality and a platform abstraction layer, was deployed on the VxWorks operating system.[7]
With ICL having identified markets seeking higher-performance Unix or NT systems without a need for OpenVME compatibility, it introduced the Trimetra Xtraserver product featuring from four to twelve 200 MHz Pentium Pro processors.[8] Trimetra in turn was replaced by Fujitsu's mainframe platform, Nova, providing the Trimetra architecture on generic Unisys ES7000 Intel-based server hardware.
Nova itself was phased out in 2007 and replaced with SuperNova, which runs OpenVME on top of Windows Server or Linux, using as few as two CPUs on generic Wintel server hardware.
The transition of the "ICL mainframe" to a pure software product was thus complete, enabling Fujitsu to concentrate on VME support and development without having to keep up with hardware technology.
|
https://en.wikipedia.org/wiki/Nodal_architecture
|
Data processing modes or computing modes are classifications of different types of computer processing.[1]
|
https://en.wikipedia.org/wiki/Processing_modes
|
The Ptolemy Project is an ongoing project aimed at modeling, simulating, and designing concurrent, real-time, embedded systems. The focus of the Ptolemy Project is on assembling concurrent components. The principal product of the project is the Ptolemy II model-based design and simulation tool. The Ptolemy Project is conducted in the Industrial Cyber-Physical Systems Center (iCyPhy) in the Department of Electrical Engineering and Computer Sciences of the University of California at Berkeley, and is directed by Prof. Edward A. Lee.
The key underlying principle in the project is the use of well-defined models of computation that govern the interaction between components. A major problem area being addressed is the use of heterogeneous mixtures of models of computation.[1]
The project is named after Claudius Ptolemaeus, the 2nd-century Greek astronomer, mathematician, and geographer.
The Kepler Project, a community-driven collaboration among researchers at three other University of California campuses, has created the Kepler scientific workflow system, which is based on Ptolemy II.
|
https://en.wikipedia.org/wiki/Ptolemy_Project
|
Real-time data (RTD) is information that is delivered immediately after collection. There is no delay in the timeliness of the information provided. Real-time data is often used for navigation or tracking.[1] Such data is usually processed using real-time computing, although it can also be stored for later or off-line data analysis.
Real-time data is not the same as dynamic data. Real-time data can be dynamic (e.g. a variable indicating current location) or static (e.g. a fresh log entry indicating location at a specific time).
Real-time economic data, and other official statistics, are often based on preliminary estimates and are therefore frequently adjusted as better estimates become available. These later adjusted data are called "revised data".
The terms real-time economic data and real-time economic analysis were coined[2] by Francis X. Diebold and Glenn D. Rudebusch.[3] Macroeconomist Glenn D. Rudebusch defined real-time analysis as 'the use of sequential information sets that were actually available as history unfolded.'[4] Macroeconomist Athanasios Orphanides has argued that economic policy rules may have very different effects when based on error-prone real-time data (as they inevitably are in reality) than they would if policy makers followed the same rules but had more accurate data available.[5]
In order to better understand the accuracy of economic data and its effects on economic decisions, some economic organizations, such as the Federal Reserve Bank of St. Louis, the Federal Reserve Bank of Philadelphia, and the Euro-Area Business Cycle Network (EABCN), have made databases available that contain both real-time data and subsequent revised estimates of the same data.
Real-time bidding is a form of programmatic real-time auction that sells digital-ad impressions. Entities on both the buying and selling sides require almost instantaneous access to data in order to make decisions, forcing real-time data to the forefront of their needs.[6] To support these needs, new strategies and technologies, such as Druid, have arisen and are quickly evolving.[7]
|
https://en.wikipedia.org/wiki/Real-time_data
|
Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.
Computers have been capable of generating 2D images such as simple lines, images and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional Von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics.
Different techniques for rendering now exist, such as ray-tracing and rasterization. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input. This means that the user can respond to rendered images in real time, producing an interactive experience.
The goal of computer graphics is to generate computer-generated images, or frames, using certain desired metrics. One such metric is the number of frames generated in a given second. Real-time computer graphics systems differ from traditional (i.e., non-real-time) rendering systems in that non-real-time graphics typically rely on ray tracing. In this process, millions or billions of rays are traced from the camera to the world for detailed rendering—this expensive operation can take hours or days to render a single frame.
Real-time graphics systems must render each image in less than 1/30th of a second. Ray tracing is far too slow for these systems; instead, they employ the technique of z-buffer triangle rasterization. In this technique, every object is decomposed into individual primitives, usually triangles. Each triangle gets positioned, rotated and scaled on the screen, and rasterizer hardware (or a software emulator) generates pixels inside each triangle. These triangles are then decomposed into atomic units called fragments that are suitable for displaying on a display screen. The fragments are drawn on the screen using a color that is computed in several steps. For example, a texture can be used to "paint" a triangle based on a stored image, and then shadow mapping can alter that triangle's colors based on line-of-sight to light sources.
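The z-buffer technique described above can be sketched in software: each triangle is turned into pixels via barycentric coverage tests, and a per-pixel depth comparison keeps only the nearest fragment. This is a deliberately tiny, unoptimized illustration assuming screen-space coordinates; real rasterizers add sub-pixel precision, perspective-correct interpolation, and run in hardware.

```python
# Toy z-buffer triangle rasterizer (software sketch of the GPU technique).

def edge(ax, ay, bx, by, px, py):
    """Signed area term used for the inside-triangle test."""
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize(tri, color, zbuf, fbuf, w, h):
    """Fill pixels covered by tri, keeping the nearest fragment per pixel."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:
        return  # degenerate triangle
    for py in range(h):
        for px in range(w):
            w0 = edge(x1, y1, x2, y2, px, py)  # barycentric weight of v0
            w1 = edge(x2, y2, x0, y0, px, py)  # weight of v1
            w2 = edge(x0, y0, x1, y1, px, py)  # weight of v2
            inside = ((w0 >= 0) == (area > 0) and (w1 >= 0) == (area > 0)
                      and (w2 >= 0) == (area > 0))
            if inside:
                z = (w0 * z0 + w1 * z1 + w2 * z2) / area  # interpolated depth
                if z < zbuf[py][px]:                      # depth test
                    zbuf[py][px] = z
                    fbuf[py][px] = color

w = h = 4
zbuf = [[float("inf")] * w for _ in range(h)]
fbuf = [[None] * w for _ in range(h)]
rasterize(((0, 0, 0.5), (3, 0, 0.5), (0, 3, 0.5)), "near", zbuf, fbuf, w, h)
rasterize(((0, 0, 0.9), (3, 0, 0.9), (0, 3, 0.9)), "far", zbuf, fbuf, w, h)
# Wherever the two triangles overlap, the nearer one wins the depth test.
```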
Real-time graphics optimizes image quality subject to time and hardware constraints. GPUs and other advances have increased the image quality that real-time graphics can produce. GPUs are capable of handling millions of triangles per frame, and modern DirectX/OpenGL-class hardware is capable of generating complex effects, such as shadow volumes, motion blurring, and triangle generation, in real time. The advancement of real-time graphics is evidenced in the progressive improvements between actual gameplay graphics and the pre-rendered cutscenes traditionally found in video games.[1] Cutscenes are typically rendered in real time—and may be interactive.[2] Although the gap in quality between real-time graphics and traditional off-line graphics is narrowing, offline rendering remains much more accurate.
Real-time graphics are typically employed when interactivity (e.g., player feedback) is crucial. When real-time graphics are used in films, the director has complete control of what has to be drawn on each frame, which can sometimes involve lengthy decision-making. Teams of people are typically involved in the making of these decisions.
In real-time computer graphics, the user typically operates an input device to influence what is about to be drawn on the display. For example, when the user wants to move a character on the screen, the system updates the character's position before drawing the next frame. Usually, the display's response time is far slower than the input device—this is justified by the immense difference between the (fast) response time of a human being's motion and the (slow) perceptive speed of the human visual system. This difference has other effects too: because input devices must be very fast to keep up with human motion response, advancements in input devices (e.g., the current[when?] Wii remote) typically take much longer to achieve than comparable advancements in display devices.
Another important factor controlling real-time computer graphics is the combination of physics and animation. These techniques largely dictate what is to be drawn on the screen—especially where to draw objects in the scene. These techniques help realistically imitate real-world behavior (the temporal dimension, not the spatial dimensions), adding to the computer graphics' degree of realism.
Real-time previewing with graphics software, especially when adjusting lighting effects, can increase work speed.[3] Some parameter adjustments in fractal-generating software may be made while viewing changes to the image in real time.
The graphics rendering pipeline ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics.[4] Its main function is to render a two-dimensional image in relation to a virtual camera, three-dimensional objects (objects that have width, length, and depth), light sources, lighting models, textures and more.
The architecture of the real-time rendering pipeline can be divided into conceptual stages: application, geometry and rasterization.
The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. This stage may perform processing such as collision detection, speed-up techniques, animation and force feedback, in addition to handling user input.
Collision detection is an example of an operation that would be performed in the application stage. Collision detection uses algorithms to detect and respond to collisions between (virtual) objects. For example, the application may calculate new positions for the colliding objects and provide feedback via a force feedback device such as a vibrating game controller.
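A minimal instance of such an application-stage check is an axis-aligned bounding-box (AABB) overlap test, one of the simplest collision-detection primitives. The box layout used here is an assumption for illustration; engines typically combine such broad-phase tests with more precise narrow-phase checks.

```python
# Axis-aligned bounding-box overlap test, a basic collision-detection primitive.

def aabb_overlap(a, b):
    """Boxes are (min_x, min_y, max_x, max_y); returns True on collision."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

# Two overlapping boxes collide; disjoint boxes do not.
hit = aabb_overlap((0, 0, 2, 2), (1, 1, 3, 3))
miss = aabb_overlap((0, 0, 1, 1), (2, 2, 3, 3))
```

On a positive result, the application would then compute new positions for the colliding objects and, for example, trigger force feedback.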
The application stage also prepares graphics data for the next stage. This includes texture animation, animation of 3D models, animation via transforms, and geometry morphing. Finally, it produces primitives (points, lines, and triangles) based on scene information and feeds those primitives into the geometry stage of the pipeline.
The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it and where to draw it. Usually, these operations are performed by specialized hardware or GPUs.[5] Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages.
Before the final model is shown on the output device, the model is transformed into multiple spaces or coordinate systems. Transformations move and manipulate objects by altering their vertices. Transformation is the general term for the four specific ways that manipulate the shape or position of a point, line or shape.
In order to give the model a more realistic appearance, one or more light sources are usually established during transformation. However, this stage cannot be reached without first transforming the 3D scene into view space. In view space, the observer (camera) is typically placed at the origin. If using a right-handed coordinate system (which is considered standard), the observer looks in the direction of the negative z-axis, with the y-axis pointing upwards and the x-axis pointing to the right.
Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are orthographic projection (also called parallel) and perspective projection. The main characteristic of an orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection utilizes the concept that if the distance between the observer and the model increases, the model appears smaller than before. Essentially, perspective projection mimics human sight.
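The contrast between the two projections can be shown in a few lines: orthographic projection simply drops the depth coordinate, while perspective projection divides by depth so distant points shrink toward the centre. A simple pinhole model with focal distance `d` is assumed; real pipelines use 4x4 projection matrices with near/far clipping planes.

```python
# Sketch of orthographic vs. perspective projection of a single point.

def orthographic(x, y, z):
    """Parallel projection: depth is dropped, parallel lines stay parallel."""
    return (x, y)

def perspective(x, y, z, d=1.0):
    """Pinhole projection: dividing by depth makes farther points smaller."""
    return (d * x / z, d * y / z)

# The same offset projects smaller as the point moves away from the camera:
near = perspective(1.0, 1.0, 2.0)
far = perspective(1.0, 1.0, 4.0)
```

Under orthographic projection both points would land at (1.0, 1.0) regardless of depth, which is why parallel edges never converge in that mode.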
Clipping is the process of removing primitives that are outside of the view box in order to facilitate the rasterizer stage. Once those primitives are removed, the primitives that remain will be drawn into new triangles that reach the next stage.
The purpose of screen mapping is to find out the coordinates of the primitives during the clipping stage.
The rasterizer stage applies color and turns the graphic elements into pixels or picture elements.
|
https://en.wikipedia.org/wiki/Real-time_computer_graphics
|
A real-time operating system (RTOS) is an operating system (OS) for real-time computing applications that processes data and events that have critically defined time constraints. An RTOS is distinct from a time-sharing operating system, such as Unix, which manages the sharing of system resources with a scheduler, data buffers, or fixed task prioritization in multitasking or multiprogramming environments. All operations must verifiably complete within given time and resource constraints or else fail safe. Real-time operating systems are event-driven and preemptive, meaning the OS can monitor the relevant priority of competing tasks and make changes to the task priority.
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is "jitter".[1] A "hard" real-time operating system (hard RTOS) has less jitter than a "soft" real-time operating system (soft RTOS); a late answer is a wrong answer in a hard RTOS, while a late answer is acceptable in a soft RTOS. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS.[2]
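Jitter, as defined above, can be quantified as the worst-case deviation of a periodic task's completion intervals from its nominal period. The sketch below computes this from a list of timestamps; the sample data is synthetic, not a measurement of any real scheduler.

```python
# Sketch: quantifying jitter of a nominally periodic task from its
# completion timestamps (synthetic data for illustration).

def jitter(timestamps, period):
    """Worst-case deviation of inter-completion intervals from the period."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return max(abs(i - period) for i in intervals)

# Completions of a task with a nominal 10 ms period:
ts = [0.0, 10.1, 20.0, 29.9, 40.05]
worst = jitter(ts, 10.0)   # roughly 0.15 ms for this sample
```

A hard RTOS is one whose design keeps this value small and, crucially, bounded; a soft RTOS merely keeps it small on average.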
An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread-switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.[3]
An RTOS is an operating system in which the time taken to process an input stimulus is less than the time lapsed until the next input stimulus of the same type.
The most common designs are:
Time-sharing designs switch tasks more often than strictly needed, but give smoother multitasking, giving the illusion that a process or user has sole use of a machine.
Early CPU designs needed many cycles to switch tasks, during which the CPU could do nothing else useful. Because switching took so long, early OSes tried to minimize wasting CPU time by avoiding unnecessary task switching.
In typical designs, a task has three states:
Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU core. The number of items in the ready queue can vary greatly, depending on the number of tasks the system needs to perform and the type of scheduler that the system uses. On simpler non-preemptive but still multitasking systems, a task has to give up its time on the CPU to other tasks, which can cause the ready queue to have a greater number of overall tasks in the ready-to-be-executed state (resource starvation).
Usually, the data structure of the ready list in the scheduler is designed to minimize the worst-case length of time spent in the scheduler's critical section, during which preemption is inhibited, and, in some cases, all interrupts are disabled, but the choice of data structure depends also on the maximum number of tasks that can be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list should be sorted by priority, so that finding the highest-priority task to run does not require traversing the list. Instead, inserting a task requires walking the list.
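The priority-sorted ready list described above can be sketched as a singly linked list kept in descending priority order: dispatch is O(1) because the highest-priority task is always at the head, while insertion walks the list to find its slot. The task names and the use of Python objects are illustrative; a real RTOS would do this in C with intrusive list nodes and preemption control around the walk.

```python
# Sketch: a ready list sorted by priority. Head = next task to dispatch.

class Task:
    def __init__(self, name, priority):
        self.name, self.priority = name, priority
        self.next = None

def insert(head, task):
    """Insert into a singly linked list kept in descending priority order."""
    if head is None or task.priority > head.priority:
        task.next = head
        return task               # new highest-priority task becomes head
    node = head
    # Walk until the next node has strictly lower priority (FIFO within
    # equal priorities).
    while node.next and node.next.priority >= task.priority:
        node = node.next
    task.next, node.next = node.next, task
    return head

head = None
for name, prio in [("idle", 0), ("net", 5), ("motor", 9)]:
    head = insert(head, Task(name, prio))
# head now names the highest-priority ready task ("motor").
```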
During this search, preemption should not be inhibited. Long critical sections should be divided into smaller pieces. If an interrupt occurs that makes a high priority task ready during the insertion of a low priority task, that high priority task can be inserted and run immediately before the low priority task is inserted.
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task and restore the state of the highest priority task to running. In a well-designed RTOS, readying a new task will take 3 to 20 instructions per ready-queue entry, and restoration of the highest-priority ready task will take 5 to 30 instructions.
In advanced systems, real-time tasks share computing resources with many non-real-time tasks, and the ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be inadequate.
Some commonly used RTOS scheduling algorithms are:[4]
A multitasking operating system like Unix is poor at real-time tasks. The scheduler gives the highest priority to jobs with the lowest demand on the computer, so there is no way to ensure that a time-critical job will have access to enough resources. Multitasking systems must manage sharing data and hardware resources among multiple tasks. It is usually unsafe for two tasks to access the same specific data or hardware resource simultaneously.[5] There are three common approaches to resolve this problem:
General-purpose operating systems usually do not allow user programs to mask (disable) interrupts, because the user program could then control the CPU for as long as it wished. Some modern CPUs do not allow user-mode code to disable interrupts, as such control is considered a key operating-system resource. Many embedded systems and RTOSes, however, allow the application itself to run in kernel mode for greater system-call efficiency and also to permit the application to have greater control of the operating environment without requiring OS intervention.
On single-processor systems, an application running in kernel mode and masking interrupts is the lowest-overhead method to prevent simultaneous access to a shared resource. While interrupts are masked and the current task does not make a blocking OS call, the current task has exclusive use of the CPU, since no other task or interrupt can take control, so the critical section is protected. When the task exits its critical section, it must unmask interrupts; pending interrupts, if any, will then execute. Temporarily masking interrupts should only be done when the longest path through the critical section is shorter than the desired maximum interrupt latency. Typically this method of protection is used only when the critical section is just a few instructions long and contains no loops. This method is ideal for protecting hardware bit-mapped registers when the bits are controlled by different tasks.
When the shared resource must be reserved without blocking all other tasks (such as waiting for Flash memory to be written), it is better to use mechanisms also available on general-purpose operating systems, such as a mutex and OS-supervised interprocess messaging. Such mechanisms involve system calls, and usually invoke the OS's dispatcher code on exit, so they typically take hundreds of CPU instructions to execute, while masking interrupts may take as few as one instruction on some processors.
A (non-recursive) mutex is either locked or unlocked. When a task has locked the mutex, all other tasks must wait for the mutex to be unlocked by its owner (the original thread). A task may set a timeout on its wait for a mutex. There are several well-known problems with mutex-based designs, such as priority inversion and deadlocks.
In priority inversion, a high-priority task waits because a low-priority task holds a mutex, but the lower-priority task is not given CPU time to finish its work. A typical solution is to have the task that owns the mutex 'inherit' the priority of the highest waiting task. But this simple approach gets more complex when there are multiple levels of waiting: task A waits for a mutex locked by task B, which waits for a mutex locked by task C. Handling multiple levels of inheritance causes other code to run in a high-priority context and thus can cause starvation of medium-priority threads.
In a deadlock, two or more tasks lock mutexes without timeouts and then wait forever for the other task's mutex, creating a cyclic dependency. The simplest deadlock scenario occurs when two tasks alternately lock two mutexes, but in the opposite order. Deadlock is prevented by careful design.
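The "careful design" usually means a global lock ordering: if every task acquires the two mutexes in the same order, the cyclic wait can never form. The sketch below demonstrates the safe ordering with ordinary threads; the task names are illustrative, and the dangerous opposite-order variant is deliberately not run, since it could block forever.

```python
# Sketch: breaking the two-mutex deadlock with a global lock order.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def task(first, second, results, name):
    # Careful design: every task acquires in the same (first, second) order,
    # so no cyclic dependency between the two locks can arise.
    with first:
        with second:
            results.append(name)

results = []
t1 = threading.Thread(target=task, args=(lock_a, lock_b, results, "t1"))
t2 = threading.Thread(target=task, args=(lock_a, lock_b, results, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
# Both tasks finish. Had t2 locked (lock_b, lock_a) instead, the two
# threads could each hold one lock and wait forever for the other.
```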
The other approach to resource sharing is for tasks to send messages in an organized message-passing scheme. In this paradigm, the resource is managed directly by only one task. When another task wants to interrogate or manipulate the resource, it sends a message to the managing task. Although their real-time behavior is less crisp than semaphore systems, simple message-based systems avoid most protocol deadlock hazards and are generally better-behaved than semaphore systems. However, problems like those of semaphores are possible. Priority inversion can occur when a task is working on a low-priority message and ignores a higher-priority message (or a message originating indirectly from a high-priority task) in its incoming message queue. Protocol deadlocks can occur when two or more tasks wait for each other to send response messages.
Since an interrupt handler blocks the highest priority task from running, and since real-time operating systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is necessary is to acknowledge or disable the interrupt (so that it won't occur again when the interrupt handler returns) and notify a task that work needs to be done. This can be done by unblocking a driver task through releasing a semaphore, setting a flag or sending a message. A scheduler often provides the ability to unblock a task from interrupt handler context.
An OS maintains catalogues of the objects it manages, such as threads, mutexes, memory, and so on. Updates to these catalogues must be strictly controlled. For this reason, it can be problematic when an interrupt handler calls an OS function while the application is in the act of also doing so. The OS function called from an interrupt handler could find the object database in an inconsistent state because of the application's update. There are two major approaches to deal with this problem: the unified architecture and the segmented architecture. RTOSs implementing the unified architecture solve the problem by simply disabling interrupts while the internal catalogue is updated. The downside of this is that interrupt latency increases, potentially losing interrupts. The segmented architecture does not make direct OS calls but delegates the OS-related work to a separate handler. This handler runs at a higher priority than any thread but lower than the interrupt handlers. The advantage of this architecture is that it adds very few cycles to interrupt latency. As a result, OSes which implement the segmented architecture are more predictable and can deal with higher interrupt rates compared to the unified architecture.[citation needed]
Similarly, the System Management Mode on x86-compatible hardware can take a lot of time before it returns control to the operating system.
Memory allocation is more critical in a real-time operating system than in other operating systems.
First, for stability there cannot be memory leaks (memory that is allocated but not freed after use). The device should work indefinitely, without ever needing a reboot.[citation needed] For this reason, dynamic memory allocation is frowned upon.[citation needed] Whenever possible, all required memory allocation is specified statically at compile time.
Another reason to avoid dynamic memory allocation is memory fragmentation. With frequent allocation and release of small chunks of memory, a situation may occur where the available memory is divided into several sections and the RTOS cannot allocate a large enough contiguous block of memory, although there is enough free memory in total. Speed of allocation is also important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block,[6] which is unacceptable in an RTOS, since memory allocation has to occur within a bounded amount of time.
Because mechanical disks have much longer and more unpredictable response times, swapping to disk files is not used, for the same reasons as the dynamic RAM allocation discussed above.
The simple fixed-size-blocks algorithm works quite well for simple embedded systems because of its low overhead.
|
https://en.wikipedia.org/wiki/Real-time_operating_system
|
Real-time testing is the process of testing real-time computer systems.
Software testing is performed to detect and help correct bugs (errors) in computer software. Testing involves ensuring not only that the software is error-free but also that it provides the required functionality to the user. Static and conventional methods of testing can detect bugs, but such techniques may not ensure correct results in real-time software systems. Real-time software systems have strict timing constraints and deterministic behavior. These systems have to schedule their tasks such that the timing constraints imposed on them are met.
Conventional static analysis is not adequate to deal with such timing constraints; hence, additional real-time testing is important.[1]
Test-case design for real-time testing can be proposed in four steps:[2]
As testing of real time systems is becoming more important, there are some tools designed for such testing.
Message Sequence Charts is an internationally accepted standard for capturing requirements.[3] MSC provides a graphical 2-D language often required for collecting requirements through interaction scenarios.
Specification and Description Language is a standard used for design and analysis. SDL[4] supports the specification of complex software systems and has been extensively applied across a broad array of domains, from telecommunications and automation through to general software development.
Testing and Test Control Notation is the only internationally standardized testing language. TTCN-3[5] provides broader applicability compared to earlier versions of TTCN, which were primarily focused on OSI protocols.
These three standards together are used for testing real-time applications. It is necessary that requirements be captured by these models, and the test cases generated must capture the functional and real-time information needed to test systems. Also, changes in the design requirements and new information about the real-time properties of the systems should be fed back into the models so that their impact can be determined.
To accurately capture the real-time properties of a given test system, and to ensure that requirements and models are used to generate realistic and enforceable timing information, it is essential that the language itself (TTCN-3) has a well-understood and semantically sound model of time.
TTCN-3 is the only currently available internationally standardized testing language. Its earlier versions had limited functionality and were limited in scope to OSI protocols; TTCN-3 is more advanced and has broader applicability. Characteristics of TTCN-3 are:
A key reason for using TTCN-3 for real-time testing is its timers. Timers are defined in the functions of test suites; there are no global timers in TTCN-3. Timers can be started, stopped and checked using simple operations such as timer.start, timer.stop, and timer.read.
Snapshot semantics is a technique in TTCN-3 (and also in TTCN-2) that deals with the messages passed between the test system and the system or implementation under test. When a series of responses is received from the system under test, a snapshot is taken and the responses are evaluated in order of their arrival. Each time a set of alternatives is evaluated, a snapshot is taken, and only those events present in the snapshot are evaluated.
This technique has drawbacks: some events and their attribute information may be lost while the snapshot is taken. Events may be recorded in the processing queue but not in the snapshot, and such events are never processed. Also, if the test-execution equipment is not fast enough, it cannot communicate properly with the system under test, so spurious faults may be reported during test evaluation.
|
https://en.wikipedia.org/wiki/Real-time_testing
|
Remote diagnostics is the act of diagnosing a given symptom, issue or problem from a distance. Instead of the subject being co-located with the person or system performing the diagnostics, in remote diagnostics the two can be separated by physical distance (e.g., Earth–Moon). Important information is exchanged over either wired or wireless links.
When limited to systems, a generally accepted definition is:
"To improve reliability of vital or capital-intensive installations and reduce the maintenance costs by avoiding unplanned maintenance, by monitoring the condition of the system remotely."[1]
Remote diagnostics and maintenance (RDM) refers to both diagnosing the fault or faults and taking corrective (maintenance) actions, such as changing settings to improve performance or to prevent problems like breakdown or wear and tear. RDM can replace manpower on location with experts at a central location, in order to save manpower or to avoid hazardous situations (in space, for instance). Increasing globalisation and ever more complicated machinery and software also create a demand for remote engineering, so that travel over growing distances by experienced and expensive engineering personnel is limited.[2]
|
https://en.wikipedia.org/wiki/Remote_diagnostics
|
In computer science, the term scheduling analysis in real-time computing refers to the evaluation, testing and verification of the scheduler system and the algorithms used in real-time applications. For critical operations, a real-time system must be tested and verified for performance.
A real-time scheduling system is composed of the scheduler, a clock and the processing hardware elements. In a real-time system, a process or task has schedulability: tasks are accepted by the system and completed as specified by the task deadline, depending on the characteristics of the scheduling algorithm.[1] Modeling and evaluation of a real-time scheduling system centers on analysis of the algorithm's ability to meet a process deadline. A deadline is defined as the time by which a task must be processed.
For example, in a real-time scheduling algorithm a deadline could be set to five nanoseconds. In a critical operation the task must be processed within the time specified by the deadline (i.e. five nanoseconds). A task in a real-time system must be completed "neither too early nor too late".[2] A system is said to be unschedulable when tasks cannot meet the specified deadlines.[3] A task can be classified as either a periodic or an aperiodic process.[4]
The criteria of a real-time system can be classified as hard, firm or soft. The scheduler sets the algorithm for executing tasks according to a specified order.[4] There are multiple mathematical models to represent a scheduling system; most implementations of real-time scheduling algorithms are modeled for uniprocessor or multiprocessor configurations. The more challenging scheduling algorithms are found in multiprocessors; it is not always feasible to implement a uniprocessor scheduling algorithm on a multiprocessor.[4] The algorithms used in scheduling analysis "can be classified as pre-emptive or non-pre-emptive".[1]
A scheduling algorithm defines how tasks are processed by the scheduling system. In general terms, in the algorithm for a real-time scheduling system, each task is assigned a description, a deadline and an identifier (indicating priority). The selected scheduling algorithm determines how priorities are assigned to a particular task. A real-time scheduling algorithm can be classified as static or dynamic. For a static scheduler, task priorities are determined before the system runs. A dynamic scheduler determines task priorities as it runs.[4] Tasks are accepted by the hardware elements in a real-time scheduling system from the computing environment and processed in real time. An output signal indicates the processing status.[5] A task deadline indicates the time set for completion of each task.
It is not always possible to meet the required deadline; hence further verification of the scheduling algorithm must be conducted. Two different models can be implemented using a dynamic scheduling algorithm: a task deadline can be assigned according to the task priority (earliest deadline), or a completion time for each task can be assigned by subtracting the processing time from the deadline (least laxity).[4] Deadlines and the required task execution times must be known in advance to ensure the effective use of the processing elements' execution time.
Performance verification of a real-time scheduling algorithm is carried out by analyzing the algorithm's execution times. Verifying the performance of a real-time scheduler requires testing the scheduling algorithm under different test scenarios, including the worst-case execution time. These scenarios include worst-case and otherwise unfavorable cases to assess the algorithm's performance. The timing calculations required for the analysis of scheduling systems require evaluating the algorithm at the code level.[4]
Different methods can be applied to testing the scheduling system of a real-time system. Some methods include input/output verification and code analysis. One method is to test each input condition and observe the outputs; depending on the number of inputs, this approach can require considerable effort. A faster and more economical method is a risk-based approach, in which representative critical inputs are selected for testing. This method is more economical but can lead to less reliable conclusions about the validity of the system if the wrong approach is used. Retesting requirements after changes to the scheduling system are considered on a case-by-case basis.
Testing and verification of real-time systems should not be limited to input/output and code verification but should also be performed on running applications, using intrusive or non-intrusive methods.
|
https://en.wikipedia.org/wiki/Scheduling_analysis_real-time_systems
|
A synchronous programming language is a computer programming language optimized for programming reactive systems.
Computer systems can be sorted into three main classes:
Synchronous programming, also called synchronous reactive programming (SRP), is a computer programming paradigm supported by synchronous programming languages. The principle of SRP is to make the same abstraction for programming languages as the synchronous abstraction in digital circuits. Synchronous circuits are indeed designed at a high level of abstraction where the timing characteristics of the electronic transistors are neglected. Each gate of the circuit (or, and, ...) is therefore assumed to compute its result instantaneously, and each wire is assumed to transmit its signal instantaneously. A synchronous circuit is clocked and at each tick of its clock, it computes instantaneously its output values and the new values of its memory cells (latches) from its input values and the current values of its memory cells. In other words, the circuit behaves as if the electrons were flowing infinitely fast. The first synchronous programming languages were invented in France in the 1980s: Esterel, Lustre, and SIGNAL. Since then, many other synchronous languages have emerged.
The synchronous abstraction makes reasoning about time in a synchronous program a lot easier, thanks to the notion of logical ticks: a synchronous program reacts to its environment in a sequence of ticks, and computations within a tick are assumed to be instantaneous, i.e., as if the processor executing them were infinitely fast. The statement "a||b" is therefore abstracted as the package "ab" where "a" and "b" are simultaneous. To take a concrete example, the Esterel statement "every 60 second emit minute" specifies that the signal "minute" is exactly synchronous with the 60th occurrence of the signal "second". At a more fundamental level, the synchronous abstraction eliminates the non-determinism resulting from the interleaving of concurrent behaviors. This allows deterministic semantics, therefore making synchronous programs amenable to formal analysis, verification and certified code generation, and usable as formal specification formalisms.
In contrast, in the asynchronous model of computation, on a sequential processor, the statement "a||b" can be implemented either as "a;b" or as "b;a". This is known as interleaving-based non-determinism. The drawback of an asynchronous model is that it intrinsically forbids deterministic semantics (e.g., race conditions), which makes formal reasoning such as analysis and verification more complex. Nonetheless, asynchronous formalisms are very useful to model, design and verify distributed systems, because such systems are intrinsically asynchronous.
Also in contrast are systems with processes that basicallyinteract synchronously. An example would be systems based on theCommunicating sequential processes (CSP)model, which allows deterministic (external) and nondeterministic (internal) choice.
|
https://en.wikipedia.org/wiki/Synchronous_programming_language
|
Stephen J. Mellor (born 1952) is an American computer scientist, developer of the Ward–Mellor method for real-time computing, the Shlaer–Mellor method, and Executable UML, and a signatory to the Agile Manifesto.
Mellor received a BA in computer science from the University of Essex in 1974, and started working at CERN in Geneva, Switzerland as a programmer in BCPL. In 1977 he became a software engineer at the Lawrence Berkeley Laboratory, and in 1982 a consultant at Yourdon, Inc.[1]
At Yourdon, in cooperation with Paul Ward, he developed the Ward–Mellor method and published the book series Structured Development for Real Time Systems in 1985.
Together with Sally Shlaer he founded Project Technology in 1985. That company was acquired by Mentor Graphics in 2004.[1] Mellor stayed on as chief scientist of the Embedded Systems Division at Mentor Graphics for another two years, and has been self-employed since 2006.
Since 1998 Mellor has contributed to the Object Management Group, chairing the consortium that added executable actions to the UML, and the specification of model-driven architecture (MDA). He also chairs the advisory board of the IEEE Software magazine.[2] Since 2013, Mellor has served as CTO for the Industrial Internet Consortium.[3]
Articles, a selection:[4]
|
https://en.wikipedia.org/wiki/Stephen_J._Mellor
|
The worst-case execution time (WCET) of a computational task is the maximum length of time the task could take to execute on a specific hardware platform.
Worst-case execution time is typically used in reliable real-time systems, where understanding the worst-case timing behaviour of software is important for reliability or correct functional behaviour.
As an example, a computer system that controls the behaviour of an engine in a vehicle might need to respond to inputs within a specific amount of time. One component that makes up the response time is the time spent executing the software – hence if the software worst case execution time can be determined, then the designer of the system can use this with other techniques such asschedulability analysisto ensure that the system responds fast enough.
While WCET is potentially applicable to many real-time systems, in practice an assurance of WCET is mainly used by real-time systems that are related to high reliability or safety. For example, in airborne software some attention to software is required by DO-178C section 6.3.4. The increasing use of software in automotive systems is also driving the need for WCET analysis of software.
In the design of some systems, WCET is often used as an input to schedulability analysis, although a much more common use of WCET in critical systems is to ensure that the pre-allocated timing budgets in a partition-scheduled system such as ARINC 653 are not violated.
Since the early days of embedded computing, embedded software developers have either used:
Both of these techniques have limitations. End-to-end measurements place a high burden on software testing to exercise the longest path; counting instructions is only applicable to simple software and hardware. In both cases, a margin for error is often used to account for untested code, hardware performance approximations or mistakes. A margin of 20% is often used, although there is very little justification for this figure, save for historical confidence ("it worked last time").
As software and hardware have increased in complexity, they have driven the need for tool support. Complexity is increasingly becoming an issue in both static analysis and measurements. It is difficult to judge how wide the error margin should be and how well tested the software system is. System safety arguments based on a high-water mark achieved during testing are widely used, but become harder to justify as the software and hardware become less predictable.
In the future, it is likely that a requirement for safety critical systems is that they are analyzed using both static and measurement-based approaches.[citation needed]
The problem of finding WCET by analysis is equivalent to the halting problem and is therefore not solvable in general. Fortunately, for the kind of systems for which engineers typically want to find the WCET, the software is typically well structured, always terminates, and is analyzable.
Most methods for finding a WCET involve approximations (usually a rounding upwards when there are uncertainties) and hence in practice the exact WCET itself is often regarded as unobtainable. Instead, different techniques for finding the WCET produce estimates for the WCET.[1]Those estimates are typically pessimistic, meaning that the estimated WCET is known to be higher than the real WCET (which is usually what is desired). Much work on WCET analysis is on reducing the pessimism in analysis so that the estimated value is low enough to be valuable to the system designer.
WCET analysis usually refers to the execution time of a single thread, task or process. However, on modern hardware, especially multi-core, other tasks in the system will impact the WCET of a given task if they share caches, memory lines and other hardware features. Further, task scheduling events such as blocking or interruption should be considered in WCET analysis if they can occur in a particular system. Therefore, it is important to consider the context in which WCET analysis is applied.
There are many automated approaches to calculating WCET beyond the manual techniques above. These include:
A static WCET tool attempts to estimate WCET by examining the computer software without executing it directly on the hardware. Static analysis techniques have dominated research in the area since the late 1980s, although in an industrial setting, end-to-end measurements approaches were the standard practice.
Static analysis tools work at a high level to determine the structure of a program's task, working either on a piece of source code or on a disassembled binary executable. They also work at a low level, using timing information about the real hardware that the task will execute on, with all its specific features. By combining those two kinds of analysis, the tool attempts to give an upper bound on the time required to execute a given task on a given hardware platform.
At the low level, static WCET analysis is complicated by the presence of architectural features that improve the average-case performance of the processor: instruction/data caches, branch prediction and instruction pipelines, for example. It is possible, but increasingly difficult, to determine tight WCET bounds if these modern architectural features are taken into account in the timing model used by the analysis.
Certification authorities such as theEuropean Aviation Safety Agency, therefore, rely on model validation suites.[citation needed]
Static analysis has resulted in good results for simpler hardware, however a possible limitation of static analysis is that the hardware (the CPU in particular) has reached a complexity which is extremely hard to model. In particular, the modelling process can introduce errors from several sources: errors in chip design, lack of documentation, errors in documentation, errors in model creation; all leading to cases where the model predicts a different behavior to that observed on real hardware. Typically, where it is not possible to accurately predict a behavior, a pessimistic result is used, which can lead to the WCET estimate being much larger than anything achieved at run-time.
Obtaining tight static WCET estimation is particularly difficult on multi-core processors.
There are a number of commercial and academic tools that implement various forms of static analysis.
Measurement-based and hybrid approaches usually try to measure the execution times of short code segments on the real hardware, which are then combined in a higher level analysis. Tools take into account the structure of the software (e.g. loops, branches), to produce an estimate of the WCET of the larger program. The rationale is that it's hard to test the longest path in complex software, but it is easier to test the longest path in many smaller components of it. A worst case effect needs only to be seen once during testing for the analysis to be able to combine it with other worst case events in its analysis.
Typically, the small sections of software can be measured automatically using techniques such as instrumentation (adding markers to the software) or with hardware support such as debuggers, and CPU hardware tracing modules. These markers result in a trace of execution, which includes both the path taken through the program and the time at which different points were executed. The trace is then analyzed to determine the maximum time that each part of the program has ever taken to execute, what the maximum observed iteration time of each loop is and whether there are any parts of the software that are untested (Code coverage).
Measurement-based WCET analysis has resulted in good results for both simple and complex hardware, although like static analysis it can suffer excessive pessimism in multi-core situations, where the impact of one core on another is hard to define. A limitation of measurement is that it relies on observing the worst-case effects during testing (although not necessarily at the same time). It can be hard to determine if the worst case effects have necessarily been tested.
There are a number of commercial and academic tools that implement various forms of measurement-based analysis.
The most active research groups are in the USA (University of Michigan), Sweden (Mälardalen, Linköping), Germany (Saarbrücken, Dortmund, Braunschweig), France (Toulouse, Saclay, Rennes), Austria (Vienna), UK (University of York and Rapita Systems Ltd), Italy (Bologna), Spain (Cantabria, Valencia), and Switzerland (Zurich). Recently, the topic of code-level timing analysis has found more attention outside of Europe by research groups in the US (North Carolina, Florida), Canada, Australia, Bangladesh (MBI Lab and RDS), Saudi Arabia (UQU, HISE Lab), Singapore and India (IIT Madras, IISc Bangalore).
The first international WCET Tool Challenge took place during the autumn of 2006. It was organized by the University of Mälardalen and sponsored by the ARTIST2 Network of Excellence on Embedded Systems Design. The aim of the Challenge was to inspect and to compare different approaches in analyzing the worst-case execution time. All available tools and prototypes able to determine safe upper bounds for the WCET of tasks participated. The final results[2] were presented in November 2006 at the ISoLA 2006 International Symposium in Paphos, Cyprus.
A second Challenge took place in 2008.[3]
|
https://en.wikipedia.org/wiki/Worst-case_execution_time
|
In computer science, the dining philosophers problem is an example problem often used in concurrent algorithm design to illustrate synchronization issues and techniques for resolving them.
It was originally formulated in 1965 by Edsger Dijkstra as a student exam exercise, presented in terms of computers competing for access to tape drive peripherals.
Soon after, Tony Hoare gave the problem its present form.[1][2][3][4]
Five philosophers dine together at the same table. Each philosopher has their own plate at the table. There is a fork between each pair of adjacent plates. The dish served is a kind of spaghetti which has to be eaten with two forks. Each philosopher can only alternately think and eat. Moreover, a philosopher can only eat their spaghetti when they have both a left and a right fork. Thus two forks will only be available when their two nearest neighbors are thinking, not eating. After an individual philosopher finishes eating, they will put down both forks.
The problem is how to design a regimen (a concurrent algorithm) such that no philosopher will starve; i.e., each can forever continue to alternate between eating and thinking, assuming that no philosopher can know when others may want to eat or think (an issue of incomplete information).
The problem was designed to illustrate the challenges of avoidingdeadlock, a system state in which no progress is possible. To see that a proper solution to this problem is not obvious, consider a proposal in which each philosopher is instructed to behave as follows:
With these instructions, the situation may arise where each philosopher holds the fork to their left; in that situation, they will all be stuck forever, waiting for the other fork to be available: it is a deadlock.
Resource starvation, mutual exclusion and livelock are other types of sequence and access problems.
These four conditions are necessary for a deadlock to occur: mutual exclusion (no fork can be simultaneously used by multiple philosophers), resource holding (the philosophers hold one fork while waiting for the second), non-preemption (no philosopher can take a fork from another), and circular wait (each philosopher may be waiting on the philosopher to their left). A solution must negate at least one of these four conditions. In practice, negating mutual exclusion or non-preemption can give a valid solution, but most theoretical treatments assume those conditions are non-negotiable and instead attack resource holding or circular wait (often both).
Dijkstra's solution negates resource holding: the philosophers atomically pick up both forks or wait, never holding exactly one fork outside of a critical section. To accomplish this, Dijkstra's solution uses one mutex, one semaphore per philosopher and one state variable per philosopher. This solution is more complex than the resource hierarchy solution.[5][4] This is a C++20 version of Dijkstra's solution with changes by Andrew S. Tanenbaum:
The function test() and its use in take_forks() and put_forks() make the Dijkstra solution deadlock-free.
This solution negates circular waiting by assigning a partial order to the resources (the forks, in this case), and establishes the convention that all resources will be requested in order, and that no two resources unrelated by order will ever be used by a single unit of work at the same time. Here, the resources (forks) will be numbered 1 through 5 and each unit of work (philosopher) will always pick up the lower-numbered fork first, and then the higher-numbered fork, from among the two forks he plans to use. The order in which each philosopher puts down the forks does not matter. In this case, if four of the five philosophers simultaneously pick up their lower-numbered forks, only the highest-numbered fork will remain on the table, so the fifth philosopher will not be able to pick up any fork. Moreover, only one philosopher will have access to that highest-numbered fork, so he will be able to eat using two forks. This can intuitively be thought of as having one "left-handed" philosopher at the table, who – unlike all the other philosophers – takes his fork from the left first.
While the resource hierarchy solution avoids deadlocks, it is not always practical, especially when the list of required resources is not completely known in advance. For example, if a unit of work holds resources 3 and 5 and then determines it needs resource 2, it must release 5, then 3 before acquiring 2, and then it must re-acquire 3 and 5 in that order. Computer programs that access large numbers of database records would not run efficiently if they were required to release all higher-numbered records before accessing a new record, making the method impractical for that purpose.[2]
The resource hierarchy solution is not fair. If philosopher 1 is slow to take a fork, and if philosopher 2 is quick to think and pick its forks back up, then philosopher 1 will never get to pick up both forks. A fair solution must guarantee that each philosopher will eventually eat, no matter how slowly that philosopher moves relative to the others.
The following source code is a C++11 implementation of the resource hierarchy solution for five philosophers. The sleep_for() function simulates the time normally spent with business logic.[6]
For GCC, compile with g++ -std=c++11 -pthread (the exact command in the original listing is not reproduced here; these are the generic flags needed for C++11 threads).
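The C++ listing itself is not reproduced here. As an illustration of the same fork-ordering rule, here is a minimal Python sketch (function and variable names are my own, not the article's): each philosopher sorts his two fork indices and always locks the lower-numbered one first, so a cycle of waiting philosophers cannot form and the run always terminates.

```python
import threading

def dine(philosophers=5, meals_each=3):
    """Resource hierarchy solution: every philosopher acquires the
    lower-numbered of his two forks first, which rules out deadlock."""
    forks = [threading.Lock() for _ in range(philosophers)]
    eaten = [0] * philosophers

    def philosopher(i):
        # Philosopher i uses forks i and (i + 1) % philosophers,
        # always taken in ascending numerical order.
        first, second = sorted((i, (i + 1) % philosophers))
        for _ in range(meals_each):
            with forks[first]:
                with forks[second]:
                    eaten[i] += 1  # "eat"; business logic would go here

    threads = [threading.Thread(target=philosopher, args=(i,))
               for i in range(philosophers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(eaten)

print(dine())  # 5 philosophers x 3 meals each = 15; never deadlocks
```

Sorting the two indices is the entire trick; making every philosopher take "left fork then right fork" instead reintroduces the possibility of deadlock.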
Another approach is to guarantee that a philosopher can only pick up both forks or none, by introducing an arbitrator that breaks the circular wait, e.g., a waiter. In order to pick up the forks, a philosopher must ask permission of the waiter. The waiter gives permission to only one philosopher at a time until the philosopher has picked up both of his forks. Putting down a fork is always allowed. The waiter can be implemented as a mutex.
In addition to introducing a new central entity (the waiter), this approach can result in reduced parallelism: if a philosopher is eating and one of his neighbors is requesting the forks, all other philosophers must wait until this request has been fulfilled even if forks for them are still available.
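A minimal sketch of the waiter in Python (again with illustrative names): permission to pick up forks is a single mutex, held only while acquiring both forks, so fork pick-up is serialized while eating itself can still overlap.

```python
import threading

def dine_with_waiter(philosophers=5, meals_each=3):
    """Arbitrator solution: a philosopher may pick up forks only while
    holding the waiter mutex; putting forks down needs no permission."""
    forks = [threading.Lock() for _ in range(philosophers)]
    waiter = threading.Lock()
    eaten = [0] * philosophers

    def philosopher(i):
        left, right = i, (i + 1) % philosophers
        for _ in range(meals_each):
            with waiter:              # ask the waiter for permission
                forks[left].acquire()
                forks[right].acquire()
            eaten[i] += 1             # eat without blocking the waiter
            forks[left].release()     # putting down is always allowed
            forks[right].release()

    threads = [threading.Thread(target=philosopher, args=(i,))
               for i in range(philosophers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(eaten)
```

Note that the waiter is released before eating; holding it for the whole meal would serialize all eating and remove the remaining parallelism entirely.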
A solution presented by William Stallings[7] is to allow a maximum of n − 1 philosophers to sit down at any time. The last philosopher would have to wait (for example, using a semaphore) for someone to finish dining before he "sits down" and requests access to any fork. This negates circular wait, guaranteeing at least one philosopher may always acquire both forks, allowing the system to make progress.
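The n − 1 limit can be sketched with a counting semaphore (Python, illustrative names): at most n − 1 philosophers hold a "seat" at once, so by the pigeonhole argument at least one seated philosopher can take both of his forks.

```python
import threading

def dine_limited(philosophers=5, meals_each=3):
    """Stallings-style solution: a semaphore admits at most n - 1
    philosophers to the table, so one of them can always eat."""
    forks = [threading.Lock() for _ in range(philosophers)]
    seats = threading.BoundedSemaphore(philosophers - 1)
    eaten = [0] * philosophers

    def philosopher(i):
        left, right = i, (i + 1) % philosophers
        for _ in range(meals_each):
            with seats:                  # wait for a free seat
                with forks[left]:        # safe now: at most n - 1
                    with forks[right]:   # competitors for n forks
                        eaten[i] += 1    # eat

    threads = [threading.Thread(target=philosopher, args=(i,))
               for i in range(philosophers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(eaten)
```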
In 1984, K. Mani Chandy and J. Misra[8] proposed a different solution to the dining philosophers problem to allow for arbitrary agents (numbered P1, ..., Pn) to contend for an arbitrary number of resources, unlike Dijkstra's solution. It is also completely distributed and requires no central authority after initialization. However, it violates the requirement that "the philosophers do not speak to each other" (due to the request messages).
This solution also allows for a large degree of concurrency, and will solve an arbitrarily large problem.
It also solves the starvation problem. The clean/dirty labels act as a way of giving preference to the most "starved" processes, and a disadvantage to processes that have just "eaten". One could compare their solution to one where philosophers are not allowed to eat twice in a row without letting others use the forks in between. Chandy and Misra's solution is more flexible than that, but has an element tending in that direction.
In their analysis, they derive a system of preference levels from the distribution of the forks and their clean/dirty states. They show that this system may describe a directed acyclic graph, and if so, the operations in their protocol cannot turn that graph into a cyclic one. This guarantees that deadlock cannot occur by negating circular waiting. However, if the system is initialized to a perfectly symmetric state, like all philosophers holding their left side forks, then the graph is cyclic at the outset, and their solution cannot prevent a deadlock. Initializing the system so that philosophers with lower IDs have dirty forks ensures the graph is initially acyclic.
|
https://en.wikipedia.org/wiki/Dining_philosophers_problem
|
The Unix command fuser is used to show which processes are using a specified computer file, file system, or Unix socket.
For example, to check process IDs and users accessing a USB drive mounted at /mnt/usb, one might run fuser -m -u /mnt/usb (-m names all processes accessing the filesystem containing the given file, -u appends each process's user name).
The command displays the process identifiers (PIDs) of processes using the specified files or file systems. In the default display mode, each PID is followed by a letter denoting the type of access (for example, c for current directory and e for executable being run).
Only the PIDs are written to standard output. Additional information is written to standard error. This makes it easier to process the output with computer programs.
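fuser itself queries kernel structures, but the gist of "which PIDs have this file open" can be sketched on Linux by scanning /proc. This is a simplified illustration, not fuser's actual implementation: it inspects only open file descriptors, ignoring memory maps, working directories, and sockets.

```python
import os

def pids_using(path):
    """Return the PIDs of processes that have `path` open.
    A simplified, Linux-only sketch of what fuser reports."""
    target = os.path.realpath(path)
    pids = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = os.path.join("/proc", pid, "fd")
        try:
            fds = os.listdir(fd_dir)
        except OSError:          # process exited, or permission denied
            continue
        for fd in fds:
            try:
                if os.path.realpath(os.path.join(fd_dir, fd)) == target:
                    pids.append(int(pid))
                    break
            except OSError:      # descriptor vanished mid-scan
                continue
    return pids
```

For instance, pids_using("/var/log/syslog") would list any process currently holding that log file open, assuming the caller has permission to read the relevant /proc entries.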
The command can also be used to check what processes are using a network port, e.g. fuser -v -n tcp 80 to see which process is listening on TCP port 80 (the -n option selects a namespace such as tcp or udp).
The command returns a non-zero code if none of the files are accessed or in case of a fatal error. If at least one access has succeeded, fuser returns zero.
The output of "fuser" may be useful in diagnosing "resource busy" messages arising when attempting tounmountfilesystems.
POSIXdefines the following options:[1]
psmisc adds the following options, among others:[2]
fuser – Shell and Utilities Reference, The Single UNIX Specification, Version 5 from The Open Group
|
https://en.wikipedia.org/wiki/Fuser_(Unix)
|
lsof is a command meaning "list open files", which is used in many Unix-like systems to report a list of all open files and the processes that opened them. This open source utility was developed and supported by Victor A. Abell, the retired Associate Director of the Purdue University Computing Center. It works in and supports several Unix flavors.[4]
A replacement for Linux, lsfd, is included in util-linux.[5]
In 1985, Cliff Spencer publishes the ofiles command. Its man page says: "ofiles – who has a file open [...] displays the owner and id of any process accessing a specified device". Spencer compiled it for 4.2BSD and ULTRIX.[6] Moreover, in the newsgroup net.unix-wizards, he further remarks:[7]
With all the chatter about dismounting active file systems,
I have posted my program to indicate who is using
a particular filesystem, "ofiles" to net.sources.
In 1988, the command fstat ("file status") appears as part of the 4.3BSD-Tahoe release. Its man page says:[8]
fstat identifies open files. A file is considered open if a process has it open, if it is the working directory for a process, or if it is an active pure text file. If no options are specified, fstat reports on all open files.
In 1989, in comp.sources.unix, Vic Abell publishes ports of the ofiles and fstat commands from 4.3BSD-Tahoe to "DYNIX 3.0.1[24] for Sequent Symmetry and Balance, SunOS 4.0 and ULTRIX 2.2".[9][10] Various people had evolved and ported ofiles over the years. Abell contrasted the commands as follows:[10]
Fstat is similar to the ofiles program which I recently submitted. Like
ofiles, fstat identifies open files. It's orientation differs slightly
from that of ofiles: ofiles starts with a file name and paws through the
proc and user structures to identify the file; fstat reads all the proc
and user structures, displaying information in all files, optionally
applying a few filters to the output (including a single file name filter.)
In combination with netstat -aA and grep, fstat will identify the process
associated with a network connection, just as will ofiles.
In 1991, Vic Abell publishes lsof version 1.0 to comp.sources.unix. He notes:[1]
Lsof (for LiSt Open Files) lists files opened by processes on selected
Unix systems. It is my answer to those who regularly ask me when I am
going to make fstat (comp.sources.unix volume 18, number 107) or ofiles
(volume 18, number 57) available on SunOS 4.1.1 or the like.
Lsof is a complete redesign of the fstat/ofiles series, based on the SunOS vnode model. Thus, it has been tested on AIX 3.1.[357], HP-UX [78].x, NeXTStep 2.[01], Sequent Dynix 3.0.12 and 3.1.2, and SunOS 4.1 and 4.1.1.
Using available kernel access methods, such as nlist() and kvm_read(),
lsof reads process table entries, user areas and file pointers to reach
the underlying structures that describe files opened by processes.
In 2018, Vic Abell publishes lsof version 4.92. The same year, he initiates the transfer of responsibility. He writes:[11]
I will reach 80 years of age later this year and I think it's time for me to end my work on general lsof revision releases.
The lsof code is put on GitHub and maintenance is transferred.[11][12]
Open files in the system include disk files, named pipes, network sockets and devices opened by all processes. One use for this command is when a disk cannot be unmounted because (unspecified) files are in use. The listing of open files can be consulted (suitably filtered if necessary) to identify the process that is using the files.
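The per-process view that lsof presents can likewise be sketched on Linux from /proc. This is a simplified illustration only; the real tool also reports sockets, memory maps, current directories, and per-descriptor status.

```python
import os

def open_files(pid):
    """Return the paths behind the open file descriptors of `pid`.
    A Linux-only sketch of the core of lsof's per-process listing."""
    fd_dir = os.path.join("/proc", str(pid), "fd")
    paths = []
    for fd in os.listdir(fd_dir):
        try:
            # Each entry is a symlink to the open file (or a
            # pseudo-path such as "socket:[12345]" for sockets).
            paths.append(os.readlink(os.path.join(fd_dir, fd)))
        except OSError:   # descriptor closed while we were iterating
            continue
    return paths
```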
To view the port associated with a daemon, the output of lsof -i -P can be filtered for the daemon's name, for example by piping it through grep sendmail.
From the above one can see that "sendmail" is listening on its standard port of "25".
One can also list Unix sockets by using lsof -U.
The lsof output describes:
For a complete list of options, see the Lsof(8) Linux manual page.[13]
|
https://en.wikipedia.org/wiki/Lsof
|
A File Control Block (FCB) is a file system structure in which the state of an open file is maintained. A FCB is managed by the operating system, but it resides in the memory of the program that uses the file, not in operating system memory. This allows a process to have as many files open at one time as it wants, provided it can spare enough memory for an FCB per file.
The FCB originates from CP/M and is also present in most variants of DOS, though only as a backward compatibility measure in MS-DOS versions 2.0 and later. A full FCB is 36 bytes long; in early versions of CP/M, it was 33 bytes. This fixed size, which could not be increased without breaking application compatibility, led to the FCB's eventual demise as the standard method of accessing files.
The meanings of several of the fields in the FCB differ between CP/M and DOS, and also depending on what operation is being performed. The following fields have consistent meanings:[1]
The 20-byte-long field starting at offset 0x0C contained fields which (among others) provided further information about the file:[2]
Further values were used by newer versions of DOS until new information could no longer fit in these 20 bytes. Some preceding "negative offset" bytes were squeezed from reserved spaces in the CP/M Zero Page and the DOS Program Segment Prefix for storing file attributes.[1]
In CP/M, 86-DOS and PC DOS 1.x/MS-DOS 1.xx, the FCB was the only method of accessing files. Under DOS, a few INT 21h subfunctions provided the interface to operate on files using the FCB.[1][3][4] When, with MS-DOS 2, preparations were made to support multiple processes or users,[3][4] filesystems other than FAT,[3][4] and file sharing[4] over networks in the future, FCBs were felt to be too small to hold the extra data required for such features[4] and were therefore seen as inadequate for various future expansion paths.[3] They also did not provide a field to specify subdirectories.[3] Exposing file-system-related data to user space was also seen as a security risk.[4] FCBs were thus superseded by file handles, as used on UNIX and its derivatives.[3] File handles are simply consecutive integer numbers associated with specific open files.
If a program uses the newer file handle API to open a file, the operating system will manage its internal data structure associated with that file in its own memory area. This has the great advantage that these structures can grow in size in later operating system versions without breaking compatibility with application programs; its disadvantage is that, given the rather simplistic memory management of DOS, space for as many of these structures as the most "file-hungry" program is likely to use has to be reserved at boot time and cannot be used for any other purpose while the computer is running. Such memory reservation is done using the FILES= directive in the CONFIG.SYS file. This problem does not occur with FCBs in DOS 1 or in CP/M, since the operating system stores all that it needs to know about an open file inside the FCB and thus does not need to use any per-file memory in operating system memory space. When using FCBs in MS-DOS 3 or later, the FCB format depends on whether SHARE.EXE is loaded and whether the FCB refers to a local or remote file, and often refers to an SFT entry. Because of this, the number of FCBs which can be kept open at once in DOS 3 or higher is limited as well, usually to 4; using the FCBS= directive in the CONFIG.SYS file, it may be increased beyond that number if necessary. Under DR-DOS, both FILES and FCBS come from the same internal pool of available handle structures and are assigned dynamically as needed.[5]
FCBs were supported in all versions of MS-DOS and Windows until the introduction of the FAT32 filesystem. Windows 95, Windows 98 and Windows Me do not support the use of FCBs on FAT32 drives due to its 32-bit cluster numbers,[4] except to read the volume label. This caused some old DOS applications, including WordStar, to fail under these versions of Windows.
The FCB interface does not work properly on Windows NT, 2000, etc. either – WordStar does not function properly on these operating systems. The DOS emulators DOSEMU and DOSBox implement the FCB interface properly, thus they are a way to run older DOS programs that need FCBs on modern operating systems.
A companion data structure used together with the FCB was the Disk Transfer Area (DTA).[2] This is the name given to the buffer where file contents (records) would be read into/written from. File access functions in DOS that used the FCB assumed a fixed location for the DTA, initially pointing to a part of the PSP (see next section); this location could be changed by calling a DOS function, with subsequent file accesses implicitly using the new location.
With the deprecation of the FCB method, the new file access functions which used file handles also provided a means to specify a memory buffer for file contents with every function call, such that maintaining concurrent, independent buffers (either for different files or for the same file) became much more practical.
Every DOS executable started from the shell (COMMAND.COM) was provided with a pre-filled 256-byte long data structure called the Program Segment Prefix (PSP). Relevant fields within this structure include:[2]
This data structure could be found at the beginning of the data segment whose address was provided by DOS at program start in the DS and ES segment registers. Besides providing the program's command line verbatim at address 0x81, DOS also tried to construct two FCBs corresponding to the first two words in the command line, the purpose being to save work for the programmer in the common case where these words were filenames to operate on. Since these FCBs remained unopened, no problem would ensue even if these command line words did not refer to files.
The initial address for the DTA was set to overlay the area in the PSP (at address 0x80) where the command line arguments were stored, such that a program needed to parse this area for command line arguments before invoking DOS functions that made use of the DTA (such as reading in a file record), unless the program took care to change the address of the DTA to some other memory region (or not use the DTA/FCB functions altogether, which soon became deprecated in favour of file handles).
|
https://en.wikipedia.org/wiki/File_Control_Block
|
Extended file attributes are file system features that enable users to associate computer files with metadata not interpreted by the filesystem, whereas regular attributes have a purpose strictly defined by the filesystem (such as permissions or records of creation and modification times). Unlike forks, which can usually be as large as the maximum file size, extended attributes are usually limited in size to a value significantly smaller than the maximum file size. Typical uses include storing the author of a document, the character encoding of a plain-text document, or a checksum, cryptographic hash or digital certificate, and discretionary access control information.
In Unix-like systems, extended attributes are usually abbreviated as xattr.[1]
In AIX, the JFS2 v2 filesystem supports extended attributes, which are accessible using the getea command.[2] The getea,[3] setea,[4] listea,[5] statea,[6] and removeea[7] APIs support fetching, setting, listing, getting information about, and removing extended attributes.
In the now-defunct BeOS and successors like Haiku, extended file attributes are widely used in base and third-party programs.
The Be File System allows the indexing and querying of attributes, essentially giving the filesystem database-like characteristics. The uses of extended attributes in Be-like systems are varied: for example, Tracker and OpenTracker, the file managers of BeOS and Haiku respectively, both store the locations of file icons in attributes,[8] Haiku's "Mail" service stores all message content and metadata in extended file attributes,[9] and the MIME types of files are stored in their attributes. Extended file attributes can be viewed and edited in Be-like systems' GUI through the file manager, often Tracker or derivatives thereof.
In FreeBSD 5.0 and later, the UFS1, UFS2, and ZFS filesystems support extended attributes, using the extattr_[10] family of system calls. Any regular file may have a list of extended attributes. Each attribute consists of a name and the associated data. The name must be a null-terminated string, and exists in a namespace identified by a small-integer namespace identifier. Currently, two namespaces exist: user and system. The user namespace has no restrictions with regard to naming or contents. The system namespace is primarily used by the kernel for access control lists and mandatory access control.
In Linux, the ext2, ext3, ext4, JFS, Squashfs, UBIFS, Yaffs2, ReiserFS, Reiser4, XFS, Btrfs, OrangeFS, Lustre, OCFS2 1.6, ZFS, and F2FS[11] filesystems support extended attributes (abbreviated xattr) when enabled in the kernel configuration. Any regular file or directory may have extended attributes consisting of a name and associated data. The name must be a null-terminated string prefixed by a namespace identifier and a dot character. Currently, four namespaces exist: user, trusted, security and system. The user namespace has no restrictions with regard to naming or contents. The system namespace is primarily used by the kernel for access control lists. The security namespace is used by SELinux, for example.
Support for the extended attribute concept from a POSIX.1e draft[citation needed] that had been withdrawn[12] in 1997 was added to Linux around 2002.[13][14] As of 2016, they are not yet in widespread use by user-space Linux programs, but are used by Beagle, OpenStack Swift, Dropbox, KDE's semantic metadata framework (Baloo), Chromium, Wget, cURL, and Snapcraft.
The Linux kernel allows extended attributes to have names of up to 255 bytes and values of up to 64 KiB,[15] as do XFS and ReiserFS, but ext2/3/4 and btrfs impose much smaller limits, requiring all the attributes (names and values) of one file to fit in one "filesystem block" (usually 4 KiB). Per POSIX.1e,[citation needed] the names are required to start with one of security, system, trusted, and user plus a period. This defines the four namespaces of extended attributes.[16]
Extended attributes can be accessed and modified using the getfattr and setfattr commands from the attr package on most distributions.[17] The corresponding APIs are called getxattr and setxattr.
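On Linux, Python's os module wraps these calls directly, which makes for a compact illustration. The attribute name user.author below is just an example of a user-namespace name, and the helper returns None where the filesystem refuses xattrs, since support varies as noted above.

```python
import os

def tag_author(path, author):
    """Store a document author as a user-namespace extended attribute
    and read it back; None means xattrs are unavailable here."""
    if not hasattr(os, "setxattr"):   # non-Linux platforms
        return None
    try:
        os.setxattr(path, "user.author", author.encode("utf-8"))
        return os.getxattr(path, "user.author").decode("utf-8")
    except OSError:                   # filesystem rejected the xattr
        return None
```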
Mac OS X 10.4 and later support extended attributes by making use of the HFS+ filesystem Attributes File B*-tree feature, which allows for named forks. Although the named forks in HFS+ support arbitrarily large amounts of data through extents, the OS support for extended attributes only supports inline attributes, limiting their size to that which can fit within a single B*-tree node.[citation needed] Any regular file may have a list of extended attributes. HFS+ supports an arbitrary number of named forks, and it is unknown if macOS imposes any limit on the number of extended attributes.
Each attribute consists of a name and the associated data. The name is a null-terminated Unicode string. No namespace restrictions are present (making this an open xattr system) and the convention is to use a reverse DNS string (similar to Uniform Type Identifiers) as the attribute name.
macOS supports listing,[18] getting,[19] setting,[20] and removing[21] extended attributes from files or directories using a Linux-like API. From the command line, these abilities are exposed through the xattr utility.[22]
Since macOS 10.5, files originating from the web are marked with com.apple.quarantine via extended file attributes.[23] In some older versions of macOS (such as Mac OS X 10.6), user-space extended attributes were not preserved on save in common Cocoa applications (TextEdit, Preview, etc.).[citation needed]
Support for extended file attributes was removed from the OpenBSD source code in 2005 due to a lack of interest in Access Control Lists.[24]
In OS/2 version 1.2 and later, the High Performance File System was designed with extended attributes in mind, but support for them was also retro-fitted on the FAT filesystem of DOS.
For compatibility with other operating systems using a FAT partition, OS/2 attributes are stored inside a single file "EA DATA. SF" located in the root directory. This file is normally inaccessible when an operating system supporting extended attributes manages the disk, but can be freely manipulated under, for example, DOS. Files and directories having extended attributes use one or more clusters inside this file. The logical cluster number of the first used cluster is stored inside the owning file's or directory's directory entry.[25] These two bytes are used for other purposes on the FAT32 filesystem, and hence OS/2 extended attributes cannot be stored on this filesystem.
Parts of OS/2 version 2.0 and later, such as the Workplace Shell, use several standardized extended attributes (also called EAs) for purposes like identifying the filetype, comments, computer icons and keywords about the file.
Programs written in the interpreted language Rexx store an already parsed version of the code as an extended attribute, to allow faster execution.
Solaris version 9 and later allows files to have "extended attributes", which are actually forks; the maximum size of an "extended attribute" is the same as the maximum size of a file, and they are read and written in the same fashion as files. Internally, they are actually stored and accessed like normal files, so their names cannot contain "/" characters[26] and their ownership and permissions can differ from those of the parent file.
Version 4 of the Network File System supports extended attributes in much the same way as Solaris.
On Windows NT, limited-length extended attributes are supported by FAT,[25] HPFS, and NTFS. This was implemented as part of the OS/2 subsystem. They are notably used by the NFS server of the Interix POSIX subsystem in order to implement Unix-like permissions. The Windows Subsystem for Linux added in the Windows 10 Anniversary Update uses them for similar purposes, storing the Linux file mode, owner, device ID (if applicable), and file times in the extended attributes.[27]
Additionally, NTFS can store arbitrary-length extended attributes in the form of alternate data streams (ADS), a type of resource fork. Plugins for the file manager Total Commander, like NTFS Descriptions and QuickSearch eXtended, support filtering the file list by or searching for metadata contained in ADS.[28][29] NTFS-3G supports mapping ADS to extended attributes in FUSE; it also maps file attributes that way.[30]
|
https://en.wikipedia.org/wiki/Extended_file_attributes
|
Ext2Fsd (short for Ext2 File System Driver) is a free Installable File System driver written in C for the Microsoft Windows operating system family. It facilitates read and write access to the ext2, ext3 and ext4 file systems.
The driver can be installed on Windows 2000, Windows XP, Windows Server 2003, Windows Vista, Windows 7, Windows 8,[3] Windows 10, Windows Server 2008 and Windows Server 2008 R2.[1] Support for Windows NT was dropped in version 0.30.[4]
The program Ext2Mgr can optionally be installed in addition, to manage drive letters and similar settings. The application has effectively been abandonware since 2017; its author seemingly disappeared in August 2020.
The German computer magazine PC-WELT reported frequent program crashes in 2009. The program was not able to access ext3 partitions smoothly; this often led to a blue screen. Crashes of this type can lead to data loss, for example if data held in main memory has not yet been permanently stored. The program could only access ext2 partitions without errors.[5] In 2012, Computerwoche warned that access to ext3 partitions was "not harmless" and that data loss may occur.[6]
On November 2, 2017, a warning was issued with the release of version 0.69:
Don't use Ext2Fsd 0.68 or earlier versions with latest Ubuntu or Debian systems. Ext2Fsd 0.68 cannot process EXT4 with 64-BIT mode enabled, then it could corrupt your data. Very sorry for this disaster issue, I'm working on an improvement.[1]
While it is not clear whether v0.69 corrects this deficiency, users have reported[7] that Windows 10 prompts them to format the ext4 drive even with version 0.69. The known workaround is to convert the ext4 filesystem in question to 32-bit mode.[8]
|
https://en.wikipedia.org/wiki/Ext2Fsd
|
Next3 is a journaling file system for Linux based on ext3 which adds snapshot support while retaining compatibility with the ext3 on-disk format.[2][3] Next3 is implemented as open-source software, licensed under the GPL.
A snapshot is a read-only copy of the file system frozen at a point in time. Versioning file systems like Next3 can internally track old versions of files and make snapshots available through a special namespace.
An advantage of copy-on-write is that when Next3 writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained. Next3 snapshots are created quickly, since all the data composing the snapshot is already stored; they are also space efficient, since any unchanged data is shared among the file system and its snapshots.[2]
The traditional Linux Logical Volume Manager implementation of volume-level snapshots requires that storage space be allocated in advance. Next3 uses dynamically provisioned snapshots, meaning it does not require pre-allocation of storage space for snapshots, instead allocating space as it is needed. Storage space is conserved by sharing unchanged data among the file system and its snapshots.[4]
Since Next3 aims to be both forward and backward compatible with the earlier ext3, all of the on-disk structures are identical to those of ext3.[2] The file system can be mounted for read by existing ext3 implementations with no modification. Because of that, Next3, like ext3, lacks a number of features of more recent designs, such as extents.[citation needed]
When there are no snapshots, Next3 performance is equivalent to ext3 performance. With snapshots, there is a minor overhead per write of metadata block (copy-on-write) and a smaller overhead (~1%) per write of data block (move-on-write).[5]
As of 2011, Next4, a project for porting Next3 snapshot capabilities to the Ext4 file system, is mostly completed. The porting is attributed to members of the Pune Institute of Computer Technology (PICT) and the Chinese Academy of Sciences.[6]
|
https://en.wikipedia.org/wiki/Next3
|
Btrfs (pronounced as "better F S",[9] "butter F S",[13][14] "b-tree F S",[14] or "B.T.R.F.S.") is a computer storage format that combines a file system based on the copy-on-write (COW) principle with a logical volume manager (distinct from Linux's LVM), developed together. It was created by Chris Mason in 2007[15] for use in Linux, and since November 2013, the file system's on-disk format has been declared stable in the Linux kernel.[16]
Btrfs is intended to address the lack of pooling, snapshots, integrity checking, data scrubbing, and integral multi-device spanning in Linux file systems.[9] Mason, the principal Btrfs author, stated that its goal was "to let [Linux] scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what's being used and makes it more reliable".[17]
The core data structure of Btrfs – the copy-on-write B-tree – was originally proposed by IBM researcher Ohad Rodeh at a USENIX conference in 2007.[18] Mason, an engineer working on ReiserFS for SUSE at the time, joined Oracle later that year and began work on a new file system based on these B-trees.[19]
In 2008, the principal developer of the ext3 and ext4 file systems, Theodore Ts'o, stated that although ext4 has improved features, it is not a major advance; it uses old technology and is a stop-gap. Ts'o said that Btrfs is the better direction because "it offers improvements in scalability, reliability, and ease of management".[20] Btrfs also has "a number of the same design ideas that reiser3/4 had".[21]
Btrfs 1.0, with finalized on-disk format, was originally slated for a late-2008 release,[22] and was finally accepted into the Linux kernel mainline in 2009.[23] Several Linux distributions began offering Btrfs as an experimental choice of root file system during installation.[24][25][26]
In July 2011, Btrfs automatic defragmentation and scrubbing features were merged into version 3.0 of the Linux kernel mainline.[27] Besides Mason at Oracle, Miao Xie at Fujitsu contributed performance improvements.[28] In June 2012, Mason left Oracle for Fusion-io, which he left a year later with Josef Bacik to join Facebook. While at both companies, Mason continued his work on Btrfs.[29][19]
In 2012, two Linux distributions moved Btrfs from experimental to production or supported status: Oracle Linux in March,[30] followed by SUSE Linux Enterprise in August.[31]
In 2015, Btrfs was adopted as the default filesystem for SUSE Linux Enterprise Server (SLE) 12.[32]
In August 2017, Red Hat announced in the release notes for Red Hat Enterprise Linux (RHEL) 7.4 that it no longer planned to move Btrfs to a fully supported feature (it had been included as a "technology preview" since the RHEL 6 beta), noting that it would remain available in the RHEL 7 release series.[33] Btrfs was removed from RHEL 8 in May 2019.[34] RHEL moved from ext4 in RHEL 6 to XFS in RHEL 7.[35]
In 2020, Btrfs was selected as the default file system for Fedora 33 for desktop variants.[36]
As of version 6.0 of the Linux kernel, Btrfs implements the following features:[37][38][39]
Btrfs provides a clone operation that atomically creates a copy-on-write snapshot of a file. Such cloned files are sometimes referred to as reflinks, in light of the proposed associated Linux kernel system call.[56]
By cloning, the file system does not create a new link pointing to an existing inode; instead, it creates a new inode that initially shares the same disk blocks with the original file. As a result, cloning works only within the boundaries of the same Btrfs file system, but since version 3.6 of the Linux kernel it may cross the boundaries of subvolumes under certain circumstances.[57][58] The actual data blocks are not duplicated; at the same time, due to the copy-on-write (CoW) nature of Btrfs, modifications to any of the cloned files are not visible in the original file and vice versa.[59]
Cloning should not be confused with hard links, which are directory entries that associate multiple file names with a single file. While hard links can be taken as different names for the same file, cloning in Btrfs provides independent files that initially share all their disk blocks.[59][60]
Support for this Btrfs feature was added in version 7.5 of the GNU coreutils, via the --reflink option to the cp command.[61][62]
In addition to data cloning (FICLONE), Btrfs also supports out-of-band deduplication via FIDEDUPERANGE. This functionality allows two files with (even partially) identical data to share storage.[63][10]
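Whole-file cloning is exposed to programs as the FICLONE ioctl, the call behind cp --reflink. A hedged Python sketch: the ioctl number below is the standard Linux FICLONE value, and on filesystems without reflink support (ext4, for instance) the call simply fails, which the helper reports as False rather than treating as fatal.

```python
import fcntl

FICLONE = 0x40049409  # Linux _IOW(0x94, 9, int): clone src into dst

def reflink(src, dst):
    """Attempt a whole-file reflink clone of src into dst.
    Returns True on success, False where the filesystem does not
    support reflinks; dst is left truncated in that case."""
    with open(src, "rb") as s, open(dst, "wb") as d:
        try:
            fcntl.ioctl(d.fileno(), FICLONE, s.fileno())
            return True
        except OSError:
            return False
```

A production tool would fall back to an ordinary byte copy on failure, which is exactly what cp --reflink=auto does.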
A Btrfs subvolume can be thought of as a separate POSIX file namespace, mountable separately by passing subvol or subvolid options to the mount(8) utility. It can also be accessed by mounting the top-level subvolume, in which case subvolumes are visible and accessible as its subdirectories.[64]
Subvolumes can be created at any place within the file system hierarchy, and they can also be nested. Nested subvolumes appear as subdirectories within their parent subvolumes, similarly to the way a top-level subvolume presents its subvolumes as subdirectories. Deleting a subvolume is not possible until all subvolumes below it in the nesting hierarchy are deleted; as a result, top-level subvolumes cannot be deleted.[65]
Any Btrfs file system always has a default subvolume, which is initially set to be the top-level subvolume, and is mounted by default if no subvolume selection option is passed to mount. The default subvolume can be changed as required.[65]
A Btrfs snapshot is a subvolume that shares its data (and metadata) with some other subvolume, using Btrfs' copy-on-write capabilities, and modifications to a snapshot are not visible in the original subvolume. Once a writable snapshot is made, it can be treated as an alternate version of the original file system. For example, to roll back to a snapshot, a modified original subvolume needs to be unmounted and the snapshot needs to be mounted in its place. At that point, the original subvolume may also be deleted.[64]
The copy-on-write (CoW) nature of Btrfs means that snapshots are quickly created, while initially consuming very little disk space. Since a snapshot is a subvolume, creating nested snapshots is also possible. Taking snapshots of a subvolume is not a recursive process; thus, if a snapshot of a subvolume is created, every subvolume or snapshot that the subvolume already contains is mapped to an empty directory of the same name inside the snapshot.[64][65]
Taking snapshots of a directory is not possible, as only subvolumes can have snapshots. However, there is a workaround that involves reflinks spread across subvolumes: a new subvolume is created, containing cross-subvolume reflinks to the content of the targeted directory. Having that available, a snapshot of this new volume can be created.[57]
A subvolume in Btrfs is quite different from a traditional Logical Volume Manager (LVM) logical volume. With LVM, a logical volume is a separate block device, while a Btrfs subvolume is not and cannot be treated or used that way.[64] Making dd or LVM snapshots of Btrfs leads to data loss if either the original or the copy is mounted while both are on the same computer.[66]
Given any pair of subvolumes (or snapshots), Btrfs can generate a binary diff between them (by using the btrfs send command) that can be replayed later (by using btrfs receive), possibly on a different Btrfs file system. The send–receive feature effectively creates (and applies) a set of data modifications required for converting one subvolume into another.[50][67]
The send/receive feature can be used with regularly scheduled snapshots for implementing a simple form of file system replication, or for performing incremental backups.[50][67]
A quota group (or qgroup) imposes an upper limit on the space a subvolume or snapshot may consume. A new snapshot initially consumes no quota because its data is shared with its parent, but thereafter incurs a charge for new files and copy-on-write operations on existing files. When quotas are active, a quota group is automatically created with each new subvolume or snapshot. These initial quota groups are building blocks which can be grouped (with the btrfs qgroup command) into hierarchies to implement quota pools.[52]
Quota groups only apply to subvolumes and snapshots, while having quotas enforced on individual subdirectories, users, or user groups is not possible. However, workarounds are possible by using different subvolumes for all users or user groups that require a quota to be enforced.
As the result of having very little metadata anchored in fixed locations, Btrfs can warp to fit unusual spatial layouts of the backend storage devices. The btrfs-convert tool exploits this ability to do an in-place conversion of an ext2/3/4 or ReiserFS file system, by nesting the equivalent Btrfs metadata in its unallocated space—while preserving an unmodified copy of the original file system.[68]
The conversion involves creating a copy of the whole ext2/3/4 metadata, while the Btrfs files simply point to the same blocks used by the ext2/3/4 files. This makes the bulk of the blocks shared between the two filesystems before the conversion becomes permanent. Thanks to the copy-on-write nature of Btrfs, the original versions of the file data blocks are preserved during all file modifications. Until the conversion becomes permanent, only the blocks that were marked as free in ext2/3/4 are used to hold new Btrfs modifications, meaning that the conversion can be undone at any time (although doing so will erase any changes made after the conversion to Btrfs).[68]
All converted files are available and writable in the default subvolume of the Btrfs. A sparse file holding all of the references to the original ext2/3/4 filesystem is created in a separate subvolume, which is mountable on its own as a read-only disk image, allowing both original and converted file systems to be accessed at the same time. Deleting this sparse file frees up the space and makes the conversion permanent.[68]
In 4.x versions of the mainline Linux kernel, the in-place ext3/4 conversion was considered untested and rarely used.[68] However, the feature was rewritten from scratch in 2016 for btrfs-progs 4.6[48] and has been considered stable since then.
In-place conversion from ReiserFS was introduced in September 2017 with kernel 4.13.[69]
When creating a new Btrfs, an existing Btrfs can be used as a read-only "seed" file system.[70] The new file system will then act as a copy-on-write overlay on the seed, as a form of union mounting. The seed can later be detached from the Btrfs, at which point the rebalancer will simply copy over any seed data still referenced by the new file system before detaching. Mason has suggested this may be useful for a Live CD installer, which might boot from a read-only Btrfs seed on an optical disc, rebalance itself to the target partition on the install disk in the background while the user continues to work, then eject the disc to complete the installation without rebooting.[71]
In his 2009 interview, Mason stated that support for encryption was planned for Btrfs.[9] In the meantime, a workaround for combining encryption with Btrfs is to use a full-disk encryption mechanism such as dm-crypt/LUKS on the underlying devices and to create the Btrfs filesystem on top of that layer.
As of 2020, the developers were working to add a keyed hash such as HMAC (SHA256).[72]
Unix systems traditionally rely on "fsck" programs to check and repair filesystems. This functionality is implemented via the btrfs check program. Since version 4.0 this functionality has been deemed relatively stable. However, as of December 2022, the btrfs documentation suggests that its --repair option be used only on the advice of "a developer or an experienced user".[73] As of August 2022, the SLE documentation recommends using a Live CD, performing a backup, and using the repair option only as a last resort.[74]
There is another tool, named btrfs-restore, that can be used to recover files from an unmountable filesystem, without modifying the broken filesystem itself (i.e., non-destructively).[75][76]
In normal use, Btrfs is mostly self-healing and can recover from broken root trees at mount time, thanks to making periodic data flushes to permanent storage, by default every 30 seconds. Thus, isolated errors will cause a maximum of 30 seconds of filesystem changes to be lost at the next mount.[77] This period can be changed by specifying a desired value (in seconds) with the commit mount option.[78][79]
Ohad Rodeh's original proposal at USENIX 2007 noted that B+ trees, which are widely used as on-disk data structures for databases, could not efficiently allow copy-on-write-based snapshots because their leaf nodes were linked together: if a leaf was copied on write, its siblings and parents would have to be as well, as would their siblings and parents, and so on until the entire tree was copied. He suggested instead a modified B-tree (which has no leaf linkage), with a refcount associated to each tree node but stored in an ad hoc free map structure, and certain relaxations to the tree's balancing algorithms to make them copy-on-write friendly. The result would be a data structure suitable for a high-performance object store that could perform copy-on-write snapshots, while maintaining good concurrency.[18]
At Oracle later that year, Mason began work on a snapshot-capable file system that would use this data structure almost exclusively—not just for metadata and file data, but also recursively to track space allocation of the trees themselves. This allowed all traversal and modifications to be funneled through a single code path, against which features such as copy on write, checksumming and mirroring needed to be implemented only once to benefit the entire file system.[80]
Btrfs is structured as several layers of such trees, all using the same B-tree implementation. The trees store generic items sorted by a 136-bit key. The most significant 64 bits of the key are a unique object id. The middle eight bits are an item type field: its use is hardwired into code as an item filter in tree lookups. Objects can have multiple items of multiple types. The remaining (least significant) 64 bits are used in type-specific ways. Therefore, items for the same object end up adjacent to each other in the tree, grouped by type. By choosing certain key values, objects can further put items of the same type in a particular order.[80][4]
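The key ordering can be illustrated in Python; the object ids and type values below are made up, but the grouping behaviour matches the description above:

```python
# A Btrfs key sorts as (objectid, type, offset), most significant first,
# which is equivalent to comparing the packed 136-bit integer.
def make_key(objectid, item_type, offset):
    assert objectid < 2**64 and item_type < 2**8 and offset < 2**64
    return (objectid << 72) | (item_type << 64) | offset

# Hypothetical items: two objects, each with items of two types.
items = [
    make_key(257, 0x54, 10),
    make_key(256, 0x01, 0),
    make_key(257, 0x01, 0),
    make_key(256, 0x54, 5),
]
items.sort()  # integer order == (objectid, type, offset) lexicographic order

# All of object 256's items come before object 257's, grouped by type.
assert [k >> 72 for k in items] == [256, 256, 257, 257]
```

Because the type field sits between object id and offset, a range lookup over one object id walks all of that object's items, type by type, in a single contiguous scan.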
Interior tree nodes are simply flat lists of key-pointer pairs, where the pointer is the logical block number of a child node. Leaf nodes contain item keys packed into the front of the node and item data packed into the end, with the two growing toward each other as the leaf fills up.[80]
Within each directory, directory entries appear as directory items, whose least significant key bits are a CRC32C hash of their filename. Their data is a location key, or the key of the inode item it points to. Directory items together can thus act as an index for path-to-inode lookups, but are not used for iteration because they are sorted by their hash, effectively randomly permuting them. This means user applications iterating over and opening files in a large directory would generate many more disk seeks between non-adjacent files—a notable performance drain in other file systems with hash-ordered directories such as ReiserFS,[81] ext3 (with Htree-indexes enabled[82]) and ext4, all of which have TEA-hashed filenames. To avoid this, each directory entry has a directory index item, whose key value is set to a per-directory counter that increments with each new directory entry. Iteration over these index items thus returns entries in roughly the same order as stored on disk.
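The filename hash in question is CRC-32C (the Castagnoli polynomial, not zlib's CRC-32); a bit-at-a-time sketch, checked against the algorithm's standard test value:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """CRC-32C (Castagnoli), reflected, polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C.
assert crc32c(b"123456789") == 0xE3069283

# Hash order bears no relation to creation order, which is why readdir-style
# iteration uses the separate counter-keyed directory index items instead.
names = [b"a.txt", b"b.txt", b"c.txt"]
hash_order = sorted(names, key=crc32c)
```

Real implementations use table-driven or hardware (SSE4.2) CRC-32C; the loop above is only meant to pin down which polynomial is used.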
Files with hard links in multiple directories have multiple reference items, one for each parent directory. Files with multiple hard links in the same directory pack all of the links' filenames into the same reference item. This was a design flaw that limited the number of same-directory hard links to however many could fit in a single tree block. (On the default block size of 4 KiB, an average filename length of 8 bytes and a per-filename header of 4 bytes, this would be less than 350.) Applications which made heavy use of multiple same-directory hard links, such as git, GNUS, GMame and BackupPC, were observed to fail at this limit.[83] The limit was eventually removed[84] (and as of October 2012 has been merged[85] pending release in Linux 3.7) by introducing spillover extended reference items to hold hard link filenames which do not otherwise fit.
File data is kept outside the tree in extents, which are contiguous runs of disk data blocks. Extent blocks default to 4 KiB in size, have no headers and contain only (possibly compressed) file data. In compressed extents, individual blocks are not compressed separately; rather, the compression stream spans the entire extent.
Files have extent data items to track the extents which hold their contents. The item's key value is the starting byte offset of the extent. This makes for efficient seeks in large files with many extents, because the correct extent for any given file offset can be computed with just one tree lookup.
Snapshots and cloned files share extents. When a small part of such a large extent is overwritten, the resulting copy-on-write may create three new extents: a small one containing the overwritten data, and two large ones with unmodified data on either side of the overwrite. To avoid having to re-write unmodified data, the copy-on-write may instead create bookend extents, or extents which are simply slices of existing extents. Extent data items allow for this by including an offset into the extent they are tracking: items for bookends are those with non-zero offsets.[4]
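A toy model of bookend extents (the structures are heavily simplified; real extent data items carry more fields than the triples used here):

```python
# Each file extent item is modelled as (disk_extent, offset_into_extent, length).
disk = {"ext1": b"A" * 16}          # the original 16-byte extent
file_extents = [("ext1", 0, 16)]    # file initially maps the whole extent

# Overwrite bytes 4..8: CoW writes one new 4-byte extent and leaves the
# original untouched, referenced by two "bookend" items with offsets.
disk["ext2"] = b"B" * 4
file_extents = [
    ("ext1", 0, 4),   # bookend: head slice of the old extent
    ("ext2", 0, 4),   # the newly written data
    ("ext1", 8, 8),   # bookend: tail slice (non-zero offset)
]

def read_file(extents):
    return b"".join(disk[e][off:off + ln] for e, off, ln in extents)

assert read_file(file_extents) == b"AAAA" + b"BBBB" + b"AAAAAAAA"
assert disk["ext1"] == b"A" * 16   # the shared extent was never rewritten
```

A snapshot still referencing `ext1` in full is unaffected: only this file's extent data items changed, not the extent itself.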
The extent allocation tree acts as an allocation map for the file system. Unlike other trees, items in this tree do not have object ids. They represent regions of space: their key values hold the starting offsets and lengths of the regions they represent.
The file system divides its allocated space into block groups, which are variable-sized allocation regions that alternate between preferring metadata extents (tree nodes) and data extents (file contents). The default ratio of data to metadata block groups is 1:2. They are intended to use concepts of the Orlov block allocator to allocate related files together and resist fragmentation by leaving free space between groups. (Ext3 block groups, however, have fixed locations computed from the size of the file system, whereas those in Btrfs are dynamic and created as needed.) Each block group is associated with a block group item. Inode items in the file system tree include a reference to their current block group.[4]
Extent items contain a back-reference to the tree node or file occupying that extent. There may be multiple back-references if the extent is shared between snapshots. If there are too many back-references to fit in the item, they spill out into individual extent data reference items. Tree nodes, in turn, have back-references to their containing trees. This makes it possible to find which extents or tree nodes are in any region of space by doing a B-tree range lookup on a pair of offsets bracketing that region, then following the back-references. For relocating data, this allows an efficient upwards traversal from the relocated blocks to quickly find and fix all downwards references to those blocks, without having to scan the entire file system. This, in turn, allows the file system to efficiently shrink, migrate, and defragment its storage online.
The extent allocation tree, as with all other trees in the file system, is copy-on-write. Writes to the file system may thus cause a cascade whereby changed tree nodes and file data result in new extents being allocated, causing the extent tree itself to change. To avoid creating a feedback loop, extent tree nodes which are still in memory but not yet committed to disk may be updated in place to reflect new copied-on-write extents.
In theory, the extent allocation tree makes a conventional free-space bitmap unnecessary because the extent allocation tree acts as a B-tree version of a BSP tree. In practice, however, an in-memory red–black tree of page-sized bitmaps is used to speed up allocations. These bitmaps are persisted to disk (starting in Linux 2.6.37, via the space_cache mount option[86]) as special extents that are exempt from checksumming and copy-on-write.
CRC-32C checksums are computed for both data and metadata and stored as checksum items in a checksum tree. There is room for 256 bits of metadata checksums and up to a full node (roughly 4 KB or more) for data checksums. Btrfs has provisions for additional checksum algorithms to be added in future versions of the file system.[37][87]
There is one checksum item per contiguous run of allocated blocks, with per-block checksums packed end-to-end into the item data. If there are more checksums than can fit, they spill into another checksum item in a new leaf. If the file system detects a checksum mismatch while reading a block, it first tries to obtain (or create) a good copy of this block from another device – if internal mirroring or RAID techniques are in use.[88][89]
Btrfs can initiate an online check of the entire file system by triggering a file system scrub job that is performed in the background. The scrub job scans the entire file system for integrity and automatically attempts to report and repair any bad blocks it finds along the way.[88][90]
An fsync request commits modified data immediately to stable storage. fsync-heavy workloads (like a database or a virtual machine whose running OS fsyncs frequently) could potentially generate a great deal of redundant write I/O by forcing the file system to repeatedly copy-on-write and flush frequently modified parts of trees to storage. To avoid this, a temporary per-subvolume log tree is created to journal fsync-triggered copies on write. Log trees are self-contained, tracking their own extents and keeping their own checksum items. Their items are replayed and deleted at the next full tree commit or (if there was a system crash) at the next remount.
Block devices are divided into physical chunks of 1 GiB for data and 256 MiB for metadata.[91] Physical chunks across multiple devices can be mirrored or striped together into a single logical chunk. These logical chunks are combined into a single logical address space that the rest of the filesystem uses.
The chunk tree tracks this by storing each device therein as a device item and logical chunks as chunk map items, which provide a forward mapping from logical to physical addresses by storing their offsets in the least significant 64 bits of their key. Chunk map items can be one of several different types:
N is the number of block devices still having free space when the chunk is allocated. If N is not large enough for the chosen mirroring/mapping, then the filesystem is effectively out of space.
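The forward mapping the chunk tree provides can be sketched in Python; the chunk size, device names and offsets below are made-up illustration values, not real Btrfs defaults:

```python
import bisect

# Chunk map items keyed by logical start; each maps one logical chunk to a
# (device, physical offset) pair. Sizes are scaled down for illustration.
CHUNK = 1024
chunks = [            # (logical_start, device, physical_start)
    (0,         "devA", 4096),
    (CHUNK,     "devB", 0),
    (2 * CHUNK, "devA", 8192),
]
starts = [c[0] for c in chunks]

def logical_to_physical(logical):
    """Resolve a logical address via the chunk containing it."""
    i = bisect.bisect_right(starts, logical) - 1
    start, dev, phys = chunks[i]
    assert logical < start + CHUNK, "address not mapped"
    return dev, phys + (logical - start)

assert logical_to_physical(100) == ("devA", 4196)
assert logical_to_physical(CHUNK + 10) == ("devB", 10)
```

A mirrored (e.g. RAID1) chunk would simply map to several (device, offset) stripes instead of one; the lookup by logical start works the same way.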
Defragmentation, shrinking, and rebalancing operations require extents to be relocated. However, doing a simple copy-on-write of the relocating extent will break sharing between snapshots and consume disk space. To preserve sharing, an update-and-swap algorithm is used, with a special relocation tree serving as scratch space for affected metadata. The extent to be relocated is first copied to its destination. Then, by following backreferences upward through the affected subvolume's file system tree, metadata pointing to the old extent is progressively updated to point at the new one; any newly updated items are stored in the relocation tree. Once the update is complete, items in the relocation tree are swapped with their counterparts in the affected subvolume, and the relocation tree is discarded.[93]
All the file system's trees—including the chunk tree itself—are stored in chunks, creating a potential bootstrapping problem when mounting the file system. To bootstrap into a mount, a list of physical addresses of chunks belonging to the chunk and root trees is stored in the superblock.[94]
Superblock mirrors are kept at fixed locations:[95] 64 KiB into every block device, with additional copies at 64 MiB, 256 GiB and 1 PiB. When a superblock mirror is updated, its generation number is incremented. At mount time, the copy with the highest generation number is used. All superblock mirrors are updated in tandem, except in SSD mode, which alternates updates among mirrors to provide some wear levelling.
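Selecting a superblock at mount time then reduces to choosing the highest generation number among whichever mirrors fit on the device; a small sketch (the generation values are hypothetical):

```python
# Fixed superblock mirror offsets: 64 KiB, 64 MiB, 256 GiB, 1 PiB.
MIRROR_OFFSETS = [64 * 2**10, 64 * 2**20, 256 * 2**30, 2**50]

def mirrors_on_device(device_size):
    """Only mirrors whose fixed offset fits on the device exist."""
    return [off for off in MIRROR_OFFSETS if off < device_size]

def pick_superblock(found):
    """found: list of (offset, generation); use the newest copy."""
    return max(found, key=lambda m: m[1])

# A 100 MiB device only holds the first two mirrors.
assert mirrors_on_device(100 * 2**20) == [65536, 67108864]
assert pick_superblock([(65536, 41), (67108864, 42)]) == (67108864, 42)
```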
https://en.wikipedia.org/wiki/Btrfs
e2fsprogs (sometimes called the e2fs programs) is a set of utilities for maintaining the ext2, ext3 and ext4 file systems. Since those file systems are often the default for Linux distributions, it is commonly considered to be essential software.
Included with e2fsprogs, listed in ASCIIbetical order, are:
Many of these utilities are based on the libext2fs library.
Despite what its name might suggest, e2fsprogs works not only with ext2, but also with ext3 and ext4. Although ext3's journaling capability can reduce the need to use e2fsck, it is sometimes still necessary to help protect against kernel bugs or bad hardware.
As the userspace companion for the ext2, ext3, and ext4 drivers in the Linux kernel, the e2fsprogs are most commonly used with Linux. However, they have been ported to other systems, such as FreeBSD and Darwin.
https://en.wikipedia.org/wiki/E2fsprogs
Journaled File System (JFS) is a 64-bit journaling file system created by IBM. There are versions for the AIX, OS/2, eComStation, ArcaOS and Linux operating systems. The latter is available as free software under the terms of the GNU General Public License (GPL). HP-UX has another, different filesystem named JFS that is actually an OEM version of Veritas Software's VxFS.
In the AIX operating system, two generations of JFS exist, called JFS (JFS1) and JFS2 respectively.[1]
IBM's JFS was originally designed for 32-bit systems. JFS2 was designed for 64-bit systems.[2]
In other operating systems, such as OS/2 and Linux, only the second generation exists and is called simply JFS.[3] This should not be confused with JFS in AIX, which actually refers to JFS1.
IBM introduced JFS with the initial release of AIX version 3.1 in February 1990. This file system, now called JFS1 on AIX, was the premier file system for AIX over the following decade and was installed in thousands or millions of customers' AIX systems. Historically, the JFS1 file system is very closely tied to the memory manager of AIX,[1] which is a typical design for a file system supporting only one operating system. JFS was one of the first file systems to support journaling.
In 1995, work began to enhance the file system to be more scalable and to support machines that had more than one processor. Another goal was to have a more portable file system, capable of running on multiple operating systems. After several years of designing, coding, and testing, the new JFS was first shipped in OS/2 Warp Server for eBusiness in April 1999, and then in OS/2 Warp Client in October 2000. In December 1999, a snapshot of the original OS/2 JFS source was granted to the open source community and work began to port JFS to Linux. The first stable release of JFS for Linux appeared in June 2001.[3] The JFS for Linux project is maintained by a small group of contributors known as the JFS Core Team.[4] This released source also formed the basis of a re-port of the open-source JFS back to OS/2.
In parallel with this effort, some of the JFS development team returned to the AIX Operating System Development Group in 1997 and started to move this new JFS source base to the AIX operating system. In May 2001, a second journaled file system, Enhanced Journaled File System (JFS2), was made available for AIX 5L.[1][3]
Early in 2008 there was speculation that IBM was no longer interested in maintaining JFS and thus it should not be used in production environments.[5] However, Dave Kleikamp, a member of the IBM Linux Technology Center and JFS Core Team,[4] explained that they still follow changes in the Linux kernel and try to fix potential software bugs. He went on to add that certain distributions expect a larger resource commitment from them and opt not to support the filesystem.[6]
In 2012, TRIM command support for solid-state drives was added to JFS.[7]
JFS supports the following features.[8][9]
JFS is a journaling file system. Rather than adding journaling as an add-on feature, as in the ext3 file system, it was implemented from the start. The journal can be up to 128 MB. JFS journals metadata only, which means that metadata will remain consistent but user files may be corrupted after a crash or power loss. JFS's journaling is similar to XFS in that it only journals parts of the inode.[10]
JFS uses a B+ tree to accelerate lookups in directories. JFS can store 8 entries of a directory in the directory's inode before moving the entries to a B+ tree. JFS also indexes extents in a B+ tree.
JFS dynamically allocates space for disk inodes as necessary. Each inode is 512 bytes. 32 inodes are allocated in each 16 KiB extent.
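As a quick consistency check of these numbers, 32 inodes of 512 bytes exactly fill one 16 KiB inode extent:

```python
INODE_SIZE = 512          # bytes per JFS disk inode
INODES_PER_EXTENT = 32
EXTENT_SIZE = 16 * 1024   # 16 KiB inode extent

assert INODE_SIZE * INODES_PER_EXTENT == EXTENT_SIZE

def inode_extents_needed(n_inodes):
    """Inode extents JFS must allocate for n_inodes (allocated on demand)."""
    return -(-n_inodes // INODES_PER_EXTENT)  # ceiling division

assert inode_extents_needed(32) == 1
assert inode_extents_needed(33) == 2
```

Because allocation is on demand, a filesystem with few files never pays for a large static inode table, unlike classic ext2-style layouts.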
JFS allocates files as extents. An extent is a variable-length sequence of aggregate blocks, and may be located in several allocation groups. The extents are therefore indexed in a B+ tree for better performance when locating them.
Compression is supported only in JFS1 on AIX and uses a variation of the LZ algorithm. Because of high CPU usage and increased free-space fragmentation, compression is not recommended for use other than on a single-user workstation or off-line backup areas.
JFS normally applies read-shared, write-exclusive locking to files, which avoids data inconsistencies but imposes write serialization at the file level. The CIO option disables this locking. Applications such as relational databases which maintain data consistency themselves can use this option to largely eliminate filesystem overheads.[11]
JFS uses allocation groups. Allocation groups divide the aggregate space into chunks. This allows JFS to use resource allocation policies to achieve great I/O performance. The first policy is to try to cluster disk blocks and disk inodes for related data in the same AG in order to achieve good locality for the disk. The second policy is to distribute unrelated data throughout the file system in an attempt to minimize free-space fragmentation. When there is an open file JFS will lock the AG the file resides in and only allow the open file to grow. This reduces fragmentation as only the open file can write to the AG.
The superblock maintains information about the entire file system and includes the following fields:
In the Linux operating system, JFS is supported with the kernel module (since kernel version 2.4.18pre9-ac4) and the complementary userspace utilities packaged under the name JFSutils. Most Linux distributions support JFS unless it is specifically removed due to space restrictions, such as on live CDs.[citation needed]
According to benchmarks of the available filesystems for Linux, JFS is fast and reliable, with consistently good performance under different kinds of load.[12]
Actual usage of JFS in Linux is uncommon; however, JFS does have a niche role in Linux: it offers a case-insensitive mount option, unlike most other Linux file systems.[13]
There are also potential problems with JFS, such as its implementation of journal writes. They can be postponed until there is another trigger (potentially indefinitely), which can cause data loss over a theoretically infinite timeframe.[14]
https://en.wikipedia.org/wiki/JFS_(file_system)
Default file system used in various operating systems.
https://en.wikipedia.org/wiki/List_of_default_file_systems
Reiser4 is a computer file system, successor to the ReiserFS file system, developed from scratch by Namesys and sponsored by DARPA as well as Linspire. Reiser4 was named after its former lead developer Hans Reiser. As of 2021, the Reiser4 patch set is still being maintained,[3][4] but according to Phoronix, it is unlikely to be merged into mainline Linux without corporate backing.[5]
Some of the goals of the Reiser4 file system are:
Some of the more advanced Reiser4 features (such as user-defined transactions) are also not available because of a lack of a VFS API for them.
At present Reiser4 lacks a few standard file system features, such as an online repacker (similar to the defragmentation utilities provided with other file systems). The creators of Reiser4 say they will implement these later, or sooner if someone pays them to do so.[11]
Reiser4 uses B*-trees in conjunction with the dancing tree balancing approach, in which underpopulated nodes will not be merged until a flush to disk, except under memory pressure or when a transaction completes. Such a system also allows Reiser4 to create files and directories without having to waste time and space on fixed blocks.
As of 2004, synthetic benchmarks performed by Namesys in 2003 show that Reiser4 is 10 to 15 times faster than its most serious competitor, ext3, working on files smaller than 1 KiB. Namesys's benchmarks suggest it typically has twice the performance of ext3 for general-purpose filesystem usage patterns.[12] Other benchmarks from 2006 show Reiser4 being slower on many operations.[13] Benchmarks conducted in 2013 with Linux kernel version 3.10 show that Reiser4 is considerably faster in various tests compared to the in-kernel filesystems ext4, btrfs and XFS.[14]
Reiser4 has patches for Linux 2.6, 3.x, 4.x and 5.x,[15][3] but as of 2019, Reiser4 has not been merged into the mainline Linux kernel[3] and consequently is still not supported on many Linux distributions; however, its predecessor ReiserFS v3 has been widely adopted. Reiser4 is also available from Andrew Morton's -mm kernel sources, and from the Zen patch set. The Linux kernel developers claim that Reiser4 does not follow the Linux "coding style" because of its decision to use its own plugin system,[16] but Hans Reiser suggested the decision was made for political reasons.[17] The latest released Reiser4 kernel patches and tools can be downloaded from the Reiser4 project page at sourceforge.net.[4]
Hans Reiser was convicted of murder on April 28, 2008, leaving the future of Reiser4 uncertain. After his arrest, employees of Namesys were assured they would continue to work and that the events would not slow down the software development in the immediate future. In order to afford increasing legal fees, Hans Reiser announced on December 21, 2006, that he was going to sell Namesys;[18] as of March 26, 2008, it had not been sold, although the website was unavailable. In January 2008, Edward Shishkin, an employee of and programmer for Namesys, was quoted in a CNET interview saying, "Commercial activity of Namesys has stopped." Shishkin and others continued the development of Reiser4,[19] making source code available from Shishkin's web site,[20] later relocated to kernel.org.[21] Since 2008, Namesys employees have received 100% of their sponsored funding from DARPA.[22][23][24]
In 2010, Phoronix wrote that Edward Shishkin was exploring options to get Reiser4 merged into the Linux kernel mainline.[25] As of 2019, the file system is still being updated for new kernel releases, but has not been submitted for merging.[3] In 2015, Michael Larabel mentioned it is unlikely to happen without corporate backing,[26] and then he suggested in April 2019 that the main obstacle could be the renaming of Reiser4 to avoid reference to the initial author, who was convicted of murder.[3]
Shishkin announced a Reiser5 filesystem on December 31, 2019.[27]
https://en.wikipedia.org/wiki/Reiser4
XFS is a high-performance 64-bit journaling file system created by Silicon Graphics, Inc (SGI) in 1993.[7] It was the default file system in SGI's IRIX operating system starting with its version 5.3. XFS was ported to the Linux kernel in 2001; as of June 2014, XFS is supported by most Linux distributions; Red Hat Enterprise Linux uses it as its default file system.
XFS excels in the execution of parallel input/output (I/O) operations due to its design, which is based on allocation groups (a type of subdivision of the physical volumes in which XFS is used, also shortened to AGs). Because of this, XFS enables extreme scalability of I/O threads, file system bandwidth, and size of files and of the file system itself when spanning multiple physical storage devices. XFS ensures the consistency of data by employing metadata journaling and supporting write barriers. Space allocation is performed via extents with data structures stored in B+ trees, improving the overall performance of the file system, especially when handling large files. Delayed allocation assists in the prevention of file system fragmentation; online defragmentation is also supported.
Silicon Graphics began development of XFS[8] ("X" was meant to be filled in later but never was) in 1993 for its UNIX System V-based IRIX operating system. The file system was released under the GNU General Public License (GPL) in May 1999.[9]
A team led by Steve Lord at SGI ported XFS to Linux,[10] and first support by a Linux distribution came in 2001. This support gradually became available in almost all Linux distributions.[citation needed]
Initial support for XFS in the Linux kernel came through patches from SGI. It merged into the Linux kernel mainline for the 2.6 series, and separately merged in February 2004 into the 2.4 series in version 2.4.25,[11] making XFS almost universally available on Linux systems.[12] Gentoo Linux became the first Linux distribution to introduce an option for XFS as the default filesystem in mid-2002.[13]
FreeBSD added read-only support for XFS in December 2005, and in June 2006 introduced experimental write support. However, this was intended only as an aid in migration from Linux, not as a "main" file system. FreeBSD 10 removed support for XFS.[14]
In 2009, version 5.4 of the 64-bit Red Hat Enterprise Linux (RHEL) distribution contained the necessary kernel support for the creation and usage of XFS file systems, but lacked the corresponding command-line tools. The tools available from CentOS could serve that purpose, and Red Hat also provided them to RHEL customers on request.[15] RHEL 6.0, released in 2010, includes XFS support for a fee as part of Red Hat's "scalable file system add-on".[16] Oracle Linux 6, released in 2011, also includes an option for using XFS.[17]
RHEL 7.0, released in June 2014, uses XFS as its default file system,[18] including support for using XFS for the /boot partition, which previously was not practical due to bugs in the GRUB bootloader.[19]
Linux kernel 4.8 in August 2016 added a new feature, "reverse mapping". This is the foundation for a large set of planned features: snapshots, copy-on-write (COW) data, data deduplication, reflink copies, online data and metadata scrubbing, highly accurate reporting of data loss or bad sectors, and significantly improved reconstruction of damaged or corrupted filesystems. This work required changes to XFS's on-disk format.[20][21]
Linux kernel 5.10, released in December 2020, deprecated the older XFS v4 on-disk format in favour of v5. This is a hard break, since XFS v4 cannot be converted in place to XFS v5: data on partitions formatted with XFS v4 has to be backed up to another partition or medium and restored after the old partition is reformatted with XFS v5, which wipes all data on it. Support for XFS v4 is scheduled to be removed from the Linux kernel in September 2030.[22]
XFS v5 introduced "bigtime", which stores inode timestamps as a 64-bit nanosecond counter instead of the traditional 32-bit seconds counter. This postpones the previous Year 2038 problem until the year 2486.[5] It also introduced metadata checksums.
The Gentoo Handbook, Gentoo Linux's official installation manual, has recommended XFS as the "all-purpose all-platform filesystem" since 28 June 2023, succeeding Ext4.[23]
XFS is a 64-bit file system[24] and supports a maximum file system size of 8 exbibytes minus one byte (2⁶³ − 1 bytes), but limitations imposed by the host operating system can decrease this limit. 32-bit Linux systems limit the size of both the file and the file system to 16 tebibytes.
Journaling ensures the consistency of data in the file system despite power outages or system crashes. XFS provides journaling for file system metadata: file system updates are first written to a serial journal before the actual disk blocks are updated. The journal is a circular buffer of disk blocks that is not read during normal file system operation.
The XFS journal can be stored within the data section of the file system (as an internal log), or on a separate device to minimize disk contention.
In XFS, the journal primarily contains entries that describe the portions of the disk blocks changed by filesystem operations. Journal updates are performed asynchronously to avoid a performance penalty.
In the event of a system crash, file system operations that occurred immediately prior to the crash can be reapplied and completed as recorded in the journal, which is how data stored in XFS file systems remains consistent. Recovery is performed automatically the first time the file system is mounted after the crash. The speed of recovery is independent of the size of the file system, depending instead on the number of file system operations to be reapplied.
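The write-ahead principle behind this recovery scheme can be sketched in a few lines of Python. This is a toy model under invented names (JournaledStore and so on), not XFS's actual journal format: updates are appended to a serial log before the "disk" blocks are touched, so an idempotent replay after a crash restores consistency.

```python
# Minimal sketch of metadata journaling and crash recovery.
# All names and structures here are illustrative, not XFS's own.

class JournaledStore:
    def __init__(self):
        self.disk = {}      # block number -> bytes (the real storage)
        self.journal = []   # serial log of (block, new_value) records

    def write(self, block, value, crash_before_flush=False):
        self.journal.append((block, value))   # 1. journal first
        if crash_before_flush:
            return                            # crash: disk never updated
        self.disk[block] = value              # 2. then update the disk

    def recover(self):
        # Replay every journaled record; replaying already-applied
        # records is harmless because the writes are idempotent.
        for block, value in self.journal:
            self.disk[block] = value
        self.journal.clear()

store = JournaledStore()
store.write(1, b"inode-update")
store.write(2, b"dir-entry", crash_before_flush=True)  # simulated crash
assert store.disk.get(2) is None    # update lost from the disk...
store.recover()
assert store.disk[2] == b"dir-entry"  # ...but replayed from the journal
```

As in XFS, the cost of recovery here depends only on the number of journal records to replay, not on the size of the store.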
XFS file systems are internally partitioned into allocation groups, which are equally sized linear regions within the file system. Files and directories can span allocation groups. Each allocation group manages its own inodes and free space separately, providing scalability and parallelism so multiple threads and processes can perform I/O operations on the same file system simultaneously.
This architecture helps to optimize parallel I/O performance on systems with multiple processors and/or cores, as metadata updates can also be parallelized. The internal partitioning provided by allocation groups can be especially beneficial when the file system spans multiple physical devices, allowing optimal usage of throughput of the underlying storage components.
If an XFS file system is to be created on a striped RAID array, a stripe unit can be specified when the file system is created. This maximizes throughput by ensuring that data allocations, inode allocations and the internal log (the journal) are aligned with the stripe unit.
Blocks used in files stored on XFS file systems are managed with variable-length extents, where one extent describes one or more contiguous blocks. This can shorten the list of blocks considerably, compared to file systems that list all blocks used by a file individually.
Block-oriented file systems manage space allocation with one or more block-oriented bitmaps; in XFS, these structures are replaced with an extent-oriented structure consisting of a pair of B+ trees for each file system allocation group. One of the B+ trees is indexed by the length of the free extents, while the other is indexed by the starting block of the free extents. This dual indexing scheme allows for the highly efficient allocation of free extents for file system operations.
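As an illustration, the sketch below shows free space tracked as (start, length) extents rather than a per-block bitmap, and a best-fit lookup in the spirit of the length-indexed tree. Sorted Python lists stand in for the two B+ trees, and all names are invented for this example:

```python
# Hypothetical sketch of extent-based free-space management with a
# by-length lookup; sorted lists stand in for XFS's two B+ trees.
import bisect

free = [(0, 8), (20, 4), (40, 16)]  # free extents as (start, length)

def alloc_best_fit(length):
    """Find the smallest free extent that fits (by-length index)."""
    by_len = sorted(free, key=lambda e: e[1])
    idx = bisect.bisect_left([e[1] for e in by_len], length)
    if idx == len(by_len):
        return None                        # no extent long enough
    start, elen = by_len[idx]
    free.remove((start, elen))
    if elen > length:                      # keep the unused remainder
        free.append((start + length, elen - length))
    free.sort()                            # keep the by-start index ordered
    return (start, length)

# A 10-block request skips the 8- and 4-block extents and carves
# 10 blocks out of the 16-block extent starting at block 40.
assert alloc_best_fit(10) == (40, 10)
assert (50, 6) in free
```

A single (40, 10) pair describes ten contiguous blocks that a purely block-oriented scheme would have to record individually.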
The file system block size represents the minimum allocation unit. XFS allows file systems to be created with block sizes ranging between 512 bytes and 64 KB, allowing the file system to be tuned for the expected degree of usage. When many small files are expected, a small block size would typically maximize capacity, but for a system dealing mainly with large files, a larger block size can provide a performance efficiency advantage.
XFS makes use of lazy evaluation techniques for file allocation. When a file is written to the buffer cache, rather than allocating extents for the data, XFS simply reserves the appropriate number of file system blocks for the data held in memory. The actual block allocation occurs only when the data is finally flushed to disk. This improves the chance that the file will be written in a contiguous group of blocks, reducing fragmentation problems and increasing performance.
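The effect of delaying allocation until flush time can be sketched as follows; this is an invented toy model (DelallocFile and its fields are not XFS structures) showing why several small buffered writes can end up in one contiguous extent:

```python
# Hypothetical sketch of delayed allocation: writes only reserve a
# block count; a contiguous extent is chosen at flush time, once the
# final size of the pending data is known.

class DelallocFile:
    def __init__(self):
        self.reserved = 0          # blocks reserved, not yet placed
        self.extents = []          # (start, length) chosen at flush

    def write(self, nblocks):
        self.reserved += nblocks   # no on-disk placement decided yet

    def flush(self, next_free_block):
        # One contiguous extent covers everything buffered so far,
        # instead of one small allocation per write() call.
        self.extents.append((next_free_block, self.reserved))
        self.reserved = 0

f = DelallocFile()
for _ in range(4):
    f.write(2)                     # four small buffered writes
f.flush(next_free_block=100)
assert f.extents == [(100, 8)]     # a single 8-block contiguous extent
```

Had each write been allocated eagerly, the file could have been scattered across four separate allocations interleaved with other activity.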
XFS provides a 64-bit sparse address space for each file, which allows both for very large file sizes, and for "holes" within files in which no disk space is allocated. As the file system uses an extent map for each file, the file allocation map size is kept small. Where the size of the allocation map is too large for it to be stored within the inode, the map is moved into a B+ tree which allows for rapid access to data anywhere in the 64-bit address space provided for the file.
XFS provides multiple data streams for files; this is made possible by its implementation of extended attributes. These allow the storage of a number of name/value pairs attached to a file. Names are null-terminated printable character strings up to 256 bytes in length, while their associated values can contain up to 64 KB of binary data.
They are further subdivided into two namespaces: root and user. Extended attributes stored in the root namespace can be modified only by the superuser, while attributes in the user namespace can be modified by any user with permission to write to the file.
Extended attributes can be attached to any kind of XFS inode, including symbolic links, device nodes, directories, etc. The attr utility can be used to manipulate extended attributes from the command line, and the xfsdump and xfsrestore utilities are aware of extended attributes and will back up and restore their contents. Many other backup systems do not support working with extended attributes.
For applications requiring high throughput to disk, XFS provides a direct I/O implementation that allows non-cached I/O operations to be applied directly to userspace buffers. Data is transferred between the application's buffer and the disk using DMA, which allows access to the full I/O bandwidth of the underlying disk devices.
XFS does not yet[25] provide direct support for snapshots, as it currently expects the snapshot process to be implemented by the volume manager. Taking a snapshot of an XFS filesystem involves temporarily halting I/O to the filesystem using the xfs_freeze utility, having the volume manager perform the actual snapshot, and then resuming I/O to continue with normal operations. The snapshot can then be mounted read-only for backup purposes.
Releases of XFS in IRIX incorporated an integrated volume manager called XLV. This volume manager has not been ported to Linux, and XFS works with the standard LVM in Linux systems instead.
In recent Linux kernels, the xfs_freeze functionality is implemented in the VFS layer, and is executed automatically when the volume manager's snapshot functionality is invoked. This was once a valuable advantage, as the ext3 file system could not be suspended[26] and the volume manager was therefore unable to create a consistent "hot" snapshot to back up a heavily busy database.[27] This is no longer the case: since Linux 2.6.29, the file systems ext3, ext4, GFS2 and JFS have the freeze feature as well.[28]
Although the extent-based nature of XFS and the delayed allocation strategy it uses significantly improve the file system's resistance to fragmentation problems, XFS provides a filesystem defragmentation utility (xfs_fsr, short for XFS filesystem reorganizer) that can defragment the files on a mounted and active XFS filesystem.[29]
XFS provides the xfs_growfs utility to perform online expansion of XFS file systems. XFS filesystems can be grown as long as there is remaining unallocated space on the device holding the filesystem. This feature is typically used in conjunction with volume management, as otherwise the partition holding the filesystem will need enlarging separately.
XFS implemented the DMAPI interface to support Hierarchical Storage Management in IRIX. As of October 2010, the Linux implementation of XFS supported the required on-disk metadata for DMAPI, but the kernel support was reportedly not usable. For some time, SGI hosted a kernel tree which included the DMAPI hooks, but this support has not been adequately maintained, although kernel developers have stated an intention to bring it up to date.[30]
The XFS guaranteed-rate I/O system provides an API that allows applications to reserve bandwidth to the filesystem. XFS dynamically calculates the performance available from the underlying storage devices, and will reserve bandwidth sufficient to meet the requested performance for a specified time. This is a feature unique to the XFS file system. Guaranteed rates can be "hard" or "soft", representing a trade-off between reliability and performance; however, XFS will only allow "hard" guarantees if the underlying storage subsystem supports it. This facility is used mostly for real-time applications, such as video streaming.
Guaranteed-rate I/O was only supported under IRIX, and required special hardware for that purpose.[31]
https://en.wikipedia.org/wiki/XFS
ZFS (previously Zettabyte File System) is a file system with volume management capabilities. It began as part of the Sun Microsystems Solaris operating system in 2001. Large parts of Solaris, including ZFS, were published under an open source license as OpenSolaris for around 5 years from 2005, before being placed under a closed source license when Oracle Corporation acquired Sun in 2009–2010. During 2005 to 2010, the open source version of ZFS was ported to Linux, Mac OS X (continued as MacZFS) and FreeBSD. In 2010, the illumos project forked a recent version of OpenSolaris, including ZFS, to continue its development as an open source project. In 2013, OpenZFS was founded to coordinate the development of open source ZFS.[3][4][5] OpenZFS maintains and manages the core ZFS code, while organizations using ZFS maintain the specific code and validation processes required for ZFS to integrate within their systems. OpenZFS is widely used in Unix-like systems.[6][7][8]
The management of stored data generally involves two aspects: the physical volume management of one or more block storage devices (such as hard drives and SD cards), including their organization into logical block devices as VDEVs (ZFS virtual devices)[9] as seen by the operating system (often involving a volume manager, RAID controller, array manager, or suitable device driver); and the management of data and files that are stored on these logical block devices (a file system or other data storage).
ZFS is unusual because, unlike most other storage systems, it unifies both of these roles and acts as both the volume manager and the file system. Therefore, it has complete knowledge of both the physical disks and volumes (including their status, condition, and logical arrangement into volumes) as well as of all the files stored on them. ZFS is designed to ensure (subject to sufficient data redundancy) that data stored on disks cannot be lost due to physical errors, misprocessing by the hardware or operating system, or bit rot events and data corruption that may happen over time. Its complete control of the storage system is used to ensure that every step, whether related to file management or disk management, is verified, confirmed, corrected if needed, and optimized, in a way that storage controller cards and separate volume and file systems cannot achieve.
ZFS also includes a mechanism for dataset- and pool-level snapshots and replication, including snapshot cloning, which is described by the FreeBSD documentation as one of its "most powerful features", with functionality that "even other file systems with snapshot functionality lack".[10] Very large numbers of snapshots can be taken without degrading performance, allowing snapshots to be used prior to risky system operations and software changes, or an entire production ("live") file system to be fully snapshotted several times an hour in order to mitigate data loss due to user error or malicious activity. Snapshots can be rolled back "live", or previous file system states can be viewed, even on very large file systems, leading to savings in comparison to formal backup and restore processes.[10] Snapshots can also be cloned to form new independent file systems. ZFS also has the ability to take a pool-level snapshot (known as a "checkpoint"), which allows rollback of operations that may affect the entire pool's structure or that add or remove entire datasets.
In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: Berkeley Software Distribution, UNIX System V, and Xenix. This became Unix System V Release 4 (SVR4).[11] The project was released under the name Solaris, which became the successor to SunOS 4 (although SunOS 4.1.x micro releases were retroactively named Solaris 1).[12]
ZFS was designed and implemented by a team at Sun led by Jeff Bonwick, Bill Moore,[13] and Matthew Ahrens. It was announced on September 14, 2004,[14] but development started in 2001.[15] Source code for ZFS was integrated into the main trunk of Solaris development on October 31, 2005,[16] and released for developers as part of build 27 of OpenSolaris on November 16, 2005. In June 2006, Sun announced that ZFS was included in the mainstream 6/06 update to Solaris 10.[17]
Solaris was originally developed as proprietary software, but Sun Microsystems was an early commercial proponent of open source software, and in June 2005 it released most of the Solaris codebase under the CDDL license and founded the OpenSolaris open-source project.[18] In Solaris 10 6/06 ("U2"), Sun added the ZFS file system, and it frequently updated ZFS with new features during the next 5 years. ZFS was ported to Linux, Mac OS X (continued as MacZFS), and FreeBSD under this open source license.
The name at one point was said to stand for "Zettabyte File System",[19] but by 2006 the name was no longer considered to be an abbreviation.[20] A ZFS file system can store up to 256 quadrillion zettabytes (ZB).
In September 2007, NetApp sued Sun, claiming that ZFS infringed some of NetApp's patents on Write Anywhere File Layout. Sun counter-sued in October the same year, claiming the opposite. The lawsuits were ended in 2010 with an undisclosed settlement.[21]
Ported versions of ZFS began to appear in 2005. After the Sun acquisition by Oracle in 2010, Oracle's version of ZFS became closed source, and the development of open-source versions proceeded independently, coordinated by OpenZFS from 2013.
Examples of features specific to ZFS include:
One major feature that distinguishes ZFS from other file systems is that it is designed with a focus on data integrity, protecting the user's data on disk against silent data corruption caused by data degradation, power surges (voltage spikes), bugs in disk firmware, phantom writes (the previous write did not make it to disk), misdirected reads/writes (the disk accesses the wrong block), DMA parity errors between the array and server memory or from the driver (since the checksum validates data inside the array), driver errors (data winds up in the wrong buffer inside the kernel), accidental overwrites (such as swapping to a live file system), etc.
A 1999 study showed that none of the then-major and widespread filesystems (such as UFS, Ext,[22] XFS, JFS, or NTFS), nor hardware RAID (which has some issues with data integrity), provided sufficient protection against data corruption problems.[23][24][25][26] Initial research indicates that ZFS protects data better than earlier efforts.[27][28] It is also faster than UFS[29][30] and can be seen as its replacement.
Within ZFS, data integrity is achieved by using a Fletcher-based checksum or a SHA-256 hash throughout the file system tree.[31] Each block of data is checksummed and the checksum value is then saved in the pointer to that block, rather than at the actual block itself. Next, the block pointer is checksummed, with the value being saved at its pointer. This checksumming continues all the way up the file system's data hierarchy to the root node, which is also checksummed, thus creating a Merkle tree.[31] In-flight data corruption or phantom reads/writes (the data written/read checksums correctly but is actually wrong) are undetectable by most filesystems, as they store the checksum with the data. ZFS stores the checksum of each block in its parent block pointer so that the entire pool self-validates.[31]
When a block is accessed, regardless of whether it contains data or metadata, its checksum is calculated and compared with the stored checksum value of what it "should" be. If the checksums match, the data is passed up the programming stack to the process that asked for it; if the values do not match, then ZFS can heal the data if the storage pool provides data redundancy (such as with internal mirroring), assuming that the copy of the data is undamaged and has a matching checksum.[32] It is optionally possible to provide additional in-pool redundancy by specifying copies=2 (or copies=3), which means that data will be stored twice (or three times) on the disk, effectively halving (or, for copies=3, reducing to one third) the storage capacity of the disk.[33] Additionally, some kinds of data used by ZFS to manage the pool are stored multiple times by default for safety, even with the default copies=1 setting.
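The verify-then-heal cycle described above can be sketched in Python. This is a deliberately simplified model, not ZFS's on-disk layout: SHA-256 stands in for ZFS's choice of checksums, a two-element list stands in for a mirror vdev, and the "parent pointer" is just a dict holding the expected checksum apart from the data itself.

```python
# Hypothetical sketch of ZFS-style integrity checking: the checksum
# lives in the block's *parent* pointer, and a mirrored copy is used
# to self-heal when verification fails.
import hashlib

def csum(data):
    return hashlib.sha256(data).hexdigest()

# Two mirrored "disks" holding the same block, plus a parent pointer
# that records the expected checksum (kept apart from the data).
mirror = [bytearray(b"important payload"), bytearray(b"important payload")]
parent_pointer = {"checksum": csum(bytes(mirror[0]))}

mirror[0][0] ^= 0xFF   # silent corruption on the first disk

def read_with_self_heal():
    for copy in mirror:
        if csum(bytes(copy)) == parent_pointer["checksum"]:
            # Found a good copy; repair any bad mirrors from it.
            for j in range(len(mirror)):
                mirror[j] = bytearray(copy)
            return bytes(copy)
    raise IOError("no valid copy; the pool would be faulted")

assert read_with_self_heal() == b"important payload"
assert mirror[0] == mirror[1]     # the damaged copy was repaired in place
```

A filesystem that stored the checksum next to the data could not have caught this: the corrupted block would simply have been re-checksummed along with its data.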
If other copies of the damaged data exist or can be reconstructed from checksums andparitydata, ZFS will use a copy of the data (or recreate it via a RAID recovery mechanism) and recalculate the checksum—ideally resulting in the reproduction of the originally expected value. If the data passes this integrity check, the system can then update all faulty copies with known-good data and redundancy will be restored.
If there are no copies of the damaged data, ZFS puts the pool in a faulted state,[34] preventing its future use and providing no documented ways to recover pool contents.
Consistency of data held in memory, such as cached data in the ARC, is not checked by default, as ZFS is expected to run on enterprise-quality hardware with error-correcting RAM. However, the capability to check in-memory data exists and can be enabled using "debug flags".[35]
For ZFS to be able to guarantee data integrity, it needs multiple copies of the data, or parity information, usually spread across multiple disks. This is typically achieved by using either a RAID controller or so-called "soft" RAID (built into a file system).
While ZFS can work with hardware RAID devices, it will usually work more efficiently and with greater data protection if it has raw access to all storage devices. ZFS relies on the disk for an honest view to determine the moment data is confirmed as safely written, and it has numerous algorithms designed to optimize its use of caching, cache flushing, and disk handling.
Disks connected to the system using a hardware, firmware, or other "soft" RAID, or any other controller that modifies the ZFS-to-disk I/O path, will affect ZFS performance and data integrity. If a third-party device performs caching or presents drives to ZFS as a single system without the low-level view ZFS relies upon, there is a much greater chance that the system will perform less optimally, that ZFS will be less able to prevent failures or will recover from failures more slowly, or that data will be lost due to a write failure. For example, if a hardware RAID card is used, ZFS may not be able to determine the condition of disks, determine whether the RAID array is degraded or rebuilding, detect all data corruption, place data optimally across the disks, make selective repairs, control how repairs are balanced with ongoing use, or make repairs that ZFS could usually undertake; the hardware RAID card will interfere with ZFS's algorithms. RAID controllers also usually add controller-dependent data to the drives, which prevents software RAID from accessing the user data. In the case of a hardware RAID controller failure, it may be possible to read the data with another compatible controller, but this isn't always possible and a replacement may not be available. Alternate hardware RAID controllers may not understand the original manufacturer's custom data required to manage and restore an array.
Unlike most other systems, where RAID cards or similar hardware can offload resources and processing to enhance performance and reliability, with ZFS it is strongly recommended that these methods not be used, as they typically reduce the system's performance and reliability.
If disks must be attached through a RAID or other controller, it is recommended to minimize the amount of processing done in the controller by using a plain HBA (host bus adapter) or a simple fanout card, or by configuring the card in JBOD mode (i.e. turning off RAID and caching functions), to allow devices to be attached with minimal changes in the ZFS-to-disk I/O pathway. A RAID card in JBOD mode may still interfere if it has a cache or, depending upon its design, may detach drives that do not respond in time (as has been seen with many energy-efficient consumer-grade hard drives), and as such may require Time-Limited Error Recovery (TLER)/CCTL/ERC-enabled drives to prevent drive dropouts, so not all cards are suitable even with RAID functions disabled.[36]
Instead of hardware RAID, ZFS employs "soft" RAID, offering RAID-Z (parity-based, like RAID 5 and similar) and disk mirroring (similar to RAID 1). The schemes are highly flexible.
RAID-Z is a data/parity distribution scheme like RAID 5, but uses a dynamic stripe width: every block is its own RAID stripe, regardless of block size, with the result that every RAID-Z write is a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error. RAID-Z is also faster than traditional RAID 5 because it does not need to perform the usual read-modify-write sequence.[37]
As all stripes are of different sizes, RAID-Z reconstruction has to traverse the filesystem metadata to determine the actual RAID-Z geometry. This would be impossible if the filesystem and the RAID array were separate products, whereas it becomes feasible when there is an integrated view of the logical and physical structure of the data. Going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes, whereas traditional RAID products usually cannot do this.[37]
In addition to handling whole-disk failures, RAID-Z can also detect and correct silent data corruption, offering "self-healing data": when reading a RAID-Z block, ZFS compares it against its checksum, and if the data disks did not return the right answer, ZFS reads the parity and then figures out which disk returned bad data. It then repairs the damaged data and returns good data to the requestor.[37]
RAID-Z and mirroring do not require any special hardware: they do not need NVRAM for reliability, and they do not need write buffering for good performance or data protection. With RAID-Z, ZFS provides fast, reliable storage using cheap, commodity disks.[37]
There are five different RAID-Z modes: striping (similar to RAID 0, offers no redundancy), RAID-Z1 (similar to RAID 5, allows one disk to fail), RAID-Z2 (similar to RAID 6, allows two disks to fail), RAID-Z3 (a RAID 7[a] configuration, allows three disks to fail), and mirroring (similar to RAID 1, allows all but one disk to fail).[39]
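The self-healing behaviour described above can be sketched with single-parity XOR, which stands in here for RAID-Z's real encoding; the structures and names are invented for illustration. The key idea is that the block checksum identifies which reconstruction candidate is the correct one:

```python
# Hypothetical sketch of parity-based self-healing in the spirit of
# RAID-Z: try rebuilding each disk from parity plus the others, and
# accept the candidate whose checksum matches.
import hashlib

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"AAAA", b"BBBB", b"CCCC"]            # stripes on three data disks
parity = xor(xor(data[0], data[1]), data[2])  # single XOR parity
checksum = hashlib.sha256(b"".join(data)).hexdigest()

data[1] = b"B?BB"   # disk 1 silently returns bad data

def self_heal(data, parity, checksum):
    for bad in range(len(data)):
        # Rebuild disk `bad` from parity and the remaining disks.
        rebuilt = parity
        for i, d in enumerate(data):
            if i != bad:
                rebuilt = xor(rebuilt, d)
        candidate = data[:bad] + [rebuilt] + data[bad + 1:]
        if hashlib.sha256(b"".join(candidate)).hexdigest() == checksum:
            return candidate      # the checksum identified the bad disk
    raise IOError("unrecoverable: more errors than parity can cover")

assert self_heal(data, parity, checksum)[1] == b"BBBB"
```

A conventional RAID 5 controller has the same parity information but no block checksum, so it cannot tell which of the disks returned the bad data.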
The need for RAID-Z3 arose in the early 2000s, as multi-terabyte capacity drives became more common. This increase in capacity, without a corresponding increase in throughput speeds, meant that rebuilding an array due to a failed drive could "easily take weeks or months" to complete.[38] During this time, the older disks in the array are stressed by the additional workload, which could result in data corruption or drive failure. By increasing parity, RAID-Z3 reduces the chance of data loss by simply increasing redundancy.[40]
ZFS has no tool equivalent to fsck (the standard Unix and Linux data checking and repair tool for file systems).[41] Instead, ZFS has a built-in scrub function which regularly examines all data and repairs silent corruption and other problems. Some differences are:
The official recommendation from Sun/Oracle is to scrub enterprise-level disks once a month, and cheaper commodity disks once a week.[42][43]
ZFS is a 128-bit file system,[44][16] so it can address 1.84 × 10¹⁹ times more data than 64-bit systems such as Btrfs. The maximum limits of ZFS are designed to be so large that they should never be encountered in practice. For instance, fully populating a single zpool with 2¹²⁸ bits of data would require 3×10²⁴ TB hard disk drives.[45]
Some theoretical limits in ZFS are:
With Oracle Solaris, the encryption capability of ZFS[47] is embedded into the I/O pipeline. During writes, a block may be compressed, encrypted, checksummed and then deduplicated, in that order. The policy for encryption is set at the dataset level when datasets (file systems or ZVOLs) are created. The wrapping keys provided by the user/administrator can be changed at any time without taking the file system offline. The default behaviour is for the wrapping key to be inherited by any child data sets. The data encryption keys are randomly generated at dataset creation time. Only descendant datasets (snapshots and clones) share data encryption keys.[48] A command to switch to a new data encryption key for the clone, or at any time, is provided; this does not re-encrypt already existing data, instead utilising an encrypted master-key mechanism.
As of 2019, the encryption feature is also fully integrated into OpenZFS 0.8.0, available for Debian and Ubuntu Linux distributions.[49]
There have been anecdotal end-user reports of failures when using ZFS native encryption. An exact cause has not been established.[50][51]
ZFS will automatically allocate data storage across all vdevs in a pool (and all devices in each vdev) in a way that generally maximises the performance of the pool. ZFS will also update its write strategy to take account of new disks when they are added to a pool.
As a general rule, ZFS allocates writes across vdevs based on the free space in each vdev. This ensures that vdevs which already hold proportionately less data are given more writes when new data is to be stored. It helps to ensure that as the pool becomes more used, some vdevs do not fill up, which would force writes to occur on a limited number of devices. It also means that when data is read (and reads are much more frequent than writes in most uses), different parts of the data can be read from as many disks as possible at the same time, giving much higher read performance. Therefore, as a general rule, pools and vdevs should be managed and new storage added so that the situation does not arise where some vdevs in a pool are almost full and others almost empty, as this would make the pool less efficient.
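The free-space-weighted placement described above can be sketched as a simple greedy allocator. This is illustrative only; ZFS's real allocator weighs additional factors beyond free space, and the vdev records here are invented:

```python
# Hypothetical sketch of allocating writes across vdevs in proportion
# to free space, so emptier vdevs absorb more new data and usage
# converges across the pool.

vdevs = [
    {"name": "vdev0", "capacity": 100, "used": 90},   # nearly full
    {"name": "vdev1", "capacity": 100, "used": 10},   # mostly empty
]

def pick_vdev(vdevs):
    """Choose the vdev with the most free space for the next write."""
    return max(vdevs, key=lambda v: v["capacity"] - v["used"])

def write_blocks(vdevs, nblocks):
    for _ in range(nblocks):
        pick_vdev(vdevs)["used"] += 1

write_blocks(vdevs, 60)
# All 60 new blocks land on the emptier vdev, narrowing the imbalance.
assert vdevs[0]["used"] == 90 and vdevs[1]["used"] == 70
```

An allocator that striped new writes evenly would instead have pushed the nearly full vdev to capacity, concentrating future writes on whatever devices still had room.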
Free space in ZFS tends to become fragmented with usage. ZFS does not have a mechanism for defragmenting free space. There are anecdotal end-user reports of diminished performance when high free-space fragmentation is coupled with disk space over-utilization.[52][53]
Pools can have hot spares to compensate for failing disks. When mirroring, block devices can be grouped according to physical chassis, so that the filesystem can continue in the case of the failure of an entire chassis.
Storage pool composition is not limited to similar devices, but can consist of ad-hoc, heterogeneous collections of devices, which ZFS seamlessly pools together, subsequently doling out space to datasets (file system instances or ZVOLs) as needed. Arbitrary storage device types can be added to existing pools to expand their size.[54]
The storage capacity of all vdevs is available to all of the file system instances in the zpool. A quota can be set to limit the amount of space a file system instance can occupy, and a reservation can be set to guarantee that space will be available to a file system instance.
ZFS uses different layers of disk cache to speed up read and write operations. Ideally, all data should be stored in RAM, but that is usually too expensive. Therefore, data is automatically cached in a hierarchy to optimize performance versus cost;[55] these are often called "hybrid storage pools".[56] Frequently accessed data will be stored in RAM, and less frequently accessed data can be stored on slower media, such as solid-state drives (SSDs). Data that is not often accessed is not cached and is left on the slow hard drives. If old data is suddenly read a lot, ZFS will automatically move it to SSDs or to RAM.
ZFS caching mechanisms include one each for reads and writes, and in each case two levels of caching can exist, one in computer memory (RAM) and one on fast storage (usually solid-state drives (SSDs)), for a total of four caches.
This becomes crucial if a large number of synchronous writes take place (such as with ESXi, NFS and some databases),[57] where the client requires confirmation of successful writing before continuing its activity; the SLOG allows ZFS to confirm writing is successful much more quickly than if it had to write to the main store every time, without the risk involved in misleading the client as to the state of data storage. If there is no SLOG device, then part of the main data pool will be used for the same purpose, although this is slower.
If the log device itself is lost, it is possible to lose the latest writes, therefore the log device should be mirrored. In earlier versions of ZFS, loss of the log device could result in loss of the entire zpool, although this is no longer the case. Therefore, one should upgrade ZFS if planning to use a separate log device.
A number of other caches, cache divisions, and queues also exist within ZFS. For example, each VDEV has its own data cache, and the ARC cache is divided between data stored by the user and metadata used by ZFS, with control over the balance between these.
In OpenZFS 0.8 and later, it is possible to configure a special VDEV class to preferentially store filesystem metadata, and optionally the data deduplication table (DDT) and small filesystem blocks.[58] This allows one, for example, to create a special VDEV on fast solid-state storage to hold the metadata, while the regular file data is stored on spinning disks. This speeds up metadata-intensive operations such as filesystem traversal, scrub, and resilver, without the expense of storing the entire filesystem on solid-state storage.
ZFS uses acopy-on-writetransactionalobject model. All block pointers within the filesystem contain a 256-bitchecksumor 256-bithash(currently a choice betweenFletcher-2,Fletcher-4, orSHA-256)[59]of the target block, which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, then anymetadatablocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and ZIL (intent log) write cache is used when synchronous write semantics are required. The blocks are arranged in a tree, as are their checksums (seeMerkle signature scheme).
An advantage of copy-on-write is that, when ZFS writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained. ZFS snapshots are consistent (they reflect the entire data as it existed at a single point in time), and can be created extremely quickly, since all the data composing the snapshot is already stored, with the entire storage pool often snapshotted several times per hour. They are also space efficient, since any unchanged data is shared among the file system and its snapshots. Snapshots are inherently read-only, ensuring they will not be modified after creation, although they should not be relied on as a sole means of backup. Entire snapshots can be restored, as can individual files and directories within snapshots.
Writeable snapshots ("clones") can also be created, resulting in two independent file systems that share a set of blocks. As changes are made to any of the clone file systems, new data blocks are created to reflect those changes, but any unchanged blocks continue to be shared, no matter how many clones exist. This is an implementation of the copy-on-write principle.
ZFS file systems can be moved to other pools, including on remote hosts over the network, as the send command creates a stream representation of the file system's state. This stream can either describe the complete contents of the file system at a given snapshot, or it can be a delta between snapshots. Computing the delta stream is very efficient, and its size depends on the number of blocks changed between the snapshots. This provides an efficient strategy, e.g., for synchronizing offsite backups or high availability mirrors of a pool.
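The idea of a delta stream can be illustrated with a toy Python model. Note that real ZFS does not compare block contents; it uses block birth transaction groups to find changed blocks directly, which is why computing the delta is cheap. The snapshot contents below are invented.

```python
# Model a snapshot as {block_number: contents}. A full send stream carries
# every block; an incremental stream carries only blocks that differ.
def delta_stream(old: dict, new: dict) -> dict:
    return {bn: data for bn, data in new.items()
            if old.get(bn) != data}

snap1 = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}
snap2 = {0: b"AAAA", 1: b"bbbb", 2: b"CCCC", 3: b"DDDD"}

stream = delta_stream(snap1, snap2)
assert stream == {1: b"bbbb", 3: b"DDDD"}   # only changed/new blocks travel

# The receiver applies the delta to its copy of snap1 to reconstruct snap2.
restored = {**snap1, **stream}
assert restored == snap2
```

The stream size is proportional to the number of changed blocks, not to the size of the file system, which is what makes incremental replication practical.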
Dynamic striping across all devices to maximize throughput means that as additional devices are added to the zpool, the stripe width automatically expands to include them; thus, all disks in a pool are used, which balances the write load across them.[60]
ZFS uses variable-sized blocks, with 128 KB as the default size. Available features allow the administrator to tune the maximum block size which is used, as certain workloads do not perform well with large blocks. If data compression is enabled, variable block sizes are used. If a block can be compressed to fit into a smaller block size, the smaller size is used on the disk to use less storage and improve IO throughput (though at the cost of increased CPU use for the compression and decompression operations).[61]
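The size-selection logic can be sketched in Python. This is a simplified model under stated assumptions: zlib stands in for ZFS's compressors (LZ4, gzip, ZLE, etc.), and a fixed 512-byte allocation unit stands in for the pool's actual sector size, which depends on the ashift setting.

```python
import zlib

RECORD_SIZE = 128 * 1024   # default ZFS recordsize
MIN_BLOCK = 512            # assumed smallest allocation unit (ashift-dependent)

def stored_size(block: bytes) -> int:
    """Size actually written: compressed data rounded up to an allocation
    unit, or the full record if compression does not help."""
    compressed = zlib.compress(block)
    if len(compressed) >= len(block):
        return len(block)                      # store uncompressed
    # round up to a multiple of the minimum allocation unit
    return -(-len(compressed) // MIN_BLOCK) * MIN_BLOCK

highly_compressible = b"\x00" * RECORD_SIZE
assert stored_size(highly_compressible) < RECORD_SIZE
```

A run of zeros compresses to a tiny fraction of the 128 KB record, so far less than the full record is allocated on disk, at the cost of the CPU time spent compressing.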
In ZFS, filesystem manipulation within a storage pool is easier than volume manipulation within a traditional filesystem; the time and effort required to create or expand a ZFS filesystem is closer to that of making a new directory than it is to volume manipulation in some other systems.[citation needed]
Pools and their associated ZFS file systems can be moved between different platform architectures, including systems implementing different byte orders. The ZFS block pointer format stores filesystem metadata in an endian-adaptive way; individual metadata blocks are written with the native byte order of the system writing the block. When reading, if the stored endianness does not match the endianness of the system, the metadata is byte-swapped in memory.
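The endian-adaptive scheme amounts to recording which byte order was used at write time and swapping only on mismatch. A minimal Python sketch (an illustrative model, not the on-disk block pointer format):

```python
import struct

def read_u64(raw: bytes, stored_big_endian: bool) -> int:
    # Metadata is written in the writer's native byte order, with a flag
    # recording which order was used. Readers byte-swap only when the
    # stored order differs from their own.
    fmt = ">Q" if stored_big_endian else "<Q"
    return struct.unpack(fmt, raw)[0]

value = 0x0123456789ABCDEF
big = struct.pack(">Q", value)      # as written by a big-endian system
little = struct.pack("<Q", value)   # as written by a little-endian system

# Either encoding yields the same logical value once the flag is honored.
assert read_u64(big, True) == read_u64(little, False) == value
```

Because the flag travels with the metadata, a pool written on a big-endian SPARC machine can be imported unchanged on a little-endian x86 machine.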
This does not affect the stored data; as is usual in POSIX systems, files appear to applications as simple arrays of bytes, so applications creating and reading data remain responsible for doing so in a way independent of the underlying system's endianness.
Data deduplication capabilities were added to the ZFS source repository at the end of October 2009,[62] and relevant OpenSolaris ZFS development packages have been available since December 3, 2009 (build 128).
Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage.[63][64][65] An accurate assessment of the memory required for deduplication is made by referring to the number of unique blocks in the pool, and the number of bytes on disk and in RAM ("core") required to store each record—these figures are reported by inbuilt commands such as zpool and zdb. Insufficient physical memory or lack of ZFS cache can result in virtual memory thrashing when using deduplication, which can cause performance to plummet, or result in complete memory starvation.[citation needed] Because deduplication occurs at write time, it is also very CPU-intensive, which can significantly slow down a system.
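A back-of-the-envelope version of this sizing calculation can be sketched in Python. The 320-bytes-per-entry figure is an assumption (a commonly quoted approximation for a ZFS DDT entry); the accurate per-pool numbers come from the inbuilt commands mentioned above.

```python
def ddt_ram_estimate(pool_bytes: int, avg_block_size: int,
                     bytes_per_entry: int = 320) -> int:
    """Rough RAM needed to hold the deduplication table in core.

    bytes_per_entry ~320 is an assumed ballpark for a DDT entry; real
    pools report exact counts via zdb. Worst case assumes no block
    actually deduplicates, so every block needs its own entry.
    """
    unique_blocks = pool_bytes // avg_block_size
    return unique_blocks * bytes_per_entry

TB = 1024 ** 4
estimate = ddt_ram_estimate(1 * TB, 128 * 1024)
# 1 TiB of unique 128 KiB blocks -> 8,388,608 entries * 320 B = 2.5 GiB,
# which falls inside the 1-5 GB per TB range cited above.
assert 2 * 1024**3 < estimate < 3 * 1024**3
```

Note how sensitive the estimate is to block size: the same pool full of small blocks (e.g. 8 KB records under a database workload) needs sixteen times as many entries, which is why dedup memory planning must consider the workload, not just the pool capacity.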
Other storage vendors use modified versions of ZFS to achieve very high data compression ratios. Two examples in 2012 were GreenBytes[66] and Tegile.[67] In May 2014, Oracle bought GreenBytes for its ZFS deduplication and replication technology.[68]
As described above, deduplication is usually not recommended due to its heavy resource requirements (especially RAM) and impact on performance (especially when writing), other than in specific circumstances where the system and data are well-suited to this space-saving technique.
ZFS does not ship with tools such as fsck, because the file system itself was designed to self-repair. So long as a storage pool had been built with sufficient attention to the design of storage and redundancy of data, basic tools like fsck were never required. However, if the pool was compromised because of poor hardware, inadequate design or redundancy, or unfortunate mishap, to the point that ZFS was unable to mount the pool, traditionally there were no other, more advanced tools which allowed an end-user to attempt partial salvage of the stored data from a badly corrupted pool.
Modern ZFS has improved considerably on this situation over time, and continues to do so:
Oracle Corporation ceased the public development of both ZFS and OpenSolaris after the acquisition of Sun in 2010. Some developers forked the last public release of OpenSolaris as the Illumos project. Because of the significant advantages present in ZFS, it has been ported to several different platforms with different features and commands. To coordinate the development efforts and avoid fragmentation, OpenZFS was founded in 2013.
According to Matt Ahrens, one of the main architects of ZFS, over 50% of the original OpenSolaris ZFS code has been replaced in OpenZFS with community contributions as of 2019, making “Oracle ZFS” and “OpenZFS” politically and technologically incompatible.[87]
In January 2010, Oracle Corporation acquired Sun Microsystems, and quickly discontinued the OpenSolaris distribution and the open source development model.[95][96] In August 2010, Oracle discontinued providing public updates to the source code of the Solaris OS/Networking repository, effectively turning Solaris 11 back into a closed-source proprietary operating system.[97]
In response to the changing landscape of Solaris and OpenSolaris, the illumos project was launched via webinar[98] on Thursday, August 3, 2010, as a community effort of some core Solaris engineers to continue developing the open source version of Solaris, and to complete the open sourcing of those parts not already open sourced by Sun.[99] illumos was founded as a foundation, the Illumos Foundation, incorporated in the State of California as a 501(c)(6) trade association. The original plan explicitly stated that illumos would not be a distribution or a fork. However, after Oracle announced discontinuing OpenSolaris, plans were made to fork the final version of Solaris ON, allowing illumos to evolve into an operating system of its own.[100] As part of OpenSolaris, an open source version of ZFS was therefore integral within illumos.
ZFS was widely used within numerous platforms, as well as Solaris. Therefore, in 2013, the coordination of development work on the open source version of ZFS was passed to an umbrella project, OpenZFS. The OpenZFS framework allows any interested parties to collaboratively develop the core ZFS codebase in common, while individually maintaining any specific extra code which ZFS requires to function and integrate within their own systems.
Note: The Solaris version under development by Sun since the release of Solaris 10 in 2005 was codenamed 'Nevada', and was derived from what was the OpenSolaris codebase. 'Solaris Nevada' is the codename for the next-generation Solaris OS to eventually succeed Solaris 10, and this new code was then pulled successively into new OpenSolaris 'Nevada' snapshot builds.[101] OpenSolaris is now discontinued and OpenIndiana forked from it.[102][103] A final build (b134) of OpenSolaris was published by Oracle (2010-Nov-12) as an upgrade path to Solaris 11 Express.
List of Operating Systems, distributions and add-ons that support ZFS, the zpool version it supports, and the Solaris build they are based on (if any):
https://en.wikipedia.org/wiki/ZFS
In computing, BIOS (/ˈbaɪɒs, -oʊs/, BY-oss, -ohss; Basic Input/Output System, also known as the System BIOS, ROM BIOS, BIOS ROM or PC BIOS) is a type of firmware used to provide runtime services for operating systems and programs and to perform hardware initialization during the booting process (power-on startup).[1] The firmware comes pre-installed on the computer's motherboard.
The name originates from the Basic Input/Output System used in the CP/M operating system in 1975.[2][3] The BIOS firmware was originally proprietary to the IBM PC; it was reverse engineered by some companies (such as Phoenix Technologies) looking to create compatible systems. The interface of that original system serves as a de facto standard.
The BIOS in older PCs initializes and tests the system hardware components (power-on self-test, or POST for short), and loads a boot loader from a mass storage device which then initializes a kernel. In the era of DOS, the BIOS provided BIOS interrupt calls for the keyboard, display, storage, and other input/output (I/O) devices that standardized an interface to application programs and the operating system. More recent operating systems do not use the BIOS interrupt calls after startup.[4]
Most BIOS implementations are specifically designed to work with a particular computer or motherboard model, by interfacing with various devices, especially the system chipset. Originally, BIOS firmware was stored in a ROM chip on the PC motherboard. In later computer systems, the BIOS contents are stored on flash memory so it can be rewritten without removing the chip from the motherboard. This allows easy, end-user updates to the BIOS firmware so new features can be added or bugs can be fixed, but it also creates a possibility for the computer to become infected with BIOS rootkits. Furthermore, a BIOS upgrade that fails could brick the motherboard.
Unified Extensible Firmware Interface (UEFI) is a successor to the PC BIOS, aiming to address its technical limitations.[5] UEFI firmware may include legacy BIOS compatibility to maintain compatibility with operating systems and option cards that do not support UEFI native operation.[6][7][8] Since 2020, all PCs for Intel platforms no longer support legacy BIOS.[9] The last version of Microsoft Windows to officially support running on PCs which use legacy BIOS firmware is Windows 10, as Windows 11 requires a UEFI-compliant system (except for IoT Enterprise editions of Windows 11 since version 24H2[10]).
The term BIOS (Basic Input/Output System) was created by Gary Kildall[11][12] and first appeared in the CP/M operating system in 1975,[2][3][12][13][14][15] describing the machine-specific part of CP/M loaded during boot time that interfaces directly with the hardware.[3] (A CP/M machine usually has only a simple boot loader in its ROM.)
Versions of MS-DOS, PC DOS or DR-DOS contain a file called variously "IO.SYS", "IBMBIO.COM", "IBMBIO.SYS", or "DRBIOS.SYS"; this file is known as the "DOS BIOS" (also known as the "DOS I/O System") and contains the lower-level hardware-specific part of the operating system. Together with the underlying hardware-specific but operating system-independent "System BIOS", which resides in ROM, it represents the analogue to the "CP/M BIOS".
The BIOS originally proprietary to the IBM PC has been reverse engineered by some companies (such as Phoenix Technologies) looking to create compatible systems.
With the introduction of PS/2 machines, IBM divided the System BIOS into real- and protected-mode portions. The real-mode portion was meant to provide backward compatibility with existing operating systems such as DOS, and therefore was named "CBIOS" (for "Compatibility BIOS"), whereas the "ABIOS" (for "Advanced BIOS") provided new interfaces specifically suited for multitasking operating systems such as OS/2.[16]
The BIOS of the original IBM PC and XT had no interactive user interface. Error codes or messages were displayed on the screen, or coded series of sounds were generated to signal errors when the power-on self-test (POST) had not proceeded to the point of successfully initializing a video display adapter. Options on the IBM PC and XT were set by switches and jumpers on the main board and on expansion cards. Starting around the mid-1990s, it became typical for the BIOS ROM to include a "BIOS configuration utility" (BCU[17]) or "BIOS setup utility", accessed at system power-up by a particular key sequence. This program allowed the user to set system configuration options, of the type formerly set using DIP switches, through an interactive menu system controlled through the keyboard. In the interim period, IBM-compatible PCs—including the IBM AT—held configuration settings in battery-backed RAM and used a bootable configuration program on floppy disk, not in the ROM, to set the configuration options contained in this memory. The floppy disk was supplied with the computer, and if it was lost the system settings could not be changed. The same applied in general to computers with an EISA bus, for which the configuration program was called an EISA Configuration Utility (ECU).
A modern Wintel-compatible computer provides a setup routine essentially unchanged in nature from the ROM-resident BIOS setup utilities of the late 1990s; the user can configure hardware options using the keyboard and video display. The modern Wintel machine may store the BIOS configuration settings in flash ROM, perhaps the same flash ROM that holds the BIOS itself.
Peripheral cards such as hard disk drive host bus adapters and video cards have their own firmware, and BIOS extension option ROM code may be a part of the expansion card firmware; that code provides additional capabilities in the BIOS. Code in option ROMs runs before the BIOS boots the operating system from mass storage. These ROMs typically test and initialize hardware, add new BIOS services, or replace existing BIOS services with their own services. For example, a SCSI controller usually has a BIOS extension ROM that adds support for hard drives connected through that controller. An extension ROM could in principle contain an operating system, or it could implement an entirely different boot process such as network booting. Operation of an IBM-compatible computer system can be completely changed by removing or inserting an adapter card (or a ROM chip) that contains a BIOS extension ROM.
The motherboard BIOS typically contains code for initializing and bootstrapping integrated display and integrated storage. The initialization process can involve the execution of code related to the device being initialized, for locating the device, verifying the type of device, then establishing base registers, setting pointers, establishing interrupt vector tables,[18] selecting paging modes which are ways for organizing available registers in devices, setting default values for accessing software routines related to interrupts,[19] and setting the device's configuration using default values.[20] In addition, plug-in adapter cards such as SCSI, RAID, network interface cards, and video cards often include their own BIOS (e.g. Video BIOS), complementing or replacing the system BIOS code for the given component. Even devices built into the motherboard can behave in this way; their option ROMs can be a part of the motherboard BIOS.
An add-in card requires an option ROM if the card is not supported by the motherboard BIOS and the card needs to be initialized or made accessible through BIOS services before the operating system can be loaded (usually this means it is required in the boot process). An additional advantage of ROM on some early PC systems (notably including the IBM PCjr) was that ROM was faster than main system RAM. (On modern systems, the case is very much the reverse of this, and BIOS ROM code is usually copied ("shadowed") into RAM so it will run faster.)
Option ROMs normally reside on adapter cards. However, the original PC, and perhaps also the PC XT, have a spare ROM socket on the motherboard (the "system board" in IBM's terms) into which an option ROM can be inserted, and the four ROMs that contain the BASIC interpreter can also be removed and replaced with custom ROMs which can be option ROMs. The IBM PCjr is unique among PCs in having two ROM cartridge slots on the front. Cartridges in these slots map into the same region of the upper memory area used for option ROMs, and the cartridges can contain option ROM modules that the BIOS would recognize. The cartridges can also contain other types of ROM modules, such as BASIC programs, that are handled differently. One PCjr cartridge can contain several ROM modules of different types, possibly stored together in one ROM chip.
The 8086 and 8088 start at physical address FFFF0h.[21] The 80286 starts at physical address FFFFF0h.[22] The 80386 and later x86 processors start at physical address FFFFFFF0h.[23][24][25] When the system is initialized, the first instruction of the BIOS appears at that address.
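These reset vectors all follow one pattern: each sits 16 bytes below the top of the processor's physical address space, leaving just enough room for a far jump into the main BIOS code. A small Python check of that relationship:

```python
# Each reset vector is 16 bytes below the top of the CPU's address space.
RESET_VECTORS = {
    "8086/8088": (0xFFFF0, 2**20),      # 1 MiB address space
    "80286": (0xFFFFF0, 2**24),         # 16 MiB address space
    "80386+": (0xFFFFFFF0, 2**32),      # 4 GiB address space
}

for cpu, (vector, top_of_memory) in RESET_VECTORS.items():
    assert vector == top_of_memory - 16, cpu
```

This is why BIOS ROM chips are traditionally mapped at the very top of the address space: the first fetched instruction must land inside the ROM.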
If the system has just been powered up or the reset button was pressed ("cold boot"), the full power-on self-test (POST) is run. If Ctrl+Alt+Delete was pressed ("warm boot"), a special flag value stored in nonvolatile BIOS memory ("CMOS") tested by the BIOS allows bypass of the lengthy POST and memory detection.
The POST identifies, tests and initializes system devices such as the CPU, chipset, RAM, motherboard, video card, keyboard, mouse, hard disk drive, optical disc drive and other hardware, including integrated peripherals.
Early IBM PCs had a routine in the POST that would download a program into RAM through the keyboard port and run it.[26][27]This feature was intended for factory test or diagnostic purposes.
After the motherboard BIOS completes its POST, most BIOS versions search for option ROM modules, also called BIOS extension ROMs, and execute them. The motherboard BIOS scans for extension ROMs in a portion of the "upper memory area" (the part of the x86 real-mode address space at and above address 0xA0000) and runs each ROM found, in order. To discover memory-mapped option ROMs, a BIOS implementation scans the real-mode address space from 0x0C0000 to 0x0F0000 on 2 KB (2,048 bytes) boundaries, looking for a two-byte ROM signature: 0x55 followed by 0xAA. In a valid expansion ROM, this signature is followed by a single byte indicating the number of 512-byte blocks the expansion ROM occupies in real memory, and the next byte is the option ROM's entry point (also known as its "entry offset"). If the ROM has a valid checksum, the BIOS transfers control to the entry address, which in a normal BIOS extension ROM should be the beginning of the extension's initialization routine.
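The scan described above can be sketched in Python against a simulated memory image. This is an illustrative model, not firmware code; the checksum convention assumed here is the conventional one, that all bytes of the ROM sum to zero modulo 256.

```python
def scan_option_roms(memory: bytes, base: int = 0xC0000, end: int = 0xF0000):
    """Scan a simulated real-mode memory image for option ROM headers."""
    roms = []
    for addr in range(base, end, 2048):         # check 2 KB boundaries
        region = memory[addr:]
        if region[:2] != b"\x55\xAA":           # two-byte ROM signature
            continue
        size = region[2] * 512                  # length in 512-byte blocks
        rom = region[:size]
        if sum(rom) % 256 == 0:                 # whole ROM must sum to zero
            roms.append((addr, size))           # entry point is at offset 3
    return roms

# Build a 1 MiB image containing one valid single-block ROM at 0xC8000.
image = bytearray(0x100000)
rom = bytearray(b"\x55\xAA\x01") + bytes(509)   # signature + size byte + body
rom[511] = (-sum(rom)) % 256                    # pad byte fixes the checksum
image[0xC8000:0xC8000 + 512] = rom

assert scan_option_roms(bytes(image)) == [(0xC8000, 512)]
```

A ROM whose bytes do not sum to zero is skipped, which is how the BIOS avoids jumping into a partially programmed or corrupted expansion ROM.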
At this point, the extension ROM code takes over, typically testing and initializing the hardware it controls and registering interrupt vectors for use by post-boot applications. It may use BIOS services (including those provided by previously initialized option ROMs) to provide a user configuration interface, to display diagnostic information, or to do anything else that it requires.
An option ROM should normally return to the BIOS after completing its initialization process. Once (and if) an option ROM returns, the BIOS continues searching for more option ROMs, calling each as it is found, until the entire option ROM area in the memory space has been scanned. It is possible that an option ROM will not return to BIOS, pre-empting the BIOS's boot sequence altogether.
After the POST completes and, in a BIOS that supports option ROMs, after the option ROM scan is completed and all detected ROM modules with valid checksums have been called, the BIOS calls interrupt 19h to start boot processing. Post-boot, loaded programs can also call interrupt 19h to reboot the system, but they must be careful to disable interrupts and other asynchronous hardware processes that may interfere with the BIOS rebooting process, or else the system may hang or crash while it is rebooting.
When interrupt 19h is called, the BIOS attempts to locate boot loader software on a "boot device", such as a hard disk, a floppy disk, CD, or DVD. It loads and executes the first boot software it finds, giving it control of the PC.[28]
The BIOS uses the boot devices set in nonvolatile BIOS memory (CMOS), or, in the earliest PCs, DIP switches. The BIOS checks each device in order to see if it is bootable by attempting to load the first sector (boot sector). If the sector cannot be read, the BIOS proceeds to the next device. If the sector is read successfully, some BIOSes will also check for the boot sector signature 0x55 0xAA in the last two bytes of the sector (which is 512 bytes long), before accepting a boot sector and considering the device bootable.[b]
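The signature check on a boot sector is simple enough to express directly; the following Python sketch models the behavior of a BIOS that enforces the signature:

```python
def is_bootable(sector: bytes) -> bool:
    # A 512-byte boot sector is accepted when its last two bytes
    # (offsets 510 and 511) are 0x55 followed by 0xAA.
    return len(sector) == 512 and sector[510:512] == b"\x55\xAA"

blank_sector = bytes(512)
mbr_like = bytes(510) + b"\x55\xAA"

assert not is_bootable(blank_sector)
assert is_bootable(mbr_like)
```

Note that this is the only interpretation the BIOS applies; partition tables and BIOS Parameter Blocks inside the sector are left entirely to the boot code itself, as the next paragraph describes.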
When a bootable device is found, the BIOS transfers control to the loaded sector. The BIOS does not interpret the contents of the boot sector other than to possibly check for the boot sector signature in the last two bytes. Interpretation of data structures like partition tables and BIOS Parameter Blocks is done by the boot program in the boot sector itself or by other programs loaded through the boot process.
A non-disk device such as a network adapter attempts booting by a procedure that is defined by its option ROM or the equivalent integrated into the motherboard BIOS ROM. As such, option ROMs may also influence or supplant the boot process defined by the motherboard BIOS ROM.
With the El Torito optical media boot standard, the optical drive actually emulates a 3.5" high-density floppy disk to the BIOS for boot purposes. Reading the "first sector" of a CD-ROM or DVD-ROM is not a simply defined operation like it is on a floppy disk or a hard disk. Furthermore, the complexity of the medium makes it difficult to write a useful boot program in one sector. The bootable virtual floppy disk can contain software that provides access to the optical medium in its native format.
If an expansion ROM wishes to change the way the system boots (such as from a network device or a SCSI adapter) in a cooperative way, it can use the BIOS Boot Specification (BBS) API to register its ability to do so. Once the expansion ROMs have registered using the BBS APIs, the user can select among the available boot options from within the BIOS's user interface. This is why most BBS-compliant PC BIOS implementations will not allow the user to enter the BIOS's user interface until the expansion ROMs have finished executing and registering themselves with the BBS API.[citation needed]
Also, if an expansion ROM wishes to change the way the system boots unilaterally, it can simply hook interrupt 19h or other interrupts normally called from interrupt 19h, such as interrupt 13h, the BIOS disk service, to intercept the BIOS boot process. Then it can replace the BIOS boot process with one of its own, or it can merely modify the boot sequence by inserting its own boot actions into it, by preventing the BIOS from detecting certain devices as bootable, or both. Before the BIOS Boot Specification was promulgated, this was the only way for expansion ROMs to implement boot capability for devices not supported for booting by the native BIOS of the motherboard.[citation needed]
The user can select the boot priority implemented by the BIOS. For example, most computers have a hard disk that is bootable, but sometimes there is a removable-media drive that has higher boot priority, so the user can cause a removable disk to be booted.
In most modern BIOSes, the boot priority order can be configured by the user. In older BIOSes, limited boot priority options are selectable; in the earliest BIOSes, a fixed priority scheme was implemented, with floppy disk drives first, fixed disks (i.e., hard disks) second, and typically no other boot devices supported, subject to modification of these rules by installed option ROMs. The BIOS in an early PC also usually would only boot from the first floppy disk drive or the first hard disk drive, even if there were two drives installed.
On the original IBM PC and XT, if no bootable disk was found, the BIOS would try to start ROM BASIC with the interrupt call to interrupt 18h. Since few programs used BASIC in ROM, clone PC makers left it out; then a computer that failed to boot from a disk would display "No ROM BASIC" and halt (in response to interrupt 18h).
Later computers would display a message like "No bootable disk found"; some would prompt for a disk to be inserted and a key to be pressed to retry the boot process. A modern BIOS may display nothing or may automatically enter the BIOS configuration utility when the boot process fails.
The environment for the boot program is very simple: the CPU is in real mode and the general-purpose and segment registers are undefined, except SS, SP, CS, and DL. CS:IP always points to physical address 0x07C00. What values CS and IP actually have is not well defined. Some BIOSes use a CS:IP of 0x0000:0x7C00 while others may use 0x07C0:0x0000.[29] Because boot programs are always loaded at this fixed address, there is no need for a boot program to be relocatable. DL may contain the drive number, as used with interrupt 13h, of the boot device. SS:SP points to a valid stack that is presumably large enough to support hardware interrupts, but otherwise SS and SP are undefined. (A stack must be already set up in order for interrupts to be serviced, and interrupts must be enabled in order for the system timer-tick interrupt, which BIOS always uses at least to maintain the time-of-day count and which it initializes during POST, to be active and for the keyboard to work. The keyboard works even if the BIOS keyboard service is not called; keystrokes are received and placed in the 15-character type-ahead buffer maintained by BIOS.) The boot program must set up its own stack, because the size of the stack set up by BIOS is unknown and its location is likewise variable; although the boot program can investigate the default stack by examining SS:SP, it is easier and shorter to just unconditionally set up a new stack.[30]
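The reason both CS:IP conventions are valid follows from real-mode address translation, where the physical address is the segment shifted left four bits plus the offset. A quick Python check:

```python
def physical(segment: int, offset: int) -> int:
    # Real-mode address translation: physical = segment * 16 + offset.
    return (segment << 4) + offset

# Both CS:IP conventions mentioned above name the same physical address,
# which is why boot code cannot assume particular CS and IP values.
assert physical(0x0000, 0x7C00) == physical(0x07C0, 0x0000) == 0x07C00
```

Because many segment:offset pairs alias the same physical byte, well-written boot sectors begin with a far jump that normalizes CS:IP to a known pair before using any CS-relative addressing.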
At boot time, all BIOS services are available, and the memory below address 0x00400 contains the interrupt vector table. BIOS POST has initialized the system timers, interrupt controller(s), DMA controller(s), and other motherboard/chipset hardware as necessary to bring all BIOS services to ready status. DRAM refresh for all system DRAM in conventional memory and extended memory, but not necessarily expanded memory, has been set up and is running. The interrupt vectors corresponding to the BIOS interrupts have been set to point at the appropriate entry points in the BIOS, hardware interrupt vectors for devices initialized by the BIOS have been set to point to the BIOS-provided ISRs, and some other interrupts, including ones that BIOS generates for programs to hook, have been set to a default dummy ISR that immediately returns. The BIOS maintains a reserved block of system RAM at addresses 0x00400–0x004FF with various parameters initialized during the POST. All memory at and above address 0x00500 can be used by the boot program; it may even overwrite itself.[31][32]
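The low-memory boundaries quoted above fit together exactly: 256 interrupt vectors of 4 bytes each (a 16-bit offset and a 16-bit segment) fill memory up to 0x400, the BIOS data area follows, and free memory begins immediately after it. A small Python consistency check:

```python
# Real-mode interrupt vector table: 256 vectors of 4 bytes (offset:segment).
IVT_START, IVT_SIZE = 0x00000, 256 * 4
BDA_START, BDA_END = 0x00400, 0x004FF   # BIOS data area, set up during POST
FREE_START = 0x00500                    # first byte usable by the boot program

assert IVT_START + IVT_SIZE == BDA_START   # the IVT ends exactly at 0x400
assert BDA_END + 1 == FREE_START           # free memory starts right after
```

This layout is why the text can say memory at and above 0x00500 belongs to the boot program: everything below it is either vectors the BIOS has installed or POST-initialized parameters.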
The BIOS ROM is customized to the particular manufacturer's hardware, allowing low-level services (such as reading a keystroke or writing a sector of data to diskette) to be provided in a standardized way to programs, including operating systems. For example, an IBM PC might have either a monochrome or a color display adapter (using different display memory addresses and hardware), but a single, standard, BIOS system call may be invoked to display a character at a specified position on the screen in text mode or graphics mode.
The BIOS provides a small library of basic input/output functions to operate peripherals (such as the keyboard, rudimentary text and graphics display functions and so forth). When using MS-DOS, BIOS services could be accessed by an application program (or by MS-DOS) by executing an interrupt 13h interrupt instruction to access disk functions, or by executing one of a number of other documented BIOS interrupt calls to access video display, keyboard, cassette, and other device functions.
Operating systems and executive software that are designed to supersede this basic firmware functionality provide replacement software interfaces to application software. Applications can also provide these services to themselves. This began even in the 1980s under MS-DOS, when programmers observed that using the BIOS video services for graphics display was very slow. To increase the speed of screen output, many programs bypassed the BIOS and programmed the video display hardware directly. Other graphics programmers, particularly but not exclusively in the demoscene, observed that there were technical capabilities of the PC display adapters that were not supported by the IBM BIOS and could not be taken advantage of without circumventing it. Since the AT-compatible BIOS ran in Intel real mode, operating systems that ran in protected mode on 286 and later processors required hardware device drivers compatible with protected mode operation to replace BIOS services.
In modern PCs running modern operating systems (such as Windows and Linux) the BIOS interrupt calls are used only during booting and initial loading of operating systems. Before the operating system's first graphical screen is displayed, input and output are typically handled through BIOS. A boot menu such as the textual menu of Windows, which allows users to choose an operating system to boot, to boot into safe mode, or to use the last known good configuration, is displayed through BIOS and receives keyboard input through BIOS.[4]
Many modern PCs can still boot and run legacy operating systems such as MS-DOS or DR-DOS that rely heavily on BIOS for their console and disk I/O, providing that the system has a BIOS, or a CSM-capable UEFI firmware.
Intel processors have had reprogrammable microcode since the P6 microarchitecture.[33][34][35] AMD processors have had reprogrammable microcode since the K7 microarchitecture. The BIOS contains patches to the processor microcode that fix errors in the initial processor microcode; because microcode is loaded into the processor's SRAM, the reprogramming is not persistent, so loading of microcode updates is performed each time the system is powered up. Without reprogrammable microcode, an expensive processor swap would be required;[36] for example, the Pentium FDIV bug became an expensive fiasco for Intel as it required a product recall because the original Pentium processor's defective microcode could not be reprogrammed. Operating systems can also update main processor microcode.[37][38]
Some BIOSes contain a software licensing description table (SLIC), a digital signature placed inside the BIOS by theoriginal equipment manufacturer(OEM), for exampleDell. The SLIC is inserted into the ACPI data table and contains no active code.[39][40]
Computer manufacturers that distribute OEM versions of Microsoft Windows and Microsoft application software can use the SLIC to authenticate licensing to the OEM Windows Installation disk and systemrecovery disccontaining Windows software. Systems with a SLIC can be preactivated with an OEM product key, and they verify an XML formatted OEM certificate against the SLIC in the BIOS as a means of self-activating (seeSystem Locked Preinstallation, SLP). If a user performs a fresh install of Windows, they will need to have possession of both the OEM key (either SLP or COA) and the digital certificate for their SLIC in order to bypass activation.[39]This can be achieved if the user performs a restore using a pre-customised image provided by the OEM. Power users can copy the necessary certificate files from the OEM image, decode the SLP product key, then perform SLP activation manually.
Some BIOS implementations allow overclocking, an action in which the CPU is adjusted to a higher clock rate than its manufacturer rating for guaranteed capability. Overclocking may, however, seriously compromise system reliability in insufficiently cooled computers and generally shorten component lifespan. Overclocking, when incorrectly performed, may also cause components to overheat so quickly that they mechanically destroy themselves.[41]
Some older operating systems, for example MS-DOS, rely on the BIOS to carry out most input/output tasks within the PC.[42]
Calling real mode BIOS services directly is inefficient for protected mode (and long mode) operating systems. BIOS interrupt calls are not used by modern multitasking operating systems after they initially load.
In the 1990s, the BIOS provided some protected mode interfaces for Microsoft Windows and Unix-like operating systems, such as Advanced Power Management (APM), Plug and Play BIOS, Desktop Management Interface (DMI), VESA BIOS Extensions (VBE), e820 and the MultiProcessor Specification (MPS). Starting from the year 2000, most BIOSes provide ACPI, SMBIOS, VBE and e820 interfaces for modern operating systems.[43][44][45][46][47]
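As a rough illustration of the kind of data the e820 interface reports, the sketch below (Python) models an e820 memory map as (base, length, type) entries and totals the usable RAM. The memory layout and the `usable_memory` helper are illustrative assumptions, not taken from real firmware.

```python
# Simplified model of an e820 memory map. Each entry is
# (base_address, length, type); type 1 marks usable RAM, while other
# values mark reserved regions. The layout is a hypothetical example.
E820_USABLE = 1

def usable_memory(e820_map):
    """Sum the bytes of all usable-RAM regions in an e820-style map."""
    return sum(length for _base, length, typ in e820_map if typ == E820_USABLE)

example_map = [
    (0x00000000, 0x0009FC00, 1),   # conventional memory below 640 KB
    (0x0009FC00, 0x00000400, 2),   # extended BIOS data area (reserved)
    (0x000F0000, 0x00010000, 2),   # BIOS ROM shadow (reserved)
    (0x00100000, 0x3FF00000, 1),   # extended memory above 1 MB
]
```

An operating system walks such a map at boot to learn which physical ranges it may hand to its page allocator.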
After operating systems load, the System Management Mode code is still running in SMRAM. Since 2010, BIOS technology has been in a transitional process toward UEFI.[5]
Historically, the BIOS in the IBM PC and XT had no built-in user interface. The BIOS versions in earlier PCs (XT-class) were not software-configurable; instead, users set the options via DIP switches on the motherboard. Later computers, including most IBM-compatibles with 80286 CPUs, had a battery-backed nonvolatile BIOS memory (CMOS RAM chip) that held BIOS settings.[48] These settings, such as video-adapter type, memory size, and hard-disk parameters, could only be configured by running a configuration program from a disk, not built into the ROM. A special "reference diskette" was inserted in an IBM AT to configure settings such as memory size.[49]
Early BIOS versions did not have passwords or boot-device selection options. The BIOS was hard-coded to boot from the first floppy drive, or, if that failed, the first hard disk. Access control in early AT-class machines was by a physical keylock switch (which was not hard to defeat if the computer case could be opened). Anyone who could switch on the computer could boot it.[citation needed]
Later, 386-class computers started integrating the BIOS setup utility in the ROM itself, alongside the BIOS code; these computers usually boot into the BIOS setup utility if a certain key or key combination is pressed, otherwise the BIOS POST and boot process are executed.
A modern BIOS setup utility has a text user interface (TUI) or graphical user interface (GUI) accessed by pressing a certain key on the keyboard when the PC starts. Usually, the key is advertised for a short time during the early startup, for example "Press DEL to enter Setup".
The actual key depends on the specific hardware. The settings key is most often Delete (Acer, ASRock, Asus PC, ECS, Gigabyte, MSI, Zotac) or F2 (Asus motherboard, Dell, Lenovo laptop, Origin PC, Samsung, Toshiba), but it can also be F1 (Lenovo desktop) or F10 (HP).[50]
Features present in the BIOS setup utility typically include:
A modern BIOS setup screen often features a PC Health Status or a Hardware Monitoring tab, which directly interfaces with a Hardware Monitor chip of the mainboard.[51] This makes it possible to monitor CPU and chassis temperature and the voltage provided by the power supply unit, as well as monitor and control the speed of the fans connected to the motherboard.
Once the system is booted, hardware monitoring and computer fan control are normally done directly by the Hardware Monitor chip itself, which can be a separate chip, interfaced through I²C or SMBus, or come as part of a Super I/O solution, interfaced through Industry Standard Architecture (ISA) or Low Pin Count (LPC).[52] Some operating systems, like NetBSD with envsys and OpenBSD with sysctl hw.sensors, feature integrated interfacing with hardware monitors.
However, in some circumstances, the BIOS also provides the underlying information about hardware monitoring through ACPI, in which case the operating system may use ACPI to perform hardware monitoring.[53][54]
In modern PCs the BIOS is stored in rewritable EEPROM[55] or NOR flash memory,[56] allowing the contents to be replaced and modified. This rewriting of the contents is sometimes termed flashing. It can be done by a special program, usually provided by the system's manufacturer, or at POST, with a BIOS image on a hard drive or USB flash drive. A file containing such contents is sometimes termed "a BIOS image". A BIOS might be reflashed in order to upgrade to a newer version to fix bugs, provide improved performance, or support newer hardware. Some computers also support updating the BIOS via an update floppy disk or a special partition on the hard drive.[57]
The original IBM PC BIOS (and cassette BASIC) was stored on mask-programmed read-only memory (ROM) chips in sockets on the motherboard. ROMs could be replaced,[58] but not altered, by users. To allow for updates, many compatible computers used re-programmable BIOS memory devices such as EPROM, EEPROM and later flash memory (usually NOR flash) devices. According to Robert Braver, the president of the BIOS manufacturer Micro Firmware, Flash BIOS chips became common around 1995 because electrically erasable PROM (EEPROM) chips are cheaper and easier to program than standard ultraviolet-erasable PROM (EPROM) chips. Flash chips are programmed (and re-programmed) in-circuit, while EPROM chips need to be removed from the motherboard for re-programming.[59] BIOS versions are upgraded to take advantage of newer versions of hardware and to correct bugs in previous revisions of BIOSes.[60]
Beginning with the IBM AT, PCs supported a hardware clock settable through BIOS. It had a century bit which allowed for manually changing the century when the year 2000 arrived. Most BIOS revisions created in 1995 and nearly all BIOS revisions in 1997 supported the year 2000 by setting the century bit automatically when the clock rolled past midnight, 31 December 1999.[61]
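The century bit's role in the year-2000 rollover can be sketched as follows. The `rtc_full_year` helper and its encoding (0 for 19xx, 1 for 20xx) are illustrative assumptions; real RTCs often store the century differently, for example as a BCD register.

```python
def rtc_full_year(two_digit_year, century_bit):
    """Derive a full year from the RTC's two-digit year and a century
    bit, as a BIOS supporting the year-2000 rollover might. This is a
    hypothetical helper, not a real BIOS interface: century_bit 0
    means 19xx, 1 means 20xx."""
    return (2000 if century_bit else 1900) + two_digit_year
```

Under this model, a BIOS that flips the century bit at the 1999-to-2000 rollover keeps the derived year correct without any operator intervention.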
The first flash chips were attached to the ISA bus. Starting in 1998, the BIOS flash moved to the LPC bus, following a new standard implementation known as "firmware hub" (FWH). In 2005, the BIOS flash memory moved to the SPI bus.[62]
The size of the BIOS, and the capacity of the ROM, EEPROM, or other media it may be stored on, has increased over time as new features have been added to the code; BIOS versions now exist with sizes up to 32 megabytes. For contrast, the original IBM PC BIOS was contained in an 8 KB mask ROM. Some modern motherboards include even larger NAND flash memory ICs on board which are capable of storing whole compact operating systems, such as some Linux distributions. For example, some ASUS notebooks included Splashtop OS embedded into their NAND flash memory ICs.[63] However, the idea of including an operating system along with BIOS in the ROM of a PC is not new; in the 1980s, Microsoft offered a ROM option for MS-DOS, and it was included in the ROMs of some PC clones such as the Tandy 1000 HX.
Another type of firmware chip was found on the IBM PC AT and early compatibles. In the AT, the keyboard interface was controlled by a microcontroller with its own programmable memory. On the IBM AT, that was a 40-pin socketed device, while some manufacturers used an EPROM version of this chip. This controller was also assigned the A20 gate function to manage memory above the one-megabyte range; occasionally an upgrade of this "keyboard BIOS" was necessary to take advantage of software that could use upper memory.[citation needed]
The BIOS may contain components such as the Memory Reference Code (MRC), which is responsible for memory initialization (e.g. SPD and memory timings initialization).[64]: 8[65]
Modern BIOS[66] includes Intel Management Engine or AMD Platform Security Processor firmware.
IBM published the entire listings of the BIOS for its original PC, PC XT, PC AT, and other contemporary PC models, in an appendix of the IBM PC Technical Reference Manual for each machine type. The effect of the publication of the BIOS listings is that anyone can see exactly what a definitive BIOS does and how it does it.
In May 1984, Phoenix Software Associates released its first ROM-BIOS. This BIOS enabled OEMs to build essentially fully compatible clones without having to reverse-engineer the IBM PC BIOS themselves, as Compaq had done for the Portable; it also helped fuel the growth in the PC-compatibles industry and sales of non-IBM versions of DOS.[69] The first American Megatrends (AMI) BIOS was released in 1986.
New standards grafted onto the BIOS are usually without complete public documentation or any BIOS listings. As a result, it is not as easy to learn the intimate details about the many non-IBM additions to BIOS as about the core BIOS services.
Many PC motherboard suppliers licensed the BIOS "core" and toolkit from a commercial third party, known as an "independent BIOS vendor" or IBV. The motherboard manufacturer then customized this BIOS to suit its own hardware. For this reason, updated BIOSes are normally obtained directly from the motherboard manufacturer. Major IBVs included American Megatrends (AMI), Insyde Software, Phoenix Technologies, and Byosoft. Microid Research and Award Software were acquired by Phoenix Technologies in 1998; Phoenix later phased out the Award brand name (although Award Software is still credited in newer AwardBIOS versions and in UEFI firmwares).[when?] General Software, which was also acquired by Phoenix in 2007, sold BIOS for embedded systems based on Intel processors.
SeaBIOS is an open-source BIOS implementation.
The open-source community increased its effort to develop a replacement for proprietary BIOSes and their future incarnations with open-source counterparts. Open Firmware was an early attempt to make an open specification for boot firmware. It was initially endorsed by IEEE in its IEEE 1275-1994 standard but was withdrawn in 2005.[70][71] Later examples include the OpenBIOS, coreboot and libreboot projects. AMD provided product specifications for some chipsets using coreboot, and Google is sponsoring the project. Motherboard manufacturer Tyan offers coreboot next to the standard BIOS with their Opteron line of motherboards.
EEPROM and flash memory chips are advantageous because they can be easily updated by the user; it is customary for hardware manufacturers to issue BIOS updates to upgrade their products, improve compatibility and remove bugs. However, this advantage carries the risk that an improperly executed or aborted BIOS update could render the computer or device unusable. To avoid these situations, more recent BIOSes use a "boot block", a portion of the BIOS which runs first and must be updated separately. This code verifies whether the rest of the BIOS is intact (using hash checksums or other methods) before transferring control to it. If the boot block detects any corruption in the main BIOS, it will typically warn the user that a recovery process must be initiated by booting from removable media (floppy, CD or USB flash drive) so the user can try flashing the BIOS again. Some motherboards have a backup BIOS (sometimes referred to as DualBIOS boards) to recover from BIOS corruption.
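The boot block's integrity check can be sketched as below. This is a minimal model assuming a SHA-256 digest stored alongside the main BIOS region; real firmware uses vendor-specific checksums or signature schemes, and the `main_bios_intact` helper is purely illustrative.

```python
import hashlib

def main_bios_intact(bios_image, stored_digest):
    """Model of a boot block's integrity check: hash the main BIOS
    region and compare it against a digest stored alongside it. If
    they differ, the boot block would trigger a recovery procedure."""
    return hashlib.sha256(bios_image).digest() == stored_digest

# Stand-in for a flashed main BIOS region and its recorded digest.
good_image = b"\x90" * 1024
good_digest = hashlib.sha256(good_image).digest()
```

A corrupted or partially flashed image no longer matches the stored digest, which is exactly the condition under which the boot block asks the user to reflash from removable media.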
There are at least five known viruses that attack the BIOS, two of which were for demonstration purposes. The first one found in the wild was Mebromi, targeting Chinese users.
The first BIOS virus was BIOS Meningitis, which infected BIOS chips rather than erasing them. BIOS Meningitis was relatively harmless compared to a virus like CIH.
The second BIOS virus was CIH, also known as the "Chernobyl Virus", which was able to erase flash ROM BIOS content on compatible chipsets. CIH appeared in mid-1998 and became active in April 1999. Often, infected computers could no longer boot, and people had to remove the flash ROM IC from the motherboard and reprogram it. CIH targeted the then-widespread Intel i430TX motherboard chipset and took advantage of the fact that the Windows 9x operating systems, also widespread at the time, allowed direct hardware access to all programs.
Modern systems are not vulnerable to CIH because a variety of chipsets are used which are incompatible with the Intel i430TX chipset, as are other flash ROM IC types. There is also extra protection from accidental BIOS rewrites in the form of boot blocks which are protected from accidental overwrite, or dual- and quad-BIOS equipped systems which may, in the event of a crash, use a backup BIOS. Also, all modern operating systems such as FreeBSD, Linux, macOS, and Windows NT-based Windows OSes like Windows 2000, Windows XP and newer do not allow user-mode programs to have direct hardware access, using a hardware abstraction layer.[72]
As a result, as of 2008, CIH has become essentially harmless, at worst causing annoyance by infecting executable files and triggering antivirus software. Other BIOS viruses remain possible, however;[73] since most Windows home users without Windows Vista/7's UAC run all applications with administrative privileges, a modern CIH-like virus could in principle still gain access to hardware without first using an exploit.[citation needed] The operating system OpenBSD prevents all users from having this access, and the grsecurity patch for the Linux kernel also prevents this direct hardware access by default, the difference being that an attacker requires a much more difficult kernel-level exploit or a reboot of the machine.[citation needed]
The third BIOS virus was a technique presented by John Heasman, principal security consultant for UK-based Next-Generation Security Software. In 2006, at the Black Hat Security Conference, he showed how to elevate privileges and read physical memory, using malicious procedures that replaced normal ACPI functions stored in flash memory.[74]
The fourth BIOS virus was a technique called "Persistent BIOS infection." It appeared in 2009 at the CanSecWest Security Conference in Vancouver, and at the SyScan Security Conference in Singapore. Researchers Anibal Sacco[75] and Alfredo Ortega, from Core Security Technologies, demonstrated how to insert malicious code into the decompression routines in the BIOS, allowing for nearly full control of the PC at start-up, even before the operating system is booted. The proof-of-concept does not exploit a flaw in the BIOS implementation, but only involves the normal BIOS flashing procedures. Thus, it requires physical access to the machine, or for the user to be root. Despite these requirements, Ortega underlined the profound implications of his and Sacco's discovery: "We can patch a driver to drop a fully working rootkit. We even have a little code that can remove or disable antivirus."[76]
Mebromi is a trojan which targets computers with AwardBIOS, Microsoft Windows, and antivirus software from two Chinese companies: Rising Antivirus and Jiangmin KV Antivirus.[77][78][79] Mebromi installs a rootkit which infects the Master Boot Record.
In a December 2013 interview with 60 Minutes, Deborah Plunkett, Information Assurance Director for the US National Security Agency, claimed the NSA had uncovered and thwarted a possible BIOS attack by a foreign nation state, targeting the US financial system.[80] The program cited anonymous sources alleging it was a Chinese plot.[80] However, follow-up articles in The Guardian,[81] The Atlantic,[82] Wired[83] and The Register[84] refuted the NSA's claims.
Newer Intel platforms have Intel Boot Guard (IBG) technology enabled; this technology checks the BIOS digital signature at startup, and the IBG public key is fused into the PCH. End users cannot disable this function.
Unified Extensible Firmware Interface (UEFI) supplements the BIOS in many new machines. Initially written for the Intel Itanium architecture, UEFI is now available for x86 and Arm platforms; the specification development is driven by the Unified EFI Forum, an industry special interest group. EFI booting has been supported only in Microsoft Windows versions supporting GPT,[85] the Linux kernel 2.6.1 and later, and macOS on Intel-based Macs.[86] As of 2014, new PC hardware predominantly ships with UEFI firmware. The architecture of the rootkit safeguard can also prevent the system from running the user's own software changes, which makes UEFI controversial as a legacy BIOS replacement in the open hardware community. Also, Windows 11 requires UEFI to boot,[87] with the exception of IoT Enterprise editions of Windows 11.[10] UEFI is required for devices shipping with Windows 8[88][89] and above.
After the popularity of UEFI in the 2010s, the older BIOS that supported BIOS interrupt calls came to be called "legacy BIOS".[citation needed]
Other alternatives to the functionality of the "legacy BIOS" in the x86 world include coreboot and libreboot.
Some servers and workstations use a platform-independent Open Firmware (IEEE-1275) based on the Forth programming language; it is included with Sun's SPARC computers, IBM's RS/6000 line, and other PowerPC systems such as the CHRP motherboards, along with the x86-based OLPC XO-1.
As of at least 2015, Apple has removed legacy BIOS support from the UEFI firmware in Intel-based Macs. As such, the BIOS utility no longer supports the legacy option, and prints "Legacy mode not supported on this system".
In 2017, Intel announced that it would remove legacy BIOS support by 2020. Since 2019, new Intel platform OEM PCs no longer support the legacy option.[90]
https://en.wikipedia.org/wiki/BIOS
CPU modes (also called processor modes, CPU states, CPU privilege levels and other names) are operating modes for the central processing unit of most computer architectures that place restrictions on the type and scope of operations that can be performed by instructions being executed by the CPU. For example, this design allows an operating system to run with more privileges than application software by running the operating system and applications in different modes.[1]
Ideally, only highly trusted kernel code is allowed to execute in the unrestricted mode; everything else (including non-supervisory portions of the operating system) runs in a restricted mode and must use a system call (via interrupt) to request the kernel perform on its behalf any operation that could damage or compromise the system, making it impossible for untrusted programs to alter or damage other programs (or the computing system itself). Device drivers are designed to be part of the kernel due to the need for frequent I/O access.
Multiple modes can be implemented, e.g. allowing a hypervisor to run multiple operating system supervisors beneath it, which is the basic design of many virtual machine systems available today.
The unrestricted mode is often called kernel mode, but many other designations exist (master mode, supervisor mode, privileged mode, etc.). Restricted modes are usually referred to as user modes, but are also known by many other names (slave mode, problem state, etc.).[2]
Some CPU architectures support more modes than those, often with a hierarchy of privileges. These architectures are often said to have ring-based security, wherein the hierarchy of privileges resembles a set of concentric rings, with the kernel mode in the center. Multics hardware was the first significant implementation of ring security, but many other hardware platforms have been designed along similar lines, including the Intel 80286 protected mode, and the IA-64 as well, though it is referred to by a different name in these cases.
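The concentric-ring model reduces to a numeric comparison: lower ring numbers are more privileged, so code may touch a resource only when its current ring is numerically at or below the ring the resource requires. The `access_allowed` helper below is an illustrative sketch, not any real architecture's privilege check.

```python
def access_allowed(current_ring, required_ring):
    """Ring-based privilege check: ring 0 is the most privileged
    (kernel), higher-numbered rings are progressively restricted.
    Access is granted only when the executing code's ring number is
    less than or equal to the ring the resource demands."""
    return current_ring <= required_ring
```

Under this model, kernel code in ring 0 can access a ring-3 resource, but user code in ring 3 attempting a ring-0 operation is refused and must instead make a system call.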
Mode protection may extend to resources beyond the CPU hardware itself. Hardware registers track the current operating mode of the CPU, but additional virtual-memory registers, page-table entries, and other data may track mode identifiers for other resources. For example, a CPU may be operating in Ring 0 as indicated by a status word in the CPU itself, but every access to memory may additionally be validated against a separate ring number for the virtual-memory segment targeted by the access, and/or against a ring number for the physical page (if any) being targeted. This has been demonstrated with the PSP handheld system.
Hardware that meets the Popek and Goldberg virtualization requirements makes writing software to efficiently support a virtual machine much simpler. Such a system can run software that "believes" it is running in supervisor mode, but is actually running in user mode.
Several computer systems introduced in the 1960s, such as the IBM System/360, the DEC PDP-6/PDP-10, the GE-600/Honeywell 6000 series, and the Burroughs B5000 and B6500 series, support two CPU modes: a mode that grants full privileges to code running in that mode, and a mode that prevents direct access to input/output devices and some other hardware facilities to code running in that mode. The first mode is referred to by names such as supervisor state (System/360), executive mode (PDP-6/PDP-10), master mode (GE-600 series), control mode (B5000 series), and control state (B6500 series). The second mode is referred to by names such as problem state (System/360), user mode (PDP-6/PDP-10), slave mode (GE-600 series), and normal state (B6500 series); there are multiple non-control modes in the B5000 series.
RISC-V has three main CPU modes: User Mode (U), Supervisor Mode (S), and Machine Mode (M).[3] Virtualization is supported via an orthogonal CSR setting instead of a fourth mode.
https://en.wikipedia.org/wiki/CPU_modes
The Linux booting process involves multiple stages and is in many ways similar to the BSD and other Unix-style boot processes, from which it derives. Although the Linux booting process depends very much on the computer architecture, those architectures share similar stages and software components,[1] including system startup, bootloader execution, loading and startup of a Linux kernel image, and execution of various startup scripts and daemons.[2] Those are grouped into four steps: system startup, bootloader stage, kernel stage, and init process.[3]
When a Linux system is powered up or reset, its processor will execute a specific firmware/program for system initialization, such as the power-on self-test, invoking the reset vector to start a program at a known address in flash/ROM (in embedded Linux devices), then load the bootloader into RAM for later execution.[2] In IBM PC–compatible personal computers (PCs), this firmware/program is either a BIOS or a UEFI monitor, and is stored in the mainboard.[2] In embedded Linux systems, this firmware/program is called the boot ROM.[4][5] After being loaded into RAM, the bootloader (also called the first-stage or primary bootloader) will execute to load the second-stage bootloader[2] (also called the secondary bootloader).[6] The second-stage bootloader will load the kernel image into memory, decompress and initialize it, and then pass control to this kernel image.[2] The second-stage bootloader also performs several operations on the system such as checking the system hardware, mounting the root device, loading the necessary kernel modules, etc.[2] Finally, the first user-space process (the init process) starts, and other high-level system initializations are performed (which involve startup scripts).[2]
For each of these stages and components, there are different variations and approaches; for example, GRUB, systemd-boot, coreboot or Das U-Boot can be used as bootloaders (historical examples are LILO, SYSLINUX and Loadlin), while the startup scripts can be either traditional init-style, or the system configuration can be performed through modern alternatives such as systemd or Upstart.
System startup has different steps based on the hardware that Linux is being booted on.[7]
IBM PC compatible hardware is one architecture Linux is commonly used on; on these systems, the BIOS or UEFI firmware plays an important role.
In BIOS systems, the BIOS performs the power-on self-test (POST), which checks the system hardware, then enumerates local devices and finally initializes the system.[7] For system initialization, the BIOS starts by searching for a bootable device on the system which stores the OS. A bootable device can be a storage device like a floppy disk, CD-ROM, USB flash drive, a partition on a hard disk (where a hard disk stores multiple OSes, e.g. Windows and Fedora), a storage device on the local network, etc.[7] A hard disk from which Linux boots stores the Master Boot Record (MBR), which contains the first-stage/primary bootloader to be loaded into RAM.[7]
In UEFI systems, the Linux kernel can be executed directly by UEFI firmware via the EFI boot stub,[8] but usually uses GRUB 2 or systemd-boot as a bootloader.[9][10]
If UEFI Secure Boot is supported, a "shim" or "Preloader" is often booted by the UEFI before the bootloader or EFI-stub-bearing kernel.[11] Even if UEFI Secure Boot is disabled, this may be present and booted in case it is later enabled. It merely acts to add an extra signing-key database providing keys for signature verification of subsequent boot stages without modifying the UEFI key database, and chains to the subsequent boot step the same as the UEFI would have.
The system startup stage on an embedded Linux system starts by executing the firmware/program in the on-chip boot ROM, which then loads the bootloader or operating system from a storage device like eMMC, eUFS, NAND flash, etc.[5] The sequence of system startup varies by processor,[5] but all include hardware initialization and system hardware testing steps.[7] For example, in a system with an i.MX7D processor and a bootable device which stores the OS (including U-Boot), the on-chip boot ROM first sets up the DDR memory controller, which allows the boot ROM's program to obtain the SoC configuration data from the external bootloader on the bootable device.[5] The on-chip boot ROM then loads U-Boot into DRAM for the bootloader stage.[12]
The first-stage bootloader, which is part of the MBR, is a 512-byte image containing vendor-specific program code and a partition table.[6] As mentioned earlier, the first-stage bootloader finds and loads the second-stage bootloader.[6] It does this by searching the partition table for an active partition.[6] After finding an active partition, the first-stage bootloader keeps scanning the remaining partitions in the table to ensure that they are all inactive.[6] After this step, the active partition's boot record is read into RAM and executed as the second-stage bootloader.[6] The job of the second-stage bootloader is to load the Linux kernel image into memory, and optionally an initial RAM disk.[13] The kernel image is not an executable kernel, but rather a compressed file of the kernel, compressed into either the zImage or bzImage format with zlib.[14]
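The active-partition scan described above can be sketched against the classic MBR layout: a partition table of four 16-byte entries at offset 446, each beginning with a status byte (0x80 marks active), and the 0x55AA signature at offset 510. The helper below is an illustrative model, not production parsing code.

```python
MBR_SIZE = 512
PART_TABLE_OFFSET = 446
PART_ENTRY_SIZE = 16
BOOT_FLAG = 0x80

def find_active_partition(mbr):
    """Scan the four partition entries of a 512-byte MBR and return
    the index of the single active partition (status byte 0x80), or
    None when the boot signature is missing or no single active
    partition exists."""
    if len(mbr) != MBR_SIZE or mbr[510:512] != b"\x55\xaa":
        return None
    active = [i for i in range(4)
              if mbr[PART_TABLE_OFFSET + i * PART_ENTRY_SIZE] == BOOT_FLAG]
    return active[0] if len(active) == 1 else None

# Build a synthetic MBR with the second entry (index 1) marked active.
synthetic_mbr = bytearray(MBR_SIZE)
synthetic_mbr[PART_TABLE_OFFSET + 1 * PART_ENTRY_SIZE] = BOOT_FLAG
synthetic_mbr[510:512] = b"\x55\xaa"
```

A real first-stage loader would then read that partition's boot record into RAM and jump to it as the second stage.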
On x86 PCs, the first- and second-stage bootloaders are combined into the GRand Unified Bootloader (GRUB), and formerly the Linux Loader (LILO).[13] GRUB 2, which is now used, differs from GRUB 1 by being capable of automatic detection of various operating systems and automatic configuration. The stage1 is loaded and executed by the BIOS from the Master Boot Record (MBR). The intermediate stage loader (stage1.5, usually core.img) is loaded and executed by the stage1 loader. The second-stage loader (stage2, the /boot/grub/ files) is loaded by the stage1.5 and displays the GRUB startup menu that allows the user to choose an operating system or examine and edit startup parameters. After a menu entry is chosen and optional parameters are given, GRUB loads the Linux kernel into memory and passes control to it. GRUB 2 is also capable of chain-loading another bootloader. In UEFI systems, the stage1 and stage1.5 are usually the same UEFI application file (such as grubx64.efi for x64 UEFI systems).
Besides GRUB, other popular bootloaders include:
Historical bootloaders, no longer in common use, include:
The kernel stage occurs after the bootloader stage. The Linux kernel handles all operating system processes, such as memory management, task scheduling, I/O, interprocess communication, and overall system control. This is loaded in two stages – in the first stage, the kernel (as a compressed image file) is loaded into memory and decompressed, and a few fundamental functions are set up, such as basic memory management and a minimal amount of hardware setup.[14] It is worth noting that the kernel image is self-decompressing: the decompression code is part of the kernel image itself.[14] For some platforms (like 64-bit ARM), kernel decompression has to be performed by the bootloader instead, like U-Boot.[17]
For details of those steps, take the example of the i386 microprocessor. When its bzImage is invoked, the function start() (of ./arch/i386/boot/head.S) is called to do some basic hardware setup and then calls startup_32() (located in ./arch/i386/boot/compressed/head.S).[14] startup_32() does basic setup of the environment (stack, etc.), clears the Block Started by Symbol (BSS) section, then invokes decompress_kernel() (located in ./arch/i386/boot/compressed/misc.c) to decompress the kernel.[14] Kernel startup is then executed via a different startup_32() function located in ./arch/i386/kernel/head.S.[14] The startup function startup_32() for the kernel (also called the swapper or process 0) establishes memory management (paging tables and memory paging), detects the type of CPU and any additional functionality such as floating point capabilities, and then switches to non-architecture-specific Linux kernel functionality via a call to start_kernel() located in ./init/main.c.[14]
start_kernel() executes a wide range of initialization functions. It sets up interrupt handling (IRQs), further configures memory, and mounts the initial RAM disk ("initrd") that was loaded previously as the temporary root file system during the bootloader stage.[14] The initrd, which acts as a temporary root filesystem in RAM, allows the kernel to be fully booted and driver modules to be loaded directly from memory, without reliance upon other devices (e.g. a hard disk).[14] initrd contains the necessary modules needed to interface with peripherals,[14] e.g. a SATA driver, and supports a large number of possible hardware configurations.[14] This split between drivers statically compiled into the kernel and drivers loaded from initrd allows for a smaller kernel.[14] initramfs, also known as early user space, has been available since version 2.5.46 of the Linux kernel,[18] with the intent to replace as many functions as possible that the kernel would previously have performed during the startup process. Typical uses of early user space are to detect what device drivers are needed to load the main user space file system and load them from a temporary filesystem. Many distributions use dracut to generate and maintain the initramfs image.
The root file system is later switched via a call to pivot_root(), which unmounts the temporary root file system and replaces it with the real one, once the latter is accessible.[14] The memory used by the temporary root file system is then reclaimed.[clarification needed]
Finally, kernel_thread (in arch/i386/kernel/process.c) is called to start the Init process (the first user-space process), and then the idle task is started via cpu_idle().[14]
Thus, the kernel stage initializes devices, mounts the root filesystem specified by the bootloader as read only, and runs Init (/sbin/init), which is designated as the first process run by the system (PID = 1).[19] A message is printed by the kernel upon mounting the file system, and by Init upon starting the Init process.[19]
According to Red Hat, the detailed kernel process at this stage is therefore summarized as follows:[15]
At this point, with interrupts enabled, the scheduler can take control of the overall management of the system, to provide pre-emptive multi-tasking, and the init process is left to continue booting the user environment in user space.
Once the kernel has started, it starts the init process,[20] a daemon which then bootstraps the user space, for example by checking and mounting file systems, and starting up other processes. The init system is the first daemon to start (during booting) and the last daemon to terminate (during shutdown).
Historically this was the "SysV init", which was just called "init". More recent Linux distributions are likely to use one of the more modern alternatives such assystemd. Below is a summary of the main init processes:
|
https://en.wikipedia.org/wiki/Early_user_space
|
Memory protection is a way to control memory access rights on a computer, and is a part of most modern instruction set architectures and operating systems. The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug or malware within a process from affecting other processes, or the operating system itself. Protection may encompass all accesses to a specified area of memory, write accesses, or attempts to execute the contents of the area. An attempt to access unauthorized[a] memory results in a hardware fault, e.g., a segmentation fault or storage violation exception, generally causing abnormal termination of the offending process. Memory protection for computer security includes additional techniques such as address space layout randomization and executable-space protection.
Segmentation refers to dividing a computer's memory into segments. A reference to a memory location includes a value that identifies a segment and an offset within that segment. A segment descriptor may limit access rights, e.g., read only, or only from certain rings.
The x86 architecture has multiple segmentation features, which are helpful for using protected memory on this architecture.[1] On the x86 architecture, the Global Descriptor Table and Local Descriptor Tables can be used to reference segments in the computer's memory. Pointers to memory segments on x86 processors can also be stored in the processor's segment registers. Initially x86 processors had 4 segment registers, CS (code segment), SS (stack segment), DS (data segment) and ES (extra segment); later another two segment registers were added, FS and GS.[1]
In paging, the memory address space or segment is divided into equal-sized blocks[b] called pages. Using virtual memory hardware, each page can reside in any location at a suitable boundary of the computer's physical memory, or be flagged as being protected. Virtual memory makes it possible to have a linear virtual memory address space and to use it to access blocks fragmented over the physical memory address space.
Most computer architectures which support paging also use pages as the basis for memory protection.
A page table maps virtual memory to physical memory. There may be a single page table, a page table for each process, a page table for each segment, or a hierarchy of page tables, depending on the architecture and the OS. The page tables are usually invisible to the process. Page tables make it easier to allocate additional memory, as each new page can be allocated from anywhere in physical memory. On some systems a page table entry can also designate a page as read-only.
Some operating systems set up a different address space for each process, which provides hard memory protection boundaries.[2] It is impossible for an unprivileged[c] application to access a page that has not been explicitly allocated to it, because every memory address either points to a page allocated to that application, or generates an interrupt called a page fault. Unallocated pages, and pages allocated to any other application, do not have any addresses from the application's point of view.
A page fault may not necessarily indicate an error. Page faults are not only used for memory protection. The operating system may manage the page table in such a way that a reference to a page that has been previously paged out to secondary storage[d] causes a page fault. The operating system intercepts the page fault, loads the required memory page, and the application continues as if no fault had occurred. This scheme, a type of virtual memory, allows in-memory data not currently in use to be moved to secondary storage and back in a way which is transparent to applications, to increase overall memory capacity.
On some systems, a request for virtual storage may allocate a block of virtual addresses for which no page frames have been assigned, and the system will only assign and initialize page frames when page faults occur. On some systems a guard page may be used, either for error detection or to automatically grow data structures.
On some systems, the page fault mechanism is also used for executable-space protection such as W^X.
A memory protection key (MPK)[3] mechanism divides physical memory into blocks of a particular size (e.g., 4 KiB), each of which has an associated numerical value called a protection key. Each process also has a protection key value associated with it. On a memory access the hardware checks that the current process's protection key matches the value associated with the memory block being accessed; if not, an exception occurs. This mechanism was introduced in the System/360 architecture. It is available on today's System z mainframes and is heavily used by System z operating systems and their subsystems.
The System/360 protection keys described above are associated with physical addresses. This is different from the protection key mechanism used by architectures such as the Hewlett-Packard/Intel IA-64 and Hewlett-Packard PA-RISC, which are associated with virtual addresses, and which allow multiple keys per process.
In the Itanium and PA-RISC architectures, translations (TLB entries) have keys (Itanium) or access ids (PA-RISC) associated with them. A running process has several protection key registers (16 for Itanium,[4] 4 for PA-RISC[5]). A translation selected by the virtual address has its key compared to each of the protection key registers. If any of them match (plus other possible checks), the access is permitted. If none match, a fault or exception is generated. The software fault handler can, if desired, check the missing key against a larger list of keys maintained by software; thus, the protection key registers inside the processor may be treated as a software-managed cache of a larger list of keys associated with a process.
PA-RISC has 15–18 bits of key; Itanium mandates at least 18. Keys are usually associated with protection domains, such as libraries, modules, etc.
In the x86 architecture, the protection keys[6] feature allows tagging virtual addresses for user pages with any of 16 protection keys. All the pages tagged with the same protection key constitute a protection domain. A new register contains the permissions associated with each protection domain. Load and store operations are checked against both the page-table permissions and the protection-key permissions associated with the protection domain of the virtual address, and are only allowed if both permit the access. The protection-key permissions can be set from user space, allowing applications to directly restrict access to application data without OS intervention. Since the protection keys are associated with a virtual address, the protection domains are per address space, so processes running in different address spaces can each use all 16 domains.
In Multics and systems derived from it, each segment has a protection ring for reading, writing and execution; an attempt by a process with a higher ring number than the ring number for the segment causes a fault. There is a mechanism for safely calling procedures that run in a lower ring and returning to the higher ring. There are mechanisms for a routine running with a low ring number to access a parameter with the larger of its own ring and the caller's ring.
Simulation is the use of a monitoring program to interpret the machine code instructions of some computer architectures. Such an instruction set simulator can provide memory protection by using a segmentation-like scheme and validating the target address and length of each instruction in real time before actually executing them. The simulator must calculate the target address and length and compare this against a list of valid address ranges that it holds concerning the thread's environment, such as any dynamic memory blocks acquired since the thread's inception, plus any valid shared static memory slots. The meaning of "valid" may change throughout the thread's life depending upon context. It may sometimes be allowed to alter a static block of storage, and sometimes not, depending upon the current mode of execution, which may or may not depend on a storage key or supervisor state.[citation needed]
It is generally not advisable to use this method of memory protection where adequate facilities exist on a CPU, as this takes valuable processing power from the computer. However, it is generally used for debugging and testing purposes to provide an extra fine level of granularity to otherwise generic storage violations and can indicate precisely which instruction is attempting to overwrite the particular section of storage which may have the same storage key as unprotected storage.
Capability-based addressing is a method of memory protection that is unused in modern commercial computers. In this method, pointers are replaced by protected objects (called capabilities) that can only be created using privileged instructions which may only be executed by the kernel, or some other process authorized to do so.[citation needed] This effectively lets the kernel control which processes may access which objects in memory, with no need to use separate address spaces or context switches. Only a few commercial products used capability-based security: the Plessey System 250, IBM System/38, the Intel iAPX 432 architecture and KeyKOS. Capability approaches are widely used in research systems such as EROS and the Combex DARPA browser. They are used conceptually as the basis for some virtual machines, most notably Smalltalk and Java. Currently, the DARPA-funded CHERI project at the University of Cambridge is working to create a modern capability machine that also supports legacy software.
Dynamic tainting is a technique for protecting programs from illegal memory accesses. When memory is allocated, at runtime, this technique taints both the memory and the corresponding pointer using the same taint mark. Taint marks are then suitably propagated while the program executes and are checked every time a memory address m is accessed through a pointer p; if the taint marks associated with m and p differ, the execution is stopped and the illegal access is reported.[7][8]
SPARC M7 processors (and higher) implement dynamic tainting in hardware. Oracle markets this feature as Silicon Secured Memory (SSM) (previously branded as Application Data Integrity (ADI)).[9]
The lowRISC CPU design includes dynamic tainting under the name Tagged Memory.[10]
The protection level of a particular implementation may be measured by how closely it adheres to the principle of minimum privilege.[11]
Different operating systems use different forms of memory protection or separation. Although memory protection was common on most mainframes and many minicomputer systems from the 1960s, true memory separation was not used in home computer operating systems until OS/2 (and RISC OS) was released in 1987. On prior systems, such lack of protection was even used as a form of interprocess communication, by sending a pointer between processes. It is possible for processes to access system memory in the Windows 9x family of operating systems.[12]
Some operating systems that do implement memory protection include:
On Unix-like systems, the mprotect system call is used to control memory protection.[14]
|
https://en.wikipedia.org/wiki/Memory_protection
|
OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, including containers (LXC, Solaris Containers, AIX WPARs, HP-UX SRP Containers, Docker, Podman), zones (Solaris Containers), virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels (DragonFly BSD), and jails (FreeBSD jail and chroot).[1] Such instances may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. Programs running inside a container can only see the container's contents and devices assigned to the container.
On Unix-like operating systems, this feature can be seen as an advanced implementation of the standard chroot mechanism, which changes the apparent root folder for the current running process and its children. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers. Linux containers are all based on the virtualization, isolation, and resource management mechanisms provided by the Linux kernel, notably Linux namespaces and cgroups.[2]
Although the word container most commonly refers to OS-level virtualization, it is sometimes used to refer to fuller virtual machines operating in varying degrees of concert with the host OS,[citation needed] such as Microsoft's Hyper-V containers.[citation needed] For an overview of virtualization since 1960, see Timeline of virtualization technologies.
On ordinary operating systems for personal computers, a computer program can see (even though it might not be able to access) all the system's resources. They include:
The operating system may be able to allow or deny access to such resources based on which program requests them and the user account in the context in which it runs. The operating system may also hide those resources, so that when the computer program enumerates them, they do not appear in the enumeration results. Nevertheless, from a programming point of view, the computer program has interacted with those resources and the operating system has managed an act of interaction.
With operating-system virtualization, or containerization, it is possible to run programs within containers, to which only parts of these resources are allocated. A program expecting to see the whole computer, once run inside a container, can only see the allocated resources and believes them to be all that is available. Several containers can be created on each operating system, to each of which a subset of the computer's resources is allocated. Each container may contain any number of computer programs. These programs may run concurrently or separately, and may even interact with one another.
Containerization has similarities to application virtualization: in the latter, only one computer program is placed in an isolated container and the isolation applies to the file system only.
Operating-system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources among a large number of mutually distrusting users. System administrators may also use it for consolidating server hardware by moving services on separate hosts into containers on the one server.
Other typical scenarios include separating several programs to separate containers for improved security, hardware independence, and added resource management features.[3] The improved security provided by the use of a chroot mechanism, however, is not perfect.[4] Operating-system-level virtualization implementations capable of live migration can also be used for dynamic load balancing of containers between nodes in a cluster.
Operating-system-level virtualization usually imposes less overhead than full virtualization because programs in OS-level virtual partitions use the operating system's normal system call interface and do not need to be subjected to emulation or be run in an intermediate virtual machine, as is the case with full virtualization (such as VMware ESXi, QEMU, or Hyper-V) and paravirtualization (such as Xen or User-mode Linux). This form of virtualization also does not require hardware support for efficient performance.
Operating-system-level virtualization is not as flexible as other virtualization approaches, since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other operating systems such as Windows cannot be hosted.
Solaris partially overcomes the limitation described above with its branded zones feature, which provides the ability to run an environment within a container that emulates an older Solaris 8 or 9 version in a Solaris 10 host. Linux branded zones (referred to as "lx" branded zones) are also available on x86-based Solaris systems, providing a complete Linux user space and support for the execution of Linux applications; additionally, Solaris provides utilities needed to install Red Hat Enterprise Linux 3.x or CentOS 3.x Linux distributions inside "lx" zones.[6][7] However, in 2010 Linux branded zones were removed from Solaris; in 2014 they were reintroduced in Illumos, the open source Solaris fork, supporting 32-bit Linux kernels.[8]
Some implementations provide file-level copy-on-write (CoW) mechanisms. (Most commonly, a standard file system is shared between partitions, and partitions that change the files automatically create their own copies.) This is easier to back up, more space-efficient and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers, however, can work with non-native file systems and create and roll back snapshots of the entire system state.
Linux containers not listed above include:
|
https://en.wikipedia.org/wiki/OS-level_virtualization
|
In abstract algebra, an element a of a ring R is called a left zero divisor if there exists a nonzero x in R such that ax = 0,[1] or equivalently if the map from R to R that sends x to ax is not injective.[a] Similarly, an element a of a ring is called a right zero divisor if there exists a nonzero y in R such that ya = 0. This is a partial case of divisibility in rings. An element that is a left or a right zero divisor is simply called a zero divisor.[2] An element a that is both a left and a right zero divisor is called a two-sided zero divisor (the nonzero x such that ax = 0 may be different from the nonzero y such that ya = 0). If the ring is commutative, then the left and right zero divisors are the same.
An element of a ring that is not a left zero divisor (respectively, not a right zero divisor) is called left regular or left cancellable (respectively, right regular or right cancellable).
An element of a ring that is left and right cancellable, and is hence not a zero divisor, is called regular or cancellable,[3] or a non-zero-divisor. A zero divisor that is nonzero is called a nonzero zero divisor or a nontrivial zero divisor. A nonzero ring with no nontrivial zero divisors is called a domain.
For example, in the ring of 2 × 2 integer matrices:

$$\begin{pmatrix}1&1\\2&2\end{pmatrix}\begin{pmatrix}1&1\\-1&-1\end{pmatrix}=\begin{pmatrix}-2&1\\-2&1\end{pmatrix}\begin{pmatrix}1&1\\2&2\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix},$$

$$\begin{pmatrix}1&0\\0&0\end{pmatrix}\begin{pmatrix}0&0\\0&1\end{pmatrix}=\begin{pmatrix}0&0\\0&1\end{pmatrix}\begin{pmatrix}1&0\\0&0\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix}.$$
There is no need for a separate convention for the case a = 0, because the definition applies also in this case:
Some references include or exclude 0 as a zero divisor in all rings by convention, but they then suffer from having to introduce exceptions in statements such as the following:
Let R be a commutative ring, let M be an R-module, and let a be an element of R. One says that a is M-regular if the "multiplication by a" map $M \,{\stackrel{a}{\to}}\, M$ is injective, and that a is a zero divisor on M otherwise.[4] The set of M-regular elements is a multiplicative set in R.[4]
Specializing the definitions of "M-regular" and "zero divisor on M" to the case M = R recovers the definitions of "regular" and "zero divisor" given earlier in this article.
|
https://en.wikipedia.org/wiki/Zero_divisor
|
Zero to the power of zero, denoted as 0^0, is a mathematical expression with different interpretations depending on the context. In certain areas of mathematics, such as combinatorics and algebra, 0^0 is conventionally defined as 1 because this assignment simplifies many formulas and ensures consistency in operations involving exponents. For instance, in combinatorics, defining 0^0 = 1 aligns with the interpretation of choosing 0 elements from a set and simplifies polynomial and binomial expansions.
However, in other contexts, particularly in mathematical analysis, 0^0 is often considered an indeterminate form. This is because the value of x^y as both x and y approach zero can lead to different results based on the limiting process. The expression arises in limit problems and may result in a range of values or diverge to infinity, making it difficult to assign a single consistent value in these cases.
The treatment of 0^0 also varies across different computer programming languages and software. While many follow the convention of assigning 0^0 = 1 for practical reasons, others leave it undefined or return errors depending on the context of use, reflecting the ambiguity of the expression in mathematical analysis.
Many widely used formulas involving natural-number exponents require 0^0 to be defined as 1. For example, the following three interpretations of b^0 make just as much sense for b = 0 as they do for positive integers b:
All three of these specialize to give 0^0 = 1.
When evaluating polynomials, it is convenient to define 0^0 as 1. A (real) polynomial is an expression of the form a_0 x^0 + ⋯ + a_n x^n, where x is an indeterminate and the coefficients a_i are real numbers. Polynomials are added termwise, and multiplied by applying the distributive law and the usual rules for exponents. With these operations, polynomials form a ring R[x]. The multiplicative identity of R[x] is the polynomial x^0; that is, x^0 times any polynomial p(x) is just p(x).[2] Also, polynomials can be evaluated by specializing x to a real number. More precisely, for any given real number r, there is a unique unital R-algebra homomorphism ev_r : R[x] → R such that ev_r(x) = r. Because ev_r is unital, ev_r(x^0) = 1. That is, r^0 = 1 for each real number r, including 0. The same argument applies with R replaced by any ring.[3]
Defining 0^0 = 1 is necessary for many polynomial identities. For example, the binomial theorem $(1+x)^{n}=\sum_{k=0}^{n}{\binom{n}{k}}x^{k}$ holds for x = 0 only if 0^0 = 1.[4]
Similarly, rings of power series require x^0 to be defined as 1 for all specializations of x. For example, identities like ${\frac{1}{1-x}}=\sum_{n=0}^{\infty}x^{n}$ and $e^{x}=\sum_{n=0}^{\infty}{\frac{x^{n}}{n!}}$ hold for x = 0 only if 0^0 = 1.[5]
In order for the polynomial x^0 to define a continuous function R → R, one must define 0^0 = 1.
In calculus, the power rule ${\frac{d}{dx}}x^{n}=nx^{n-1}$ is valid for n = 1 at x = 0 only if 0^0 = 1.
Limits involving algebraic operations can often be evaluated by replacing subexpressions with their limits; if the resulting expression does not determine the original limit, the expression is known as an indeterminate form.[6] The expression 0^0 is an indeterminate form: given real-valued functions f(t) and g(t) approaching 0 (as t approaches a real number or ±∞) with f(t) > 0, the limit of f(t)^{g(t)} can be any non-negative real number or +∞, or it can diverge, depending on f and g. For example, each limit below involves a function f(t)^{g(t)} with f(t), g(t) → 0 as t → 0^+ (a one-sided limit), but their values are different:

$$\lim_{t\to 0^{+}}t^{t}=1,\qquad \lim_{t\to 0^{+}}\left(e^{-1/t^{2}}\right)^{t}=0,\qquad \lim_{t\to 0^{+}}\left(e^{-1/t^{2}}\right)^{-t}=+\infty,\qquad \lim_{t\to 0^{+}}\left(a^{-1/t}\right)^{-t}=a.$$
Thus, the two-variable function x^y, though continuous on the set {(x, y) : x > 0}, cannot be extended to a continuous function on {(x, y) : x > 0} ∪ {(0, 0)}, no matter how one chooses to define 0^0.[7]
On the other hand, if f and g are analytic functions on an open neighborhood of a number c, then f(t)^{g(t)} → 1 as t approaches c from any side on which f is positive.[8] This and more general results can be obtained by studying the limiting behavior of the function $\log(f(t)^{g(t)})=g(t)\log f(t)$.[9][10]
In the complex domain, the function z^w may be defined for nonzero z by choosing a branch of log z and defining z^w as e^{w log z}. This does not define 0^w, since there is no branch of log z defined at z = 0, let alone in a neighborhood of 0.[11][12][13]
In 1752, Euler in Introductio in analysin infinitorum wrote that a^0 = 1[14] and explicitly mentioned that 0^0 = 1.[15] An annotation attributed[16] to Mascheroni in a 1787 edition of Euler's book Institutiones calculi differentialis[17] offered the "justification"

$$0^{0}=(a-a)^{n-n}={\frac{(a-a)^{n}}{(a-a)^{n}}}=1$$

as well as another, more involved justification. In the 1830s, Libri[18][16] published several further arguments attempting to justify the claim 0^0 = 1, though these were far from convincing, even by standards of rigor at the time.[19]
Euler, when setting 0^0 = 1, mentioned that consequently the values of the function 0^x take a "huge jump", from ∞ for x < 0, to 1 at x = 0, to 0 for x > 0.[14] In 1814, Pfaff used a squeeze theorem argument to prove that x^x → 1 as x → 0^+.[8]
On the other hand, in 1821 Cauchy[20] explained why the limit of x^y as positive numbers x and y approach 0 while being constrained by some fixed relation could be made to assume any value between 0 and ∞ by choosing the relation appropriately. He deduced that the limit of the full two-variable function x^y without a specified constraint is "indeterminate". With this justification, he listed 0^0 along with expressions like 0/0 in a table of indeterminate forms.
Apparently unaware of Cauchy's work, Möbius[8] in 1834, building on Pfaff's argument, claimed incorrectly that f(x)^{g(x)} → 1 whenever f(x), g(x) → 0 as x approaches a number c (presumably f is assumed positive away from c). Möbius reduced to the case c = 0, but then made the mistake of assuming that each of f and g could be expressed in the form Px^n for some continuous function P not vanishing at 0 and some nonnegative integer n, which is true for analytic functions, but not in general. An anonymous commentator pointed out the unjustified step;[21] then another commentator, who signed his name simply as "S", provided the explicit counterexamples (e^{-1/x})^x → e^{-1} and (e^{-1/x})^{2x} → e^{-2} as x → 0^+ and expressed the situation by writing that "0^0 can have many different values".[21]
There do not seem to be any authors assigning 0^0 a specific value other than 1.[22]
The IEEE 754-2008 floating-point standard is used in the design of most floating-point libraries. It recommends a number of operations for computing a power:[25]
The pow variant is inspired by the pow function from C99, mainly for compatibility.[26] It is useful mostly for languages with a single power function. The pown and powr variants have been introduced due to conflicting usage of the power functions and the different points of view (as stated above).[27]
The C and C++ standards do not specify the result of 0^0 (a domain error may occur). But for C, as of C99, if the normative annex F is supported, the result for real floating-point types is required to be 1 because there are significant applications for which this value is more useful than NaN[28] (for instance, with discrete exponents); the result on complex types is not specified, even if the informative annex G is supported. The Java standard,[29] the .NET Framework method System.Math.Pow,[30] Julia, and Python[31][32] also treat 0^0 as 1. Some languages document that their exponentiation operation corresponds to the pow function from the C mathematical library; this is the case with Lua's ^ operator[33] and Perl's ** operator[34] (where it is explicitly mentioned that the result of 0**0 is platform-dependent).
R,[35] SageMath,[36] and PARI/GP[37] evaluate x^0 to 1. Mathematica[38] simplifies x^0 to 1 even if no constraints are placed on x; however, if 0^0 is entered directly, it is treated as an error or indeterminate. Mathematica[38] and PARI/GP[37][39] further distinguish between integer and floating-point values: if the exponent is a zero of integer type, they return a 1 of the type of the base; exponentiation with a floating-point exponent of value zero is treated as undefined, indeterminate or error.
|
https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero
|
L'Hôpital's rule (/ˌloʊpiːˈtɑːl/, loh-pee-TAHL), also known as Bernoulli's rule, is a mathematical theorem that allows evaluating limits of indeterminate forms using derivatives. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to de l'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician Johann Bernoulli.
L'Hôpital's rule states that for functions f and g which are defined on an open interval I and differentiable on $I \setminus \{c\}$ for a (possibly infinite) accumulation point c of I, if $\lim_{x\to c}f(x)=\lim_{x\to c}g(x)=0$ or $\pm\infty$, and $g'(x)\neq 0$ for all x in $I \setminus \{c\}$, and $\lim_{x\to c}{\frac{f'(x)}{g'(x)}}$ exists, then

$$\lim_{x\to c}{\frac{f(x)}{g(x)}}=\lim_{x\to c}{\frac{f'(x)}{g'(x)}}.$$
The differentiation of the numerator and denominator often simplifies the quotient or converts it to a limit that can be directly evaluated by continuity.
Guillaume de l'Hôpital (also written l'Hospital[a]) published this rule in his 1696 book Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes (literal translation: Analysis of the Infinitely Small for the Understanding of Curved Lines), the first textbook on differential calculus.[1][b] However, it is believed that the rule was discovered by the Swiss mathematician Johann Bernoulli.[3]
The general form of l'Hôpital's rule covers many cases. Let c and L be extended real numbers: real numbers, as well as positive and negative infinity. Let I be an open interval containing c (for a two-sided limit) or an open interval with endpoint c (for a one-sided limit, or a limit at infinity if c is infinite). On $I \smallsetminus \{c\}$, the real-valued functions f and g are assumed differentiable with $g'(x)\neq 0$. It is also assumed that $\lim_{x\to c}{\frac{f'(x)}{g'(x)}}=L$, a finite or infinite limit.
If either $\lim_{x\to c}f(x)=\lim_{x\to c}g(x)=0$ or $\lim_{x\to c}|f(x)|=\lim_{x\to c}|g(x)|=\infty$, then $\lim_{x\to c}{\frac{f(x)}{g(x)}}=L$. Although we have written x → c throughout, the limits may also be one-sided limits (x → c^+ or x → c^−), when c is a finite endpoint of I.
In the second case, the hypothesis that f diverges to infinity is not necessary; in fact, it is sufficient that $\lim_{x\to c}|g(x)|=\infty$.
The hypothesis that $g'(x)\neq 0$ appears most commonly in the literature, but some authors sidestep this hypothesis by adding other hypotheses which imply $g'(x)\neq 0$. For example,[4] one may require in the definition of the limit $\lim_{x\to c}{\frac{f'(x)}{g'(x)}}=L$ that the function ${\frac{f'(x)}{g'(x)}}$ must be defined everywhere on an interval $I \smallsetminus \{c\}$.[c] Another method[5] is to require that both f and g be differentiable everywhere on an interval containing c.
All four conditions for l'Hôpital's rule are necessary:
Where one of the above conditions is not satisfied, l'Hôpital's rule is not valid in general, and its conclusion may be false in certain cases.
The necessity of the first condition can be seen by considering the counterexample where the functions are f(x) = x + 1 and g(x) = 2x + 1 and the limit is x → 1.
The first condition is not satisfied for this counterexample because lim_{x→1} f(x) = lim_{x→1} (x + 1) = 1 + 1 = 2 ≠ 0 and lim_{x→1} g(x) = lim_{x→1} (2x + 1) = 2·1 + 1 = 3 ≠ 0. This means that the form is not indeterminate.
The second and third conditions are satisfied by f(x) and g(x). The fourth condition is also satisfied with
lim_{x→1} f′(x)/g′(x) = lim_{x→1} (x + 1)′/(2x + 1)′ = lim_{x→1} 1/2 = 1/2.
But the conclusion fails, since lim_{x→1} f(x)/g(x) = lim_{x→1} (x + 1)/(2x + 1) = (lim_{x→1} (x + 1))/(lim_{x→1} (2x + 1)) = 2/3 ≠ 1/2.
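As a quick numeric sanity check of this counterexample (an illustrative sketch, not part of the original text), one can evaluate both ratios near x = 1:

```python
def f(x):
    return x + 1

def g(x):
    return 2 * x + 1

# f'(x) = 1 and g'(x) = 2 everywhere, so the derivative ratio is constant.
x = 1.000001  # a point close to x = 1
ratio = f(x) / g(x)      # approaches 2/3
deriv_ratio = 1.0 / 2.0  # identically 1/2
```

The two ratios visibly disagree, confirming that the rule's conclusion can fail when the form is not indeterminate.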
Differentiability of the functions is a requirement because if a function is not differentiable, then its derivative is not guaranteed to exist at each point in I. The fact that I is an open interval is inherited from the hypothesis of Cauchy's mean value theorem. The notable exception, namely that the functions may fail to be differentiable at c itself, exists because l'Hôpital's rule only requires the derivative to exist as the function approaches c; the derivative does not need to be taken at c.
For example, let f(x) = sin x for x ≠ 0 with f(0) = 1, let g(x) = x, and let c = 0. In this case, f(x) is not differentiable at c. However, since f(x) is differentiable everywhere except c, lim_{x→c} f′(x) still exists. Thus, since
lim_{x→c} f(x)/g(x) is of the form 0/0 and lim_{x→c} f′(x)/g′(x) exists, l'Hôpital's rule still holds.
The necessity of the condition that g′(x) ≠ 0 near c can be seen by the following counterexample due to Otto Stolz.[6] Let f(x) = x + sin x cos x and g(x) = f(x) e^(sin x). Then there is no limit for f(x)/g(x) as x → ∞. However,
which tends to 0 as x → ∞, although it is undefined at infinitely many points. Further examples of this type were found by Ralph P. Boas Jr.[7]
The requirement that the limit lim_{x→c} f′(x)/g′(x) exists is essential; if it does not exist, the original limit lim_{x→c} f(x)/g(x) may nevertheless exist. Indeed, as x approaches c, the functions f or g may exhibit many oscillations of small amplitude but steep slope, which do not affect lim_{x→c} f(x)/g(x) but do prevent the convergence of lim_{x→c} f′(x)/g′(x).
For example, if f(x) = x + sin x, g(x) = x and c = ∞, then f′(x)/g′(x) = (1 + cos x)/1, which does not approach a limit since cosine oscillates infinitely between 1 and −1. But the ratio of the original functions does approach a limit, since the amplitude of the oscillations of f becomes small relative to g:
In a case such as this, all that can be concluded is that
so that if the limit of f/g exists, then it must lie between the inferior and superior limits of f′/g′. In the example, 1 does indeed lie between 0 and 2.
Note also that by the contrapositive form of the rule, if lim_{x→c} f(x)/g(x) does not exist, then lim_{x→c} f′(x)/g′(x) also does not exist.
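The oscillation example above is easy to check numerically; the following sketch (illustrative, not from the original article) samples both ratios at large x:

```python
import math

def f(x):
    return x + math.sin(x)

def g(x):
    return x

# The ratio of the functions settles near 1 for large x,
# because |sin x| <= 1 becomes negligible next to x ...
samples = [10.0**k for k in range(3, 7)]
ratios = [f(x) / g(x) for x in samples]

# ... while the ratio of the derivatives, (1 + cos x)/1,
# keeps oscillating between 0 and 2 and never converges.
deriv_ratios = [1 + math.cos(x) for x in samples]
```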
In the following computations, we indicate each application of l'Hôpital's rule by the symbol =ᴴ.
Sometimes l'Hôpital's rule is invoked in a tricky way: suppose f(x) + f′(x) converges as x → ∞ and that e^x · f(x) converges to positive or negative infinity. Then:
lim_{x→∞} f(x) = lim_{x→∞} (e^x · f(x))/e^x =ᴴ lim_{x→∞} (e^x (f(x) + f′(x)))/e^x = lim_{x→∞} (f(x) + f′(x)),
and so lim_{x→∞} f(x) exists and lim_{x→∞} f′(x) = 0. (This result remains true without the added hypothesis that e^x · f(x) converges to positive or negative infinity, but the justification is then incomplete.)
Sometimes L'Hôpital's rule does not reduce to an obvious limit in a finite number of steps, unless some intermediate simplifications are applied. Examples include the following:
A common logical fallacy is to use L'Hôpital's rule to prove the value of a derivative by computing the limit of a difference quotient. Since applying l'Hôpital requires knowing the relevant derivatives, this amounts to circular reasoning or begging the question, assuming what is to be proved. For example, consider the proof of the derivative formula for powers of x:
Applying L'Hôpital's rule and finding the derivatives with respect to h yields n x^(n−1) as expected, but this computation requires the use of the very formula that is being proven. Similarly, to prove lim_{x→0} sin(x)/x = 1, applying L'Hôpital requires knowing the derivative of sin(x) at x = 0, which amounts to calculating lim_{h→0} sin(h)/h in the first place; a valid proof requires a different method such as the squeeze theorem.
Other indeterminate forms, such as 1^∞, 0^0, ∞^0, 0 · ∞, and ∞ − ∞, can sometimes be evaluated using L'Hôpital's rule. We again indicate applications of the rule by =ᴴ.
For example, to evaluate a limit involving ∞ − ∞, convert the difference of two functions to a quotient:
L'Hôpital's rule can be used on indeterminate forms involving exponents by using logarithms to "move the exponent down". Here is an example involving the indeterminate form 0^0:
It is valid to move the limit inside the exponential function because this function is continuous. Now the exponent x has been "moved down". The limit lim_{x→0⁺} x · ln x is of the indeterminate form 0 · ∞ dealt with in an example above: L'Hôpital may be used to determine that
Thus
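The 0^0 example can likewise be checked numerically (an illustrative sketch): as x → 0⁺ the exponent x · ln x tends to 0, so x^x tends to e^0 = 1.

```python
import math

# x * ln x -> 0 as x -> 0+, so x**x = exp(x * ln x) -> exp(0) = 1.
xs = [10.0**(-k) for k in (2, 4, 6, 8)]
exponents = [x * math.log(x) for x in xs]  # tends to 0
values = [x**x for x in xs]                # tends to 1
```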
The following table lists the most common indeterminate forms and the transformations which precede applying l'Hôpital's rule:
The Stolz–Cesàro theorem is a similar result involving limits of sequences, but it uses finite difference operators rather than derivatives.
Consider the parametric curve in the xy-plane with coordinates given by the continuous functions g(t) and f(t), the locus of points (g(t), f(t)), and suppose f(c) = g(c) = 0. The slope of the tangent to the curve at (g(c), f(c)) = (0, 0) is the limit of the ratio f(t)/g(t) as t → c. The tangent to the curve at the point (g(t), f(t)) is the velocity vector (g′(t), f′(t)), with slope f′(t)/g′(t). L'Hôpital's rule then states that the slope of the curve at the origin (t = c) is the limit of the tangent slope at points approaching the origin, provided that this is defined.
The proof of L'Hôpital's rule is simple in the case where f and g are continuously differentiable at the point c and where a finite limit is found after the first round of differentiation. It is only a special case of L'Hôpital's rule, because it applies only to functions satisfying stronger conditions than required by the general rule. However, many common functions have continuous derivatives (e.g. polynomials, sine and cosine, exponential functions), so this special case covers most applications.
Suppose that f and g are continuously differentiable at a real number c, that f(c) = g(c) = 0, and that g′(c) ≠ 0. Then
lim_{x→c} f(x)/g(x) = lim_{x→c} (f(x) − f(c))/(g(x) − g(c)) = lim_{x→c} [(f(x) − f(c))/(x − c)] / [(g(x) − g(c))/(x − c)] = f′(c)/g′(c).
This follows from the difference-quotient definition of the derivative. The last equality follows from the continuity of the derivatives at c. The limit in the conclusion is not indeterminate because g′(c) ≠ 0.
The proof of a more general version of L'Hôpital's rule is given below.
The following proof is due to Taylor (1952), where a unified proof for the 0/0 and ±∞/±∞ indeterminate forms is given. Taylor notes that different proofs may be found in Lettenmeyer (1936) and Wazewski (1949).
Let f and g be functions satisfying the hypotheses in the General form section. Let I be the open interval in the hypothesis with endpoint c. Considering that g′(x) ≠ 0 on this interval and g is continuous, I can be chosen smaller so that g is nonzero on I.[d]
For each x in the interval, define m(x) = inf f′(t)/g′(t) and M(x) = sup f′(t)/g′(t) as t ranges over all values between x and c. (The symbols inf and sup denote the infimum and supremum.)
From the differentiability of f and g on I, Cauchy's mean value theorem ensures that for any two distinct points x and y in I there exists a ξ between x and y such that (f(x) − f(y))/(g(x) − g(y)) = f′(ξ)/g′(ξ). Consequently, m(x) ≤ (f(x) − f(y))/(g(x) − g(y)) ≤ M(x) for all choices of distinct x and y in the interval. The value g(x) − g(y) is always nonzero for distinct x and y in the interval, for if it were not, the mean value theorem would imply the existence of a p between x and y such that g′(p) = 0.
The definition of m(x) and M(x) will result in an extended real number, and so it is possible for them to take on the values ±∞. In the following two cases, m(x) and M(x) will establish bounds on the ratio f/g.
Case 1: lim_{x→c} f(x) = lim_{x→c} g(x) = 0
For any x in the interval I, and point y between x and c,
and therefore as y approaches c, f(y)/g(x) and g(y)/g(x) become zero, and so
Case 2: lim_{x→c} |g(x)| = ∞
For every x in the interval I, define S_x = {y : y is between x and c}. For every point y between x and c,
As y approaches c, both f(x)/g(y) and g(x)/g(y) become zero, and therefore
The limit superior and limit inferior are necessary since the existence of the limit of f/g has not yet been established.
It is also the case that
[e] and
In case 1, the squeeze theorem establishes that lim_{x→c} f(x)/g(x) exists and is equal to L. In case 2, the squeeze theorem again asserts that liminf_{x→c} f(x)/g(x) = limsup_{x→c} f(x)/g(x) = L, and so the limit lim_{x→c} f(x)/g(x) exists and is equal to L. This is the result that was to be proven.
In case 2 the assumption that f(x) diverges to infinity was not used within the proof. This means that if |g(x)| diverges to infinity as x approaches c and both f and g satisfy the hypotheses of l'Hôpital's rule, then no additional assumption is needed about the limit of f(x): it could even be the case that the limit of f(x) does not exist. In this case, l'Hôpital's theorem is actually a consequence of Cesàro–Stolz.[9]
In the case when |g(x)| diverges to infinity as x approaches c and f(x) converges to a finite limit at c, l'Hôpital's rule would be applicable, but not absolutely necessary, since basic limit calculus will show that the limit of f(x)/g(x) as x approaches c must be zero.
A simple but very useful consequence of l'Hôpital's rule is that the derivative of a function cannot have a removable discontinuity. That is, suppose that f is continuous at a, and that f′(x) exists for all x in some open interval containing a, except perhaps for x = a. Suppose, moreover, that lim_{x→a} f′(x) exists. Then f′(a) also exists and
f′(a) = lim_{x→a} f′(x).
In particular, f′ is also continuous at a.
Thus, if a function is not continuously differentiable near a point, the derivative must have an essential discontinuity at that point.
Consider the functions h(x) = f(x) − f(a) and g(x) = x − a. The continuity of f at a tells us that lim_{x→a} h(x) = 0. Moreover, lim_{x→a} g(x) = 0 since a polynomial function is always continuous everywhere. Applying l'Hôpital's rule shows that f′(a) := lim_{x→a} (f(x) − f(a))/(x − a) = lim_{x→a} h′(x)/g′(x) = lim_{x→a} f′(x).
https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule
In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide which memory pages to page out (sometimes called swap out, or write to disk) when a page of memory needs to be allocated. Page replacement happens when a requested page is not in memory (page fault) and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.
When the page that was selected for replacement and paged out is referenced again, it has to be paged in (read in from disk), and this involves waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total number of page misses, while balancing this against the costs (primary storage and processor time) of the algorithm itself.
The page replacement problem is a typical online problem from the competitive analysis perspective, in the sense that the optimal deterministic algorithm is known.
Page replacement algorithms were a hot topic of research and debate in the 1960s and 1970s.
That mostly ended with the development of sophisticated LRU (least recently used) approximations and working set algorithms. Since then, some basic assumptions made by the traditional page replacement algorithms were invalidated, resulting in a revival of research. In particular, the following trends in the behavior of underlying hardware and user-level software have affected the performance of page replacement algorithms:
Requirements for page replacement algorithms have changed due to differences in operating system kernel architectures. In particular, most modern OS kernels have unified virtual memory and file system caches, requiring the page replacement algorithm to select a page from among the pages of both user program virtual address spaces and cached files. The latter pages have specific properties. For example, they can be locked, or can have write ordering requirements imposed by journaling. Moreover, as the goal of page replacement is to minimize total time waiting for memory, it has to take into account memory requirements imposed by other kernel sub-systems that allocate memory. As a result, page replacement in modern kernels (Linux, FreeBSD, and Solaris) tends to work at the level of a general-purpose kernel memory allocator, rather than at the higher level of a virtual memory subsystem.
Replacement algorithms can be local or global.
When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a memory partition).
A global replacement algorithm is free to select any page in memory.
Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes. The most popular forms of partitioning are fixed partitioning and balanced set algorithms based on the working set model. The advantage of local page replacement is its scalability: each process can handle its page faults independently, leading to more consistent performance for that process. However, global page replacement is more efficient on an overall system basis.[1]
Modern general-purpose computers and some embedded processors have support for virtual memory. Each process has its own virtual address space. A page table maps a subset of the process's virtual addresses to physical addresses. In addition, in most architectures the page table holds an "access" bit and a "dirty" bit for each page in the page table. The CPU sets the access bit when the process reads or writes memory in that page. The CPU sets the dirty bit when the process writes memory in that page. The operating system can modify the access and dirty bits. The operating system can detect accesses to memory and files through the following means:
Most replacement algorithms simply return the target page as their result. This means that if the target page is dirty (that is, contains data that have to be written to stable storage before the page can be reclaimed), I/O has to be initiated to send that page to stable storage (to clean the page). In the early days of virtual memory, time spent on cleaning was not of much concern, because virtual memory was first implemented on systems with full-duplex channels to stable storage, and cleaning was customarily overlapped with paging. Contemporary commodity hardware, on the other hand, does not support full-duplex transfers, and cleaning of target pages becomes an issue.
To deal with this situation, various precleaning policies are implemented. Precleaning is the mechanism that starts I/O on dirty pages that are (likely) to be replaced soon. The idea is that by the time the precleaned page is actually selected for replacement, the I/O will have completed and the page will be clean. Precleaning assumes that it is possible to identify pages that will be replaced next. Precleaning that is too eager can waste I/O bandwidth by writing pages that manage to get re-dirtied before being selected for replacement.
The (h,k)-paging problem is a generalization of the model of the paging problem: let h, k be positive integers such that h ≤ k. We measure the performance of an algorithm with a cache of size k relative to the theoretically optimal page replacement algorithm with a cache of size h. If h < k, we provide the optimal page replacement algorithm with strictly fewer resources.
The (h,k)-paging problem is a way to measure how an online algorithm performs by comparing it with the performance of the optimal algorithm, specifically by separately parameterizing the cache sizes of the online and optimal algorithms.
Marking algorithms are a general class of paging algorithms. Each page is associated with a bit called its mark. Initially, all pages are unmarked. During a stage (a period of operation or a sequence of requests) of page requests, a page is marked when it is first requested in this stage. A marking algorithm is an algorithm that never pages out a marked page.
If ALG is a marking algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size h, where h ≤ k, then ALG is k/(k−h+1)-competitive. So every marking algorithm attains the k/(k−h+1) competitive ratio.
LRU is a marking algorithm while FIFO is not a marking algorithm.
An algorithm is conservative if, on any consecutive request sequence containing k or fewer distinct page references, the algorithm incurs k or fewer page faults.
If ALG is a conservative algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size h ≤ k, then ALG is k/(k−h+1)-competitive. So every conservative algorithm attains the k/(k−h+1) competitive ratio.
LRU, FIFO and CLOCK are conservative algorithms.
There are a variety of page replacement algorithms:[2]
The theoretically optimal page replacement algorithm (also known as OPT, clairvoyant replacement algorithm, or Bélády's optimal page replacement policy)[3][4][2] works as follows: when a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future. For example, a page that is not going to be used for the next 6 seconds will be swapped out over a page that is going to be used within the next 0.4 seconds.
This algorithm cannot be implemented in a general-purpose operating system because it is impossible to compute reliably how long it will be before a page is going to be used, except when all software that will run on a system is either known beforehand and amenable to static analysis of its memory reference patterns, or belongs to a class of applications allowing run-time analysis. Despite this limitation, algorithms exist[5] that can offer near-optimal performance: the operating system keeps track of all pages referenced by the program, and uses those data to decide which pages to swap in and out on subsequent runs. This approach can offer near-optimal performance, but not on the first run of a program, and only if the program's memory reference pattern is relatively consistent each time it runs.
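Although OPT cannot be implemented online, it is straightforward to simulate offline when the whole reference string is known, which is how it is used as a yardstick. A minimal sketch (the function name and interface are illustrative):

```python
def opt_faults(refs, frames):
    """Count page faults under Belady's optimal (clairvoyant) policy:
    on a fault with a full cache, evict the page whose next use lies
    farthest in the future (pages never used again are evicted first)."""
    cache, faults = set(), 0
    for i, page in enumerate(refs):
        if page in cache:
            continue
        faults += 1
        if len(cache) == frames:
            def next_use(p):
                # Index of the next reference to p, or infinity if none.
                for j in range(i + 1, len(refs)):
                    if refs[j] == p:
                        return j
                return float('inf')
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return faults
```

On the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 with 3 frames, this policy incurs 7 faults, the minimum possible.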
Analysis of the paging problem has also been done in the field of online algorithms. The efficiency of randomized online algorithms for the paging problem is measured using amortized analysis.
The not recently used (NRU) page replacement algorithm favours keeping pages in memory that have been recently used. It works on the following principle: when a page is referenced, a referenced bit is set for that page, marking it as referenced. Similarly, when a page is modified (written to), a modified bit is set. The setting of the bits is usually done by the hardware, although it is possible to do so at the software level as well.
At a certain fixed time interval, a timer interrupt triggers and clears the referenced bit of all the pages, so only pages referenced within the current timer interval are marked with a referenced bit. When a page needs to be replaced, the operating system divides the pages into four classes:
Although it does not seem possible for a page to be modified yet not referenced, this happens when a class 3 page has its referenced bit cleared by the timer interrupt. The NRU algorithm picks a random page from the lowest class for removal. So out of the above four page classes, the NRU algorithm will replace a not-referenced, not-modified page if such a page exists. Note that this algorithm implies that a modified but not-referenced (within the last timer interval) page is less important than a not-modified page that is intensely referenced.
NRU is a marking algorithm, so it is k/(k−h+1)-competitive.
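The class-based victim selection can be sketched as follows (illustrative; the page records and field names are assumptions, not part of any real kernel API):

```python
import random

def nru_victim(pages):
    """Pick an NRU victim. Each page is a dict with 'ref' and 'mod' bits;
    the class number is 2*ref + mod, and a random page from the lowest
    (least valuable) class is evicted."""
    lowest = min(2 * p['ref'] + p['mod'] for p in pages)
    candidates = [p for p in pages if 2 * p['ref'] + p['mod'] == lowest]
    return random.choice(candidates)
```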
The simplest page-replacement algorithm is a FIFO algorithm. The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little bookkeeping on the part of the operating system. The idea is obvious from the name: the operating system keeps track of all the pages in memory in a queue, with the most recent arrival at the back and the oldest arrival at the front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practical application, so it is rarely used in its unmodified form. This algorithm experiences Bélády's anomaly.
In simple words, on a page fault, the frame that has been in memory the longest is replaced.
The FIFO page replacement algorithm is used by the OpenVMS operating system, with some modifications.[6] Partial second chance is provided by skipping a limited number of entries with valid translation table references,[7] and additionally, pages are displaced from the process working set to a systemwide pool from which they can be recovered if not already re-used.
FIFO is a conservative algorithm, so it is k/(k−h+1)-competitive.
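A FIFO simulation is short enough to demonstrate Bélády's anomaly directly (a sketch; the reference string is the classic example from the literature):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO: the page resident longest is evicted."""
    queue, faults = deque(), 0
    for page in refs:
        if page in queue:
            continue  # hit: FIFO does not reorder on access
        faults += 1
        if len(queue) == frames:
            queue.popleft()  # evict the oldest arrival
        queue.append(page)
    return faults

# Belady's anomaly: this reference string incurs MORE faults
# with 4 frames (10) than with 3 frames (9).
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```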
A modified form of the FIFO page replacement algorithm, known as the second-chance page replacement algorithm, fares relatively better than FIFO at little cost for the improvement. It works by looking at the front of the queue as FIFO does, but instead of immediately paging out that page, it checks to see if its referenced bit is set. If it is not set, the page is swapped out. Otherwise, the referenced bit is cleared, the page is inserted at the back of the queue (as if it were a new page), and this process is repeated. This can also be thought of as a circular queue. If all the pages have their referenced bit set, then on the second encounter of the first page in the list, that page will be swapped out, as it now has its referenced bit cleared. If all the pages have their referenced bit cleared, then the second-chance algorithm degenerates into pure FIFO.
As its name suggests, second-chance gives every page a "second chance": an old page that has been referenced is probably in use, and should not be swapped out over a new page that has not been referenced.
Clock is a more efficient version of FIFO than second-chance because pages don't have to be constantly pushed to the back of the list, although it performs the same general function as second-chance. The clock algorithm keeps a circular list of pages in memory, with the "hand" (iterator) pointing to the last examined page frame in the list. When a page fault occurs and no empty frames exist, the R (referenced) bit is inspected at the hand's location. If R is 0, the new page is put in place of the page the "hand" points to, and the hand is advanced one position. Otherwise, the R bit is cleared, the clock hand is incremented, and the process is repeated until a page is replaced.[8] This algorithm was first described in 1969 by Fernando J. Corbató.[9]
CLOCK is a conservative algorithm, so it is k/(k−h+1)-competitive.
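The hand-sweeping mechanism described above can be sketched as follows (illustrative; a newly loaded page gets its R bit set, since it is being accessed):

```python
def clock_faults(refs, frames):
    """Clock replacement: a circular buffer of [page, R-bit] entries with
    a hand that clears R bits until it finds an entry with R == 0."""
    slots = [None] * frames
    hand, faults = 0, 0
    for page in refs:
        entry = next((s for s in slots if s is not None and s[0] == page), None)
        if entry is not None:
            entry[1] = 1  # hit: set the referenced bit
            continue
        faults += 1
        # Advance the hand, giving referenced pages a second chance.
        while slots[hand] is not None and slots[hand][1] == 1:
            slots[hand][1] = 0
            hand = (hand + 1) % frames
        slots[hand] = [page, 1]  # replace (or fill an empty frame)
        hand = (hand + 1) % frames
    return faults
```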
The least recently used (LRU) page replacement algorithm, though similar in name to NRU, differs in that LRU keeps track of page usage over a short period of time, while NRU just looks at the usage in the last clock interval. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as adaptive replacement cache), it is rather expensive to implement in practice. There are a few implementation methods for this algorithm that try to reduce the cost yet keep as much of the performance as possible.
The most expensive method is the linked list method, which uses a linked list containing all the pages in memory. At the back of this list is the least recently used page, and at the front is the most recently used page. The cost of this implementation lies in the fact that items in the list will have to be moved about every memory reference, which is a very time-consuming process.
Another method that requires hardware support is as follows: suppose the hardware has a 64-bit counter that is incremented at every instruction. Whenever a page is accessed, it acquires a value equal to the counter at the time of page access. Whenever a page needs to be replaced, the operating system selects the page with the lowest counter and swaps it out.
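The counter-based scheme can be modeled with a logical clock instead of a hardware register (a sketch; in real hardware the per-page timestamp store and the minimum search are the expensive parts):

```python
def lru_faults(refs, frames):
    """LRU via per-page timestamps: each resident page records the 'time'
    of its last use; on a fault the smallest timestamp is evicted."""
    last_use, faults = {}, 0
    for t, page in enumerate(refs):
        if page not in last_use:
            faults += 1
            if len(last_use) == frames:
                victim = min(last_use, key=last_use.get)
                del last_use[victim]
        last_use[page] = t  # every access refreshes the timestamp
    return faults
```

On the classic string 1,2,3,4,1,2,5,1,2,3,4,5 with 3 frames this yields 10 faults, compared with 9 for FIFO and 7 for OPT.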
Because of implementation costs, one may consider algorithms (like those that follow) that are similar to LRU, but which offer cheaper implementations.
One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proven, for example, that LRU can never result in more than N times more page faults than the OPT algorithm, where N is proportional to the number of pages in the managed pool.
On the other hand, LRU's weakness is that its performance tends to degenerate under many quite common reference patterns. For example, if there are N pages in the LRU pool, an application executing a loop over an array of N + 1 pages will cause a page fault on each and every access. As loops over large arrays are common, much effort has been put into modifying LRU to work better in such situations. Many of the proposed LRU modifications try to detect looping reference patterns and switch to a suitable replacement algorithm, like most recently used (MRU).
A comparison of ARC with other algorithms (LRU, MQ, 2Q, LRU-2, LRFU, LIRS) can be found in Megiddo & Modha 2004.[19]
LRU is a marking algorithm, so it is k/(k−h+1)-competitive.
The random replacement algorithm replaces a random page in memory. This eliminates the overhead cost of tracking page references. Usually it fares better than FIFO, and for looping memory references it is better than LRU, although generally LRU performs better in practice. OS/390 uses global LRU approximation and falls back to random replacement when LRU performance degenerates, and the Intel i860 processor used a random replacement policy (Rhodehamel 1989[20]).
The not frequently used (NFU) page replacement algorithm requires a counter; every page has one counter of its own, initially set to 0. At each clock interval, all pages that have been referenced within that interval have their counter incremented by 1. In effect, the counters keep track of how frequently a page has been used. Thus, the page with the lowest counter can be swapped out when necessary.
The main problem with NFU is that it keeps track of the frequency of use without regard to the time span of use. Thus, in a multi-pass compiler, pages which were heavily used during the first pass but are not needed in the second pass will be favoured over pages which are comparatively lightly used in the second pass, as they have higher frequency counters. This results in poor performance. Other common scenarios exist where NFU performs similarly, such as an OS boot-up. Fortunately, a similar and better algorithm exists; its description follows.
The not frequently used page-replacement algorithm generates fewer page faults than the least recently used page replacement algorithm when the page table contains null pointer values.
The aging algorithm is a descendant of the NFU algorithm, with modifications to make it aware of the time span of use. Instead of just incrementing the counters of pages referenced, putting equal emphasis on page references regardless of the time, the reference counter on a page is first shifted right (divided by 2), before adding the referenced bit to the left of that binary number. For instance, if a page has referenced bits 1,0,0,1,1,0 in the past 6 clock ticks, its referenced counter will look like this in chronological order: 10000000, 01000000, 00100000, 10010000, 11001000, 01100100. Page references closer to the present time have more impact than page references long ago. This ensures that pages referenced more recently, though less frequently referenced, will have higher priority over pages more frequently referenced in the past. Thus, when a page needs to be swapped out, the page with the lowest counter will be chosen.
The following Python code simulates the aging algorithm.
Counters V_i are initialized with 0 and updated as described above via V_i ← (R_i << (k − 1)) | (V_i >> 1), using arithmetic shift operators.
In the given example of R-bits for 6 pages over 5 clock ticks, the function prints the R-bits for each clock tick t and the individual counter values V_i for each page in binary representation.[21]
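The original Python listing is not preserved in this copy; the following sketch (function and parameter names are illustrative, with k-bit counters defaulting to k = 8) simulates the update rule described above:

```python
def simulate_aging(R, k=8):
    """Simulate the aging algorithm.

    R is a list of rows, one per clock tick; each row holds the
    referenced bit (0 or 1) of every page at that tick.
    Returns the final k-bit counter V_i of each page.
    """
    V = [0] * len(R[0])
    for t, row in enumerate(R):
        # Shift each counter right, then place this tick's R-bit in the
        # most significant position: V <- (R << (k-1)) | (V >> 1).
        V = [(r << (k - 1)) | (v >> 1) for r, v in zip(row, V)]
        print(t, " ".join(format(v, "0%db" % k) for v in V))
    return V
```

For the single page with R-bits 1, 0, 0, 1, 1, 0 used as an example above, the counter evolves through 10000000, 01000000, 00100000, 10010000, 11001000 and ends at 01100100.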
Note that aging differs from LRU in the sense that aging can only keep track of the references in the latest 16/32 (depending on the bit size of the processor's integers) time intervals. Consequently, two pages may have referenced counters of 00000000, even though one page was referenced 9 intervals ago and the other 1000 intervals ago. Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out. Thus, aging can offer near-optimal performance for a moderate price.
The basic idea behind this algorithm is locality of reference, as used in LRU, but the difference is that in LDF locality is based on distance, not on the references used. In LDF, the page at the longest distance from the current page is replaced. If two pages are at the same distance, the page that follows the current page in anti-clockwise rotation is replaced.[citation needed]
Many of the techniques discussed above assume the presence of a reference bit associated with each page. Some hardware has no such bit, so its efficient use requires techniques that operate well without one.
One notable example is VAX hardware running OpenVMS. This system knows if a page has been modified, but not necessarily if a page has been read. Its approach is known as Secondary Page Caching. Pages removed from working sets (process-private memory, generally) are placed on special-purpose lists while remaining in physical memory for some time. Removing a page from a working set is not technically a page-replacement operation, but effectively identifies that page as a candidate. A page whose backing store is still valid (whose contents are not dirty, or otherwise do not need to be preserved) is placed on the tail of the Free Page List. A page that requires writing to backing store will be placed on the Modified Page List. These actions are typically triggered when the size of the Free Page List falls below an adjustable threshold.
Pages may be selected for working set removal in an essentially random fashion, with the expectation that if a poor choice is made, a future reference may retrieve that page from the Free or Modified list before it is removed from physical memory. A page referenced this way will be removed from the Free or Modified list and placed back into a process working set. The Modified Page List additionally provides an opportunity to write pages out to backing store in groups of more than one page, increasing efficiency. These pages can then be placed on the Free Page List. The sequence of pages that works its way to the head of the Free Page List resembles the results of an LRU or NRU mechanism, and the overall effect has similarities to the Second-Chance algorithm described earlier.
Another example is used by the Linux kernel on ARM. The lack of hardware functionality is made up for by providing two page tables – the processor-native page tables, with neither referenced bits nor dirty bits, and software-maintained page tables with the required bits present. The emulated bits in the software-maintained table are set by page faults. In order to get the page faults, clearing emulated bits in the second table revokes some of the access rights to the corresponding page, which is implemented by altering the native table.
Linux uses a unified page cache for anonymous process memory (such as the heap and anonymous mmap regions), file-backed mmap regions, ordinary read() and write() file accesses, and tmpfs pages.
The unified page cache operates on units of the smallest page size supported by the CPU (4 KiB in ARMv8, x86 and x86-64), with some pages of the next larger size (2 MiB in x86-64) called "huge pages" by Linux. The pages in the page cache are divided into an "active" set and an "inactive" set. Both sets keep an LRU list of pages. In the basic case, when a page is accessed by a user-space program it is put at the head of the inactive set. When it is accessed repeatedly, it is moved to the active list. Linux moves pages from the active set to the inactive set as needed so that the active set is smaller than the inactive set. When a page is moved to the inactive set it is removed from the page table of any process address space, without being paged out of physical memory.[22][23] When a page is removed from the inactive set, it is paged out of physical memory. The sizes of the "active" and "inactive" lists can be queried from /proc/meminfo in the fields "Active", "Inactive", "Active(anon)", "Inactive(anon)", "Active(file)" and "Inactive(file)".
The working set of a process is the set of pages expected to be used by that process during some time interval.
The "working set model" is not a page replacement algorithm in the strict sense; it is actually a kind of medium-term scheduler.
https://en.wikipedia.org/wiki/Page_replacement_algorithm
I do consider assignment statements and pointer variables to be among computer science's "most valuable treasures." —Donald Knuth
In computer science, a pointer is an object in many programming languages that stores a memory address. This can be that of another value located in computer memory, or in some cases, that of memory-mapped computer hardware. A pointer references a location in memory, and obtaining the value stored at that location is known as dereferencing the pointer. As an analogy, a page number in a book's index could be considered a pointer to the corresponding page; dereferencing such a pointer would be done by flipping to the page with the given page number and reading the text found on that page. The actual format and content of a pointer variable is dependent on the underlying computer architecture.
Using pointers significantly improves performance for repetitive operations, like traversing iterable data structures (e.g. strings, lookup tables, control tables, linked lists, and tree structures). In particular, it is often much cheaper in time and space to copy and dereference pointers than it is to copy and access the data to which the pointers point.
Pointers are also used to hold the addresses of entry points for called subroutines in procedural programming and for run-time linking to dynamic link libraries (DLLs). In object-oriented programming, pointers to functions are used for binding methods, often using virtual method tables.
A pointer is a simple, more concrete implementation of the more abstract reference data type. Several languages, especially low-level languages, support some type of pointer, although some have more restrictions on their use than others. While "pointer" has been used to refer to references in general, it more properly applies to data structures whose interface explicitly allows the pointer to be manipulated (arithmetically via pointer arithmetic) as a memory address, as opposed to a magic cookie or capability which does not allow such.[citation needed] Because pointers allow both protected and unprotected access to memory addresses, there are risks associated with using them, particularly in the latter case. Primitive pointers are often stored in a format similar to an integer; however, attempting to dereference or "look up" such a pointer whose value is not a valid memory address could cause a program to crash (or contain invalid data). To alleviate this potential problem, as a matter of type safety, pointers are considered a separate type parameterized by the type of data they point to, even if the underlying representation is an integer. Other measures may also be taken (such as validation and bounds checking) to verify that the pointer variable contains a value that is both a valid memory address and within the numerical range that the processor is capable of addressing.
In 1955, the Soviet Ukrainian computer scientist Kateryna Yushchenko created the Address programming language, which made possible indirect addressing and addresses of the highest rank – analogous to pointers. This language was widely used on computers in the Soviet Union, but it was unknown outside the Soviet Union, and usually Harold Lawson is credited with the invention, in 1964, of the pointer.[2] In 2000, Lawson was presented the Computer Pioneer Award by the IEEE "[f]or inventing the pointer variable and introducing this concept into PL/I, thus providing for the first time, the capability to flexibly treat linked lists in a general-purpose high-level language".[3] His seminal paper on the concepts appeared in the June 1967 issue of CACM, entitled "PL/I List Processing". According to the Oxford English Dictionary, the word pointer first appeared in print as a stack pointer in a technical memorandum by the System Development Corporation.
In computer science, a pointer is a kind of reference.
A data primitive (or just primitive) is any datum that can be read from or written to computer memory using one memory access (for instance, both a byte and a word are primitives).
A data aggregate (or just aggregate) is a group of primitives that are logically contiguous in memory and that are viewed collectively as one datum (for instance, an aggregate could be 3 logically contiguous bytes, the values of which represent the 3 coordinates of a point in space). When an aggregate is entirely composed of the same type of primitive, the aggregate may be called an array; in a sense, a multi-byte word primitive is an array of bytes, and some programs use words in this way.
In the context of these definitions, a byte is the smallest primitive; each memory address specifies a different byte. The memory address of the initial byte of a datum is considered the memory address (or base memory address) of the entire datum.
A memory pointer (or just pointer) is a primitive, the value of which is intended to be used as a memory address; it is said that a pointer points to a memory address. It is also said that a pointer points to a datum [in memory] when the pointer's value is the datum's memory address.
More generally, a pointer is a kind of reference, and it is said that a pointer references a datum stored somewhere in memory; to obtain that datum is to dereference the pointer. The feature that separates pointers from other kinds of reference is that a pointer's value is meant to be interpreted as a memory address, which is a rather low-level concept.
References serve as a level of indirection: a pointer's value determines which memory address (that is, which datum) is to be used in a calculation. Because indirection is a fundamental aspect of algorithms, pointers are often expressed as a fundamental data type in programming languages; in statically (or strongly) typed programming languages, the type of a pointer determines the type of the datum to which the pointer points.
Pointers are a very thin abstraction on top of the addressing capabilities provided by most modern architectures. In the simplest scheme, an address, or a numeric index, is assigned to each unit of memory in the system, where the unit is typically either a byte or a word – depending on whether the architecture is byte-addressable or word-addressable – effectively transforming all of memory into a very large array. The system would then also provide an operation to retrieve the value stored in the memory unit at a given address (usually utilizing the machine's general-purpose registers).
In the usual case, a pointer is large enough to hold more addresses than there are units of memory in the system. This introduces the possibility that a program may attempt to access an address which corresponds to no unit of memory, either because not enough memory is installed (i.e. beyond the range of available memory) or the architecture does not support such addresses. The first case may, on certain platforms such as the Intel x86 architecture, be called a segmentation fault (segfault). The second case is possible in the current implementation of AMD64, where pointers are 64 bits long but addresses only extend to 48 bits. Pointers must conform to certain rules (canonical addresses), so if a non-canonical pointer is dereferenced, the processor raises a general protection fault.
On the other hand, some systems have more units of memory than there are addresses. In this case, a more complex scheme such as memory segmentation or paging is employed to use different parts of the memory at different times. The last incarnations of the x86 architecture support up to 36 bits of physical memory addresses, which were mapped to the 32-bit linear address space through the PAE paging mechanism. Thus, only 1/16 of the possible total memory may be accessed at a time. Another example in the same computer family was the 16-bit protected mode of the 80286 processor, which, though supporting only 16 MB of physical memory, could access up to 1 GB of virtual memory, but the combination of 16-bit address and segment registers made accessing more than 64 KB in one data structure cumbersome.
In order to provide a consistent interface, some architectures provide memory-mapped I/O, which allows some addresses to refer to units of memory while others refer to device registers of other devices in the computer. There are analogous concepts such as file offsets, array indices, and remote object references that serve some of the same purposes as addresses for other types of objects.
Pointers are directly supported without restrictions in languages such as PL/I, C, C++, Pascal, FreeBASIC, and implicitly in most assembly languages. They are used mainly to construct references, which in turn are fundamental to constructing nearly all data structures, and to pass data between different parts of a program.
In functional programming languages that rely heavily on lists, data references are managed abstractly by using primitive constructs like cons and the corresponding elements car and cdr, which can be thought of as specialised pointers to the first and second components of a cons-cell. This gives rise to some of the idiomatic "flavour" of functional programming. By structuring data in such cons-lists, these languages facilitate recursive means for building and processing data—for example, by recursively accessing the head and tail elements of lists of lists; e.g. "taking the car of the cdr of the cdr". By contrast, memory management based on pointer dereferencing in some approximation of an array of memory addresses facilitates treating variables as slots into which data can be assigned imperatively.
When dealing with arrays, the critical lookup operation typically involves a stage called address calculation which involves constructing a pointer to the desired data element in the array. In other data structures, such as linked lists, pointers are used as references to explicitly tie one piece of the structure to another.
Pointers are used to pass parameters by reference. This is useful if the programmer wants a function's modifications to a parameter to be visible to the function's caller. This is also useful for returning multiple values from a function.
Pointers can also be used to allocate and deallocate dynamic variables and arrays in memory. Since a variable will often become redundant after it has served its purpose, it is a waste of memory to keep it, and therefore it is good practice to deallocate it (using the original pointer reference) when it is no longer needed. Failure to do so may result in a memory leak (where available free memory gradually, or in severe cases rapidly, diminishes because of an accumulation of numerous redundant memory blocks).
The basic syntax to define a pointer is:[4]
This declares ptr as the identifier of an object of the following type:
This is usually stated more succinctly as "ptr is a pointer to int."
Because the C language does not specify an implicit initialization for objects of automatic storage duration,[5] care should often be taken to ensure that the address to which ptr points is valid; this is why it is sometimes suggested that a pointer be explicitly initialized to the null pointer value, which is traditionally specified in C with the standardized macro NULL:[6]
Dereferencing a null pointer in C produces undefined behavior,[7] which could be catastrophic. However, most implementations[citation needed] simply halt execution of the program in question, usually with a segmentation fault.
However, initializing pointers unnecessarily could hinder program analysis, thereby hiding bugs.
In any case, once a pointer has been declared, the next logical step is for it to point at something:
This assigns the value of the address of a to ptr. For example, if a is stored at memory location 0x8130, then the value of ptr will be 0x8130 after the assignment. To dereference the pointer, an asterisk is used again:
This means: take the contents of ptr (which is 0x8130), "locate" that address in memory and set its value to 8.
If a is later accessed again, its new value will be 8.
This example may be clearer if memory is examined directly.
Assume that a is located at address 0x8130 in memory and ptr at 0x8134; also assume this is a 32-bit machine such that an int is 32 bits wide. The following is what would be in memory after the following code snippet is executed:
(The NULL pointer shown here is 0x00000000.)
By assigning the address of a to ptr:
yields the following memory values:
Then by dereferencing ptr by coding:
the computer will take the contents of ptr (which is 0x8130), 'locate' that address, and assign 8 to that location, yielding the following memory:
Clearly, accessing a will yield the value 8, because the previous instruction modified the contents of a by way of the pointer ptr.
When setting up data structures like lists, queues and trees, it is necessary to have pointers to help manage how the structure is implemented and controlled. Typical examples of pointers are start pointers, end pointers, and stack pointers. These pointers can either be absolute (the actual physical address or a virtual address in virtual memory) or relative (an offset from an absolute start address ("base") that typically uses fewer bits than a full address, but will usually require one additional arithmetic operation to resolve).
Relative addresses are a form of manual memory segmentation, and share many of its advantages and disadvantages. A two-byte offset, containing a 16-bit, unsigned integer, can be used to provide relative addressing for up to 64 KiB (2^16 bytes) of a data structure. This can easily be extended to 128, 256 or 512 KiB if the address pointed to is forced to be aligned on a half-word, word or double-word boundary (but requiring an additional "shift left" bitwise operation—by 1, 2 or 3 bits—in order to adjust the offset by a factor of 2, 4 or 8, before its addition to the base address). Generally, though, such schemes are a lot of trouble, and for convenience to the programmer absolute addresses (and underlying that, a flat address space) are preferred.
A one-byte offset, such as the hexadecimal ASCII value of a character (e.g. X'29'), can be used to point to an alternative integer value (or index) in an array (e.g. X'01'). In this way, characters can be very efficiently translated from 'raw data' to a usable sequential index and then to an absolute address without a lookup table.
In C, array indexing is formally defined in terms of pointer arithmetic; that is, the language specification requires that array[i] be equivalent to *(array + i).[8] Thus in C, arrays can be thought of as pointers to consecutive areas of memory (with no gaps),[8] and the syntax for accessing arrays is identical to that which can be used to dereference pointers. For example, an array array can be declared and used in the following manner:
This allocates a block of five integers and names the block array, which acts as a pointer to the block. Another common use of pointers is to point to dynamically allocated memory from malloc, which returns a consecutive block of memory of no less than the requested size that can be used as an array.
While most operators on arrays and pointers are equivalent, the result of the sizeof operator differs. In this example, sizeof(array) will evaluate to 5*sizeof(int) (the size of the array), while sizeof(ptr) will evaluate to sizeof(int*), the size of the pointer itself.
Default values of an array can be declared like:
If array is located in memory starting at address 0x1000 on a 32-bit little-endian machine, then memory will contain the following (values are in hexadecimal, like the addresses):
Represented here are five integers: 2, 4, 3, 1, and 5. These five integers occupy 32 bits (4 bytes) each, with the least-significant byte stored first (this is a little-endian CPU architecture), and are stored consecutively starting at address 0x1000.
The syntax for C with pointers is:
The last example above shows how to access the contents of array.
Below is an example definition of a linked list in C.
This pointer-recursive definition is essentially the same as the reference-recursive definition from the language Haskell:
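The Haskell listing is also missing from this copy; the declaration matching the names Nil and Cons used below, plus a small demonstration, is:

```haskell
-- the reference-recursive list definition referred to in the text
data Link a = Nil | Cons a (Link a)

-- a small demonstration: the length of a two-element list
len :: Link a -> Int
len Nil         = 0
len (Cons _ xs) = 1 + len xs

main :: IO ()
main = print (len (Cons 1 (Cons 2 Nil)))
```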
Nil is the empty list, and Cons a (Link a) is a cons cell of type a with another link also of type a.
The definition with references, however, is type-checked and does not use potentially confusing signal values. For this reason, data structures in C are usually dealt with via wrapper functions, which are carefully checked for correctness.
Pointers can be used to pass variables by their address, allowing their value to be changed. For example, consider the following C code:
In some programs, the required amount of memory depends on what the user may enter. In such cases the programmer needs to allocate memory dynamically. This is done by allocating memory on the heap rather than on the stack, where variables usually are stored (although variables can also be stored in the CPU registers). Dynamic memory allocation can only be made through pointers, and names – as with common variables – cannot be given.
Pointers are used to store and manage the addresses of dynamically allocated blocks of memory. Such blocks are used to store data objects or arrays of objects. Most structured and object-oriented languages provide an area of memory, called the heap or free store, from which objects are dynamically allocated.
The example C code below illustrates how structure objects are dynamically allocated and referenced. The standard C library provides the function malloc() for allocating memory blocks from the heap. It takes the size of an object to allocate as a parameter and returns a pointer to a newly allocated block of memory suitable for storing the object, or it returns a null pointer if the allocation failed.
The code below illustrates how memory objects are dynamically deallocated, i.e., returned to the heap or free store. The standard C library provides the function free() for deallocating a previously allocated memory block and returning it back to the heap.
On some computing architectures, pointers can be used to directly manipulate memory or memory-mapped devices.
Assigning addresses to pointers is an invaluable tool when programming microcontrollers. Below is a simple example declaring a pointer of type int and initialising it to a hexadecimal address, in this example the constant 0x7FFF:
In the mid-1980s, using the BIOS to access the video capabilities of PCs was slow. Applications that were display-intensive typically used to access CGA video memory directly by casting the hexadecimal constant 0xB8000 to a pointer to an array of 80 unsigned 16-bit int values. Each value consisted of an ASCII code in the low byte and a colour in the high byte. Thus, to put the letter 'A' at row 5, column 2 in bright white on blue, one would write code like the following:
Control tables that are used to control program flow usually make extensive use of pointers. The pointers, usually embedded in a table entry, may, for instance, be used to hold the entry points to subroutines to be executed, based on certain conditions defined in the same table entry. The pointers can however be simply indexes to other separate, but associated, tables comprising an array of the actual addresses or the addresses themselves (depending upon the programming language constructs available). They can also be used to point to earlier table entries (as in loop processing) or forward to skip some table entries (as in a switch or "early" exit from a loop). For this latter purpose, the "pointer" may simply be the table entry number itself and can be transformed into an actual address by simple arithmetic.
In many languages, pointers have the additional restriction that the object they point to has a specific type. For example, a pointer may be declared to point to an integer; the language will then attempt to prevent the programmer from pointing it to objects which are not integers, such as floating-point numbers, eliminating some errors.
For example, in C:
money would be an integer pointer and bags would be a char pointer.
The following would yield a compiler warning of "assignment from incompatible pointer type" under GCC:
because money and bags were declared with different types.
To suppress the compiler warning, it must be made explicit that you do indeed wish to make the assignment by typecasting it:
which says to cast the integer pointer money to a char pointer and assign the result to bags.
A 2005 draft of the C standard requires that casting a pointer derived from one type to one of another type should maintain the alignment correctness for both types (6.3.2.3 Pointers, par. 7):[9]
In languages that allow pointer arithmetic, arithmetic on pointers takes into account the size of the type. For example, adding an integer number to a pointer produces another pointer that points to an address that is higher by that number times the size of the type. This allows us to easily compute the address of elements of an array of a given type, as was shown in the C arrays example above. When a pointer of one type is cast to another type of a different size, the programmer should expect that pointer arithmetic will be calculated differently. In C, for example, if the money array starts at 0x2000 and sizeof(int) is 4 bytes whereas sizeof(char) is 1 byte, then money + 1 will point to 0x2004, but bags + 1 would point to 0x2001. Other risks of casting include loss of data when "wide" data is written to "narrow" locations (e.g. bags[0] = 65537;), unexpected results when bit-shifting values, and comparison problems, especially with signed vs unsigned values.
Although it is impossible in general to determine at compile-time which casts are safe, some languages storerun-time type informationwhich can be used to confirm that these dangerous casts are valid at runtime. Other languages merely accept a conservative approximation of safe casts, or none at all.
In C and C++, even if two pointers compare as equal, that does not mean they are equivalent. In these languages and LLVM, the rule is interpreted to mean that "just because two pointers point to the same address, does not mean they are equal in the sense that they can be used interchangeably", the difference between the pointers being referred to as their provenance.[10] Casting to an integer type such as uintptr_t is implementation-defined, and the comparison it provides does not provide any more insight as to whether the two pointers are interchangeable. In addition, further conversion to bytes and arithmetic will throw off optimizers trying to keep track of the use of pointers, a problem still being elucidated in academic research.[11]
As a pointer allows a program to attempt to access an object that may not be defined, pointers can be the origin of a variety of programming errors. However, the usefulness of pointers is so great that it can be difficult to perform programming tasks without them. Consequently, many languages have created constructs designed to provide some of the useful features of pointers without some of their pitfalls, also sometimes referred to as pointer hazards. In this context, pointers that directly address memory (as used in this article) are referred to as raw pointers, by contrast with smart pointers or other variants.
One major problem with pointers is that, as long as they can be directly manipulated as a number, they can be made to point to unused addresses or to data which is being used for other purposes. Many languages, including most functional programming languages and recent imperative programming languages like Java, replace pointers with a more opaque type of reference, typically referred to as simply a reference, which can only be used to refer to objects and not manipulated as numbers, preventing this type of error. Array indexing is handled as a special case.
A pointer which does not have any address assigned to it is called a wild pointer. Any attempt to use such uninitialized pointers can cause unexpected behavior, either because the initial value is not a valid address, or because using it may damage other parts of the program. The result is often a segmentation fault, storage violation or wild branch (if used as a function pointer or branch address).
In systems with explicit memory allocation, it is possible to create a dangling pointer by deallocating the memory region it points into. This type of pointer is dangerous and subtle because a deallocated memory region may contain the same data as it did before it was deallocated, but may then be reallocated and overwritten by unrelated code, unknown to the earlier code. Languages with garbage collection prevent this type of error because deallocation is performed automatically when there are no more references in scope.
Some languages, like C++, support smart pointers, which use a simple form of reference counting to help track allocation of dynamic memory in addition to acting as a reference. In the absence of reference cycles, where an object refers to itself indirectly through a sequence of smart pointers, these eliminate the possibility of dangling pointers and memory leaks. Delphi strings support reference counting natively.
The Rust programming language introduces a borrow checker, pointer lifetimes, and an optimisation based around option types for null pointers to eliminate pointer bugs, without resorting to garbage collection.
A null pointer has a value reserved for indicating that the pointer does not refer to a valid object. Null pointers are routinely used to represent conditions such as the end of a list of unknown length or the failure to perform some action; this use of null pointers can be compared to nullable types and to the Nothing value in an option type.
A dangling pointer is a pointer that does not point to a valid object and consequently may make a program crash or behave oddly. In the Pascal or C programming languages, pointers that are not specifically initialized may point to unpredictable addresses in memory.
The following example code shows a dangling pointer:
Here, p2 may point anywhere in memory, so performing the assignment *p2 = 'b'; can corrupt an unknown area of memory or trigger a segmentation fault.
Where a pointer is used as the address of the entry point to a program or the start of a function which doesn't return anything, and is also either uninitialized or corrupted, if a call or jump is nevertheless made to this address, a "wild branch" is said to have occurred. In other words, a wild branch is a function pointer that is wild (dangling).
The consequences are usually unpredictable, and the error may present itself in several different ways depending upon whether or not the pointer is a "valid" address and whether or not there is (coincidentally) a valid instruction (opcode) at that address. The detection of a wild branch can present one of the most difficult and frustrating debugging exercises, since much of the evidence may already have been destroyed beforehand or by execution of one or more inappropriate instructions at the branch location. If available, an instruction set simulator can usually not only detect a wild branch before it takes effect, but also provide a complete or partial trace of its history.
An autorelative pointer is a pointer whose value is interpreted as an offset from the address of the pointer itself; thus, if a data structure has an autorelative pointer member that points to some portion of the data structure itself, then the data structure may be relocated in memory without having to update the value of the autorelative pointer.[12]
The cited patent also uses the term self-relative pointer to mean the same thing. However, the meaning of that term has been used in other ways:
A based pointer is a pointer whose value is an offset from the value of another pointer. This can be used to store and load blocks of data, assigning the address of the beginning of the block to the base pointer.[14]
In some languages, a pointer can reference another pointer, requiring multiple dereference operations to get to the original value. While each level of indirection may add a performance cost, it is sometimes necessary in order to provide correct behavior for complex data structures. For example, in C it is typical to define a linked list in terms of an element that contains a pointer to the next element of the list:
This implementation uses a pointer to the first element in the list as a surrogate for the entire list. If a new value is added to the beginning of the list, head has to be changed to point to the new element. Since C arguments are always passed by value, using double indirection allows the insertion to be implemented correctly, and has the desirable side-effect of eliminating special case code to deal with insertions at the front of the list:
In this case, if the value of item is less than that of head, the caller's head is properly updated to the address of the new item.
A basic example is in the argv argument to the main function in C (and C++), which is given in the prototype as char **argv—this is because the variable argv itself is a pointer to an array of strings (an array of arrays), so *argv is a pointer to the 0th string (by convention the name of the program), and **argv is the 0th character of the 0th string.
In some languages, a pointer can reference executable code, i.e., it can point to a function, method, or procedure. A function pointer will store the address of a function to be invoked. While this facility can be used to call functions dynamically, it is often a favorite technique of virus and other malicious software writers.
In doubly linked lists or tree structures, a back pointer held on an element 'points back' to the item referring to the current element. These are useful for navigation and manipulation, at the expense of greater memory use.
It is possible to simulate pointer behavior using an index to a (normally one-dimensional) array.
Primarily for languages which do not support pointers explicitly but do support arrays, the array can be thought of and processed as if it were the entire memory range (within the scope of the particular array) and any index to it can be thought of as equivalent to a general-purpose register in assembly language (that points to the individual bytes but whose actual value is relative to the start of the array, not its absolute address in memory).
Assuming the array is, say, a contiguous 16 megabyte character data structure, individual bytes (or a string of contiguous bytes within the array) can be directly addressed and manipulated using the name of the array with a 31-bit unsigned integer as the simulated pointer (this is quite similar to the C arrays example shown above). Pointer arithmetic can be simulated by adding or subtracting from the index, with minimal additional overhead compared to genuine pointer arithmetic.
It is even theoretically possible, using the above technique together with a suitable instruction set simulator, to simulate any machine code or the intermediate (byte code) of any processor/language in another language that does not support pointers at all (for example Java/JavaScript). To achieve this, the binary code can initially be loaded into contiguous bytes of the array for the simulator to "read", interpret and execute entirely within the memory containing the same array.
If necessary, to completely avoid buffer overflow problems, bounds checking can usually be inserted by the compiler (or if not, hand coded in the simulator).
Ada is a strongly typed language where all pointers are typed and only safe type conversions are permitted. All pointers are by default initialized to null, and any attempt to access data through a null pointer causes an exception to be raised. Pointers in Ada are called access types. Ada 83 did not permit arithmetic on access types (although many compiler vendors provided for it as a non-standard feature), but Ada 95 supports "safe" arithmetic on access types via the package System.Storage_Elements.
Several old versions of BASIC for the Windows platform had support for STRPTR() to return the address of a string, and for VARPTR() to return the address of a variable. Visual Basic 5 also had support for OBJPTR() to return the address of an object interface, and for an ADDRESSOF operator to return the address of a function. The types of all of these are integers, but their values are equivalent to those held by pointer types.
Newer dialects of BASIC, such as FreeBASIC or BlitzMax, have exhaustive pointer implementations, however. In FreeBASIC, arithmetic on ANY pointers (equivalent to C's void*) is treated as though the ANY pointer had a byte width. ANY pointers cannot be dereferenced, as in C. Also, casting between ANY and any other type's pointers will not generate any warnings.
In C and C++, pointers are variables that store addresses and can be null. Each pointer has a type it points to, but one can freely cast between pointer types (but not between a function pointer and an object pointer). A special pointer type called the "void pointer" allows pointing to any (non-function) object, but is limited by the fact that it cannot be dereferenced directly (it must be cast first). The address itself can often be directly manipulated by casting a pointer to and from an integral type of sufficient size, though the results are implementation-defined and may indeed cause undefined behavior; while earlier C standards did not have an integral type that was guaranteed to be large enough, C99 specifies the uintptr_t typedef name, defined in <stdint.h>, but an implementation need not provide it.
C++ fully supports C pointers and C typecasting. It also supports a new group of typecasting operators to help catch some unintended dangerous casts at compile-time. Since C++11, the C++ standard library also provides smart pointers (unique_ptr, shared_ptr and weak_ptr) which can be used in some situations as a safer alternative to primitive C pointers. C++ also supports another form of reference, quite different from a pointer, called simply a reference or reference type.
Pointer arithmetic, that is, the ability to modify a pointer's target address with arithmetic operations (as well as magnitude comparisons), is restricted by the language standard to remain within the bounds of a single array object (or just after it), and will otherwise invoke undefined behavior. Adding or subtracting from a pointer moves it by a multiple of the size of its data type. For example, adding 1 to a pointer to 4-byte integer values will increment the pointer's pointed-to byte-address by 4. This has the effect of incrementing the pointer to point at the next element in a contiguous array of integers—which is often the intended result. Pointer arithmetic cannot be performed on void pointers because the void type has no size, and thus the pointed address cannot be added to, although gcc and other compilers will perform byte arithmetic on void* as a non-standard extension, treating it as if it were char *.
Pointer arithmetic provides the programmer with a single way of dealing with different types: adding and subtracting the number of elements required instead of the actual offset in bytes. (Pointer arithmetic with char * pointers uses byte offsets, because sizeof(char) is 1 by definition.) In particular, the C definition explicitly declares that the syntax a[n], which is the n-th element of the array a, is equivalent to *(a + n), which is the content of the element pointed to by a + n. This implies that n[a] is equivalent to a[n], and one can write, e.g., a[3] or 3[a] equally well to access the fourth element of an array a.
While powerful, pointer arithmetic can be a source of computer bugs. It tends to confuse novice programmers, forcing them into different contexts: an expression can be an ordinary arithmetic one or a pointer arithmetic one, and sometimes it is easy to mistake one for the other. In response to this, many modern high-level computer languages (for example Java) do not permit direct access to memory using addresses. Also, the safe C dialect Cyclone addresses many of the issues with pointers. See C programming language for more discussion.
The void pointer, or void*, is supported in ANSI C and C++ as a generic pointer type. A pointer to void can store the address of any object (not function),[a] and, in C, is implicitly converted to any other object pointer type on assignment, but it must be explicitly cast if dereferenced. K&R C used char* for the "type-agnostic pointer" purpose (before ANSI C).
C++ does not allow the implicit conversion of void* to other pointer types, even in assignments. This was a design decision to avoid careless and even unintended casts, though most compilers only output warnings, not errors, when encountering other casts.
In C++, there is no void& (reference to void) to complement void* (pointer to void), because references behave like aliases to the variables they point to, and there can never be a variable whose type is void.
In C++, pointers to non-static members of a class can be defined. If a class C has a member T a, then &C::a is a pointer to the member a of type T C::*. This member can be an object or a function.[16] They can be used on the right-hand side of the operators .* and ->* to access the corresponding member.
These pointer declarations cover most variants of pointer declarations. Of course it is possible to have triple pointers, but the main principles behind a triple pointer already exist in a double pointer. The naming used here is what the expression typeid(type).name() equals for each of these types when using g++ or clang.[17][18]
The following declarations involving pointers-to-member are valid only in C++:
The () and [] have a higher priority than *.[19]
In the C# programming language, pointers are supported by either marking blocks of code that include pointers with the unsafe keyword, or by using the System.Runtime.CompilerServices assembly provisions for pointer access.
The syntax is essentially the same as in C++, and the address pointed to can be either managed or unmanaged memory. However, pointers to managed memory (any pointer to a managed object) must be declared using the fixed keyword, which prevents the garbage collector from moving the pointed object as part of memory management while the pointer is in scope, thus keeping the pointer address valid.
An exception to this is the IntPtr structure, which is a memory-managed equivalent to int* and does not require the unsafe keyword nor the CompilerServices assembly. This type is often returned when using methods from the System.Runtime.InteropServices namespace.
The .NET framework includes many classes and methods in the System and System.Runtime.InteropServices namespaces (such as the Marshal class) which convert .NET types (for example, System.String) to and from many unmanaged types and pointers (for example, LPWSTR or void*) to allow communication with unmanaged code. Most such methods have the same security permission requirements as unmanaged code, since they can affect arbitrary places in memory.
The COBOL programming language supports pointers to variables. Primitive or group (record) data objects declared within the LINKAGE SECTION of a program are inherently pointer-based, where the only memory allocated within the program is space for the address of the data item (typically a single memory word). In program source code, these data items are used just like any other WORKING-STORAGE variable, but their contents are implicitly accessed indirectly through their LINKAGE pointers.
Memory space for each pointed-to data object is typically allocated dynamically using external CALL statements or via embedded extended language constructs such as EXEC CICS or EXEC SQL statements.
Extended versions of COBOL also provide pointer variables declared with USAGE IS POINTER clauses. The values of such pointer variables are established and modified using SET and SET ADDRESS statements.
Some extended versions of COBOL also provide PROCEDURE-POINTER variables, which are capable of storing the addresses of executable code.
The PL/I language provides full support for pointers to all data types (including pointers to structures), recursion, multitasking, string handling, and extensive built-in functions. PL/I was quite a leap forward compared to the programming languages of its time.[citation needed] PL/I pointers are untyped, and therefore no casting is required for pointer dereferencing or assignment. The declaration syntax for a pointer is DECLARE xxx POINTER;, which declares a pointer named "xxx". Pointers are used with BASED variables. A based variable can be declared with a default locator (DECLARE xxx BASED(ppp);) or without one (DECLARE xxx BASED;), where xxx is a based variable, which may be an element variable, a structure, or an array, and ppp is the default pointer. Such a variable can be addressed without an explicit pointer reference (xxx=1;), or may be addressed with an explicit reference to the default locator (ppp), or to any other pointer (qqq->xxx=1;).
Pointer arithmetic is not part of the PL/I standard, but many compilers allow expressions of the form ptr = ptr±expression. IBM PL/I also has the builtin function PTRADD to perform the arithmetic. Pointer arithmetic is always performed in bytes.
IBM Enterprise PL/I compilers have a new form of typed pointer called a HANDLE.
The D programming language is a derivative of C and C++ which fully supports C pointers and C typecasting.
The Eiffel object-oriented language employs value and reference semantics without pointer arithmetic. Nevertheless, pointer classes are provided. They offer pointer arithmetic, typecasting, explicit memory management, interfacing with non-Eiffel software, and other features.
Fortran-90 introduced a strongly typed pointer capability. Fortran pointers contain more than just a simple memory address. They also encapsulate the lower and upper bounds of array dimensions, strides (for example, to support arbitrary array sections), and other metadata. An association operator, =>, is used to associate a POINTER to a variable which has a TARGET attribute. The Fortran-90 ALLOCATE statement may also be used to associate a pointer to a block of memory. For example, the following code might be used to define and create a linked list structure:
Fortran-2003 adds support for procedure pointers. Also, as part of the C Interoperability feature, Fortran-2003 supports intrinsic functions for converting C-style pointers into Fortran pointers and back.
Go has pointers. Its declaration syntax is equivalent to that of C, but written the other way around, ending with the type. Unlike C, Go has garbage collection, and disallows pointer arithmetic. Reference types, like in C++, do not exist. Some built-in types, like maps and channels, are boxed (i.e. internally they are pointers to mutable structures), and are initialized using the make function. In an approach to unified syntax between pointers and non-pointers, the arrow (->) operator has been dropped: the dot operator on a pointer refers to the field or method of the dereferenced object. This, however, only works with 1 level of indirection.
There is no explicit representation of pointers in Java. Instead, more complex data structures like objects and arrays are implemented using references. The language does not provide any explicit pointer manipulation operators. It is still possible for code to attempt to dereference a null reference (null pointer), however, which results in a run-time exception being thrown. The space occupied by unreferenced memory objects is recovered automatically by garbage collection at run-time.[20]
In Modula-2, pointers are implemented very much as in Pascal, as are VAR parameters in procedure calls. Modula-2 is even more strongly typed than Pascal, with fewer ways to escape the type system. Some of the variants of Modula-2 (such as Modula-3) include garbage collection.
Much as with Modula-2, pointers are available in Oberon. There are still fewer ways to evade the type system and so Oberon and its variants are still safer with respect to pointers than Modula-2 or its variants. As with Modula-3, garbage collection is a part of the language specification.
Unlike many languages that feature pointers, standard ISO Pascal only allows pointers to reference dynamically created variables that are anonymous and does not allow them to reference standard static or local variables.[21] It does not have pointer arithmetic. Pointers also must have an associated type, and a pointer to one type is not compatible with a pointer to another type (e.g. a pointer to a char is not compatible with a pointer to an integer). This helps eliminate the type security issues inherent with other pointer implementations, particularly those used for PL/I or C. It also removes some risks caused by dangling pointers, but the ability to dynamically let go of referenced space by using the dispose standard procedure (which has the same effect as the free library function found in C) means that the risk of dangling pointers has not been entirely eliminated.[22]
However, in some commercial and open source Pascal (or derivative) compiler implementations—like Free Pascal,[23] Turbo Pascal or the Object Pascal in Embarcadero Delphi—a pointer is allowed to reference standard static or local variables and can be cast from one pointer type to another. Moreover, pointer arithmetic is unrestricted: adding or subtracting from a pointer moves it by that number of bytes in either direction, but using the Inc or Dec standard procedures with it moves the pointer by the size of the data type it is declared to point to. An untyped pointer is also provided under the name Pointer, which is compatible with other pointer types.
The Perl programming language supports pointers, although rarely used, in the form of the pack and unpack functions. These are intended only for simple interactions with compiled OS libraries. In all other cases, Perl uses references, which are typed and do not allow any form of pointer arithmetic. They are used to construct complex data structures.[24]
|
https://en.wikipedia.org/wiki/Pointer_(computer_programming)
|
W^X (write xor execute, pronounced W xor X) is a security policy in operating systems and software frameworks. It implements executable space protection by ensuring every memory page (a fixed-size block in a program's virtual address space, the memory layout it uses) is either writable or executable, but not both. Without such protection, a program can write (as data, "W") CPU instructions in an area of memory intended for data and then run (as executable, "X"; or read-execute, "RX") those instructions. This can be dangerous if the writer of the memory is malicious.
The terminology was first introduced in 2003 for Unix-like systems, but is today also used by some multi-platform systems (such as .NET[1]). Other operating systems have adopted similar policies under different names (e.g., DEP in Windows).
In Unix, W^X is typically controlled via the mprotect system call. It is relatively simple on processor architectures supporting fine-grained page permissions, such as SPARC, x86-64, PA-RISC, Alpha, and ARM.
The term W^X has also been applied to file system write/execute permissions to mitigate file write vulnerabilities (as with those in memory) and attacker persistence.[2] Enforcing restrictions on file permissions can also close gaps in W^X enforcement caused by memory-mapped files.[3][4] Outright forbidding the use of arbitrary native code can also mitigate kernel and CPU vulnerabilities not exposed via the existing code on the computer.[5] A less intrusive approach is to lock a file for the duration of any mapping into executable memory, which suffices to prevent post-inspection bypasses.
Some early Intel 64 processors lacked the NX bit required for W^X, but this appeared in later chips. On more limited processors such as the Intel i386, W^X requires using the CS code segment limit as a "line in the sand": a point in the address space above which execution is not permitted and data is located, and below which execution is allowed and executable pages are placed. This scheme was used in Exec Shield.[6]
Linker changes are generally required to separate data from code (such as trampolines that are needed for linker and library runtime functions). The switch allowing mixing is usually called execstack on Unix-like systems.[7]
W^X can also pose a minor problem for just-in-time compilation, which involves an interpreter generating machine code on the fly and then running it. The simple solution used by most, historically including Firefox, involves simply making the page executable after the interpreter is done writing machine code, using VirtualProtect on Windows or mprotect on Unix-like operating systems. The other solution involves mapping the same region of memory to two pages, one with RW and the other with RX.[8] There is no simple consensus on which solution is safer: supporters of the latter approach believe allowing a page that has ever been writable to be executed defeats the point of W^X (there exists an SELinux policy to control such operations called allow_execmod) and that address space layout randomization would make it safe to put both pages in the same process. Supporters of the former approach believe that the latter approach is only safe when the two pages are given to two separate processes, and that inter-process communication would be costlier than calling mprotect.
W^X was first implemented in OpenBSD 3.3, released May 2003. In 2004, Microsoft introduced a similar feature called DEP (Data Execution Prevention) in Windows XP. Similar features are available for other operating systems, including the PaX and Exec Shield patches for Linux, and NetBSD's implementation of PaX. In Red Hat Enterprise Linux (and automatically CentOS) version 5, via Linux kernel 2.6.18-8, SELinux received the allow_execmem, allow_execheap, and allow_execmod policies that provide W^X when disabled.
Although W^X (or DEP) has only protected userland programs for most of its existence, in 2012 Microsoft extended it to the Windows kernel on the x86 and ARM architectures.[9]In late 2014 and early 2015, W^X was added in the OpenBSD kernel on the AMD64 architecture.[10]In early 2016, W^X was fully implemented on NetBSD's AMD64 kernel and partially on the i386 kernel.
macOS computers running on Apple silicon processors enforce W^X for all programs. Intel-based Macs enforce the policy only for programs that use the OS's Hardened Runtime mode.[11][12]
Starting with Firefox 46 in 2016 and ending with Firefox 116 in 2023, Firefox's virtual machine for JavaScript implemented the W^X policy.[8] This was later rolled back on some platforms for performance reasons, though it remained on platforms which enforce W^X for all programs.[13]
Starting with .NET 6.0 in 2021, .NET uses W^X.[1]
|
https://en.wikipedia.org/wiki/W%5EX
|
In computer storage, Bélády's anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns. This phenomenon is commonly experienced when using the first-in first-out (FIFO) page replacement algorithm. In FIFO, the number of page faults may or may not increase as the page frames increase, but in optimal and stack-based algorithms like Least Recently Used (LRU), the number of page faults decreases as the page frames increase. László Bélády demonstrated this in 1969.[1]
In common computer memory management, information is loaded in specific-sized chunks. Each chunk is referred to as a page. Main memory can hold only a limited number of pages at a time; it requires a frame for each page it can load. A page fault occurs when a requested page is not in main memory and must be loaded from disk.
When a page fault occurs and all frames are in use, one must be cleared to make room for the new page. A simple algorithm is FIFO: whichever page has been in the frames the longest is the one that is cleared. Until Bélády's anomaly was demonstrated, it was believed that an increase in the number of page frames would always result in the same number of, or fewer, page faults.
Bélády, Nelson and Shedler constructed reference strings for which the FIFO page replacement algorithm produced nearly twice as many page faults in a larger memory than in a smaller one and they conjectured that 2 is a general bound.[citation needed]
In 2010, Fornai and Iványi showed that the anomaly is in fact unbounded and that one can construct a reference string yielding any arbitrary page-fault ratio.[citation needed]
|
https://en.wikipedia.org/wiki/B%C3%A9l%C3%A1dy%27s_anomaly
|
In computer architecture, the memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies.[1] Memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower level programming constructs involving locality of reference.
Designing for high performance requires considering the restrictions of the memory hierarchy, i.e. the size and capabilities of each component. Each of the various components can be viewed as part of a hierarchy of memories (m1, m2, ..., mn) in which each member mi is typically smaller and faster than the next highest member mi+1 of the hierarchy. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signaling for activating the transfer.
There are four major storage levels.[1]
This is a general memory hierarchy structuring. Many other structures are useful. For example, a paging algorithm may be considered as a level for virtual memory when designing a computer architecture, and one can include a level of nearline storage between online and offline storage.
The number of levels in the memory hierarchy and the performance at each level have increased over time. The types of memory or storage components have also changed historically.[6] For example, the memory hierarchy of an Intel Haswell Mobile[7] processor circa 2013 is:
The lower levels of the hierarchy – from mass storage downwards – are also known as tiered storage. The formal distinction between online, nearline, and offline storage is:[12]
For example, always-on spinning disks are online, while spinning disks that spin down, such as massive arrays of idle disk (MAID), are nearline. Removable media such as tape cartridges that can be automatically loaded, as in a tape library, are nearline, while cartridges that must be manually loaded are offline.
Most modern CPUs are so fast that, for most program workloads, the bottleneck is the locality of reference of memory accesses and the efficiency of the caching and memory transfer between different levels of the hierarchy[citation needed]. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete. This is sometimes called the space cost, as a larger memory object is more likely to overflow a small and fast level and require use of a larger, slower level. The resulting load on memory use is known as pressure (respectively register pressure, cache pressure, and (main) memory pressure). Terms for data being missing from a higher level and needing to be fetched from a lower level are, respectively: register spilling (due to register pressure: register to cache), cache miss (cache to main memory), and (hard) page fault (real main memory to virtual memory, i.e. mass storage, commonly referred to as disk regardless of the actual mass storage technology used).
Modern programming languages mainly assume two levels of memory, main (working) memory and mass storage, though in assembly language and inline assemblers in languages such as C, registers can be directly accessed. Taking optimal advantage of the memory hierarchy requires the cooperation of programmers, hardware, and compilers (as well as underlying support from the operating system):
Many programmers assume one level of memory. This works fine until the application hits a performance wall. Then the memory hierarchy will be assessed during code refactoring.
|
https://en.wikipedia.org/wiki/Memory_hierarchy
|
In computer science, instruction scheduling is a compiler optimization used to improve instruction-level parallelism, which improves performance on machines with instruction pipelines. Put more simply, it tries to do the following without changing the meaning of the code:
The pipeline stalls can be caused by structural hazards (processor resource limit), data hazards (output of one instruction needed by another instruction) and control hazards (branching).
Instruction scheduling is typically done on a single basic block. In order to determine whether rearranging the block's instructions in a certain way preserves the behavior of that block, we need the concept of a data dependency. There are three types of dependencies, which also happen to be the three data hazards:
Technically, there is a fourth type, Read after Read (RAR or "Input"): Both instructions read the same location. Input dependence does not constrain the execution order of two statements, but it is useful in scalar replacement of array elements.
To make sure we respect the three types of dependencies, we construct a dependency graph, which is a directed graph where each vertex is an instruction and there is an edge from I1 to I2 if I1 must come before I2 due to a dependency. If loop-carried dependencies are left out, the dependency graph is a directed acyclic graph. Then, any topological sort of this graph is a valid instruction schedule. The edges of the graph are usually labelled with the latency of the dependence. This is the number of clock cycles that needs to elapse before the pipeline can proceed with the target instruction without stalling.
The simplest algorithm to find a topological sort is frequently used and is known as list scheduling. Conceptually, it repeatedly selects a source of the dependency graph, appends it to the current instruction schedule and removes it from the graph. This may cause other vertices to become sources, which will then also be considered for scheduling. The algorithm terminates when the graph is empty.
To arrive at a good schedule, stalls should be prevented. This is determined by the choice of the next instruction to be scheduled. A number of heuristics are in common use:
Instruction scheduling may be done either before or after register allocation, or both before and after it. The advantage of doing it before register allocation is that this results in maximum parallelism. The disadvantage of doing it before register allocation is that this can result in the register allocator needing to use a number of registers exceeding those available. This will cause spill/fill code to be introduced, which will reduce the performance of the section of code in question.
If the architecture being scheduled has instruction sequences that have potentially illegal combinations (due to a lack of instruction interlocks), the instructions must be scheduled after register allocation. This second scheduling pass will also improve the placement of the spill/fill code.
If scheduling is only done after register allocation, then there will be false dependencies introduced by the register allocation that will limit the amount of instruction motion possible by the scheduler.
There are several types of instruction scheduling:
The GNU Compiler Collection is one compiler known to perform instruction scheduling, using the -march (both instruction set and scheduling) or -mtune (only scheduling) flags. It uses descriptions of instruction latencies and of which instructions can run in parallel (or equivalently, which "port" each instruction uses) for each microarchitecture to perform the task. This feature is available to almost all architectures that GCC supports.[2]
Until version 12.0.0, the instruction scheduling in LLVM/Clang could only accept a -march (called target-cpu in LLVM parlance) switch for both instruction set and scheduling. Version 12 adds support for -mtune (tune-cpu) for x86 only.[3]
Sources of information on latency and port usage include:
LLVM's llvm-exegesis should be usable on all machines, especially to gather information on non-x86 ones.[6]
https://en.wikipedia.org/wiki/Superblock_scheduling
Memory management (also dynamic memory management, dynamic storage allocation, or dynamic memory allocation) is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free them for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.[1]
Several methods have been devised that increase the effectiveness of memory management. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the size of the virtual address space beyond the available amount of RAM using paging or swapping to secondary storage. The quality of the virtual memory manager can have an extensive effect on overall system performance. The system lets a computer appear to have more memory available than is physically present, thereby allowing multiple processes to share it.
In some operating systems, e.g. Burroughs/Unisys MCP[2] and OS/360 and successors,[3] memory is managed by the operating system.[note 1] In other operating systems, e.g. Unix-like operating systems, memory is managed at the application level.
Memory management within an address space is generally categorized as either manual memory management or automatic memory management.
The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool[note 2] of memory called the heap[note 3] or free store. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations.
In the C language, the function which allocates memory from the heap is called malloc and the function which takes previously allocated memory and marks it as "free" (to be used by future allocations) is called free.[note 4]
Several issues complicate the implementation, such as external fragmentation, which arises when there are many small gaps between allocated memory blocks, rendering them unusable for an allocation request. The allocator's metadata can also inflate the size of (individually) small allocations. This is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever "lost" (i.e. that there are no "memory leaks").
The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators. The lowest average instruction path length required to allocate a single memory slot was 52 (as measured with an instruction-level profiler on a variety of software).[1]
Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference. The specific algorithm used to organize the memory area and allocate and deallocate chunks is interlinked with the kernel, and may use any of the following methods:
Fixed-size blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size). This works well for simple embedded systems where no large objects need to be allocated, but suffers from fragmentation, especially with long memory addresses. However, due to the significantly reduced overhead, this method can substantially improve performance for objects that need frequent allocation and deallocation, and so it is often used in video games.
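A fixed-size pool can be sketched in a few lines. The following illustrative Python class (class and method names are invented for the example) carves a byte buffer into equal blocks and hands out offsets from a free list in constant time:

```python
# Illustrative fixed-size block ("pool") allocator.
# A bytearray is carved into equal blocks; the indices of free blocks
# are kept on a free list, so alloc and free are both O(1).

class FixedPool:
    def __init__(self, block_size, block_count):
        self.block_size = block_size
        self.memory = bytearray(block_size * block_count)
        self.free_list = list(range(block_count))  # indices of free blocks

    def alloc(self):
        if not self.free_list:
            raise MemoryError("pool exhausted")
        i = self.free_list.pop()      # take any free block
        return i * self.block_size    # offset into self.memory

    def free(self, offset):
        self.free_list.append(offset // self.block_size)
```

Because every block has the same size, there is no search for a "best fit" and no external fragmentation within the pool, which is exactly the property that makes this scheme attractive for frequently allocated game objects.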
In this system, memory is allocated from several pools of memory instead of just one, where each pool represents blocks of memory of a certain power of two in size, or blocks of some other convenient size progression. All blocks of a particular size are kept in a sorted linked list or tree, and all new blocks that are formed during allocation are added to their respective memory pools for later use. If a smaller size is requested than is available, the smallest available size is selected and split. One of the resulting parts is selected, and the process repeats until the request is complete. When a block is allocated, the allocator will start with the smallest sufficiently large block to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy. If they are both free, they are combined and placed in the correspondingly larger-sized buddy-block list.
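The split-on-allocate and merge-with-buddy behavior can be sketched as below. This is an illustrative Python simulation of a binary buddy system (names and the set-based bookkeeping are invented for the example); a real allocator would work on raw memory and bitmaps:

```python
# Illustrative buddy allocator over a 2**max_order-byte arena.
# free[k] holds offsets of free blocks of size 2**k; freeing a block
# merges it with its buddy (offset XOR size) whenever the buddy is free.

class Buddy:
    def __init__(self, max_order):
        self.max_order = max_order
        self.free = {k: set() for k in range(max_order + 1)}
        self.free[max_order].add(0)   # one big initial block

    def alloc(self, order):
        k = order
        while k <= self.max_order and not self.free[k]:
            k += 1                    # smallest sufficiently large block
        if k > self.max_order:
            raise MemoryError("out of memory")
        off = self.free[k].pop()
        while k > order:              # split down to the requested size
            k -= 1
            self.free[k].add(off + (1 << k))   # put the upper half back
        return off

    def dealloc(self, off, order):
        while order < self.max_order:
            buddy = off ^ (1 << order)
            if buddy not in self.free[order]:
                break
            self.free[order].remove(buddy)     # coalesce with free buddy
            off = min(off, buddy)
            order += 1
        self.free[order].add(off)
```

The `off ^ (1 << order)` expression is the defining trick: a block and its buddy differ only in the bit corresponding to their size, so the buddy's address is computed rather than stored.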
This memory allocation mechanism preallocates memory chunks suitable to fit objects of a certain type or size.[5] These chunks are called caches and the allocator only has to keep track of a list of free cache slots. Constructing an object will use any one of the free cache slots and destructing an object will add a slot back to the free cache slot list. This technique alleviates memory fragmentation and is efficient, as there is no need to search for a suitable portion of memory: any open slot will suffice.
Many Unix-like systems as well as Microsoft Windows implement a function called alloca for dynamically allocating stack memory in a way similar to the heap-based malloc. A compiler typically translates it to inlined instructions manipulating the stack pointer.[6] Although there is no need to manually free memory allocated this way, as it is automatically freed when the function that called alloca returns, there is a risk of overflow. And since alloca is an ad hoc expansion seen in many systems but never in POSIX or the C standard, its behavior in case of a stack overflow is undefined.
A safer version of alloca called _malloca, which reports errors, exists on Microsoft Windows. It requires the use of _freea.[7] gnulib provides an equivalent interface, albeit instead of throwing an SEH exception on overflow, it delegates to malloc when an overlarge size is detected.[8] A similar feature can be emulated using manual accounting and size-checking, such as in the uses of alloca_account in glibc.[9]
The proper management of memory in an application is a difficult problem, and several different strategies for handling memory management have been devised.
In many programming language implementations, the runtime environment for the program automatically allocates memory in the call stack for non-static local variables of a subroutine, called automatic variables, when the subroutine is called, and automatically releases that memory when the subroutine is exited. Special declarations may allow local variables to retain values between invocations of the procedure, or may allow local variables to be accessed by other subroutines. The automatic allocation of local variables makes recursion possible, to a depth limited by available memory.
Garbage collection is a strategy for automatically detecting memory allocated to objects that are no longer usable in a program, and returning that allocated memory to a pool of free memory locations. This method is in contrast to "manual" memory management where a programmer explicitly codes memory requests and memory releases in the program. While automatic garbage collection has the advantages of reducing programmer workload and preventing certain kinds of memory allocation bugs, garbage collection does require memory resources of its own, and can compete with the application program for processor time.
Reference counting is a strategy for detecting that memory is no longer usable by a program by maintaining a counter for how many independent pointers point to the memory. Whenever a new pointer points to a piece of memory, the programmer is supposed to increase the counter. When the pointer changes where it points, or when the pointer is no longer pointing to any area or has itself been freed, the counter should decrease. When the counter drops to zero, the memory should be considered unused and freed. Some reference counting systems require programmer involvement and some are implemented automatically by the compiler. A disadvantage of reference counting is that circular references can develop, which cause a memory leak to occur. This can be mitigated by either adding the concept of a "weak reference" (a reference that does not participate in reference counting, but is notified when the area it is pointing to is no longer valid) or by combining reference counting and garbage collection together.
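The retain/release discipline described above can be sketched as a small simulation. The class and block names below are invented for the example; `retain` and `release` stand for the counter updates a programmer or compiler would insert at every pointer copy and pointer drop:

```python
# Illustrative manual reference counting for blocks identified by name.

class RefCounted:
    def __init__(self):
        self.counts = {}               # block -> number of live pointers
        self.freed = []                # blocks that have been reclaimed

    def alloc(self, name):
        self.counts[name] = 1          # the creating pointer
        return name

    def retain(self, name):
        self.counts[name] += 1         # a new pointer now refers to it

    def release(self, name):
        self.counts[name] -= 1         # one pointer stopped referring
        if self.counts[name] == 0:
            self.freed.append(name)    # last reference gone: reclaim
            del self.counts[name]

heap = RefCounted()
obj = heap.alloc("node")
heap.retain(obj)       # second pointer created
heap.release(obj)      # first pointer dropped: still reachable
heap.release(obj)      # second pointer dropped: freed now
```

Note that nothing in this scheme detects two blocks that retain each other, which is exactly the circular-reference leak mentioned above.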
A memory pool is a technique of automatically deallocating memory based on the state of the application, such as the lifecycle of a request or transaction. The idea is that many applications execute large chunks of code which may generate memory allocations, but that there is a point in execution where all of those chunks are known to be no longer valid. For example, in a web service, after each request the web service no longer needs any of the memory allocated during the execution of the request. Therefore, rather than keeping track of whether or not memory is currently being referenced, the memory is allocated according to the request or lifecycle stage with which it is associated. When that request or stage has passed, all associated memory is deallocated simultaneously.
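The request-scoped deallocation described above can be sketched as follows. This illustrative Python class (names invented for the example) tags every allocation with the request it belongs to, so ending the request frees everything at once:

```python
# Illustrative region ("memory pool") deallocation: allocations are
# owned by the request, and ending the request releases all of them
# in one step, with no per-object tracking of liveness.

class RequestArena:
    def __init__(self):
        self.live = []           # allocations owned by this request

    def alloc(self, size):
        buf = bytearray(size)    # stand-in for a real memory block
        self.live.append(buf)
        return buf

    def end_request(self):
        n = len(self.live)
        self.live.clear()        # free everything in one step
        return n                 # number of blocks released
```

The design choice is that liveness is tied to a lifecycle stage rather than to individual references, which is why no per-allocation bookkeeping is needed.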
Virtual memory is a method of decoupling the memory organization from the physical hardware. The applications operate on memory via virtual addresses. Each attempt by the application to access a particular virtual memory address results in the virtual memory address being translated to an actual physical address.[10] In this way the addition of virtual memory enables granular control over memory systems and methods of access.
In virtual memory systems the operating system limits how a process can access the memory. This feature, called memory protection, can be used to disallow a process to read or write to memory that is not allocated to it, preventing malicious or malfunctioning code in one program from interfering with the operation of another.
Even though the memory allocated for specific processes is normally isolated, processes sometimes need to be able to share information. Shared memory is one of the fastest techniques for inter-process communication.
Memory is usually classified by access rate into primary storage and secondary storage. Memory management systems, among other operations, also handle the moving of information between these two levels of memory.
An operating system manages various resources in the computing system. The memory subsystem is the system element for managing memory. The memory subsystem combines the hardware memory resource and the MCP OS software that manages the resource.
The memory subsystem manages the physical memory and the virtual memory of the system (both part of the hardware resource). The virtual memory extends physical memory by using extra space on a peripheral device, usually disk. The memory subsystem is responsible for moving code and data between main and virtual memory in a process known as overlaying. Burroughs was the first commercial implementation of virtual memory (the technique itself was developed at the University of Manchester for the Ferranti Atlas computer) and integrated virtual memory with the system design of the B5000 from the start (in 1961), needing no external memory management unit (MMU).[11]: 48
The memory subsystem is responsible for mapping logical requests for memory blocks to physical portions of memory (segments), which are found in the list of free segments. Each allocated block is managed by means of a segment descriptor,[12] a special control word containing relevant metadata about the segment, including address, length, machine type, and the p-bit or ‘presence’ bit, which indicates whether the block is in main memory or needs to be loaded from the address given in the descriptor.
Descriptors are essential in providing memory safety and security, so that operations cannot overflow or underflow the referenced block (commonly known as buffer overflow). Descriptors themselves are protected control words that cannot be manipulated except by specific elements of the MCP OS (enabled by the UNSAFE block directive in NEWP).
Donald Knuth describes a similar system in Section 2.5 ‘Dynamic Storage Allocation’ of ‘Fundamental Algorithms’.
IBM System/360 does not support virtual memory.[note 5] Memory isolation of jobs is optionally accomplished using protection keys, assigning storage for each job a different key, 0 for the supervisor or 1–15. Memory management in OS/360 is a supervisor function. Storage is requested using the GETMAIN macro and freed using the FREEMAIN macro, which result in a call to the supervisor (SVC) to perform the operation.
In OS/360 the details vary depending on how the system is generated, e.g., for PCP, MFT, or MVT.
In OS/360 MVT, suballocation within a job's region or the shared System Queue Area (SQA) is based on subpools, areas a multiple of 2 KB in size (the size of an area protected by a protection key). Subpools are numbered 0–255.[13] Within a region subpools are assigned either the job's storage protection key or the supervisor's key, key 0. Subpools 0–127 receive the job's key. Initially only subpool zero is created, and all user storage requests are satisfied from subpool 0, unless another is specified in the memory request. Subpools 250–255 are created by memory requests by the supervisor on behalf of the job. Most of these are assigned key 0, although a few get the key of the job. Subpool numbers are also relevant in MFT, although the details are much simpler.[14] MFT uses fixed partitions redefinable by the operator instead of dynamic regions, and PCP has only a single partition.
Each subpool is mapped by a list of control blocks identifying allocated and free memory blocks within the subpool. Memory is allocated by finding a free area of sufficient size, or by allocating additional blocks in the subpool, up to the region size of the job. It is possible to free all or part of an allocated memory area.[15]
The details for OS/VS1 are similar[16] to those for MFT and MVT; the details for OS/VS2 are similar to those for MVT, except that the page size is 4 KiB. For both OS/VS1 and OS/VS2 the shared System Queue Area (SQA) is nonpageable.
In MVS the address space[17] includes an additional pageable shared area, the Common Storage Area (CSA), and two additional private areas, the nonpageable local system queue area (LSQA) and the pageable System Work Area (SWA). Also, the storage keys 0–7 are all reserved for use by privileged code.
https://en.wikipedia.org/wiki/Memory_management
In computing, Page Size Extension (PSE) refers to a feature of x86 processors that allows for pages larger than the traditional 4 KiB size. It was introduced in the original Pentium processor, but it was only publicly documented by Intel with the release of the Pentium Pro.[1] The CPUID instruction can be used to identify the availability of PSE on x86 CPUs.[2]
Imagine the following scenario: an application program requests a 1 MiB memory block. In order to fulfill this request, an operating system that supports paging and that is running on older x86 CPUs will have to allocate 256 pages of 4 KiB each. An overhead of 1 KiB of memory is required for maintaining page directories and page tables.
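The arithmetic behind this scenario is worth making explicit: 1 MiB of 4 KiB pages needs 256 page-table entries of 4 bytes each, i.e. 1 KiB of page-table overhead (ignoring the page directory, which is shared with other mappings). As a quick check:

```python
# Worked arithmetic for the 1 MiB example above.

MiB = 1024 * 1024
KiB = 1024

pages = (1 * MiB) // (4 * KiB)   # number of 4 KiB pages in 1 MiB
overhead = pages * 4             # each page-table entry is 4 bytes

assert pages == 256
assert overhead == 1 * KiB       # 1 KiB of page-table entries
```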
When accessing this 1 MiB of memory, each of the 256 page entries would be cached in the translation lookaside buffer (TLB; a cache that remembers virtual-address-to-physical-address translations for faster lookup on subsequent memory requests). Cluttering the TLB is possibly one of the largest disadvantages of having several page entries for what could have been allocated in one single memory block. If the TLB gets filled, a TLB entry has to be freed, the page directory and page tables have to be "walked" in memory, and finally the memory is accessed and the new entry is brought into the TLB. This is a severe performance penalty and was possibly the largest motivation for augmenting the x86 architecture with larger page sizes.
PSE allows for page sizes of 4 MiB to exist along with 4 KiB pages. The 1 MiB request described previously would easily be fulfilled with a single 4 MiB page, and it would require only one TLB entry. However, the disadvantage of using larger page sizes is internal fragmentation.
In traditional 32-bit protected mode, x86 processors use a two-level page translation scheme, in which the control register CR3 points to a single 4 KiB-long page directory divided into 1024 four-byte entries that point to 4 KiB-long page tables, similarly consisting of 1024 four-byte entries pointing to 4 KiB-long pages.
Enabling PSE (by setting bit 4, PSE, of the system register CR4) changes this scheme. The entries in the page directory have an additional flag, in bit 7, named PS (for page size). This flag was ignored without PSE, but now a page-directory entry with PS set to 1 does not point to a page table, but to a single large 4 MiB page. A page-directory entry with PS set to 0 behaves as without PSE.
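The two lookup paths can be sketched in software. The following illustrative Python function models the bit layout described above (the page-directory and page-table contents in the test are invented values; real translation is of course done by the MMU):

```python
# Illustrative 32-bit address translation with PSE enabled. A page-
# directory entry (PDE) either points to a page table (PS = 0) or maps
# a single 4 MiB page directly (PS = 1).

PS = 1 << 7   # "page size" flag, bit 7 of the PDE

def translate(vaddr, page_dir, page_tables):
    pde = page_dir[vaddr >> 22]            # top 10 bits index the directory
    if pde & PS:                           # 4 MiB page: one-level lookup
        return (pde & 0xFFC00000) | (vaddr & 0x003FFFFF)
    table = page_tables[pde & 0xFFFFF000]  # PS = 0: walk the page table
    pte = table[(vaddr >> 12) & 0x3FF]     # middle 10 bits index the table
    return (pte & 0xFFFFF000) | (vaddr & 0xFFF)
```

Note how the 4 MiB path keeps 22 offset bits from the virtual address and needs no page-table access at all, which is exactly why it consumes only one TLB entry.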
If the newer PSE-36 capability is available on the CPU, as checked using the CPUID instruction, then 4 more bits, in addition to the normal 10 bits, are used inside a page-directory entry pointing to a large page. This allows a large page to be located in a 36-bit address space.
If Physical Address Extension (PAE) is used, the size of large pages is reduced from 4 MiB to 2 MiB, and PSE is always enabled, regardless of the PSE bit in CR4.
https://en.wikipedia.org/wiki/Page_Size_Extension
In computing, a virtual address space (VAS) or address space is the set of ranges of virtual addresses that an operating system makes available to a process.[1] The range of virtual addresses usually starts at a low address and can extend to the highest address allowed by the computer's instruction set architecture and supported by the operating system's pointer size implementation, which can be 4 bytes for 32-bit or 8 bytes for 64-bit OS versions. This provides several benefits, one of which is security through process isolation, assuming each process is given a separate address space.
When a new application on a 32-bit OS is executed, the process has a 4 GiB VAS: each of the memory addresses (from 0 to 2³² − 1) in that space can have a single byte as a value. Initially, none of them have values ('-' represents no value). Using or setting values in such a VAS would cause a memory exception.
Then the application's executable file is mapped into the VAS. Addresses in the process VAS are mapped to bytes in the exe file. The OS manages the mapping:
The v's are values from bytes in the mapped file. Then, required DLL files are mapped (this includes custom libraries as well as system ones such as kernel32.dll and user32.dll):
The process then starts executing bytes in the EXE file. However, the only way the process can use or set '-' values in its VAS is to ask the OS to map them to bytes from a file. A common way to use VAS memory in this way is to map it to the page file. The page file is a single file, but multiple distinct sets of contiguous bytes can be mapped into a VAS:
And different parts of the page file can map into the VAS of different processes:
On 32-bit Microsoft Windows, by default, only 2 GiB are made available to processes for their own use.[2] The other 2 GiB are used by the operating system. On later 32-bit editions of Microsoft Windows, it is possible to extend the user-mode virtual address space to 3 GiB, while only 1 GiB is left for kernel-mode virtual address space, by marking the programs as IMAGE_FILE_LARGE_ADDRESS_AWARE and enabling the /3GB switch in the boot.ini file.[3][4]
On 64-bit Microsoft Windows, in a process running an executable that was linked with /LARGEADDRESSAWARE:NO, the operating system artificially limits the user-mode portion of the process's virtual address space to 2 GiB. This applies to both 32- and 64-bit executables.[5][6] Processes running executables that were linked with the /LARGEADDRESSAWARE:YES option, which is the default for 64-bit Visual Studio 2010 and later,[7] have access to more than 2 GiB of virtual address space: up to 4 GiB for 32-bit executables, up to 8 TiB for 64-bit executables in Windows through Windows 8, and up to 128 TiB for 64-bit executables in Windows 8.1 and later.[4][8]
Allocating memory via C's malloc establishes the page file as the backing store for any new virtual address space. However, a process can also explicitly map file bytes.
For x86 CPUs, 32-bit Linux allows splitting the user and kernel address ranges in different ways: 3G/1G user/kernel (the default), 1G/3G user/kernel, or 2G/2G user/kernel.[9]
https://en.wikipedia.org/wiki/Virtual_address_space
This article catalogs comparable aspects of notable operating system shells.
Background execution allows a shell to run a command without user interaction in the terminal, freeing the command line for additional work with the shell. POSIX shells and other Unix shells allow background execution by appending the & character to the end of a command.
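What the trailing & does can be illustrated outside a shell with Python's subprocess module (the child command here is an invented stand-in): Popen starts the child and returns immediately, leaving the "shell" free to continue, much as `sleep 5 &` would in a POSIX shell.

```python
# Illustrative analogue of shell background execution: the parent
# regains control immediately after launching the child process.

import subprocess
import sys

child = subprocess.Popen(
    [sys.executable, "-c", "print('background work done')"])
print("prompt is free while the child runs")  # parent is not blocked
child.wait()   # a shell instead reports completion asynchronously
```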
Completion features assist the user in typing commands at the command line, by looking for and suggesting matching words for incomplete ones. Completion is generally requested by pressing the completion key (often the Tab ↹ key).
Command name completion is the completion of the name of a command. In most shells, a command can be a program in the command path (usually $PATH), a builtin command, a function or an alias.
Path completion is the completion of the path to a file, relative or absolute.
Wildcard completion is a generalization of path completion, where an expression matches any number of files, using any supported syntax for file matching.
Variable completion is the completion of the name of a variable (environment variable or shell variable).
Bash, zsh, and fish have completion for all variable names. PowerShell has completions for environment variable names, shell variable names and, from within user-defined functions, parameter names.
Command argument completion is the completion of a specific command's arguments. There are two types of arguments, named and positional: named arguments, often called options, are identified by their name or letter preceding a value, whereas positional arguments consist only of the value. Some shells allow completion of argument names, but few support completing values.
Bash, zsh and fish offer parameter name completion through a definition external to the command, distributed in a separate completion definition file. For command parameter name/value completions, these shells assume path/filename completion if no completion is defined for the command. Completion can be set up to dynamically suggest completions by calling a shell function.[43] The fish shell additionally supports parsing of man pages to extract parameter information that can be used to improve completions/suggestions. In PowerShell, all types of commands (cmdlets, functions, script files) inherently expose data about the names, types and valid value ranges/lists for each argument. This metadata is used by PowerShell to automatically support argument name and value completion for built-in commands/functions, user-defined commands/functions as well as for script files. Individual cmdlets can also define dynamic completion of argument values where the completion values are computed dynamically on the running system.
Users of a shell may find themselves typing something similar to what they have typed before. Support for command history means that a user can recall a previous command into the command-line editor and edit it before issuing the potentially modified command.
Shells that support completion may also be able to directly complete the command from the command history given a partial/initial part of the previous command.
Most modern shells support command history. Shells which support command history in general also support completion from history rather than just recalling commands from the history. In addition to the plain command text, PowerShell also records execution start- and end time and execution status in the command history.
Mandatory arguments/parameters are arguments/parameters which must be assigned a value upon invocation of the command, function or script file. A shell that can determine ahead of invocation that there are missing mandatory values, can assist the interactive user by prompting for those values instead of letting the command fail. Having the shell prompt for missing values will allow the author of a script, command or function to mark a parameter as mandatory instead of creating script code to either prompt for the missing values (after determining that it is being run interactively) or fail with a message.
Shells featuring automatic suggestions display optional command-line completions as the user types. The PowerShell and fish shells natively support this feature; pressing the Tab ↹ key inserts the completion.
Implementations of this feature can differ between shells; for example, PowerShell[44] and zsh[45] use an external module to provide completions, and fish derives its completions from the user's command history.[46]
Shells may record a history of directories the user has been in and allow for fast switching to any recorded location. This is referred to as a "directory stack". The concept had been realized as early as 1978[47] in the release of the C shell (csh).
Command line interpreters 4DOS and its graphical successor Take Command Console also feature a directory stack.
A directory name can be used directly as a command which implicitly changes the current location to the directory.
This must be distinguished from an unrelated load drive feature supported by Concurrent DOS, Multiuser DOS, System Manager and REAL/32, where the drive letter L: will be implicitly updated to point to the load path of a loaded application, thereby allowing applications to refer to files residing in their load directory under a standardized drive letter instead of under an absolute path.[48]
When a command line does not match a command or arguments directly, spell checking can automatically correct common typing mistakes (such as case sensitivity or missing letters). There are two approaches to this: the shell can either suggest probable corrections upon command invocation, or this can happen earlier as part of a completion or autosuggestion.
The tcsh and zsh shells feature optional spell checking/correction, upon command invocation.
Fish does the autocorrection upon completion and autosuggestion. The feature therefore does not get in the way when typing out the whole command and pressing enter, whereas extensive use of the tab and right-arrow keys makes the shell mostly case insensitive.
The PSReadLine[31] PowerShell module (which is shipped with version 5.0) provides the option to specify a CommandValidationHandler ScriptBlock which runs before submitting the command. This allows for custom correcting of commonly mistyped commands, and verification before actually running the command.
A shell script (or job) can report progress of long running tasks to the interactive user.
Unix/Linux systems may offer other tools that support using progress indicators from scripts, or as standalone commands, such as the program "pv".[49] These are not integrated features of the shells, however.
JP Software command-line processors provide user-configurable colorization of file and directory names in directory listings based on their file extension and/or attributes through an optionally defined %COLORDIR% environment variable.
For the Unix/Linux shells, this is a feature of the ls command and the terminal.
The command line processors in DOS Plus, Multiuser DOS, REAL/32 and in all versions of DR-DOS support a number of optional environment variables to define escape sequences allowing control of text highlighting, reversion or colorization for display or print purposes in commands like TYPE. All mentioned command line processors support %$ON% and %$OFF%. If defined, these sequences will be emitted before and after filenames. A typical sequence for %$ON% would be \033[1m in conjunction with ANSI.SYS, \033p for an ASCII terminal, or \016 for an IBM or ESC/P printer. Likewise, typical sequences for %$OFF% would be \033[0m, \033q, and \024, respectively. The variables %$HEADER% and %$FOOTER% are only supported by COMMAND.COM in DR-DOS 7.02 and higher to define sequences emitted before and after text blocks in order to control text highlighting, pagination or other formatting options.
For the Unix/Linux shells, this is a feature of the terminal.
A defining feature of the fish shell is built-in syntax highlighting. As the user types, text is colored to represent whether the input is a valid command or not (the executable exists and the user has permission to run it), and valid file paths are underlined.[50]
An independent project offers syntax highlighting as an add-on to the Z shell (zsh).[51] This is not part of the shell, however.
PowerShell provides customizable syntax highlighting on the command line through the PSReadLine[31] module. This module can be used with PowerShell v3.0+, and is bundled with v5.0 onwards. It is loaded by default in the command line host "powershell.exe" since v5.0.[52]
Take Command Console (TCC) offers syntax highlighting in the integrated environment.
4DOS, 4OS2, 4NT / Take Command Console and PowerShell (in PowerShell ISE) look up context-sensitive help information when F1 is pressed.
Zsh provides various forms of configurable context-sensitive help as part of its run-help widget, _complete_help command, or in the completion of options for some commands.
The fish shell provides brief descriptions of a command's flags during tab completion.
In anticipation of what a given running application may accept as keyboard input, the user of the shell instructs the shell to generate a sequence of simulated keystrokes, which the application will interpret as keyboard input from an interactive user. By sending keystroke sequences the user may be able to direct the application to perform actions that would be impossible to achieve through input redirection, or that would otherwise require an interactive user: for example, if an application acts on keystrokes that cannot be redirected, distinguishes between normal and extended keys, flushes the queue before accepting new input on startup or under certain conditions, or does not read through standard input at all. Keystroke stacking typically also provides means to control the timing of simulated keys being sent, or to delay new keys until the queue has been flushed. It also allows simulating keys which are not present on a keyboard (because the corresponding keys do not physically exist or because a different keyboard layout is being used) and which would therefore be impossible for a user to type.
Some shell scripts need to query the user for sensitive information such as passwords, private digital keys, PIN codes, or other confidential information. Sensitive input should not be echoed back to the screen/input device where it could be gleaned by unauthorized persons. Plaintext memory representation of sensitive information should also be avoided, as it could allow the information to be compromised, e.g., through swap files, core dumps, etc.[68]
The shells bash, zsh, and PowerShell offer this as a specific feature.[69][70] Shells which do not offer it as a specific feature may still be able to turn off echoing through some other means; shells executing on a Unix/Linux operating system can use the stty external command to switch echoing of input characters off and on.[71] In addition to not echoing back the characters, PowerShell's -AsSecureString option also encrypts the input character-by-character during the input process, ensuring that the string is never represented unencrypted in memory, where it could be compromised through memory dumps, scanning, transcription, etc.
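As a minimal sketch of non-echoed input (assuming bash; the flag names are bash-specific, and the stty fallback only applies on a terminal):

```shell
#!/usr/bin/env bash
# Read a secret without echoing it back to the terminal.

read_secret() {
  # -r: raw input (no backslash escapes), -s: silent (no echo)
  IFS= read -rs secret
}

read_secret_stty() {
  # Portable fallback for shells without a silent-read flag:
  # disable echo via the external stty command, then restore
  # the saved terminal settings (only meaningful on a tty).
  old=$(stty -g)
  stty -echo
  IFS= read -r secret
  stty "$old"
}
```

Note that this only addresses echoing; unlike PowerShell's -AsSecureString, the value still sits unencrypted in shell memory.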
Some operating systems define an execute permission which can be granted to users/groups for a file when the file system itself supports it.
On Unix systems, the execute permission controls access to invoking the file as a program, and applies both to executables and scripts.
As the permission is enforced by the program loader, neither the invoking program nor the invoked program needs to enforce the execute permission itself – this applies equally to shells and other interpreter programs.
The behaviour is mandated by the POSIX C library that is used for interfacing with the kernel. POSIX specifies that the exec family of functions shall fail with EACCES (permission denied) if the file denies execution permission (see execve – System Interfaces Reference, The Single UNIX Specification, Version 5 from The Open Group).
The execute permission only applies when the script is run directly. If a script is invoked as an argument to the interpreting shell, it will be executed regardless of whether the user holds the execute permission for that script.
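The distinction can be demonstrated in a few lines of shell (a sketch; the temporary script and its contents are illustrative):

```shell
#!/usr/bin/env bash
# The execute bit gates direct invocation only; passing the script
# to an interpreter bypasses the check entirely.
script=$(mktemp)
printf '%s\n' '#!/bin/sh' 'echo hello' > "$script"

chmod -x "$script"
"$script" 2>/dev/null || echo "direct run denied"  # loader enforces the bit
sh "$script"                                       # interpreter ignores it

chmod +x "$script"
"$script"                                          # direct invocation now works
rm -f "$script"
```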
Although Windows also specifies an execute permission, none of the Windows-specific shells block script execution if the permission has not been granted.
Several shells can be started, or be configured to start, in a mode where only a limited set of commands and actions is available to the user. While not a security boundary (the command accessing a resource is blocked rather than the resource), this is nevertheless typically used to restrict users' actions before logging in.
A restricted mode is part of the POSIX specification for shells, and most of the Linux/Unix shells support such a mode, where several of the built-in commands are disabled and only external commands from a certain directory can be invoked.[72][73]
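As a concrete illustration, bash enters such a mode when invoked as rbash or with -r; the specific restrictions sketched here are bash's, and other shells differ in detail:

```shell
#!/usr/bin/env bash
# In restricted mode, bash forbids (among other things) changing
# directory, running commands whose names contain slashes, and
# redirecting output.
bash -r -c 'cd /tmp'            # fails: cd is disabled
bash -r -c '/bin/ls'            # fails: command names may not contain '/'
bash -r -c 'echo hi > /tmp/f'   # fails: output redirection is disabled
bash -r -c 'echo still allowed' # harmless builtins still work
```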
PowerShell supports restricted modes through session configuration files or session configurations. A session configuration file can define visible (available) cmdlets, aliases, functions, path providers, and more.[74]
Scripts that invoke other scripts can be a security risk as they can potentially execute foreign code in the context of the user who launched the initial script. Scripts will usually be designed to exclusively include scripts from known safe locations; but in some instances, e.g. when offering the user a way to configure the environment or loading localized messages, the script may need to include other scripts/files.[75]One way to address this risk is for the shell to offer a safe subset of commands which can be executed by an included script.
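One way such safe inclusion can be approximated in a shell script is to resolve the requested path and refuse anything outside a trusted directory. This is a hypothetical sketch (the safe_source helper and its trusted-directory convention are illustrative, not a standard shell feature):

```shell
#!/usr/bin/env bash
# Hypothetical helper: source a file only if it resolves to a path
# inside an explicitly trusted directory.
safe_source() {
  trusted=$1 file=$2
  # Resolve symlinks and relative components before checking.
  target=$(realpath -- "$file") || return 1
  case "$target" in
    "$trusted"/*) . "$target" ;;            # inside the trusted tree
    *) printf 'refusing to source %s\n' "$file" >&2
       return 1 ;;                          # anything else is rejected
  esac
}
```

A fuller implementation might additionally check the ownership and write permissions of the included file before sourcing it.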
|
https://en.wikipedia.org/wiki/Comparison_of_command_shells
|
Human–computer interaction (HCI) is the process through which people operate and engage with computer systems. Research in HCI covers the design and use of computer technology, focusing on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. These include visual, auditory, and tactile (haptic) feedback systems, which serve as channels for interaction in both traditional interfaces and mobile computing contexts.[1] A device that allows interaction between a human being and a computer is known as a "human–computer interface".
As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study. The term was popularized by Stuart K. Card, Allen Newell, and Thomas P. Moran in their 1983 book, The Psychology of Human–Computer Interaction; the first known use was in 1975 by Carlisle.[2] The term is intended to convey that, unlike other tools with specific and limited uses, computers have many uses which often involve an open-ended dialogue between the user and the computer. The notion of dialogue likens human–computer interaction to human-to-human interaction: an analogy that is crucial to theoretical considerations in the field.[3][4]
Humans interact with computers in many ways, and the interface between the two is crucial to facilitating this interaction. HCI is also sometimes termed human–machine interaction (HMI), man–machine interaction (MMI), or computer–human interaction (CHI). Desktop applications, web browsers, handheld computers, and computer kiosks make use of the graphical user interfaces (GUIs) prevalent today.[5] Voice user interfaces (VUIs) are used for speech recognition and synthesizing systems, and emerging multi-modal interfaces allow humans to engage with embodied character agents in a way that cannot be achieved with other interface paradigms.
The Association for Computing Machinery (ACM) defines human–computer interaction as "a discipline that is concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them".[5] A key aspect of HCI is user satisfaction, also referred to as end-user computing satisfaction. The definition goes on to say:
"Because human–computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant. And, of course, engineering and design methods are relevant."[5] HCI ensures that humans can safely and efficiently interact with complex technologies in fields like aviation and healthcare.[6]
Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success.
Poorly designed human–machine interfaces can lead to many unexpected problems. A classic example is the Three Mile Island accident, a nuclear meltdown, where investigations concluded that the design of the human–machine interface was at least partly responsible for the disaster.[7][8][9] Similarly, some accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instruments or throttle quadrant layouts: even though the new designs were proposed to be superior in basic human–machine interaction, pilots had already internalized the "standard" layout, and thus the conceptually good idea had unintended results.[10]
A human–computer interface can be described as the interface of communication between a human user and a computer. The flow of information between the human and computer is defined as theloop of interaction. The loop of interaction has several aspects to it, including:
Human–computer interaction involves the ways in which humans make, or do not make, use of computational artifacts, systems, and infrastructures. Much of the research in this field seeks to improve human–computer interaction by improving the usability of computer interfaces.[11] How usability is to be precisely understood, how it relates to other social and cultural values, and when it may or may not be a desirable property of computer interfaces are increasingly debated.[12][13]
Much of the research in the field of human–computer interaction takes an interest in:
Visions of what researchers in the field seek to achieve may vary. When pursuing a cognitivist perspective, researchers of HCI may seek to align computer interfaces with the mental model that humans have of their activities. When pursuing a post-cognitivist perspective, researchers of HCI may seek to align computer interfaces with existing social practices or existing sociocultural values.
Researchers in HCI are interested in developing design methodologies, experimenting with devices, prototyping software, and hardware systems, exploring interaction paradigms, and developing models and theories of interaction.
The following experimental design principles are considered when evaluating a current user interface or designing a new one:
The iterative design process is repeated until a sensible, user-friendly interface is created.[16]
Various strategies for human–computer interaction design have developed since the field's conception in the 1980s. Most design methodologies stem from a model of how users, designers, and technical systems interrelate. Early methodologies treated users' cognitive processes as predictable and quantifiable, and encouraged design practitioners to consult cognitive-science findings in areas such as memory and attention when structuring user interfaces. Modern models, in general, center on continuous feedback and dialogue between users, designers, and engineers, and push for technical systems to be built around the kinds of experiences users want to have, rather than wrapping user experience around a finished system.
Topics in human–computer interaction include the following:
Human-AI Interaction explores how users engage with artificial intelligence systems, particularly focusing on usability, trust, and interpretability. The research mainly aims to design AI-driven interfaces that are transparent, explainable, and ethically responsible.[20]Studies highlight the importance of explainable AI (XAI) and human-in-the-loop decision-making, ensuring that AI outputs are understandable and trustworthy.[21]Researchers also develop design guidelines for human-AI interaction, improving the collaboration between users and AI systems.[22]
Augmented reality (AR) integrates digital content with the real world. It enhances human perception and interaction with physical environments. AR research mainly focuses on adaptive user interfaces, multimodal input techniques, and real-world object interaction.[23]Advances in wearable AR technology improve usability, enabling more natural interaction with AR applications.[24]
Virtual reality (VR) creates a fully immersive digital environment, allowing users to interact with computer-generated worlds through sensory input devices. Research focuses on user presence, interaction techniques, and cognitive effects of immersion.[25]A key area of study is the impact of VR on cognitive load and user adaptability, influencing how users process information in virtual spaces.[26]
Mixed reality (MR) blends elements of both augmented reality (AR) and virtual reality (VR). It enables real-time interaction with both physical and digital objects. HCI research in MR concentrates on spatial computing, real-world object interaction, and context-aware adaptive interfaces.[27]MR technologies are increasingly applied in education, training simulations, and healthcare, enhancing learning outcomes and user engagement.[28]
Extended reality (XR) is an umbrella term encompassing AR, VR, and MR, offering a continuum between real and virtual environments. Research investigates user adaptability, interaction paradigms, and ethical implications of immersive technologies.[29]Recent studies highlight how AI-driven personalization and adaptive interfaces improve the usability of XR applications.[30]
Accessibility in human–computer interaction (HCI) focuses on designing inclusive digital experiences, ensuring usability for people with diverse abilities. Research in this area is related to assistive technologies, adaptive interfaces, and universal design principles.[31]Studies indicate that accessible design benefits not only people with disabilities but also enhances usability for all users.[32]
Social computing concerns interactive and collaborative behavior between technology and people. In recent years, there has been an explosion of social-science research focusing on interactions as the unit of analysis, as there are many social computing technologies, including blogs, email, social networking, and instant messaging. Much of this research draws from psychology, social psychology, and sociology. For example, one study found that people expected a computer with a man's name to cost more than a machine with a woman's name.[33] Other research finds that individuals perceive their interactions with computers more negatively than those with humans, despite behaving the same way towards these machines.[34]
In human–computer interactions, a semantic gap usually exists between human and computer understandings of each other's behavior. Ontology, as a formal representation of domain-specific knowledge, can be used to address this problem by resolving the semantic ambiguities between the two parties.[35]
In the interaction of humans and computers, research has studied how computers can detect, process, and react to human emotions in order to develop emotionally intelligent information systems. Researchers have suggested several "affect-detection channels". The potential of recognizing human emotions in an automated and digital fashion lies in improvements to the effectiveness of human–computer interaction. The influence of emotions in human–computer interaction has been studied in fields such as financial decision-making using ECG, and organizational knowledge sharing using eye-tracking and face readers as affect-detection channels. In these fields, it has been shown that affect-detection channels have the potential to detect human emotions, and that information systems can incorporate the data obtained from affect-detection channels to improve decision models.
A brain–computer interface (BCI) is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.[36]
Security interactions are the study of interaction between humans and computers specifically as it pertains to information security. Their aim, in plain terms, is to improve the usability of security features in end-user applications.
Unlike HCI, which has roots in the early days of Xerox PARC during the 1970s, HCISec is a nascent field of study by comparison. Interest in this topic tracks with that of Internet security, which has become an area of broad public concern only in recent years.
When security features exhibit poor usability, the following are common reasons:
Traditionally, computer use was modeled as a human–computer dyad in which the two were connected by a narrow explicit communication channel, such as text-based terminals. Much work has been done to make the interaction between a computing system and a human more reflective of the multidimensional nature of everyday communication. Because of potential issues, human–computer interaction shifted focus beyond the interface to respond to observations as articulated by Douglas Engelbart: "If ease of use were the only valid criterion, people would stick to tricycles and never try bicycles."[37]
How humans interact with computers continues to evolve rapidly. Human–computer interaction is affected by developments in computing. These forces include:
As of 2010, the future of HCI was expected[38] to include the following characteristics:
One of the main conferences for new research in human–computer interaction is the annually held Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems, usually referred to by its short name, CHI (pronounced kai or khai). CHI is organized by the ACM Special Interest Group on Computer–Human Interaction (SIGCHI). CHI is a large conference, with thousands of attendees, and is quite broad in scope. It is attended by academics, practitioners, and industry people, with company sponsors such as Google, Microsoft, and PayPal.
There are also dozens of other smaller, regional, or specialized HCI-related conferences held around the world each year, including:[39]
|
https://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction
|
An Internet Explorer shell is a class of computer program (web browser or otherwise) that uses the Internet Explorer browser engine, known as MSHTML and previously as Trident. This engine is closed-source, but Microsoft has exposed an application programming interface (API) that permits developers to instantiate either MSHTML or a full-fledged chromeless Internet Explorer (known as the WebBrowser control) within the graphical user interface of their software.[1]
These applications supplement some of the usual user interface components of Internet Explorer (IE) for browsing, adding features such as popup blocking and tabbed browsing. For example, MSN Explorer can be considered an Internet Explorer shell, in that it is essentially an expansion of IE with added MSN-related functionality. A more complete list of MSHTML-based browsers can be found under the list of web browsers.
Actively maintained:
Discontinued:
Other applications that are not primarily for web browsing, such as Intuit's Quicken and QuickBooks, AOL, Winamp, and RealPlayer, use the rendering engine to provide a limited-functionality "mini" browser within their own user interfaces.
On Windows, components of Internet Explorer are also used in Windows Explorer, the operating system shell that provides the default file system browsing and desktop services. For example, folder views in Windows Explorer on versions of Windows prior to Windows XP utilize IE's DHTML processing abilities; they are essentially little web pages. Active Desktop technology is another example.
MSHTML was, until Outlook 2007, also used to render HTML portions of email messages in the Microsoft Outlook and Outlook Express email clients (Outlook 2007 uses Microsoft Word to render HTML e-mail). This integration is an often-exploited "back door", since the Internet Explorer components make available more of the functionality within the HTML code.
Microsoft Windows also supports HTML Applications: computer programs written in HTML, CSS, and JavaScript that bear a .hta filename extension. They run with the HTML Application Host, which is a plain Internet Explorer shell without any GUI elements around it.
|
https://en.wikipedia.org/wiki/Internet_Explorer_shell
|
A shell account is a user account on a remote server, typically running under a Unix or Linux operating system. The account gives access to a text-based command-line interface in a shell, via a terminal emulator. The user typically communicates with the server via the SSH protocol; in the early days of the Internet, one would connect using a modem.
Shell accounts were first made accessible in the 1980s to interested members of the public by Internet service providers such as Netcom, Panix, The World, and Digex, although in rare instances individuals had access to shell accounts through their employer or university. They were used for file storage, web space, email accounts, newsgroup access, and software development.[1][2][3] Before the late 1990s, shell accounts were often much less expensive than full net access through SLIP or PPP, which was required to access the then-new World Wide Web. Most personal computer operating systems also lacked TCP/IP stacks by default before the mid-1990s. Products such as The Internet Adapter were devised that could work as a proxy server, allowing users to run a web browser for the price of a shell account.[4]
While direct internet connections made shell accounts largely obsolete for most users, they remained popular with some technically inclined subscribers.[5]
Shell providers often offer shell accounts at low cost or for free. These shell accounts generally provide users with access to various software and services, including compilers, IRC clients, background processes, FTP, text editors (such as nano), and email clients (such as pine).[6] Some shell providers may also allow tunneling of traffic to bypass corporate firewalls.
|
https://en.wikipedia.org/wiki/Shell_account
|
In computing, a shell builtin is a command or a function, exposed by a shell, that is implemented in the shell itself, rather than as an external program which the shell would load and execute.[1][2][3][4]
A shell builtin starts faster than an external program because there is no program loading overhead. However, its implementation code is in the shell program, and thus modifying it requires modifying the shell. Therefore, a shell builtin is usually only used for simple, almost trivial, commands, such as text output.
Some commands must be implemented as builtins due to the nature of theoperating system.
Notably, the cd command, which changes the working directory of the shell, is usually a builtin, since a program runs in a separate process and the working directory is specific to each process. Running cd as an external program would not affect the working directory of the shell that loaded it.[5]
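A quick way to see the distinction, using bash as an example, is the type builtin; a subshell then illustrates why cd cannot be an external program:

```shell
#!/usr/bin/env bash
# `type` reports how a command name is resolved.
type cd   # resolved as a shell builtin
type ls   # resolved to an external program on PATH

# Why cd must be a builtin: a child process cannot change its
# parent's working directory.
before=$PWD
( cd / )     # the subshell's directory changes...
after=$PWD   # ...but the invoking shell's does not
[ "$before" = "$after" ] && echo "working directory unchanged"
```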
|
https://en.wikipedia.org/wiki/Shell_builtin
|
In computing, the superuser is a special user account used for system administration. Depending on the operating system (OS), the actual name of this account might be root, administrator, admin, or supervisor. In some cases, the actual name of the account is not the determining factor; on Unix-like systems, for example, the user with a user identifier (UID) of zero is the superuser, regardless of the name of that account,[1] and in systems which implement a role-based security model, any user with the role of superuser (or its synonyms) can carry out all actions of the superuser account.
The principle of least privilege recommends that most users and applications run under an ordinary account to perform their work, as a superuser account is capable of making unrestricted, potentially adverse, system-wide changes.
In Unix-like computer OSes (such as Linux), root is the conventional name of the user who has all rights or permissions (to all files and programs) in all modes (single- or multi-user). Alternative names include baron in BeOS and avatar on some Unix variants.[2] BSD often provides a toor ("root" written backward) account in addition to a root account.[3] Regardless of the name, the superuser always has a user ID of 0. The root user can do many things an ordinary user cannot, such as changing the ownership of files and binding to network ports numbered below 1024.
The name root may have originated because root is the only user account with permission to modify the root directory of a Unix system. This directory was originally considered to be root's home directory,[4] but the UNIX Filesystem Hierarchy Standard now recommends that root's home be at /root.[5] The first process bootstrapped in a Unix-like system, usually called init, runs with root privileges. It spawns all other processes directly or indirectly, and these inherit their parents' privileges. Only a process running as root is allowed to change its user ID to that of another user; once it has done so, there is no way back. Doing so is sometimes called dropping root privileges and is often done as a security measure to limit the damage from possible contamination of the process. Another case is login and other programs that ask users for credentials and, in case of successful authentication, allow them to run programs with the privileges of their accounts.
It is often recommended that root never be used as a normal user account,[6][7] since simple typographical errors in entering commands can cause major damage to the system. Instead, a normal user account should be used, and then either the su (substitute user) or sudo (substitute user do) command is used. The su approach requires the user to know the root password, while the sudo method requires that the user be set up with the power to run "as root" within the /etc/sudoers file, typically indirectly by being made a member of the wheel,[8] adm,[9] admin, or sudo group.
For a number of reasons, the sudo approach is now generally preferred – for example, it leaves an audit trail of who has used the command and what administrative operations they performed.[10]
Some OSes, such as macOS and some Linux distributions (most notably Ubuntu[6]), automatically give the initial user created the ability to run as root via sudo – but configure this to ask for the user's password before performing administrative actions. In some cases the actual root account is disabled by default, so it can't be used directly.[6] In mobile platform-oriented OSs such as Apple iOS and Android, superuser access is inaccessible by design, but generally the security system can be exploited in order to obtain it. In a few systems, such as Plan 9, there is no superuser at all.[11]
In Windows NT and later systems derived from it (such as Windows 2000, Windows XP, Windows Server 2003, and Windows Vista/7/8/10/11), there must be at least one administrator account (Windows XP and earlier) or one able to elevate privileges to superuser (Windows Vista/7/8/10/11 via User Account Control).[12] In Windows XP and earlier systems, there is a built-in administrator account that remains hidden when a user administrator-equivalent account exists.[13] This built-in administrator account is created with a blank password.[13] This poses security risks, as local users would be able to access the computer via the built-in administrator account if the password is left blank, so the account is disabled by default in Windows Vista and later systems due to the introduction of User Account Control (UAC).[13] Remote users are unable to access the built-in administrator account.
A Windows administrator account is not an exact analogue of the Unix root account – Administrator, the built-in administrator account, and a user administrator account have the same level of privileges. The default user account created in Windows systems is an administrator account. Unlike macOS, Linux, and Windows Vista/7/8/10 administrator accounts, administrator accounts in Windows systems without UAC do not insulate the system from most of the pitfalls of full root access, one of which is decreased resilience to malware infections. To avoid this and maintain optimal system security on pre-UAC Windows systems, it is recommended to simply authenticate when necessary from a standard user account, either via a password set for the built-in administrator account, or with another administrator account.
In Windows Vista/7/8/10/11 administrator accounts, a prompt will appear to authenticate running a process with elevated privileges. Usually, no user credentials are required to authenticate the UAC prompt in administrator accounts, but authenticating the UAC prompt requires entering the username and password of an administrator in standard user accounts. In Windows XP (and earlier) administrator accounts, authentication is not required to run a process with elevated privileges, which poses a security risk that led to the development of UAC. Users can set a process to run with elevated privileges from standard accounts by setting the process to "run as administrator" or by using the runas command and authenticating the prompt with the credentials (username and password) of an administrator account. Much of the benefit of authenticating from a standard account is negated if the administrator account being used has a blank password (as with the built-in administrator account in Windows XP and earlier systems), which is why it is recommended to set a password for the built-in administrator account.
In Windows NT, 2000 and higher, the root user is the Administrator account.[14]
In Novell NetWare, the superuser was called "supervisor",[15] later "admin".
In OpenVMS, "SYSTEM" is the superuser account for the OS.
On many older OSes on computers intended for personal and home use, anyone using the system had full privileges. Many such systems, such asDOS, did not have the concept of multiple accounts, and although others such asWindows 95did allow multiple accounts, this was only so that each could have its own preferences profile – all users still had full administrative control over the machine.
|
https://en.wikipedia.org/wiki/Superuser
|
A window manager is system software that controls the placement and appearance of windows within a windowing system in a graphical user interface.[1] Most window managers are designed to help provide a desktop environment. They work in conjunction with the underlying graphical system that provides required functionality – support for graphics hardware, pointing devices, and a keyboard – and are often written and created using a widget toolkit.
Few window managers are designed with a clear distinction between the windowing system and the window manager. Every graphical user interface based on a windows metaphor has some form of window management; in practice, the elements of this functionality vary greatly.[2] Elements usually associated with window managers allow the user to open, close, minimize, maximize, move, resize, and keep track of running windows, including window decorators. Many window managers also come with various utilities and features, such as task bars, program launchers, docks to facilitate halving or quartering windows on screen, workspaces for grouping windows, desktop icons, wallpaper, the ability to keep select windows in the foreground, the ability to "roll up" windows to show only their title bars, to cascade windows, to stack windows into a grid, to group windows of the same program in the task bar in order to save space, and optional multi-row taskbars.[3][4][5][6]
In 1973, the Xerox Alto became the first computer shipped with a working WIMP GUI. It used a stacking window manager that allowed overlapping windows.[7] However, this was so far ahead of its time that its design paradigm would not become widely adopted until more than a decade later. While it is unclear if Microsoft Windows contains designs copied from Apple's classic Mac OS, it is clear that neither was the first to produce a GUI using stacking windows. In the early 1980s, the Xerox Star, successor to the Alto, used tiling for most main application windows, and used overlapping only for dialogue boxes, removing most of the need for stacking.[8]
The classic Mac OS was one of the earliest commercially successful examples of a GUI that used a sort of stacking window management via QuickDraw. Its successor, macOS, uses a somewhat more advanced window manager that has supported compositing since Mac OS X 10.0, and was updated in Mac OS X 10.2 to support hardware-accelerated compositing via the Quartz Compositor.[9]
GEM 1.1, from Digital Research, was an operating environment that included a stacking window manager, allowing all windows to overlap. It was released in the early 1980s.[10] GEM is famous for having been included as the main GUI used on the Atari ST, which ran Atari TOS, and was also a popular GUI for MS-DOS prior to the widespread use of Microsoft Windows. As a result of a lawsuit by Apple, Digital Research was forced to remove the stacking capabilities in GEM 2.0, making its window manager a tiling window manager.[11]
During the mid-1980s, Amiga OS contained an early example of a compositing window manager called Intuition (one of the low-level libraries of AmigaOS, present in Amiga system ROMs), capable of recognizing which windows or portions of them were covered, and which windows were in the foreground and fully visible, so it could draw only the parts of the screen that required refreshing. Additionally, Intuition supported compositing: applications could first request a region of memory outside the current display region for use as a bitmap. The Amiga windowing system would then use a series of bit blits, using the system's hardware blitter, to build a composite of these applications' bitmaps, along with buttons and sliders, in display memory, without requiring these applications to redraw any of their bitmaps.
In 1988, Presentation Manager became the default shell in OS/2, which in its first version offered only a command-line interface (CLI). IBM and Microsoft designed OS/2 as a successor to DOS and Windows for DOS. After the success of Windows 3.1, however, Microsoft abandoned the project in favor of Windows. The Microsoft project for a future OS/2 version 3 became Windows NT, and IBM completely redesigned the shell of OS/2, substituting the object-oriented Workplace Shell, which made its debut in OS/2 2.0, for the Presentation Manager of OS/2 1.x.[12]
On systems using the X Window System, there is a clear distinction between the window manager and the windowing system. Strictly speaking, an X window manager does not directly interact with video hardware, mice, or keyboards – that is the responsibility of the display server.
Users of the X Window System can easily choose among many different window managers, such as Metacity, used in GNOME 2, and KWin, used in KDE Plasma Workspaces. Because window managers are modular components, a user can substitute another, such as Compiz (a 3D compositing window manager), for the default one. Sawfish and awesome, on the other hand, are extensible window managers offering exacting window control. Components of different window managers can even be mixed and matched; for example, the window decorations from KWin can be used with the desktop and dock components of GNOME.
X window managers also have the ability to re-parent applications: while initially all applications are adopted by the root window (essentially the whole screen), an application started within the root window can be adopted by (i.e., put inside of) another window. Window managers under the X Window System adopt applications from the root window and re-parent them to apply window decorations (for example, adding a title bar). Re-parenting can also be used to add the contents of one window to another. For example, a Flash player application can be re-parented to a browser window and appear to the user as part of that program. Re-parenting window managers can therefore arrange one or more programs within the same window, and can easily combine tiling and stacking in various ways.
Microsoft Windows has provided an integrated stacking window manager since Windows 2.0; Windows Vista introduced the compositing Desktop Window Manager (dwm.exe) as an optional hardware-accelerated alternative. In Windows, since GDI is part of the kernel,[13] the role of the window manager is tightly coupled with the kernel's graphical subsystems and is largely non-replaceable, although third-party utilities can be used to simulate a tiling window manager on top of such systems. Since Windows 8, the Direct3D-based Desktop Window Manager can no longer be disabled.[14] It can only be restarted with the hotkey combination Ctrl+Shift+Win+B.[15]
Windows Explorer (explorer.exe) is used by default as the shell in modern Windows systems to provide a taskbar and file manager, along with many functions of a window manager; aspects of Windows can be modified through the provided configuration utilities, by modifying the Windows Registry, or with third-party tools such as WindowBlinds or Resource Hacker.
A complete X server, allowing the use of window managers ported from the Unix world, can also be provided for Microsoft Windows through Cygwin/X (even in multiwindow mode) and by other X Window System implementations. This makes it possible, for example, to have X client programs running either in the same Cygwin environment on the same machine, or on a Linux or BSD system over the network, with only their GUI displayed and usable on top of the Microsoft Windows environment.
Note that Microsoft and the X Window System use different terms to describe similar concepts. For example, Microsoft rarely mentions the term window manager, because its window manager is integrated and non-replaceable, and distinct from the shell.[16] The Windows Shell is analogous to the desktop environment concept in other graphical user interface systems.
Since 2021, ChromeOS has shipped with its own window manager, called Ash.[17] Chromium and Ash share a common codebase.[17] In the past, it could be run with google-chrome --open-ash on any compatible system.
Window managers are often divided into three or more classes, which describe how windows are drawn and updated.
Compositing window managers let all windows be created and drawn separately and then put together and displayed in various 2D and 3D environments. The most advanced compositing window managers allow for a great deal of variety in interface look and feel, and for the presence of advanced 2D and 3D visual effects.
All window managers that have overlapping windows and are not compositing window managers are stacking window managers, although not all necessarily use the same methods. Stacking window managers allow windows to overlap by drawing background windows first, an approach referred to as the painter's algorithm. Changes sometimes require that all windows be re-stacked or repainted, which usually involves redrawing every window. However, bringing a background window to the front usually requires redrawing only that one window, since background windows may have bits of other windows painted over them, effectively erasing the areas that are covered.
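The painter's algorithm described above can be sketched in a few lines: windows are drawn back-to-front onto a shared framebuffer, so foreground windows simply paint over background ones. All names here are illustrative, not taken from any real window system.

```python
# Minimal sketch of the painter's algorithm used by stacking window
# managers: the stack is ordered background-first, and each window is
# painted in turn, overwriting whatever lies beneath it.

def paint(stack, width, height):
    """stack holds (x, y, w, h, ch) tuples, background first."""
    fb = [["." for _ in range(width)] for _ in range(height)]
    for x, y, w, h, ch in stack:          # back to front
        for row in range(y, min(y + h, height)):
            for col in range(x, min(x + w, width)):
                fb[row][col] = ch
    return ["".join(row) for row in fb]

stack = [(0, 0, 4, 2, "A"),   # background window
         (2, 1, 4, 2, "B")]   # foreground window, overlapping A
for line in paint(stack, 8, 3):
    print(line)
```

Bringing "A" to the front would just mean moving it to the end of the stack and repainting it; the covered parts of "B" are erased automatically.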
Tiling window managers paint all windows on-screen by placing them side by side or above and below each other, so that no window ever covers another. Microsoft Windows 1.0 used tiling, and a variety of tiling window managers for X are available, such as i3, awesome, and dwm.
Dynamic window managers can switch dynamically between tiling and floating window layouts. A variety of dynamic window managers for X are available.
An active window is the currently focused window in the current window manager. Different window managers indicate the currently active window in different ways and allow the user to switch between windows in different ways. For example, in Microsoft Windows, if both Notepad and Microsoft Paint are open, clicking in the Notepad window will cause that window to become active. In Windows, the active window is indicated by a differently colored title bar. Clicking is not the only way of selecting the active window, however: some window managers (such as FVWM) make the window under the mouse pointer active—simply moving the mouse is sufficient to switch windows; a click is not needed.
Window managers often provide a way to select the active window using the keyboard as an alternative to the mouse. One typical key combination is Alt+Tab, used by Windows and KDE (by default, though this is user-configurable); another is Command-tilde, used on the Macintosh. Pressing the appropriate key combination typically cycles through all visible windows in some order, though other actions are possible.
Many, though not all, window managers provide a region of the screen containing some kind of visual control (often a button) for each window on the screen. Each button typically contains the title of the window and may also contain an icon. This area of the screen generally provides some kind of visual indication of which window is active—for example, the active window's button may appear "pushed in". It is also usually possible to switch the active window by clicking on the appropriate button. In Microsoft Windows, this area of the screen is called the taskbar; on the Apple Macintosh it is called the Dock.
The active window may not always lie in front of all other windows on the screen. The active window is simply the window to which keys typed on the keyboard are sent; it may be visually obscured by other windows. This is especially true in window managers that do not require a click to change the active window: FVWM, for example, makes active the window under the mouse cursor but does not change its Z-order (the order in which windows appear, measured from background to foreground). Instead, it is necessary to click on the border of the window to bring it to the foreground. There are also situations in click-to-focus window managers such as Microsoft Windows where the active window may be obscured; however, this is much less common.
https://en.wikipedia.org/wiki/Window_manager
A read–eval–print loop (REPL), also termed an interactive toplevel or language shell, is a simple interactive computer programming environment that takes single user inputs, executes them, and returns the result to the user; a program written in a REPL environment is executed piecewise.[1] The term usually refers to programming interfaces similar to the classic Lisp machine interactive environment. Common examples include command-line shells and similar environments for programming languages, and the technique is very characteristic of scripting languages.[2]
In 1964, the expression READ-EVAL-PRINT cycle was used by L. Peter Deutsch and Edmund Berkeley for an implementation of Lisp on the PDP-1.[3] Just one month later, Project Mac published a report by Joseph Weizenbaum (the creator of ELIZA, the world's first chatbot) describing a REPL-based language, called OPL-1, implemented in his Fortran-SLIP language on the Compatible Time Sharing System (CTSS).[4][5][6]
The 1974 Maclisp reference manual by David A. Moon attests "read-eval-print loop" on page 89, but does not use the acronym REPL.[7]
Since at least the 1980s, the abbreviations REP loop and REPL have been attested in the context of Scheme.[8][9]
In a REPL, the user enters one or more expressions (rather than an entire compilation unit) and the REPL evaluates them and displays the results.[1] The name read–eval–print loop comes from the names of the Lisp primitive functions which implement this functionality:
The development environment then returns to the read state, creating a loop, which terminates when the program is closed.
REPLs facilitate exploratory programming and debugging because the programmer can inspect the printed result before deciding what expression to provide for the next read. The read–eval–print loop involves the programmer more frequently than the classic edit–compile–run–debug cycle.
Because the print function outputs in the same textual format that the read function uses for input, most results are printed in a form that could be copied and pasted back into the REPL. However, it is sometimes necessary to print representations of elements that cannot sensibly be read back in, such as a socket handle or a complex class instance. In these cases, there must exist a syntax for unreadable objects. In Python, it is the <__module__.class instance> notation; in Common Lisp, the #<whatever> form. The REPLs of CLIM, SLIME, and the Symbolics Lisp Machine can also read back unreadable objects: they record, for each output, which object was printed, and when the code is read back the object is retrieved from the printed output.
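The readable/unreadable distinction is easy to demonstrate in Python: a list's repr round-trips through eval, while a socket's repr uses the angle-bracket form that cannot be read back in.

```python
# Readable vs. unreadable printed representations in Python.
import socket

nums = [1, 2, 3]
text = repr(nums)            # "[1, 2, 3]" – the same syntax the reader accepts
assert eval(text) == nums    # round-trips: printed output is valid input

sock = socket.socket()
print(repr(sock))            # e.g. "<socket.socket fd=3, ...>" – the angle-
                             # bracket form marks an object with no readable syntax
sock.close()
```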
REPLs can be created to support any text-based language. REPL support for compiled languages is usually achieved by implementing an interpreter on top of a virtual machine which provides an interface to the compiler. For example, starting with JDK 9, Java has included JShell as a command-line interface to the language. Various other languages have third-party tools available for download that provide similar shell interaction with the language.
As a shell, a REPL environment allows users to access relevant features of an operating system in addition to providing access to programming capabilities. The most common use for REPLs outside of operating system shells is for interactive prototyping.[10] Other uses include mathematical calculation, creating documents that integrate scientific analysis (e.g. IPython), interactive software maintenance, benchmarking, and algorithm exploration.
A minimal definition is:
where env represents the initial evaluation environment. It is also assumed that env can be destructively updated by eval.
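The minimal definition can be sketched in Python. This is an illustrative toy, not a robust interpreter: the reader is fed from a list instead of a terminal so the loop terminates, and a single env dict plays the role of the evaluation environment that eval updates destructively.

```python
# A minimal REPL sketch mirroring (loop (print (eval (read) env))).

def repl(lines, env=None):
    env = {} if env is None else env
    outputs = []
    for line in lines:                  # read
        try:
            result = eval(line, env)    # eval an expression...
        except SyntaxError:
            exec(line, env)             # ...or a statement (e.g. an assignment),
            result = None               # which mutates env in place
        outputs.append(repr(result))    # print
    return outputs

print(repl(["x = 6 * 7", "x + 1"]))     # ['None', '43']
```

The second input sees the binding created by the first, which is exactly the "destructive update" of env that the text describes.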
Typical functionality provided by a Lisp REPL includes:
https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop
A command-line interface (CLI) is a means of interacting with software via commands – each formatted as a line of text. Command-line interfaces emerged in the mid-1960s, on computer terminals, as an interactive and more user-friendly alternative to the non-interactive mode available with punched cards.[1]
For a long time, the CLI was the most common interface for software, but today the graphical user interface (GUI) is more common. Nonetheless, many programs, such as operating system and software development utilities, still provide a CLI.
A CLI enables automating programs, since commands can be stored in a script file that can be used repeatedly. A script allows its contained commands to be executed as a group, and can itself be treated as a program or a command.
A CLI is made possible by command-line interpreters or command-line processors, which are programs that execute input commands.
Alternatives to the CLI include the GUI (including the desktop metaphor, as in Windows), text-based menuing (including DOS Shell and IBM AIX SMIT), and keyboard shortcuts.
Compared with a graphical user interface, a command-line interface requires fewer system resources to implement. Since options to commands are given in a few characters in each command line, an experienced user often finds the options easier to access. Automation of repetitive tasks is simplified by line editing and history mechanisms for storing frequently used sequences; this may extend to a scripting language that can take parameters and variable options. A command-line history can be kept, allowing review or repetition of commands.
A command-line system may require paper or online manuals for the user's reference, although often a help option provides a concise review of the options of a command. The command-line environment may not provide graphical enhancements such as different fonts or extended edit windows found in a GUI. It may be difficult for a new user to become familiar with all the commands and options available, compared with the icons and drop-down menus of a graphical user interface, without reference to manuals.
Operating system (OS) command-line interfaces are usually distinct programs supplied with the operating system. A program that implements such a text interface is often called a command-line interpreter, command processor orshell.
Examples of command-line interpreters include Nushell, DEC's DIGITAL Command Language (DCL) in OpenVMS and RSX-11, the various Unix shells (sh, ksh, csh, tcsh, zsh, Bash, etc.), CP/M's CCP, DOS's COMMAND.COM, and the OS/2 and Windows CMD.EXE programs, the latter groups being based heavily on DEC's RSX-11 and RSTS CLIs. Under most operating systems, it is possible to replace the default shell program with alternatives; examples include 4DOS for DOS, 4OS2 for OS/2, and 4NT / Take Command for Windows.
Although the term shell is often used to describe a command-line interpreter, strictly speaking, a shell can be any program that constitutes the user interface, including fully graphically oriented ones. For example, the default Windows GUI is a shell program named EXPLORER.EXE, as defined in the SHELL=EXPLORER.EXE line in the WIN.INI configuration file. These programs are shells, but not CLIs.
Application programs (as opposed to operating systems) may also have command-line interfaces.
An application program may support none, any, or all of these three major types of command-line interface mechanisms:
Some applications support a CLI, presenting their own prompt to the user and accepting command lines. Other programs support both a CLI and a GUI. In some cases, a GUI is simply a wrapper around a separate CLI executable file. In other cases, a program may provide a CLI as an optional alternative to its GUI. CLIs and GUIs often support different functionality. For example, all features of MATLAB, a numerical analysis computer program, are available via the CLI, whereas the MATLAB GUI exposes only a subset of features.
In Colossal Cave Adventure from 1975, the user uses a CLI to enter one or two words to explore a cave system.
The command-line interface evolved from a form of communication conducted by people over teleprinter (TTY) machines. Sometimes these exchanges involved sending an order or a confirmation using telex. Early computer systems often used a teleprinter as the means of interaction with an operator.
The mechanical teleprinter was replaced by a "glass tty", a keyboard and screen emulating the teleprinter. "Smart" terminals permitted additional functions, such as cursor movement over the entire screen, or local editing of data on the terminal for transmission to the computer. As the microcomputer revolution replaced the traditional minicomputer-plus-terminals time-sharing architecture, hardware terminals were replaced by terminal emulators — PC software that interpreted terminal signals sent through the PC's serial ports. These were typically used to interface an organization's new PCs with their existing mini- or mainframe computers, or to connect PC to PC. Some of these PCs were running Bulletin Board System software.
Early operating system CLIs were implemented as part of resident monitor programs, and could not easily be replaced. The first implementation of the shell as a replaceable component was part of the Multics time-sharing operating system.[2] In 1964, MIT Computation Center staff member Louis Pouzin developed the RUNCOM tool for executing command scripts while allowing argument substitution.[3] Pouzin coined the term shell to describe the technique of using commands like a programming language, and wrote a paper about how to implement the idea in the Multics operating system.[4] Pouzin returned to his native France in 1965, and the first Multics shell was developed by Glenda Schroeder.[3]
The first Unix shell, the V6 shell, was developed by Ken Thompson in 1971 at Bell Labs and was modeled after Schroeder's Multics shell.[5][6] The Bourne shell was introduced in 1977 as a replacement for the V6 shell. Although it is used as an interactive command interpreter, it was also intended as a scripting language and contains most of the features that are commonly considered to produce structured programs. The Bourne shell led to the development of the KornShell (ksh), the Almquist shell (ash), and the popular Bourne-again shell (or Bash).[6]
Early microcomputers themselves were based on a command-line interface such as CP/M, DOS or AppleSoft BASIC. During the 1980s and 1990s, the introduction of the Apple Macintosh and of Microsoft Windows on PCs saw the command-line interface replaced as the primary user interface by the graphical user interface.[7] The command line remained available as an alternative user interface, often used by system administrators and other advanced users for system administration, computer programming and batch processing.
In November 2006, Microsoft released version 1.0 of Windows PowerShell (formerly codenamed Monad), which combined features of traditional Unix shells with its proprietary object-oriented .NET Framework. MinGW and Cygwin are open-source packages for Windows that offer a Unix-like CLI. Microsoft provides MKS Inc.'s ksh implementation, MKS Korn shell, for Windows through its Services for UNIX add-on.
Since 2001, the Macintosh operating system macOS has been based on a Unix-like operating system called Darwin.[8] On these computers, users can access a Unix-like command-line interface by running the terminal emulator program called Terminal, which is found in the Utilities sub-folder of the Applications folder, or by remotely logging into the machine using ssh. Z shell is the default shell for macOS; Bash, tcsh, and the KornShell are also provided. Before macOS Catalina, Bash was the default.
A CLI is used whenever a large vocabulary of commands or queries, coupled with a wide (or arbitrary) range of options, can be entered more rapidly as text than with a pure GUI. This is typically the case with operating system command shells. CLIs are also used by systems with insufficient resources to support a graphical user interface. Some computer language systems (such as Python,[9] Forth, LISP, Rexx, and many dialects of BASIC) provide an interactive command-line mode to allow for rapid evaluation of code.
CLIs are often used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users.[10] CLIs are also popular among people with visual disabilities, since the commands and responses can be displayed using refreshable Braille displays.
The general pattern of a command line is:[11][12]
In this format, the delimiters between command-line elements are whitespace characters and the end-of-line delimiter is the newline delimiter. This is a widely used (but not universal) convention.
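The whitespace-delimited convention above can be seen with Python's shlex module, which tokenizes a command line the way a Unix-style shell would, including quoting (the example command line is illustrative):

```python
# Splitting a command line into elements on whitespace, honoring quotes.
import shlex

line = 'cp -r "My Documents" /backup'
tokens = shlex.split(line)
print(tokens)   # ['cp', '-r', 'My Documents', '/backup']
```

Note that the quoted argument survives as a single element even though it contains a space, which is why quoting exists in the first place.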
A CLI can generally be considered as consisting of syntax and semantics. The syntax is the grammar that all commands must follow. In the case of operating systems, DOS and Unix each define their own set of rules that all commands must follow. In the case of embedded systems, each vendor, such as Nortel, Juniper Networks or Cisco Systems, defines its own proprietary set of rules. These rules also dictate how a user navigates through the system of commands. The semantics define what sort of operations are possible, on what sort of data these operations can be performed, and how the grammar represents these operations and data—the symbolic meaning in the syntax.
Two different CLIs may agree on either syntax or semantics, but it is only when they agree on both that they can be considered sufficiently similar to allow users to use both CLIs without needing to learn anything, as well as to enable re-use of scripts.
A simple CLI will display a prompt, accept a command line typed by the user terminated by the Enter key, then execute the specified command and provide textual display of results or error messages. Advanced CLIs will validate, interpret and parameter-expand the command line before executing the specified command, and optionally capture or redirect its output.
Unlike a button or menu item in a GUI, a command line is typically self-documenting,[16] stating exactly what the user wants done. In addition, command lines usually include many defaults that can be changed to customize the results. Useful command lines can be saved by assigning a character string or alias to represent the full command, or several commands can be grouped to perform a more complex sequence – for instance, compile the program, install it, and run it – creating a single entity, called a command procedure or script, which itself can be treated as a command. These advantages mean that a user must figure out a complex command or series of commands only once, because they can be saved, to be used again.
The commands given to a CLI shell are often in one of the following forms:
where doSomething is, in effect, a verb, how an adverb (for example, should the command be executed verbosely or quietly) and toFiles an object or objects (typically one or more files) on which the command should act. The > in the third example is a redirection operator, telling the command-line interpreter to send the output of the command not to its own standard output (the screen) but to the named file. This will overwrite the file. Using >> will redirect the output and append it to the file. Another redirection operator is the vertical bar (|), which creates a pipeline where the output of one command becomes the input to the next command.[17]
One can modify the set of available commands by modifying which paths appear in the PATH environment variable. Under Unix, commands also need to be marked as executable files. The directories in the path variable are searched in the order they are given. By re-ordering the path, one can run e.g. \OS2\MDOS\E.EXE instead of \OS2\E.EXE, when the default is the opposite. Renaming executables also works: people often rename their favourite editor to EDIT, for example.
The command line also allows available commands to be restricted, such as access to advanced internal commands; the Windows CMD.EXE does this. Often, shareware programs will limit the range of commands, for example printing a message such as 'your administrator has disabled running batch files' at the prompt.
Some CLIs, such as those in network routers, have a hierarchy of modes, with a different set of commands supported in each mode. The sets of commands are grouped by association with security, system, interface, etc. In these systems the user might traverse through a series of sub-modes. For example, if the CLI had two modes called interface and system, the user might use the command interface to enter the interface mode. At this point, commands from the system mode may not be accessible until the user exits the interface mode and enters the system mode.
A command prompt (or just prompt) is a sequence of (one or more) characters used in a command-line interface to indicate readiness to accept commands. It literally prompts the user to take action. A prompt usually ends with one of the characters $, %, #,[18][19] :, > or -,[20] and often includes other information, such as the path of the current working directory and the hostname.
On many Unix and derivative systems, the prompt commonly ends in $ or % if the user is a normal user, but in # if the user is a superuser ("root" in Unix terminology).
End-users can often modify prompts. Depending on the environment, they may include colors, special characters, and other elements (like variables and functions for the current time, user, shell number or working directory) in order, for instance, to make the prompt more informative or visually pleasing, to distinguish sessions on various machines, or to indicate the current level of nesting of commands. On some systems, special tokens in the definition of the prompt can be used to cause external programs to be called by the command-line interpreter while displaying the prompt.
In DOS's COMMAND.COM and in Windows NT's cmd.exe, users can modify the prompt by issuing a PROMPT command or by directly changing the value of the corresponding %PROMPT% environment variable. The default of most modern systems, the C:\> style, is obtained, for instance, with PROMPT $P$G. The default of older DOS systems, C>, is obtained by just PROMPT, although on some systems this produces the newer C:\> style, unless used on floppy drives A: or B:; on those systems PROMPT $N$G can be used to override the automatic default and explicitly switch to the older style.
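The PROMPT tokens above ($P for the current path, $G for >, $N for the drive letter) amount to simple template substitution. The following expand() helper and its token table are a hypothetical sketch of that idea, not DOS's actual implementation:

```python
# Hypothetical PROMPT-style token expansion: $P -> path, $G -> ">", $N -> drive.

def expand(template, path="C:\\", drive="C"):
    tokens = {"$P": path, "$G": ">", "$N": drive}
    out = template
    for token, value in tokens.items():
        out = out.replace(token, value)
    return out

print(expand("$P$G"))   # C:\>   (the modern default, PROMPT $P$G)
print(expand("$N$G"))   # C>     (the older default, PROMPT $N$G)
```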
Many Unix systems feature the $PS1 variable (Prompt String 1),[21] although other variables may also affect the prompt (depending on the shell used). In the Bash shell, a prompt of the form:
could be set by issuing the command
In zsh, the $RPROMPT variable controls an optional prompt on the right-hand side of the display. It is not a real prompt, in that the location of text entry does not change. It is used to display information on the same line as the prompt, but right-justified.
In RISC OS, the command prompt is a * symbol, and thus (OS) CLI commands are often referred to as star commands.[22] One can also access the same commands from other command lines (such as the BBC BASIC command line) by preceding the command with a *.
A command-line argument or parameter is an item of information provided to a program when it is started.[23] A program can have many command-line arguments that identify sources or destinations of information, or that alter the operation of the program.
When a command processor is active, a program is typically invoked by typing its name followed by command-line arguments (if any). For example, in Unix and Unix-like environments, an example of a command-line argument is:
file.s is a command-line argument which tells the program rm to remove the file named file.s.
Some programming languages, such as C, C++ and Java, allow a program to interpret the command-line arguments by handling them as string parameters in the main function.[24][25] Other languages, such as Python, expose operating-system-specific APIs (functionality) through the sys module, and in particular sys.argv for command-line arguments.
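A minimal sketch of reading arguments via sys.argv: by convention argv[0] is the program name and the remaining entries are the arguments. A fixed list stands in for sys.argv here so the behavior is visible without invoking the script from a shell; the describe() helper is illustrative.

```python
import sys

def describe(argv):
    program, args = argv[0], argv[1:]
    return f"{program} received {len(args)} argument(s): {args}"

# In a real script: print(describe(sys.argv))
print(describe(["rm", "file.s"]))   # rm received 1 argument(s): ['file.s']
```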
In Unix-like operating systems, a single hyphen used in place of a file name is a special value specifying that a program should handle data coming from the standard input or send data to the standard output.
A command-line option or simply option (also known as a flag or switch) modifies the operation of a command; the effect is determined by the command's program. Options follow the command name on the command line, separated by spaces. A space before the first option is not always required, such as Dir/? and DIR /? in DOS, which have the same effect[20] of listing the DIR command's available options, whereas dir --help (in many versions of Unix) does require the option to be preceded by at least one space (and is case-sensitive).
The format of options varies widely between operating systems. In most cases, the syntax is by convention rather than an operating system requirement; the entire command line is simply a string passed to a program, which can process it in any way the programmer wants, so long as the interpreter can tell where the command name ends and its arguments and options begin.
A few representative samples of command-line options, all relating to listing files in a directory, to illustrate some conventions:
In Multics, command-line options and subsystem keywords may be abbreviated. This idea appears to derive from the PL/I programming language, with its shortened keywords (e.g., STRG for STRINGRANGE and DCL for DECLARE). For example, in the Multics forum subsystem, the -long_subject parameter can be abbreviated -lgsj. It is also common for Multics commands to be abbreviated, typically corresponding to the initial letters of the words that are strung together with underscores to form command names, such as the use of did for delete_iacl_dir.
In some other systems abbreviations are automatic, such as permitting enough of the first characters of a command name to uniquely identify it (such as SU as an abbreviation for SUPERUSER), while others may have some specific abbreviations pre-programmed (e.g. MD for MKDIR in COMMAND.COM) or user-defined via batch scripts and aliases (e.g. alias md mkdir in tcsh).
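Automatic abbreviation by unique prefix can be sketched as follows: an abbreviation is accepted if exactly one known command starts with it. The command table and resolve() helper are illustrative, not taken from any real system.

```python
COMMANDS = ["SUPERUSER", "SUPPORT", "MKDIR", "MOUNT"]

def resolve(abbrev):
    """Return the unique command matching the prefix, or raise."""
    matches = [c for c in COMMANDS if c.startswith(abbrev.upper())]
    if len(matches) == 1:
        return matches[0]
    raise ValueError(f"{abbrev!r} is " +
                     ("ambiguous" if matches else "unknown"))

print(resolve("mk"))    # MKDIR
print(resolve("supe"))  # SUPERUSER – "su" alone would be ambiguous
                        # in this table, since SUPPORT also matches
```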
On DOS, OS/2 and Windows, different programs called from COMMAND.COM or CMD.EXE (or their internal commands) may use different syntax within the same operating system. For example:
In DOS, OS/2 and Windows, the forward slash (/) is most prevalent, although the hyphen-minus is also sometimes used. In many versions of DOS (MS-DOS/PC DOS 2.xx and higher, all versions of DR-DOS since 5.0, as well as PTS-DOS, Embedded DOS, FreeDOS and RxDOS) the switch character (sometimes abbreviated switchar or switchchar) to be used is defined by a value returned from a system call (INT 21h/AX=3700h). The default character returned by this API is /, but can be changed to a hyphen-minus on the above-mentioned systems, except for under Datalight ROM-DOS and MS-DOS/PC DOS 5.0 and higher, which always return / from this call (unless one of many available TSRs to re-enable the SwitChar feature is loaded). In some of these systems (MS-DOS/PC DOS 2.xx, DOS Plus 2.1, DR-DOS 7.02 and higher, PTS-DOS, Embedded DOS, FreeDOS and RxDOS), the setting can also be pre-configured by a SWITCHAR directive in CONFIG.SYS. General Software's Embedded DOS provides a SWITCH command for the same purpose, whereas 4DOS allows the setting to be changed via SETDOS /W:n.[26] Under DR-DOS, if the setting has been changed from /, the first directory separator \ in the display of the PROMPT parameter $G will change to a forward slash / (which is also a valid directory separator in DOS, FlexOS, 4680 OS, 4690 OS, OS/2 and Windows), thereby serving as a visual clue to indicate the change.[20] The current setting is also reflected in the built-in help screens.[20] Some versions of DR-DOS COMMAND.COM also support a PROMPT token $/ to display the current setting. COMMAND.COM since DR-DOS 7.02 also provides a pseudo-environment variable named %/% to allow portable batch jobs to be written.[27][28] Several external DR-DOS commands additionally support an environment variable %SWITCHAR% to override the system setting.
However, many programs are hardwired to use / only, rather than retrieving the switch setting before parsing command-line arguments. A very small number, mainly ports from Unix-like systems, are programmed to accept - even if the switch character is not set to it (for example netstat and ping, supplied with Microsoft Windows, will accept the /? option to list available options, and yet the list will specify the - convention).
In Unix-like systems, the ASCII hyphen-minus begins options; the new (and GNU) convention is to use two hyphens then a word (e.g. --create) to identify the option's use, while the old convention (still available as an option for frequently used options) is to use one hyphen then one letter (e.g., -c); if one hyphen is followed by two or more letters, it may mean two options are being specified, or it may mean the second and subsequent letters are a parameter (such as a filename or date) for the first option.[29]
Two hyphen-minus characters without following letters (--) may indicate that the remaining arguments should not be treated as options, which is useful for example if a file name itself begins with a hyphen, or if further arguments are meant for an inner command (e.g., sudo). Double hyphen-minuses are also sometimes used to prefix long options where more descriptive option names are used. This is a common feature of GNU software. The getopt function and program, and the getopts command, are usually used for parsing command-line options.
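These conventions can be sketched with Python's standard getopt module, which follows the GNU behavior described above; the option names (-c, --file, and so on) are made up for illustration.

```python
import getopt

# Parse Unix-style options: bundled short options ("-cv"), a long option
# taking a value ("--file"), and "--" ending option processing.
# All option names here are illustrative, not from any real tool.
def parse(argv):
    opts, args = getopt.gnu_getopt(argv, "cvf:", ["create", "verbose", "file="])
    return dict(opts), args

opts, args = parse(["-cv", "--file", "out.txt", "--", "-not-an-option"])
print(opts)   # {'-c': '', '-v': '', '--file': 'out.txt'}
print(args)   # ['-not-an-option']  -- left untouched because of "--"
```

Note how the argument after `--` is returned as a plain operand even though it begins with a hyphen.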
Unix command names, arguments and options are case-sensitive (except in a few examples, mainly where popular commands from other operating systems have been ported to Unix).
FlexOS, 4680 OS and 4690 OS use -.
CP/M typically used [.
Conversational Monitor System (CMS) uses a single left parenthesis to separate options at the end of the command from the other arguments. For example, in the following command the options indicate that the target file should be replaced if it exists, and the date and time of the source file should be retained on the copy: COPY source file a target file b (REPLACE OLDDATE)
Data General's CLI under their RDOS, AOS, etc. operating systems, as well as the version of CLI that came with their Business Basic, uses only / as the switch character, is case-insensitive, and allows local switches on some arguments to control the way they are interpreted. For example, MAC/U LIB/S A B C $LPT/L has the global option U on the macro assembler command to append user symbols, and two local switches, one to specify that LIB should be skipped on pass 2 and the other to direct listing to the printer, $LPT.
One of the criticisms of a CLI is the lack of cues to the user as to the available actions. In contrast, GUIs usually inform the user of available actions with menus, icons, or other visual cues. To overcome this limitation, many CLI programs display a usage message, typically when invoked with no arguments or with one of ?, -?, -h, -H, /?, /h, /H, /Help, -help, or --help.[20][30][31]
However, entering a program name without parameters in the hope that it will display usage help can be hazardous, as programs and scripts for which command line arguments are optional will execute without further notice.
Although desirable at least for the help parameter, programs may not support all option lead-in characters exemplified above.
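A usage message of the kind described above is typically generated from the program's declared options. Python's argparse module automates this, responding to -h and --help and synthesizing the usage line; the program name "copytool" and its options are invented for this sketch.

```python
import argparse

# argparse builds the usage message and help screen from the declared
# arguments; -h/--help is added automatically. The tool name and options
# below are hypothetical.
parser = argparse.ArgumentParser(prog="copytool",
                                 description="Copy a file (illustrative).")
parser.add_argument("source", help="file to copy")
parser.add_argument("-r", "--replace", action="store_true",
                    help="overwrite the target if it exists")

print(parser.format_usage().strip())
# usage: copytool [-h] [-r] source
```

Because the usage text is derived from the same declarations used for parsing, it cannot drift out of sync with the options the program actually accepts.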
Under DOS, where the default command-line option character can be changed from / to -, programs may query the SwitChar API in order to determine the current setting. So, if a program is not hardwired to support them all, a user may need to know the current setting even to be able to reliably request help.
If the SwitChar has been changed to - and the / character is therefore accepted as an alternative path delimiter also at the DOS command line, programs may misinterpret options like /h or /H as paths rather than help parameters.[20] However, if given as the first or only parameter, most DOS programs will, by convention, accept it as a request for help regardless of the current SwitChar setting.[20][26]
In some cases, different levels of help can be selected for a program. Some programs supporting this allow the user to give a verbosity level as an optional argument to the help parameter (as in /H:1, /H:2, etc.), or they give just a short help on help parameters given with a question mark and a longer help screen for the other help options.[32]
Depending on the program, additional or more specific help on accepted parameters is sometimes available by either providing the parameter in question as an argument to the help parameter or vice versa (as in /H:W or in /W:?, assuming /W would be another parameter supported by the program).[33][34][31][30][32][nb 1]
In a similar fashion to the help parameter, but much less commonly, some programs provide additional information about themselves (like mode, status, version, author, license or contact information) when invoked with an about parameter like -!, /!, -about, or --about.[30]
Since the ? and ! characters typically also serve other purposes at the command line, they may not be available in all scenarios; therefore, they should not be the only options to access the corresponding help information.
If more detailed help is necessary than provided by a program's built-in internal help, many systems support a dedicated external help command (or similar), which accepts a command name as calling parameter and will invoke an external help system.
In the DR-DOS family, typing /? or /H at the COMMAND.COM prompt instead of a command itself will display a dynamically generated list of available internal commands;[20] 4DOS and NDOS support the same feature by typing ? at the prompt[26] (which is also accepted by newer versions of DR-DOS COMMAND.COM); internal commands can be individually disabled or reenabled via SETDOS /I.[26] In addition to this, some newer versions of DR-DOS COMMAND.COM also accept a ?% command to display a list of available built-in pseudo-environment variables. Besides their purpose as a quick help reference, this can be used in batchjobs to query the facilities of the underlying command-line processor.[20]
Built-in usage help and man pages commonly employ a small syntax to describe the valid command form:[35][36][37][nb 2]
Notice that these characters have different meanings than when used directly in the shell. Angle brackets may be omitted when the parameter name is unlikely to be confused with a literal string.
In many areas of computing, but particularly in the command line, the space character can cause problems, as it has two distinct and incompatible functions: as part of a command or parameter, or as a parameter or name separator. Ambiguity can be prevented either by prohibiting embedded spaces in file and directory names in the first place (for example, by substituting them with underscores _), or by enclosing a name with embedded spaces between quote characters or using an escape character before the space, usually a backslash (\). For example
is ambiguous (is program name part of the program name, or two parameters?); however
and
are not ambiguous. Unix-based operating systems minimize the use of embedded spaces to minimize the need for quotes. In Microsoft Windows, one often has to use quotes because embedded spaces (such as in directory names) are common.
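The quoting and escaping rules described above can be observed with Python's shlex module, which splits a string the way a POSIX shell would:

```python
import shlex

# shlex.split applies POSIX quoting rules: a space inside quotes, or one
# preceded by a backslash, is part of the word rather than a separator.
print(shlex.split('program "file one" file_two'))
# ['program', 'file one', 'file_two']

print(shlex.split(r'program file\ one'))
# ['program', 'file one']
```

Without the quotes or backslash, the same text would split into four separate arguments, reproducing exactly the ambiguity discussed above.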
Although most users think of the shell as an interactive command interpreter, it is really a programming language in which each statement runs a command. Because it must satisfy both the interactive and programming aspects of command execution, it is a strange language, shaped as much by history as by design.
The term command-line interpreter is applied to computer programs designed to interpret a sequence of lines of text which may be entered by a user, read from a file or another kind of data stream. The context of interpretation is usually one of a given operating system or programming language.
Command-line interpreters allow users to issue various commands in a very efficient (and often terse) way. This requires the user to know the names of the commands and their parameters, and the syntax of the language that is interpreted.
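At its core, a command-line interpreter reads a line, splits it into a command name and arguments, and dispatches to a handler. A minimal sketch in Python (the commands echo and upper are invented for the example):

```python
import shlex

# A toy command interpreter: a dispatch table maps command names to
# handler functions; each input line is split into a name plus arguments.
def cmd_echo(args):
    return " ".join(args)

def cmd_upper(args):
    return " ".join(a.upper() for a in args)

COMMANDS = {"echo": cmd_echo, "upper": cmd_upper}  # hypothetical commands

def interpret(line):
    parts = shlex.split(line)       # shell-style tokenization
    if not parts:
        return ""
    name, args = parts[0], parts[1:]
    if name not in COMMANDS:
        return f"{name}: command not found"
    return COMMANDS[name](args)

print(interpret('echo hello world'))   # hello world
print(interpret('upper "a b"'))        # A B
```

A real shell adds variable expansion, globbing, pipes and job control around this same read-split-dispatch loop.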
The Unix #! mechanism and the OS/2 EXTPROC command facilitate the passing of batch files to external processors. One can use these mechanisms to write specific command processors for dedicated uses, and process external data files which reside in batch files.
Many graphical interfaces, such as the OS/2 Presentation Manager and early versions of Microsoft Windows, use command lines to call helper programs to open documents and programs. The commands are stored in the graphical shell or in files like the registry or the OS/2 OS2USER.INI file.
The earliest computers did not support interactive input/output devices, often relying on sense switches and lights to communicate with the computer operator. This was adequate for batch systems that ran one program at a time, often with the programmer acting as operator. This also had the advantage of low overhead, since lights and switches could be tested and set with one machine instruction. Later a single system console was added to allow the operator to communicate with the system.
From the 1960s onwards, user interaction with computers was primarily by means of command-line interfaces, initially on machines like the Teletype Model 33 ASR, but then on early CRT-based computer terminals such as the VT52.
All of these devices were purely text based, with no ability to display graphics or pictures.[nb 3] For business application programs, text-based menus were used, but for more general interaction the command line was the interface.
Around 1964 Louis Pouzin introduced the concept and the name shell in Multics, building on earlier, simpler facilities in the Compatible Time-Sharing System (CTSS).[39]
From the early 1970s the Unix operating system adapted the concept of a powerful command-line environment, and introduced the ability to pipe the output of one command in as input to another. Unix also had the capability to save and re-run strings of commands as shell scripts which acted like custom commands.
The command line was also the main interface for the early home computers such as the Commodore PET, Apple II and BBC Micro – almost always in the form of a BASIC interpreter. When more powerful business-oriented microcomputers arrived with CP/M and later DOS computers such as the IBM PC, the command line began to borrow some of the syntax and features of the Unix shells, such as globbing and piping of output.
The command line was first seriously challenged by the PARC GUI approach used in the 1983 Apple Lisa and the 1984 Apple Macintosh. A few computer users used GUIs such as GEOS and Windows 3.1, but the majority of IBM PC users did not replace their COMMAND.COM shell with a GUI until Windows 95 was released in 1995.[40][41]
While most non-expert computer users now use a GUI almost exclusively, more advanced users have access to powerful command-line environments:
Most command-line interpreters support scripting, to various extents. (They are, after all, interpreters of an interpreted programming language, albeit in many cases the language is unique to the particular command-line interpreter.) They will interpret scripts (variously termed shell scripts or batch files) written in the language that they interpret. Some command-line interpreters also incorporate the interpreter engines of other languages, such as REXX, in addition to their own, allowing the execution of scripts in those languages directly within the command-line interpreter itself.
Conversely, scripting programming languages, in particular those with an eval function (such as REXX, Perl, Python, Ruby or Jython), can be used to implement command-line interpreters and filters. For a few operating systems, most notably DOS, such a command interpreter provides a more flexible command-line interface than the one supplied. In other cases, such a command interpreter can present a highly customised user interface employing the user interface and input/output facilities of the language.
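Python's standard cmd module is a concrete instance of a scripting language being used to build a command-line interpreter: subclassing cmd.Cmd turns each do_* method into a command, with a help command generated from the docstrings. The Greeter class and its commands below are invented for the sketch; the interpreter is driven from strings rather than a live terminal so the example is self-contained.

```python
import cmd
import io

# cmd.Cmd supplies the read-dispatch loop; do_* methods become commands
# and "help" is built in. The class and commands here are hypothetical.
class Greeter(cmd.Cmd):
    prompt = "(greet) "
    use_rawinput = False   # read from self.stdin instead of the terminal

    def do_hello(self, arg):
        """hello NAME -- greet NAME"""
        print(f"Hello, {arg or 'world'}!", file=self.stdout)

    def do_quit(self, arg):
        """quit -- exit the interpreter"""
        return True        # returning True stops the command loop

# Drive the interpreter from a script instead of an interactive session:
out = io.StringIO()
Greeter(stdin=io.StringIO("hello Ada\nquit\n"), stdout=out).cmdloop(intro="")
print(out.getvalue())      # transcript includes "Hello, Ada!"
```

The same class run with no stdin/stdout arguments gives an interactive prompt, illustrating how little separates a scripted filter from an interactive interpreter.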
The command line provides an interface between programs as well as the user. In this sense, a command line is an alternative to a dialog box. Editors and databases present a command line, in which alternate command processors might run. On the other hand, one might have options on the command line which open a dialog box. The latest version of 'Take Command' has this feature. dBase used a dialog box to construct command lines, which could be further edited before use.
Programs like BASIC, diskpart, Edlin, and QBASIC all provide command-line interfaces, some of which use the system shell. BASIC is modeled on the default interface for 8-bit Intel computers. Calculators can be run as command-line or dialog interfaces.
Emacs provides a command-line interface in the form of its minibuffer. Commands and arguments can be entered using Emacs standard text editing support, and output is displayed in another buffer.
There are a number of text mode games, like Adventure or King's Quest 1-3, which relied on the user typing commands at the bottom of the screen. One controls the character by typing commands like 'get ring' or 'look'. The program returns a text which describes how the character sees it, or makes the action happen. The text adventure The Hitchhiker's Guide to the Galaxy, a piece of interactive fiction based on Douglas Adams's book of the same name, is a teletype-style command-line game.
The most notable of these interfaces is the standard streams interface, which allows the output of one command to be passed to the input of another. Text files can serve either purpose as well. This provides the interfaces of piping, filters and redirection. Under Unix, devices are files too, so the normal type of file for the shell used for stdin, stdout and stderr is a tty device file.
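The pipeline mechanism can be reproduced programmatically by connecting one process's stdout to another's stdin. A minimal sketch, using two small Python child processes (standing in for arbitrary producer and consumer commands) so it does not depend on any particular shell utilities:

```python
import subprocess
import sys

# Emulate the shell pipeline "producer | consumer": the producer's stdout
# file descriptor becomes the consumer's stdin. The two one-line child
# programs are stand-ins for real commands.
producer = subprocess.Popen(
    [sys.executable, "-c", "print('3\\n1\\n2')"],
    stdout=subprocess.PIPE)
consumer = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print(''.join(sorted(sys.stdin)), end='')"],
    stdin=producer.stdout, stdout=subprocess.PIPE, text=True)
producer.stdout.close()   # so the producer gets EPIPE if the consumer exits
output, _ = consumer.communicate()
print(output)             # 1, 2, 3 on separate lines
```

This is exactly what a shell does for `producer | consumer`: no intermediate file is created, and the consumer starts reading while the producer is still running.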
Another command-line interface allows a shell program to launch helper programs, either to launch documents or start a program. The command is processed internally by the shell, and then passed on to another program to launch the document. The graphical interfaces of Windows and OS/2 rely heavily on command lines passed through to other programs – console or graphical – which then usually process the command line without presenting a user console.
Programs like the OS/2 E editor and some other IBM editors can process command lines normally meant for the shell, the output being placed directly in the document window.
A web browser's URL input field can be used as a command line. It can be used to launch web apps, access browser configuration, as well as perform a search. Google, which has been called "the command line of the internet", will perform a domain-specific search when it detects search parameters in a known format.[51] This functionality is present whether the search is triggered from a browser field or on Google's website.
There are JavaScript libraries that allow writing command-line applications in the browser, as standalone web apps or as part of a bigger application.[52] An example of such a website is the CLI interface to DuckDuckGo.[53] There are also web-based SSH applications that allow access to a server's command-line interface from a browser.
Many PC video games feature a command-line interface, often referred to as a console. It is typically used by the game developers during development and by mod developers for debugging purposes, as well as for cheating or skipping parts of the game.
https://en.wikipedia.org/wiki/Command_line_interface
A programming language is a system of notation for writing computer programs.[1] Programming languages are described in terms of their syntax (form) and semantics (meaning), usually defined by a formal language. Languages usually provide features such as a type system, variables, and mechanisms for error handling. An implementation of a programming language is required in order to execute programs, namely an interpreter or a compiler. An interpreter directly executes the source code, while a compiler produces an executable program.
Computer architecture has strongly influenced the design of programming languages, with the most common type (imperative languages—which implement operations in a specified order) developed to perform well on the popular von Neumann architecture. While early programming languages were closely tied to the hardware, over time they have developed more abstraction to hide implementation details for greater simplicity.
Thousands of programming languages—often classified as imperative, functional, logic, or object-oriented—have been developed for a wide variety of uses. Many aspects of programming language design involve tradeoffs—for example, exception handling simplifies error handling, but at a performance cost. Programming language theory is the subfield of computer science that studies the design, implementation, analysis, characterization, and classification of programming languages.
Programming languages differ from natural languages in that natural languages are used for interaction between people, while programming languages are designed to allow humans to communicate instructions to machines.
The term computer language is sometimes used interchangeably with "programming language".[2] However, usage of these terms varies among authors.
In one usage, programming languages are described as a subset of computer languages.[3] Similarly, the term "computer language" may be used in contrast to the term "programming language" to describe languages used in computing but not considered programming languages. Most practical programming languages are Turing complete,[4] and as such are equivalent in what programs they can compute.
Another usage regards programming languages as theoretical constructs for programming abstract machines and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[5] John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact that they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.[6]
The first programmable computers were invented at the end of the 1940s, and with them, the first programming languages.[7] The earliest computers were programmed in first-generation programming languages (1GLs), machine language (simple instructions that could be directly executed by the processor). This code was very difficult to debug and was not portable between different computer systems.[8] In order to improve the ease of programming, assembly languages (or second-generation programming languages—2GLs) were invented, diverging from the machine language to make programs easier to understand for humans, although they did not increase portability.[9]
Initially, hardware resources were scarce and expensive, while human resources were cheaper. Therefore, cumbersome languages that were time-consuming to use but were closer to the hardware for higher efficiency were favored.[10] The introduction of high-level programming languages (third-generation programming languages—3GLs) revolutionized programming. These languages abstracted away the details of the hardware, instead being designed to express algorithms that could be understood more easily by humans. For example, arithmetic expressions could now be written in symbolic notation and later translated into machine code that the hardware could execute.[9] In 1957, Fortran (FORmula TRANslation) was invented. Often considered the first compiled high-level programming language,[9][11] Fortran has remained in use into the twenty-first century.[12]
Around 1960, the first mainframes—general purpose computers—were developed, although they could only be operated by professionals and the cost was extreme. The data and instructions were input by punch cards, meaning that no input could be added while the program was running. The languages developed at this time therefore are designed for minimal interaction.[14] After the invention of the microprocessor, computers in the 1970s became dramatically cheaper.[15] New computers also allowed more user interaction, which was supported by newer programming languages.[16]
Lisp, implemented in 1958, was the first functional programming language.[17] Unlike Fortran, it supported recursion and conditional expressions,[18] and it also introduced dynamic memory management on a heap and automatic garbage collection.[19] For the next decades, Lisp dominated artificial intelligence applications.[20] In 1978, another functional language, ML, introduced inferred types and polymorphic parameters.[16][21]
After ALGOL (ALGOrithmic Language) was released in 1958 and 1960,[22] it became the standard in computing literature for describing algorithms. Although its commercial success was limited, most popular imperative languages—including C, Pascal, Ada, C++, Java, and C#—are directly or indirectly descended from ALGOL 60.[23][12] Among its innovations adopted by later programming languages were greater portability and the first use of a context-free BNF grammar.[24] Simula, the first language to support object-oriented programming (including subtypes, dynamic dispatch, and inheritance), also descends from ALGOL and achieved commercial success.[25] C, another ALGOL descendant, has sustained popularity into the twenty-first century. C allows access to lower-level machine operations more than other contemporary languages. Its power and efficiency, generated in part with flexible pointer operations, come at the cost of making it more difficult to write correct code.[16]
Prolog, designed in 1972, was the first logic programming language, communicating with a computer using formal logic notation.[26][27] With logic programming, the programmer specifies a desired result and allows the interpreter to decide how to achieve it.[28][27]
During the 1980s, the invention of the personal computer transformed the roles for which programming languages were used.[29] New languages introduced in the 1980s included C++, a superset of C that can compile C programs but also supports classes and inheritance.[30] Ada and other new languages introduced support for concurrency.[31] The Japanese government invested heavily into the so-called fifth-generation languages that added support for concurrency to logic programming constructs, but these languages were outperformed by other concurrency-supporting languages.[32][33]
Due to the rapid growth of the Internet and the World Wide Web in the 1990s, new programming languages were introduced to support Web pages and networking.[34] Java, based on C++ and designed for increased portability across systems and security, enjoyed large-scale success because these features are essential for many Internet applications.[35][36] Another development was that of dynamically typed scripting languages—Python, JavaScript, PHP, and Ruby—designed to quickly produce small programs that coordinate existing applications. Due to their integration with HTML, they have also been used for building web pages hosted on servers.[37][38]
During the 2000s, there was a slowdown in the development of new programming languages that achieved widespread popularity.[39] One innovation was service-oriented programming, designed to exploit distributed systems whose components are connected by a network. Services are similar to objects in object-oriented programming, but run in a separate process.[40] C# and F# cross-pollinated ideas between imperative and functional programming.[41] After 2010, several new languages—Rust, Go, Swift, Zig and Carbon—competed for the performance-critical software for which C had historically been used.[42] Most of the new programming languages use static typing, while a few new languages use dynamic typing, like Ring and Julia.[43][44]
Some of the new programming languages are classified as visual programming languages, like Scratch, LabVIEW and PWCT. Also, some of these languages mix textual and visual programming usage, like Ballerina.[45][46][47][48] This trend led to the development of projects that help in developing new VPLs, like Blockly by Google.[49] Many game engines, like Unreal and Unity, added support for visual scripting too.[50][51]
Every programming language includes fundamental elements for describing data and the operations or transformations applied to them, such as adding two numbers or selecting an item from a collection. These elements are governed by syntactic and semantic rules that define their structure and meaning, respectively.
A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, some programming languages are graphical, using visual relationships between symbols to specify a program.
The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax.
The programming language syntax is usually defined using a combination of regular expressions (for lexical structure) and Backus–Naur form (for grammatical structure). Below is a simple grammar, based on Lisp:

expression ::= atom | list
atom ::= number | symbol
number ::= [+-]?['0'-'9']+
symbol ::= ['A'-'Z''a'-'z'].*
list ::= '(' expression* ')'

This grammar specifies the following:

an expression is either an atom or a list;
an atom is either a number or a symbol;
a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;
a symbol is a letter followed by zero or more of any characters (excluding whitespace); and
a list is a matched pair of parentheses, with zero or more expressions inside it.
The following are examples of well-formed token sequences in this grammar: 12345, () and (a b c232 (1)).
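A small recursive-descent recognizer makes the grammar concrete. This sketch assumes the conventional Lisp-style grammar (an expression is an atom or a parenthesized list of expressions; an atom is a number or a symbol), and for simplicity restricts symbols to letters followed by alphanumerics rather than arbitrary non-space characters:

```python
import re

# Tokenizer: signed integers, simple symbols, and parentheses.
TOKEN = re.compile(r"[+-]?\d+|[A-Za-z]\w*|[()]")

def well_formed(text):
    tokens = TOKEN.findall(text)
    # Reject input containing characters the tokenizer could not match.
    if "".join(tokens) != "".join(text.split()):
        return False
    pos = 0
    def expression():
        nonlocal pos
        if pos >= len(tokens):
            return False
        if tokens[pos] == "(":          # list: '(' expression* ')'
            pos += 1
            while pos < len(tokens) and tokens[pos] != ")":
                if not expression():
                    return False
            if pos >= len(tokens):
                return False            # unclosed list
            pos += 1
            return True
        if tokens[pos] == ")":
            return False                # ')' cannot start an expression
        pos += 1                        # number or symbol: an atom
        return True
    ok = expression()
    return ok and pos == len(tokens)    # exactly one whole expression

for sample in ["12345", "()", "(a b c232 (1))"]:
    print(sample, well_formed(sample))  # all three are well formed
```

The recognizer accepts the three sample sequences above and rejects malformed input such as an unclosed list.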
Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed per the language's rules, and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.
Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence, or the sentence may be false:
The following C language fragment is syntactically correct, but performs operations that are not semantically defined (the operation *p >> 4 has no meaning for a value having a complex type and p->im is not defined because the value of p is the null pointer):

complex *p = NULL;
complex abs_p = sqrt(*p >> 4 + p->im);
If the type declaration on the first line were omitted, the program would trigger an error on the undefined variable p during compilation. However, the program would still be syntactically correct, since type declarations provide only semantic information.
The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars.[52] Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis an undecidable problem, and generally blur the distinction between parsing and execution.[53] In contrast to Lisp's macro system and Perl's BEGIN blocks, which may contain general computations, C macros are merely string replacements and do not require code execution.[54]
The term semantics refers to the meaning of languages, as opposed to their form (syntax).
Static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[1] For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that every identifier is declared before it is used (in languages that require such declarations) or that the labels on the arms of a case statement are distinct.[55] Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding an integer to a function name), or that subroutine calls have the appropriate number and type of arguments, can be enforced by defining them as rules in a logic called a type system. Other forms of static analyses like data flow analysis may also be part of static semantics. Programming languages such as Java and C# have definite assignment analysis, a form of data flow analysis, as part of their respective static semantics.[56]
Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements. The dynamic semantics (also known as execution semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research goes into formal semantics of programming languages, which allows execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia.[56]
A data type is a set of allowable values and operations that can be performed on these values.[57] Each programming language's type system defines which data types exist, the type of an expression, and how type equivalence and type compatibility function in the language.[58]
According to type theory, a language is fully typed if the specification of every operation defines the types of data to which the operation is applicable.[59] In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, generally sequences of bits of various lengths.[59] In practice, while few languages are fully typed, most offer a degree of typing.[59]
Because different types (such as integers and floats) represent values differently, unexpected results will occur if one type is used when another is expected. Type checking will flag this error, usually at compile time (runtime type checking is more costly).[60] With strong typing, type errors can always be detected unless variables are explicitly cast to a different type. Weak typing occurs when languages allow implicit casting—for example, to enable operations between variables of different types without the programmer making an explicit type conversion. The more cases in which this type coercion is allowed, the fewer type errors can be detected.[61]
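Python illustrates strong typing with runtime checking: mixing incompatible types raises an error rather than silently coercing, while an explicit cast is always allowed.

```python
# Python refuses implicit string/integer coercion (strong typing)...
try:
    result = "1" + 1
except TypeError as exc:
    print("type error:", exc)

# ...but permits the same operation once the programmer casts explicitly.
print("1" + str(1))   # string concatenation: 11
print(int("1") + 1)   # integer addition: 2
```

A weakly typed language would instead pick one of these two interpretations automatically, which is convenient but lets this class of mistake go undetected.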
Early programming languages often supported only built-in, numeric types such as the integer (signed and unsigned) and floating point (to support operations on real numbers that are not integers). Most programming languages support multiple sizes of floats (often called float and double) and integers depending on the size and precision required by the programmer. Storing an integer in a type that is too small to represent it leads to integer overflow. The most common way of representing negative numbers with signed types is two's complement, although one's complement is also used.[62] Other common types include Boolean—which is either true or false—and character—traditionally one byte, sufficient to represent all ASCII characters.[63]
Arrays are a data type whose elements, in many languages, must consist of a single type of fixed length. Other languages define arrays as references to data stored elsewhere and support elements of varying types.[64] Depending on the programming language, sequences of multiple characters, called strings, may be supported as arrays of characters or their own primitive type.[65] Strings may be of fixed or variable length; the latter enables greater flexibility at the cost of increased storage space and more complexity.[66] Other data types that may be supported include lists,[67] associative (unordered) arrays accessed via keys,[68] records in which data is mapped to names in an ordered structure,[69] and tuples—similar to records but without names for data fields.[70] Pointers store memory addresses, typically referencing locations on the heap where other data is stored.[71]
The simplest user-defined type is an ordinal type, often called an enumeration, whose values can be mapped onto the set of positive integers.[72] Since the mid-1980s, most programming languages have also supported abstract data types, in which the representation of the data and the operations on it are hidden from the user, who can only access an interface.[73] The benefits of data abstraction can include increased reliability, reduced complexity, less potential for name collision, and allowing the underlying data structure to be changed without the client needing to alter its code.[74]
In static typing, all expressions have their types determined before a program executes, typically at compile time.[59] Most widely used, statically typed programming languages require the types of variables to be specified explicitly. In some languages, types are implicit; one form of this is when the compiler can infer types based on context. The downside of implicit typing is the potential for errors to go undetected.[75] Complete type inference has traditionally been associated with functional languages such as Haskell and ML.[76]
With dynamic typing, the type is attached not to the variable but only to the value encoded in it. A single variable can be reused for a value of a different type. Although this provides more flexibility to the programmer, it comes at the cost of lower reliability and less ability for the programming language to check for errors.[77] Some languages allow variables of a union type to which any type of value can be assigned, as an exception to their usual static typing rules.[78]
In computing, multiple instructions can be executed simultaneously. Many programming languages support instruction-level and subprogram-level concurrency.[79] By the twenty-first century, additional processing power on computers was increasingly coming from the use of additional processors, which requires programmers to design software that makes use of multiple processors simultaneously to achieve improved performance.[80] Interpreted languages such as Python and Ruby do not support the concurrent use of multiple processors.[81] Other programming languages do support managing data shared between different threads by controlling the order of execution of key instructions via the use of semaphores, controlling access to shared data via monitors, or enabling message passing between threads.[82]
Many programming languages include exception handlers, sections of code triggered by runtime errors that can deal with them in two main ways:[83]
Some programming languages support dedicating a block of code to run regardless of whether an exception occurs before the code is reached; this is called finalization.[84]
There is a tradeoff between increased ability to handle exceptions and reduced performance.[85] For example, even though array index errors are common,[86] C does not check for them, for performance reasons.[85] Although programmers can write code to catch user-defined exceptions, this can clutter a program. Standard libraries in some languages, such as C, use their return values to indicate an exception.[87] Some languages and their compilers have the option of turning error-handling capability on and off, either temporarily or permanently.[88]
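Shell scripting offers a concrete analogue of both conventions discussed above: commands signal errors through their exit status (the return-value convention), and a trap on EXIT approximates finalization. A minimal sketch (the file contents are illustrative):

```shell
#!/bin/sh
# Return-value convention: test the command's exit status explicitly.
tmp=$(mktemp) || { echo "could not create temp file" >&2; exit 1; }

# Finalization: this trap runs when the script exits, whether or not an
# error occurred first, so the temporary file is always cleaned up.
trap 'rm -f "$tmp"' EXIT

echo "working data" > "$tmp"
grep -q "working" "$tmp" || echo "unexpected contents" >&2
```

Here error handling is entirely opt-in: a script that never checks `$?` simply carries on after a failure, which is one reason such checks can clutter shell programs.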
One of the most important influences on programming language design has been computer architecture. Imperative languages, the most commonly used type, were designed to perform well on von Neumann architecture, the most common computer architecture.[89] In von Neumann architecture, the memory stores both data and instructions, while the CPU that performs instructions on data is separate, and data must be piped back and forth to the CPU. The central elements in these languages are variables, assignment, and iteration, which is more efficient than recursion on these machines.[90]
Many programming languages have been designed from scratch, altered to meet new needs, and combined with other languages. Many have eventually fallen into disuse.[citation needed]The birth of programming languages in the 1950s was stimulated by the desire to make a universal programming language suitable for all machines and uses, avoiding the need to write code for different computers.[91]By the early 1960s, the idea of a universal language was rejected due to the differing requirements of the variety of purposes for which code was written.[92]
Desirable qualities of programming languages include readability, writability, and reliability.[93] These features can reduce the cost of training programmers in a language, the time needed to write and maintain programs in the language, and the cost of compiling the code, and can increase runtime performance.[94]
Programming language design often involves tradeoffs.[104]For example, features to improve reliability typically come at the cost of performance.[105]Increased expressivity due to a large number of operators makes writing code easier but comes at the cost of readability.[105]
Natural-language programming has been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger W. Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs.[106] Alan Perlis was similarly dismissive of the idea.[107]
The specification of a programming language is an artifact that the language users and the implementors can use to agree upon whether a piece of source code is a valid program in that language, and if so what its behavior shall be.
A programming language specification can take several forms, including the following:
An implementation of a programming language is the conversion of a program into machine code that can be executed by the hardware. The machine code can then be executed with the help of the operating system.[111] The most common form of interpretation in production code is by a compiler, which translates the source code, via an intermediate-level language, into machine code, known as an executable. Once the program is compiled, it will run more quickly than with other implementation methods.[112] Some compilers are able to provide further optimization to reduce memory or computation usage when the executable runs, at the cost of increased compilation time.[113]
Another implementation method is to run the program with an interpreter, which translates each line of software into machine code just before it executes. Although it can make debugging easier, the downside of interpretation is that it runs 10 to 100 times slower than a compiled executable.[114] Hybrid interpretation methods provide some of the benefits of compilation and some of the benefits of interpretation via partial compilation. One form this takes is just-in-time compilation, in which the software is compiled ahead of time into an intermediate language, and then into machine code immediately before execution.[115]
Although most of the most commonly used programming languages have fully open specifications and implementations, many programming languages exist only as proprietary programming languages with the implementation available only from a single vendor, which may claim that such a proprietary language is its intellectual property. Proprietary programming languages are commonly domain-specific languages or internal scripting languages for a single product; some proprietary languages are used only internally within a vendor, while others are available to external users.[citation needed]
Some programming languages exist on the border between proprietary and open; for example, Oracle Corporation asserts proprietary rights to some aspects of the Java programming language,[116] and Microsoft's C# programming language, which has open implementations of most parts of the system, also has the Common Language Runtime (CLR) as a closed environment.[117]
Many proprietary languages are widely used in spite of their proprietary nature; examples include MATLAB, VBScript, and Wolfram Language. Some languages may make the transition from closed to open; for example, Erlang was originally Ericsson's internal programming language.[118]
Open source programming languages are particularly helpful for open science applications, enhancing the capacity for replication and code sharing.[119]
Thousands of different programming languages have been created, mainly in the computing field.[120]Individual software projects commonly use five programming languages or more.[121]
Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness. When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas about an algorithm can be communicated to humans without the precision required for execution by using pseudocode, which interleaves natural language with code written in a programming language.
A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives).[122] Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment.
Programs for a computer might be executed in a batch process without any human interaction, or a user might type commands in an interactive session of an interpreter. In this case the "commands" are simply programs, whose execution is chained together. When a language can run its commands through an interpreter (such as a Unix shell or other command-line interface), without compiling, it is called a scripting language.[123]
Determining which is the most widely used programming language is difficult since the definition of usage varies by context. One language may occupy the greater number of programmer hours, a different one may have more lines of code, and a third may consume the most CPU time. Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes;[124][125] Fortran in scientific and engineering applications; Ada in aerospace, transportation, military, real-time, and embedded applications; and C in embedded applications and operating systems. Other languages are regularly used to write many different kinds of applications.
Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed:
Combining and averaging information from various internet sites, stackify.com reported the ten most popular programming languages (in descending order of overall popularity) as Java, C, C++, Python, C#, JavaScript, VB .NET, R, PHP, and MATLAB.[129]
As of June 2024, the top five programming languages as measured by the TIOBE index are Python, C++, C, Java, and C#. TIOBE provides a list of the top 100 programming languages according to popularity and updates this list every month.[130]
A dialect of a programming language or a data exchange language is a (relatively small) variation or extension of the language that does not change its intrinsic nature. With languages such as Scheme and Forth, standards may be considered insufficient, inadequate, or illegitimate by implementors, so often they will deviate from the standard, making a new dialect. In other cases, a dialect is created for use in a domain-specific language, often a subset. In the Lisp world, most languages that use basic S-expression syntax and Lisp-like semantics are considered Lisp dialects, although they vary wildly, as do, say, Racket and Clojure. As it is common for one language to have several dialects, it can become quite difficult for an inexperienced programmer to find the right documentation. The BASIC language has many dialects.
Programming languages are often placed into four main categories: imperative, functional, logic, and object-oriented.[131]
Although markup languages are not programming languages, some have extensions that support limited programming. Additionally, there are special-purpose languages that are not easily compared to other programming languages.[135]
|
https://en.wikipedia.org/wiki/Programming_language
|
This is a list of the shell commands of the most recent version of the Portable Operating System Interface (POSIX) – IEEE Std 1003.1-2024 – which is part of the Single UNIX Specification (SUS). These commands are implemented in many shells on modern Unix, Unix-like, and other operating systems. This list does not cover commands for all versions of Unix and Unix-like shells, nor other versions of POSIX.
|
https://en.wikipedia.org/wiki/List_of_POSIX_commands
|
The restricted shell is a Unix shell that restricts some of the capabilities available to an interactive user session, or to a shell script, running within it. It is intended to provide an additional layer of security, but it is insufficient to allow execution of entirely untrusted software. A restricted mode of operation is found in the original Bourne shell[1] and its later counterpart Bash,[2] and in the KornShell.[3] In some cases a restricted shell is used in conjunction with a chroot jail, in a further attempt to limit access to the system as a whole.
The restricted mode of the Bourne shell, sh, and its POSIX workalikes is used when the interpreter is invoked in one of the following ways:
The restricted mode of Bash is used when Bash is invoked in one of the following ways:
Similarly KornShell's restricted mode is produced by invoking it thus:
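The invocations follow a common pattern across these shells. The forms below are the usual ones; the exact set varies by system, so consult the local manual pages:

```shell
# Bourne shell and POSIX workalikes: the -r flag requests restricted mode
# (historically, invoking the shell through a link named rsh did the same).
sh -r -c 'echo in a restricted sh' 2>/dev/null || echo 'this sh has no -r'

# Bash: -r, --restricted, or invocation under the name rbash all work.
bash --restricted -c 'echo in restricted bash'

# In restricted mode, operations such as changing directory are refused:
bash --restricted -c 'cd /' 2>/dev/null || echo 'cd refused, as expected'

# KornShell: ksh -r, or invocation under the name rksh.
```

The `-c` forms above are used only so the examples run non-interactively; a plain `bash -r` starts an interactive restricted session.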
For some systems (e.g., CentOS), the invocation through rbash is not enabled by default, and the user obtains a "command not found" error if it is invoked directly, or a login failure if the /etc/passwd file indicates /bin/rbash as the user's shell.
It suffices to create a link named rbash pointing directly to bash. Though this invokes Bash directly, without the -r or --restricted options, Bash recognizes that it was invoked through rbash and comes up as a restricted shell.
This can be accomplished with the following simple commands (executed as root, either logged in as user root or using sudo):
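A sketch of the idea follows. For illustration the link is created in a scratch directory; on a real system it would be placed, as root, in a directory on every user's PATH (such as /bin):

```shell
#!/bin/sh
# Create a link named rbash pointing at bash; bash inspects the name it
# was invoked under and enters restricted mode when that name is rbash.
dir=$(mktemp -d)
ln -s "$(command -v bash)" "$dir/rbash"

# Demonstrate that the link really comes up restricted: cd is refused.
"$dir/rbash" -c 'cd /' 2>/dev/null \
    || echo 'cd refused: the rbash link came up restricted'

rm -rf "$dir"
```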
The following operations are typically not permitted in a restricted shell:
- changing the working directory with cd
- setting or unsetting the value of the PATH or SHELL variables
- specifying command names containing a slash (/)
- redirecting output using the >, >>, <>, or >& operators
Bash adds further restrictions, including:[2]
Restrictions in the restricted KornShell are much the same as those in the restricted Bourne shell.[4]
The restricted shell is not secure. A user can break out of the restricted environment by running a program that features a shell function. The following is an example of the shell function in vi being used to escape from the restricted shell:
Or by simply starting a new unrestricted shell, if one is in the PATH, as demonstrated here:
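In outline, the two escapes look like this (the vi keystrokes are what one would type interactively inside the restricted session):

```shell
# Escape via vi's shell function:
#   vi
#   :set shell=/bin/sh
#   :shell
#
# Escape by starting a new, unrestricted shell found on the PATH.
# A restricted bash may not run commands containing a slash, but a bare
# 'bash' is allowed -- and the child shell it starts is not restricted:
bash --restricted -c 'bash -c "cd / && echo escaped"'
```

This is why a restricted shell is usually combined with a tightly controlled PATH (and often a chroot jail): any reachable program that can spawn a shell defeats the restrictions.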
Beyond the restricted modes of usual shells, specialized restricted shell programs include:
|
https://en.wikipedia.org/wiki/Restricted_shell
|
A shell script is a computer program designed to be run by a Unix shell, a command-line interpreter.[1] The various dialects of shell scripts are considered to be command languages. Typical operations performed by shell scripts include file manipulation, program execution, and printing text. A script which sets up the environment, runs the program, and does any necessary cleanup or logging is called a wrapper.
The term is also used more generally to mean the automated mode of running an operating system shell; each operating system uses a particular name for these functions, including batch files (MS-DOS–Windows 95 stream, OS/2), command procedures (VMS), and shell scripts (Windows NT stream and third-party derivatives like 4NT—article is at cmd.exe); mainframe operating systems are associated with a number of additional terms.
Shells commonly present in Unix and Unix-like systems include the Korn shell, the Bourne shell, and GNU Bash. While a Unix operating system may have a different default shell, such as Zsh on macOS, these shells are typically present for backwards compatibility.
Comments are ignored by the shell. They typically begin with the hash symbol (#) and continue until the end of the line.[2]
The shebang, or hash-bang, is a special kind of comment which the system uses to determine what interpreter to use to execute the file. The shebang must be the first line of the file, and must start with "#!".[2] In Unix-like operating systems, the characters following the "#!" prefix are interpreted as a path to an executable program that will interpret the script.[3]
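For example, a two-line script whose first line names the interpreter:

```shell
#!/bin/sh
# The line above is the shebang; the kernel uses it to select /bin/sh
# as the interpreter for this file.
echo "Hello from a shell script"
```

Saved to a file and marked executable with `chmod +x`, the script can then be run by name and the named interpreter is invoked on it automatically.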
A shell script can provide a convenient variation of a system command where special environment settings, command options, or post-processing apply automatically, but in a way that allows the new script to still act as a fully normal Unix command.
One example would be to create a version of ls, the command to list files, giving it a shorter command name of l, which would normally be saved in a user's bin directory as /home/username/bin/l, with a default set of command options pre-supplied.
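A plausible version of such a script, with the option letters inferred from the behaviour described below (-F for file format indicators, -C for columns, -a for all files, -s for sizes in blocks):

```shell
#!/bin/sh
# Short listing: fixed collation, classify entries, columns, all files,
# block sizes; pass any user-supplied arguments straight through to ls.
LC_COLLATE=C ls -F -C -a -s "$@"
```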
Here, the first line uses a shebang to indicate which interpreter should execute the rest of the script, and the second line makes a listing with options for file format indicators, columns, all files (none omitted), and a size in blocks. The LC_COLLATE=C setting sets the default collation order so as not to fold upper and lower case together and not to intermix dotfiles with normal filenames (a side effect of ignoring punctuation in the names; dotfiles are usually only shown if an option like -a is used), and the "$@" causes any parameters given to l to pass through as parameters to ls, so that all of the normal options and other syntax known to ls can still be used.
The user could then simply use l for the most commonly used short listing.
Another example of a shell script that could be used as a shortcut would be to print a list of all the files and directories within a given directory.
In this case, the shell script would start with its normal starting line of #!/bin/sh. Following this, the script executes the command clear, which clears the terminal of all text, before going to the next line. The following line provides the main function of the script. The ls -al command lists the files and directories that are in the directory from which the script is being run. The ls command attributes could be changed to reflect the needs of the user.
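Per that description, a minimal version of the script would be:

```shell
#!/bin/sh
# Clear the terminal, then produce a long listing including dotfiles.
clear
ls -al
```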
Shell scripts allow several commands that would be entered manually at a command-line interface to be executed automatically, without having to wait for a user to trigger each stage of the sequence. For example, in a directory with three C source code files, rather than manually running the four commands required to build the final program from them, one could instead create a script for POSIX-compliant shells, here named build and kept in the directory with them, which would compile them automatically:
The script would allow a user to save the file being edited, pause the editor, and then just run ./build to create the updated program, test it, and then return to the editor. Since the 1980s or so, however, scripts of this type have been replaced with utilities like make, which are specialized for building programs.
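A self-contained sketch of such a build script follows. The three source files and their contents are hypothetical, created here only so the four build commands have something to act on, and cc is assumed to be the system C compiler:

```shell
#!/bin/sh
command -v cc >/dev/null 2>&1 || { echo 'cc not found'; exit 0; }

# Demo setup: three toy C source files standing in for a real project.
dir=$(mktemp -d); cd "$dir"
cat > util.c <<'EOF'
int answer(void) { return 42; }
EOF
cat > io.c <<'EOF'
int answer(void);
int report(void) { return answer(); }
EOF
cat > main.c <<'EOF'
#include <stdio.h>
int report(void);
int main(void) { printf("%d\n", report()); return 0; }
EOF

# The 'build' script itself: compile each unit, then link the objects.
cc -c main.c
cc -c util.c
cc -c io.c
cc -o program main.o util.o io.o

./program        # prints 42
cd / && rm -rf "$dir"
```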
Simple batch jobs are not unusual for isolated tasks, but using shell loops, tests, and variables provides much more flexibility to users. A POSIX sh script to convert JPEG images to PNG images, where the image names are provided on the command line—possibly via wildcards—instead of each being listed within the script, can be created with this file, typically saved in a location like /home/username/bin/jpg2png:
The jpg2png command can then be run on an entire directory full of JPEG images with just /home/username/bin/jpg2png *.jpg
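A plausible POSIX sh version of such a script, assuming the ImageMagick convert utility is installed:

```shell
#!/bin/sh
# For each JPEG named on the command line, produce a PNG alongside it.
for jpg in "$@"; do
    png=${jpg%.jpg}.png          # replace the .jpg suffix with .png
    convert "$jpg" "$png"        # ImageMagick converter (assumed installed)
done
```

The `${jpg%.jpg}` parameter expansion strips the shortest matching suffix, which is what lets a single loop body handle every name the wildcard expanded to.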
Many modern shells also supply various features usually found only in more sophisticated general-purpose programming languages, such as control-flow constructs, variables, comments, arrays, subroutines, and so on. With these sorts of features available, it is possible to write reasonably sophisticated applications as shell scripts. However, they are still limited by the fact that most shell languages have little or no support for data typing systems, classes, threading, complex math, and other common full-language features, and are also generally much slower than compiled code or interpreted languages written with speed as a performance goal.
The standard Unix tools sed and awk provide extra capabilities for shell programming; Perl can also be embedded in shell scripts, as can other scripting languages like Tcl. Perl and Tcl come with graphics toolkits as well.
Scripting languages commonly found on UNIX, Linux, and POSIX-compliant operating system installations include:
The C and Tcl shells have syntax quite similar to that of the corresponding programming languages, and the Korn shells and Bash are developments of the Bourne shell, which is based on the ALGOL language with elements of a number of others added as well.[4] On the other hand, the various shells plus tools like awk, sed, and grep, along with BASIC, Lisp, C, and so forth, contributed to the Perl programming language.[5]
Other shells that may be available on a machine or for download and/or purchase include:
Related programs such as shells based on Python, Ruby, C, Java, Perl, Pascal, Rexx, etc. in various forms are also widely available. Another somewhat common shell is Old shell (osh), whose manual page states it "is an enhanced, backward-compatible port of the standard command interpreter from Sixth Edition UNIX."[6]
So-called remote shells, such as rsh and ssh, are really just tools to run a more complex shell on a remote system, and have no shell-like characteristics themselves.
Many powerful scripting languages have been introduced for tasks that are too large or complex to be comfortably handled with ordinary shell scripts, but for which the advantages of a script are desirable and the development overhead of a full-blown, compiled programming language would be disadvantageous. The specifics of what separates scripting languages from high-level programming languages is a frequent source of debate, but, generally speaking, a scripting language is one which requires an interpreter.
Shell scripts often serve as an initial stage in software development and are often converted later to a different underlying implementation, most commonly Perl, Python, or C. The interpreter directive allows the implementation detail to be fully hidden inside the script, rather than being exposed as a filename extension, and provides for seamless reimplementation in different languages with no impact on end users.
While files with the ".sh" file extension are usually a shell script of some kind, most shell scripts do not have any filename extension.[7][8][9][10]
Perhaps the biggest advantage of writing a shell script is that the commands and syntax are exactly the same as those directly entered at the command-line. The programmer does not have to switch to a totally different syntax, as they would if the script were written in a different language, or if a compiled language were used.
Often, writing a shell script is much quicker than writing the equivalent code in other programming languages. The many advantages include easy program or file selection, quick start, and interactive debugging. A shell script can be used to provide a sequencing and decision-making linkage around existing programs, and for moderately sized scripts the absence of a compilation step is an advantage. Interpretive running makes it easy to write debugging code into a script and re-run it to detect and fix bugs. Non-expert users can use scripting to tailor the behavior of programs, and shell scripting provides some limited scope for multiprocessing.
On the other hand, shell scripting is prone to costly errors. Inadvertent typing errors such as rm -rf * / (instead of the intended rm -rf */) are folklore in the Unix community; a single extra space converts the command from one that deletes all subdirectories contained in the current directory to one which deletes everything from the file system's root directory. Similar problems can transform cp and mv into dangerous weapons, and misuse of the > redirect can delete the contents of a file. This is made more problematic by the fact that many UNIX commands differ in name by only one letter: cp, cd, dd, df, etc.
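The difference is easy to see safely by letting echo show what the shell would pass to rm; the scratch directory layout here is made up for illustration:

```shell
#!/bin/sh
# Build a scratch directory with one file and two subdirectories.
dir=$(mktemp -d)
cd "$dir"
mkdir sub1 sub2
touch file1

echo 'rm -rf */ would act on:' */    # only the subdirectories
echo 'rm -rf * / would act on:' * /  # every entry here, plus / itself

cd / && rm -rf "$dir"
```

The `*/` pattern matches only names ending in a directory separator, whereas `* /` is two separate arguments: the glob `*` (everything in the current directory) followed by the literal root directory.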
Another significant disadvantage is the slow execution speed and the need to launch a new process for almost every shell command executed. When a script's job can be accomplished by setting up a pipeline in which efficient filter commands perform most of the work, the slowdown is mitigated, but a complex script is typically several orders of magnitude slower than a conventional compiled program that performs an equivalent task.
There are also compatibility problems between different platforms. Larry Wall, creator of Perl, famously wrote that "It's easier to port a shell than a shell script."[11]
Similarly, more complex scripts can run into the limitations of the shell scripting language itself; the limits make it difficult to write quality code, and extensions by various shells to ameliorate problems with the original shell language can make problems worse.[12]
Many disadvantages of using some script languages are caused by design flaws within the language syntax or implementation, and are not necessarily imposed by the use of a text-based command line; there are a number of shells which use other shell programming languages or even full-fledged languages like Scsh (which uses Scheme).
Different scripting languages may share many common elements, largely due to being POSIX based, and some shells offer modes to emulate different shells. This allows a shell script written in one scripting language to be adapted into another.
One example of this is Bash, which offers the same grammar and syntax as the Bourne shell and which also provides a POSIX-compliant mode.[13] As such, most shell scripts written for the Bourne shell can be run in Bash, but the reverse may not be true, since Bash has extensions which are not present in the Bourne shell. Such features are known as bashisms.[14]
Interoperability software such as Cygwin, the MKS Toolkit, Interix (which is available in the Microsoft Windows Services for UNIX), Hamilton C shell, UWIN (AT&T Unix for Windows), and others allow Unix shell programs to be run on machines running Windows NT and its successors, with some loss of functionality on the MS-DOS–Windows 95 branch, as well as earlier MKS Toolkit versions for OS/2. At least three DCL implementations for Windows-type operating systems—in addition to XLNT, a multiple-use scripting language package which is used with the command shell, Windows Script Host, and CGI programming—are available for these systems as well. Mac OS X and subsequent versions are Unix-like as well.[15]
In addition to the aforementioned tools, some POSIX and OS/2 functionality can be used with the corresponding environmental subsystems of the Windows NT operating system series up to Windows 2000 as well. A third, 16-bit subsystem, often called the MS-DOS subsystem, uses the Command.com provided with these operating systems to run the aforementioned MS-DOS batch files.[16]
The console alternatives 4DOS, 4OS2, FreeDOS, Peter Norton's NDOS, and 4NT / Take Command—which add functionality to the Windows NT-style cmd.exe, MS-DOS/Windows 95 batch files (run by Command.com), OS/2's cmd.exe, and 4NT respectively—are similar to the shells that they enhance and are more integrated with the Windows Script Host, which comes with three pre-installed engines, VBScript, JScript, and VBA, and to which numerous third-party engines can be added, with Rexx, Perl, Python, Ruby, and Tcl having pre-defined functions in 4NT and related programs. PC DOS is quite similar to MS-DOS, whilst DR DOS is more different. Earlier versions of Windows NT are able to run contemporary versions of 4OS2 via the OS/2 subsystem.
Scripting languages are, by definition, able to be extended; for example, MS-DOS/Windows 95/98 and Windows NT-type systems allow shell/batch programs to call tools like KiXtart, QBasic, various BASIC, Rexx, Perl, and Python implementations, and the Windows Script Host and its installed engines. On Unix and other POSIX-compliant systems, awk and sed are used to extend the string and numeric processing ability of shell scripts. Tcl, Perl, Rexx, and Python have graphics toolkits and can be used to code functions and procedures for shell scripts which pose a speed bottleneck (C, Fortran, assembly language, etc. are much faster still) and to add functionality not available in the shell language, such as sockets and other connectivity functions, heavy-duty text processing, working with numbers if the calling script does not have those abilities, self-writing and self-modifying code, techniques like recursion, direct memory access, various types of sorting, and more, which are difficult or impossible in the main script. Visual Basic for Applications and VBScript can be used to control and communicate with such things as spreadsheets, databases, scriptable programs of all types, telecommunications software, development tools, graphics tools, and other software which can be accessed through the Component Object Model.
|
https://en.wikipedia.org/wiki/Shell_script
|
Exceptionality may refer to:
|
https://en.wikipedia.org/wiki/Exceptionality_(disambiguation)
|
Exemption may refer to:
|
https://en.wikipedia.org/wiki/Exemption_(disambiguation)
|
Accept often refers to:
Accept can also refer to:
|
https://en.wikipedia.org/wiki/Accept_(disambiguation)
|
In computer operating systems, demand paging (as opposed to anticipatory paging) is a method of virtual memory management. In a system that uses demand paging, the operating system copies a disk page into physical memory only when an attempt is made to access it and that page is not already in memory (i.e., if a page fault occurs). It follows that a process begins execution with none of its pages in physical memory, and triggers many page faults until most of its working set of pages is present in physical memory. This is an example of a lazy loading technique.
Demand paging only brings pages into memory when an executing process demands them. This is often referred to as lazy loading, as only those pages demanded by the process are swapped from secondary storage to main memory. Contrast this with pure swapping, where all memory for a process is swapped from secondary storage to main memory when the process starts up or resumes execution.
Commonly, a memory management unit is used to achieve this. The memory management unit maps logical memory to physical memory. Entries in the memory management unit include a bit that indicates whether a page is valid or invalid. A valid page is one that currently resides in main memory. An invalid page is one that currently resides in secondary memory. When a process tries to access a page, the following steps are generally followed:
1. The process attempts to access the page.
2. If the page is valid (in memory), processing of the instruction continues as normal.
3. If the page is invalid, a page-fault trap occurs.
4. The operating system checks whether the memory reference is a valid reference to a location in secondary memory. If not, the process is terminated (illegal memory access); otherwise, the required page must be paged in.
5. A disk operation is scheduled to read the desired page into main memory, and the page table is updated to mark the page valid.
6. The instruction that was interrupted by the trap is restarted.
Demand paging, as opposed to loading all pages immediately:
|
https://en.wikipedia.org/wiki/Demand_paging
|
In DOS memory management, expanded memory is a system of bank switching that provided additional memory to DOS programs beyond the limit of conventional memory (640 KiB).
Expanded memory is an umbrella term for several incompatible technology variants. The most widely used variant was the Expanded Memory Specification (EMS), which was developed jointly by Lotus Software, Intel, and Microsoft, so the specification is sometimes referred to as "LIM EMS". LIM EMS had three versions: 3.0, 3.2, and 4.0. The first widely implemented version was EMS 3.2, which supported up to 8 MiB of expanded memory and used parts of the address space normally dedicated to communication with peripherals (upper memory) to map portions of the expanded memory. EEMS, an expanded-memory management standard competing with LIM EMS 3.x, was developed by AST Research, Quadram, and Ashton-Tate ("AQA"); it could map any area of the lower 1 MiB. EEMS was ultimately incorporated into LIM EMS 4.0, which supported up to 32 MiB of expanded memory and provided some support for DOS multitasking as well. IBM, however, created its own expanded-memory standard, called XMA.
The use of expanded memory became common with games and business programs such asLotus 1-2-3in the late 1980s through the mid-1990s, but its use declined as users switched from DOS toprotected-modeoperating systems such asLinux,IBM OS/2, andMicrosoft Windows.
The 8088 processor of the IBM PC and IBM PC/XT can address one megabyte (MiB, or 2^20 bytes) of memory. It inherited this limit from the 20-bit external address bus (and overall memory addressing architecture) of the Intel 8086. The designers of the PC allocated the lower 640 KiB (655,360 bytes) of address space for read-write program memory (RAM), called conventional memory; the remaining 384 KiB of address space is reserved for uses such as the system BIOS, video memory, and memory on expansion peripheral boards.
Even though the IBM PC AT, introduced in 1984, uses the 80286 chip that can address up to 16 MiB of RAM as extended memory, it can only do so in protected mode. The scarcity of software compatible with protected mode (no standard DOS applications can run in it) meant that the market was still open for another solution.[1]
To make more memory accessible, a bank switching scheme was devised, in which only selected parts of the additional memory are accessible at any given time. Originally, a single 64 KiB (2^16 bytes) window of memory, called a page frame, was used; later this was made more flexible. Programs had to be written in a specific way to access expanded memory. The window between conventional memory and expanded memory can be adjusted to access different locations within the expanded memory.
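The window mechanism can be sketched as follows; the sizes match the EMS 3.2 layout described in this article (a 64 KiB page frame divided into four 16 KiB pages), while the function names are invented for illustration:

```python
# Sketch of EMS-style bank switching: a small fixed window ("page frame")
# in the CPU's address space is remapped onto a much larger expanded store.
PAGE = 16 * 1024                 # one 16 KiB EMS page
FRAME_SLOTS = 4                  # 4 slots = one 64 KiB page frame

expanded = [bytearray(PAGE) for _ in range(512)]   # 8 MiB of expanded memory
frame_map = [None] * FRAME_SLOTS                   # EMS page visible in each slot

def map_page(slot, ems_page):
    """Reprogram the board's mapping register for one 16 KiB slot."""
    frame_map[slot] = ems_page

def read(offset_in_frame):
    """What a real-mode program sees when it reads inside the page frame."""
    slot, off = divmod(offset_in_frame, PAGE)
    return expanded[frame_map[slot]][off]

expanded[300][0] = 42            # data lives far beyond the 1 MiB limit
map_page(0, 300)                 # bank-switch that page into the window...
print(read(0))                   # ...and it becomes visible: 42
```

The program never addresses more than 64 KiB at once; it reaches the rest of the 8 MiB by remapping the window, which is exactly why EMS code had to be written specifically for this scheme.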
A first attempt to use a bank switching technique was made by Tall Tree Systems with its JRAM boards,[2] but these did not catch on.[1] (Tall Tree Systems later made EMS-based boards under the same JRAM brand.)
Lotus Development, Intel, and Microsoft cooperated to develop the EMS standard (also known as LIM EMS). The first publicly available version of EMS, version 3.0, allowed access to up to 4 MiB of expanded memory.[citation needed] This was increased to 8 MiB with version 3.2 of the specification. The final version, EMS 4.0, increased the maximum amount of expanded memory to 32 MiB and supported additional functionality.
Microsoft thought that bank switching was an inelegant and temporary, but necessary, stopgap measure. Slamming his fist on the table during an interview, Bill Gates said of expanded memory, "It's garbage! It's a kludge! … But we're going to do it". The companies planned to launch the standard at the Spring 1985 COMDEX, with many expansion-card and software companies announcing their support.[3][4] AST Research, STB Systems, Persyst, Quadram, and Tecmar quickly designed EMS-compliant cards to compete with Intel's own Above Board expansion card. By mid-1985 some already called EMS a de facto standard.[5]
The first public version of the EMS standard, called EMS 3.0, was released in 1985; EMS 3.0, however, saw almost no hardware implementations before being superseded by EMS 3.2. EMS 3.2 uses a 64 KiB region in the upper 384 KiB (upper memory area) divided into four 16 KiB pages, which can be used to map portions of the expanded memory.[1]
Quadram, AST, and Ashton-Tate created the Enhanced EMS (EEMS) standard. EEMS, a superset of EMS 3.2, allows any 16 KiB region in lower RAM to be mapped to expanded memory, as long as it is not associated with interrupts or dedicated I/O memory such as network or video cards. Thus, entire programs can be switched in and out of the extra RAM. EEMS also added support for two sets of mapping registers. These features were used by early DOS multitasking software such as DESQview. The 1987 LIM EMS 4.0 specification incorporated practically all features of EEMS.[1]
A new feature in LIM EMS 4.0 was that EMS boards could have multiple sets of page-mapping registers (up to 64 sets). This allows a primitive form of DOS multitasking. The caveat, however, is that the standard does not specify how many register sets a board should have, so there is great variability between hardware implementations in this respect.[6]
The Expanded Memory Specification (EMS) is the specification describing the use of expanded memory. EMS functions are accessible through software interrupt 67h. Programs using EMS must first establish the presence of an installed expanded memory manager (EMM) by checking for a device driver with the device name EMMXXXX0.
IBM remained silent as the industry widely adopted the LIM standard.[5] The company developed its own memory standard called Expanded Memory Adapter (XMA); the IBM DOS driver for it is XMAEM.SYS. Unlike EMS boards, the IBM expansion boards can be addressed both using an expanded memory model and as extended memory.[7] The expanded memory hardware interface used by XMA boards is, however, incompatible with EMS,[8] but the XMA2EMS.SYS driver provides EMS emulation for XMA boards.[7] XMA boards were first introduced for the 1986 (revamped) models of the 3270 PC.[8]
This insertion of a memory window into the peripheral address space could originally be accomplished only through specific expansion boards, plugged into the ISA expansion bus of the computer. Famous 1980s expanded memory boards were the AST RAMpage, IBM PS/2 80286 Memory Expansion Option, AT&T Expanded Memory Adapter and the Intel Above Board. Given the price of RAM during the period, up to several hundred dollars per MiB, and the quality and reputation of the above brand names, an expanded memory board was very expensive.
Later, some motherboard chipsets of Intel 80286-based computers implemented an expanded memory scheme that did not require add-on boards, notably the NEAT chipset. Typically, software switches determined how much memory should be used as expanded memory and how much as extended memory.
An expanded-memory board, being a hardware peripheral, needed a software device driver, which exported its services. Such a device driver was called an expanded-memory manager. Its name varied; the previously mentioned boards used REMM.SYS (AST), PS2EMM.SYS (IBM), AEMM.SYS (AT&T) and EMM.SYS (Intel) respectively. Later, the expression became associated with software-only solutions requiring the Intel 80386 processor, for example Quarterdeck's QEMM, Qualitas' 386MAX or the default EMM386 in MS-DOS, PC DOS and DR-DOS.
Beginning in 1986, the built-in memory management features of the Intel 80386 processor could freely remodel the address space when running legacy real-mode software, making hardware solutions unnecessary. Expanded memory could be simulated in software.
The first software expanded-memory management (emulation) program was CEMM, available in September 1986 as a utility for the Compaq Deskpro 386. A popular and well-featured commercial solution was Quarterdeck's QEMM. A contender was Qualitas' 386MAX. The functionality was later incorporated into MS-DOS 4.01 in 1989 and into DR DOS 5.0 in 1990, as EMM386.
Software expanded-memory managers in general offered additional, closely related functionality. Notably, they allowed using parts of the upper memory area (UMA) (the upper 384 KiB of real-mode address space), called upper memory blocks (UMBs), and provided tools for loading small programs, typically terminate-and-stay-resident programs, into them ("LOADHI" or "LOADHIGH").
Interaction between extended memory, expanded-memory emulation and DOS extenders ended up being regulated by the XMS, Virtual Control Program Interface (VCPI), DOS Protected Mode Interface (DPMI) and DOS Protected Mode Services (DPMS) specifications.
Certain emulation programs, colloquially known as LIMulators, did not rely on motherboard or 80386 features at all. Instead, they reserved 64 KiB of the base RAM for the expanded memory window, where they copied data to and from either extended memory or the hard disk when application programs requested page switches. This was programmatically easy to implement, but performance was low. This technique was offered by AboveDisk from Above Software and by several shareware programs.
It is also possible to emulate EMS using XMS memory on 286 CPUs with third-party utilities such as EMM286 (a .SYS driver).
Expanded memory usage declined in the 1990s. New operating systems like Linux, Windows 9x, Windows NT, OS/2, and BSD/OS supported protected mode out of the box. These and similar developments rendered expanded memory an obsolete concept.
Other platforms have implemented the same basic concept – additional memory outside of the main address space – but in technically incompatible ways.
|
https://en.wikipedia.org/wiki/Expanded_memory
|
Memory segmentation is an operating system memory management technique of dividing a computer's primary memory into segments or sections. In a computer system using segmentation, a reference to a memory location includes a value that identifies a segment and an offset (memory location) within that segment. Segments or sections are also used in object files of compiled programs when they are linked together into a program image and when the image is loaded into memory.
Segments usually correspond to natural divisions of a program such as individual routines or data tables,[1] so segmentation is generally more visible to the programmer than paging alone.[2] Segments may be created for program modules, or for classes of memory usage such as code segments and data segments.[3] Certain segments may be shared between programs.[1][2]
Segmentation was originally invented as a method by which system software could isolate software processes (tasks) and the data they are using. It was intended to increase the reliability of systems running multiple processes simultaneously.[4]
In a system using segmentation, computer memory addresses consist of a segment id and an offset within the segment.[3] A hardware memory management unit (MMU) is responsible for translating the segment and offset into a physical address, and for performing checks to make sure the translation can be done and that the reference to that segment and offset is permitted.
Each segment has a length and a set of permissions (for example, read, write, execute) associated with it.[3] A process is only allowed to make a reference into a segment if the type of reference is allowed by the permissions, and if the offset within the segment is within the range specified by the length of the segment. Otherwise, a hardware exception such as a segmentation fault is raised.
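A minimal sketch of these checks, with an invented segment-table layout (base, length, permission set per segment):

```python
# Illustrative segmented address translation: the MMU's permission and
# bounds checks, written out in Python.
class SegmentationFault(Exception):
    pass

# segment id -> (base address, length, permissions)
segment_table = {
    0: (0x0000, 0x400, {"read", "execute"}),   # a code segment
    1: (0x8000, 0x100, {"read", "write"}),     # a data segment
}

def translate(seg_id, offset, access):
    base, length, perms = segment_table[seg_id]
    if access not in perms:                # reference type not allowed
        raise SegmentationFault(f"{access} denied at {seg_id}:{offset:#x}")
    if offset >= length:                   # offset beyond segment length
        raise SegmentationFault(f"out of bounds at {seg_id}:{offset:#x}")
    return base + offset                   # physical address

print(hex(translate(1, 0x10, "read")))     # 0x8010
```

A write into segment 0 or a read past offset 0x100 in segment 1 would raise the fault instead of returning an address, mirroring the hardware exception described above.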
Segments may also be used to implement virtual memory. In this case each segment has an associated flag indicating whether it is present in main memory or not. If a segment is accessed that is not present in main memory, an exception is raised, and the operating system will read the segment into memory from secondary storage.
Segmentation is one method of implementing memory protection.[5] Paging is another, and they can be combined. The size of a memory segment is generally not fixed and may be as small as a single byte.[6]
Segmentation has been implemented several ways on various hardware, with or without paging. Intel x86 memory segmentation does not fit either model and is discussed separately below, and also in greater detail in a separate article.
Associated with each segment is information that indicates where the segment is located in memory: the segment base. When a program references a memory location, the offset is added to the segment base to generate a physical memory address.
An implementation of virtual memory on a system using segmentation without paging requires that entire segments be swapped back and forth between main memory and secondary storage. When a segment is swapped in, the operating system has to allocate enough contiguous free memory to hold the entire segment. Often memory fragmentation results if there is not enough contiguous memory, even though there may be enough in total.
Instead of a memory location, the segment information includes the address of a page table for the segment.
When a program references a memory location, the offset is translated to a memory address using the page table. A segment can be extended by allocating another memory page and adding it to the segment's page table.
An implementation of virtual memory on a system using segmentation with paging usually only moves individual pages back and forth between main memory and secondary storage, similar to a paged non-segmented system. Pages of the segment can be located anywhere in main memory and need not be contiguous. This usually results in a reduced amount of input/output between primary and secondary storage and reduced memory fragmentation.
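The segment-plus-page-table translation described above might be sketched like this (the data structures are illustrative only):

```python
# Segmentation combined with paging: each segment entry points to a page
# table rather than directly to a memory location, so a segment's pages
# need not be contiguous in physical memory.
PAGE = 4096

# segment id -> page table; the page table maps page number -> frame number
segments = {
    0: {0: 7, 1: 3},       # one segment spanning two non-contiguous frames
}

def translate(seg_id, offset):
    page_no, page_off = divmod(offset, PAGE)
    frame = segments[seg_id][page_no]      # page-table lookup
    return frame * PAGE + page_off         # physical address

def extend(seg_id, frame):
    """Grow a segment by appending one more page to its page table."""
    table = segments[seg_id]
    table[len(table)] = frame

print(hex(translate(0, PAGE + 5)))   # page 1 -> frame 3 -> 0x3005
```

Extending the segment is just adding a page-table entry; no contiguous allocation is needed, which is the fragmentation advantage the paragraph above describes.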
The Burroughs Corporation B5000 computer was one of the first to implement segmentation, and "perhaps the first commercial computer to provide virtual memory"[7] based on segmentation. The B5000 is equipped with a segment information table called the Program Reference Table (PRT), which is used to indicate whether the corresponding segment resides in main memory, and to maintain the base address and the size of the segment.[8] The later B6500 computer also implemented segmentation; a version of its architecture is still in use today on the Unisys ClearPath Libra servers.[citation needed]
The GE 645 computer, a modification of the GE-635 with segmentation and paging support added, was designed in 1964 to support Multics.
The Intel iAPX 432,[9] begun in 1975, attempted to implement a true segmented architecture with memory protection on a microprocessor.
The 960MX version of the Intel i960 processors supported load and store instructions with the source or destination being an "access descriptor" for an object and an offset into the object, with the access descriptor in a 32-bit register and the offset computed from a base offset in the next register and from an additional offset and, optionally, an index register specified in the instruction. An access descriptor contains permission bits and a 26-bit object index; the object index is an index into a table of object descriptors, giving an object type, an object length, and a physical address for the object's data, a page table for the object, or the top-level page table for a two-level page table for the object, depending on the object type.[10]
Prime, Stratus, Apollo, IBM System/38, and IBM AS/400 (including IBM i) computers use memory segmentation.
Words in the B5000, B5500 and B5700 are 48 bits long.[11] Descriptors have the uppermost bit set in the word. They reside in either the Program Reference Table (PRT) or the stack, and contain a presence bit indicating whether the data are present in memory. There are distinct data and program descriptors.[11]: 4-2–4-4
Words in the B6500 and its successors have 48 bits of data and 3 tag bits.[12]: 2-1 The tag bits indicate the type of data contained in the word; there are several descriptor types, indicated by different tag bit values.[12]: 6-5–6-10 The line includes the B6500, B6700, B7700, B6800, B6900, B5900, the A-series Burroughs and Unisys machines, and the current ClearPath MCP systems (Libra). While there have been a few enhancements over the years, particularly hardware advances, the architecture has changed little. The segmentation scheme has remained the same; see Segmented memory.
In the IBM System/370 models[a] with virtual storage (DAT)[13][14] and 24-bit addresses, control register 0 specifies a segment size of either 64 KiB or 1 MiB and a page size of either 2 KiB or 4 KiB; control register 1 contains a Segment Table Designator (STD), which specifies the length and real address of the segment table. Each segment table entry contains a page table location, a page table length and an invalid bit. IBM later expanded the address size to 31 bits and added two bits to the segment table entries.
Each of IBM's DAT implementations includes a translation cache, which IBM called a Translation Lookaside Buffer (TLB). While Principles of Operation discusses the TLB in general terms, the details are not part of the architecture and vary from model to model.
Starting with the 3031, 3032, and 3033 processor complexes, IBM offered a feature called Dual-Address Space (DAS),[14]: 5-13–5-24 [15] which allows a program to switch between the translation tables for two address spaces, referred to as the primary address space (CR1) and the secondary address space (CR7), and to move data between the address spaces subject to protection key. DAS supports a translation table to convert a 16-bit address space number (ASN) to an STD, with privileged instructions to load the STD into CR1 (primary) or CR7 (secondary).
Early x86 processors, beginning with the Intel 8086, provide crude memory segmentation and no memory protection. (Every byte of every segment is always available to any program.) The 16-bit segment registers allow for 65,536 segments; each segment begins at a fixed offset equal to 16 times the segment number, so the segment starting address granularity is 16 bytes. Each segment grants read-write access to 64 KiB (65,536 bytes) of address space (this limit is set by the 16-bit PC and SP registers; the processor does no bounds checking). A computed address (16 × segment + offset) exceeding 0xFFFFF wraps around to 0x00000. Each 64 KiB segment overlaps the next 4,095 segments; each physical address can be denoted by 4,096 segment–offset pairs. This scheme can address only 1 MiB (1024 KiB) of physical memory (and memory-mapped I/O). (Optional expanded memory hardware can add bank-switched memory under software control.) Intel retroactively named the sole operating mode of these x86 CPU models "real mode".
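The real-mode address arithmetic is simple enough to state directly; this small function reproduces the 20-bit wrap-around behavior described above:

```python
# 8086 real-mode address formation: physical = 16 * segment + offset,
# truncated to 20 bits, so results past 0xFFFFF wrap to low memory.
def real_mode_address(segment, offset):
    return (segment * 16 + offset) & 0xFFFFF

print(hex(real_mode_address(0x1234, 0x5678)))   # 0x179b8
print(hex(real_mode_address(0xFFFF, 0x0010)))   # wraps around to 0x0
```

The second call shows why segment:offset pairs near the top of the address space wrap: 0xFFFF0 + 0x10 = 0x100000, which is truncated to 0x00000 on the 8086 (the behavior later exploited by the high memory area on CPUs with an A20 line).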
The Intel 80286 and later processors add "286 protected mode", which retains 16-bit addressing and adds segmentation (without paging) and per-segment memory protection. For backward compatibility, all x86 CPUs start up in "real mode", with the same fixed overlapping 64 KiB segments, no memory protection, only a 1 MiB physical address space, and some subtle differences (high memory area, unreal mode). In order to use its full 24-bit (16 MiB) physical address space and advanced MMU features, an 80286 or later processor must be switched into "protected mode" by software, usually the operating system or a DOS extender. If a program does not use the segment registers, or only puts values into them that it receives from the operating system, then identical code can run in real mode or protected mode; but most real-mode software computes new values for the segment registers, breaking this compatibility.
The Intel i386 and later processors add "386 protected mode", which uses 32-bit addressing, retains segmentation, and adds memory paging. In these processors, the segment table, rather than pointing to a page table for the segment, contains the segment address in linear memory. When paging is enabled, addresses in linear memory are then mapped to physical addresses using a separate page table. Most operating systems did not use the segmentation capability, opting to keep the base address in all segment registers equal to 0 at all times and provide per-page memory protection and swapping using only paging. Some use the CS register to provide executable space protection on processors lacking the NX bit, or use the FS or GS registers to access thread-local storage.[16][17]
The x86-64 architecture does not support segmentation in "long mode" (64-bit mode).[18] Four of the segment registers, CS, SS, DS, and ES, are forced to a base of 0, and the limit to 2^64. The segment registers FS and GS can still have a nonzero base address. This allows operating systems to use these segments for special purposes such as thread-local storage.[16][17]
|
https://en.wikipedia.org/wiki/Memory_segmentation
|
A page, memory page, or virtual page is a fixed-length contiguous block of virtual memory, described by a single entry in a page table. It is the smallest unit of data for memory management in an operating system that uses virtual memory. Similarly, a page frame is the smallest fixed-length contiguous block of physical memory into which memory pages are mapped by the operating system.[1][2][3]
A transfer of pages between main memory and an auxiliary store, such as a hard disk drive, is referred to as paging or swapping.[4]
Computer memory is divided into pages so that information can be found more quickly.
The concept is named by analogy to the pages of a printed book. If a reader wanted to find, for example, the 5,000th word in the book, they could count from the first word, but this would be time-consuming. It would be much faster if the reader had a listing of how many words are on each page. From this listing they could determine which page the 5,000th word appears on, and how many words to count on that page. This listing of the words per page of the book is analogous to a page table of a computer file system.[5]
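The analogy can be made concrete: a per-page word count plays the role of the page table, turning a linear scan into a direct lookup (the book data below is invented):

```python
# Book analogy for page tables: cumulative per-page word counts let us
# jump straight to the page holding word N instead of counting from word 1.
import bisect
import itertools

words_per_page = [350, 410, 390, 400, 380, 420]          # a made-up book
cumulative = list(itertools.accumulate(words_per_page))  # words up to each page

def locate(word_no):
    """Return (page index, word index within that page), both 0-based."""
    page = bisect.bisect_left(cumulative, word_no)
    words_before = cumulative[page - 1] if page else 0
    return page, word_no - words_before - 1

print(locate(1000))   # the 1,000th word: (page 2, 240th word on that page)
```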
Page size is usually determined by the processor architecture. Traditionally, pages in a system had uniform size, such as 4,096 bytes. However, processor designs often allow two or more, sometimes simultaneous, page sizes due to the benefits this brings. There are several points that can factor into choosing the best page size.[6]
A system with a smaller page size uses more pages, requiring a page table that occupies more space. For example, if a 2^32-byte virtual address space is mapped to 4 KiB (2^12-byte) pages, the number of virtual pages is 2^20 (= 2^32 / 2^12). However, if the page size is increased to 32 KiB (2^15 bytes), only 2^17 pages are required. A multi-level paging algorithm can decrease the memory cost of allocating a large page table for each process by further dividing the page table up into smaller tables, effectively paging the page table.
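The page counts in this example can be checked directly:

```python
# Number of page-table entries needed to cover a virtual address space.
def num_pages(address_bits, page_size):
    return 2**address_bits // page_size

print(num_pages(32, 4096))       # 4 KiB pages  -> 1048576 (= 2**20)
print(num_pages(32, 32 * 1024))  # 32 KiB pages -> 131072  (= 2**17)
```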
Since every access to memory must be mapped from a virtual to a physical address, reading the page table every time can be quite costly. Therefore, a very fast kind of cache, the translation lookaside buffer (TLB), is often used. The TLB is of limited size, and when it cannot satisfy a given request (a TLB miss) the page tables must be searched manually (either in hardware or software, depending on the architecture) for the correct mapping. Larger page sizes mean that a TLB cache of the same size can keep track of larger amounts of memory, which avoids costly TLB misses.
Rarely do processes require the use of an exact number of pages. As a result, the last page will likely only be partially full, wasting some amount of memory. Larger page sizes lead to a large amount of wasted memory, as more potentially unused portions of memory are loaded into the main memory. Smaller page sizes ensure a closer match to the actual amount of memory required in an allocation.
As an example, assume the page size is 1024 B. If a process allocates 1025 B, two pages must be used, resulting in 1023 B of unused space (where one page fully consumes 1024 B and the other only 1 B).
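The same arithmetic, as a small helper:

```python
# Internal fragmentation: allocations are rounded up to whole pages,
# so the unused tail of the last page is wasted.
def pages_and_waste(request, page_size):
    pages = -(-request // page_size)        # ceiling division
    return pages, pages * page_size - request

print(pages_and_waste(1025, 1024))   # (2, 1023): the example above
print(pages_and_waste(1024, 1024))   # (1, 0): an exact fit wastes nothing
```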
When transferring from a rotational disk, much of the delay is caused by seek time, the time it takes to correctly position the read/write heads above the disk platters. Because of this, large sequential transfers are more efficient than several smaller transfers. Transferring the same amount of data from disk to memory often requires less time with larger pages than with smaller pages.
Most operating systems allow programs to discover the page size at runtime. This allows programs to use memory more efficiently by aligning allocations to this size and reducing the overall internal fragmentation of pages.
Unix and POSIX-based systems may use the system function sysconf() to query the page size.[7][8][9][10][11]
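For example, the page size can be queried through Python's os.sysconf() wrapper around the same POSIX call (a C program would call sysconf(_SC_PAGESIZE) instead):

```python
# Query the system page size via the POSIX sysconf() interface.
import os

page_size = os.sysconf("SC_PAGE_SIZE")
print(page_size)              # commonly 4096 on x86 Linux systems
```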
In many Unix systems, the command-line utility getconf can be used.[12][13][14] For example, getconf PAGESIZE will return the page size in bytes.
Win32-based operating systems, such as those in the Windows 9x and Windows NT families, may use the system function GetSystemInfo()[15][16] from kernel32.dll.
Some instruction set architectures can support multiple page sizes, including pages significantly larger than the standard page size. The available page sizes depend on the instruction set architecture, processor type, and operating (addressing) mode. The operating system selects one or more sizes from the sizes supported by the architecture. Note that not all processors implement all defined larger page sizes. This support for larger pages (known as "huge pages" in Linux, "superpages" in FreeBSD, and "large pages" in Microsoft Windows and IBM AIX terminology) allows for "the best of both worlds", reducing the pressure on the TLB cache (sometimes increasing speed by as much as 15%) for large allocations while still keeping memory usage at a reasonable level for small allocations.
Starting with the Pentium Pro and the AMD Athlon, x86 processors support 4 MiB pages (called the Page Size Extension), or 2 MiB pages if using PAE, in addition to their standard 4 KiB pages; newer x86-64 processors, such as AMD's newer AMD64 processors and Intel's Westmere[27] and later Xeon processors, can use 1 GiB pages in long mode. IA-64 supports as many as eight different page sizes, from 4 KiB up to 256 MiB, and some other architectures have similar features.[specify]
Larger pages, despite being available in the processors used in most contemporary personal computers, are not in common use except in large-scale applications, the applications typically found in large servers and in computational clusters, and in the operating system itself. Commonly, their use requires elevated privileges, cooperation from the application making the large allocation (usually setting a flag to ask the operating system for huge pages), or manual administrator configuration; operating systems commonly, sometimes by design, cannot page them out to disk.
However, SGI IRIX has general-purpose support for multiple page sizes. Each individual process can provide hints, and the operating system will automatically use the largest page size possible for a given region of address space.[28] Later work proposed transparent operating system support for using a mix of page sizes for unmodified applications through preemptible reservations, opportunistic promotions, speculative demotions, and fragmentation control.[29]
Linux has supported huge pages on several architectures since the 2.6 series via the hugetlbfs filesystem[30] and without hugetlbfs since 2.6.38.[31] Windows Server 2003 (SP1 and newer), Windows Vista and Windows Server 2008 support huge pages under the name of large pages.[32] Windows 2000 and Windows XP support large pages internally, but do not expose them to applications.[33] Reserving large pages under Windows requires a corresponding right that the system administrator must grant to the user, because large pages cannot be swapped out under Windows. Beginning with version 9, Solaris supports large pages on SPARC and x86.[34][35] FreeBSD 7.2-RELEASE features superpages.[36] Until recently, applications on Linux needed to be modified in order to use huge pages; the 2.6.38 kernel introduced support for transparent use of huge pages.[31] On Linux kernels supporting transparent huge pages, as well as on FreeBSD and Solaris, applications take advantage of huge pages automatically, without the need for modification.[36]
|
https://en.wikipedia.org/wiki/Page_(computer_memory)
|
In computing, a page cache, sometimes also called disk cache,[1] is a transparent cache for the pages originating from a secondary storage device such as a hard disk drive (HDD) or a solid-state drive (SSD). The operating system keeps a page cache in otherwise unused portions of main memory (RAM), resulting in quicker access to the contents of cached pages and overall performance improvements. A page cache is implemented in kernels with paging memory management, and is mostly transparent to applications.
Usually, all physical memory not directly allocated to applications is used by the operating system for the page cache. Since the memory would otherwise be idle and is easily reclaimed when applications request it, there is generally no associated performance penalty and the operating system might even report such memory as "free" or "available".
When compared to main memory, hard disk drive reads and writes are slow, and random accesses require expensive disk seeks; as a result, larger amounts of main memory bring performance improvements, as more data can be cached in memory.[2] Separate disk caching is provided on the hardware side by dedicated RAM or NVRAM chips located either in the disk controller (in which case the cache is integrated into the hard disk drive and usually called a disk buffer[3]) or in a disk array controller; such memory should not be confused with the page cache. The operating system may also use some of main memory as a filesystem write buffer, which may be called a page buffer.[4]
Pages in the page cache that are modified after being brought in are called dirty pages.[5] Since non-dirty pages in the page cache have identical copies in secondary storage (e.g. hard disk drive or solid-state drive), discarding and reusing their space is much quicker than paging out application memory, and is often preferred over flushing the dirty pages into secondary storage and reusing their space. Executable binaries, such as applications and libraries, are also typically accessed through the page cache and mapped to individual process spaces using virtual memory (this is done through the mmap system call on Unix-like operating systems). This not only means that the binary files are shared between separate processes, but also that unused parts of binaries will eventually be flushed out of main memory, leading to memory conservation.
Since cached pages can be easily evicted and re-used, some operating systems, notably Windows NT, even report the page cache usage as "available" memory, while the memory is actually allocated to disk pages. This has led to some confusion about the utilization of the page cache in Windows.
The page cache also aids in writing to a disk. Pages in main memory that have been modified during writing data to disk are marked "dirty" and have to be flushed to disk before they can be freed. When a file write occurs, the cached page for the particular block is looked up. If it is already found in the page cache, the write is done to that page in main memory. If it is not found in the page cache and the write falls exactly on page size boundaries, the page is not even read from disk, but is allocated and immediately marked dirty. Otherwise, the page(s) are fetched from disk and the requested modifications are applied. A file that is created or opened in the page cache, but not written to, might result in a zero-byte file at a later read.
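The write path above can be modeled with a toy cache (illustrative only; real kernels track far more state per page):

```python
# Toy page-cache write path: partial writes must fetch the page from the
# backing store first, while page-aligned full-page writes skip the read
# and are simply allocated and marked dirty.
PAGE = 4096

class PageCache:
    def __init__(self, disk):
        self.disk = disk                  # backing store: page number -> bytes
        self.cache = {}                   # page number -> bytearray
        self.dirty = set()                # pages needing flush before reclaim
        self.reads_from_disk = 0

    def write(self, page_no, data):
        if page_no not in self.cache:
            if len(data) == PAGE:         # full-page write: no read needed
                self.cache[page_no] = bytearray(PAGE)
            else:                         # partial write: fetch the page first
                self.reads_from_disk += 1
                self.cache[page_no] = bytearray(
                    self.disk.get(page_no, b"\0" * PAGE))
        self.cache[page_no][:len(data)] = data
        self.dirty.add(page_no)

cache = PageCache({0: b"x" * PAGE})
cache.write(0, b"hi")            # partial write: costs one disk read
cache.write(1, b"y" * PAGE)      # full-page write: costs none
print(cache.reads_from_disk, sorted(cache.dirty))   # 1 [0, 1]
```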
However, not all cached pages can be written to, as program code is often mapped as read-only or copy-on-write; in the latter case, modifications to code will only be visible to the process itself and will not be written to disk.
In 2019, security researchers demonstrated side-channel attacks against the page cache: it is possible to bypass privilege separation and exfiltrate data about other processes by systematically monitoring whether some file pages (for example executable or library files) are present in the cache or not.[6]
|
https://en.wikipedia.org/wiki/Page_cache
|
Computer data storage or digital data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.[1]: 15–16
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy,[1]: 468–473 which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast[a] technologies are referred to as "memory", while slower persistent technologies are referred to as "storage".
Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: the control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data.[1]: 20 Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.
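The arithmetic behind the Shakespeare figure can be checked directly, assuming decimal megabytes and one byte per character:

```python
# Five megabytes at one byte per character:
megabytes = 5
bytes_total = megabytes * 1_000_000  # decimal megabytes
bits_total = bytes_total * 8         # 8 bits per byte
print(bits_total)                    # 40000000, i.e. "about 40 million bits"
```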
Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, and video encodings like MPEG-4).
By adding bits to each encoded unit, redundancy allows the computer to detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur with low probability, due to random bit value flipping, "physical bit fatigue" (loss of the physical bit's ability to maintain a distinguishable value of 0 or 1), or errors in inter- or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection. A bit or a group of malfunctioning physical bits (the specific defective bit is not always known; the group definition depends on the specific storage device) is typically automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then retried.
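The detection side of this can be illustrated with the CRC-32 implementation in Python's standard library. Note that a CRC only detects the error; correction would require a separate code (e.g. Hamming or Reed–Solomon), as the text implies when it says a detected error is retried.

```python
import zlib

data = bytearray(b"stored payload")
checksum = zlib.crc32(data)          # computed when the data is written

data[3] ^= 0x01                      # simulate a single random bit flip
corrupted = zlib.crc32(data) != checksum
print(corrupted)                     # True: the flipped bit is detected

data[3] ^= 0x01                      # restore the bit (the "retry" succeeds)
print(zlib.crc32(data) == checksum)  # True: data verifies again
```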
Data compression methods allow in many cases (such as a database) representing a string of bits by a shorter bit string ("compress") and reconstructing the original string ("decompress") when needed. This uses substantially less storage (often by tens of percent) for many types of data, at the cost of more computation (compressing and decompressing when needed). Analysis of the trade-off between the storage cost saving and the cost of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not.
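The trade-off can be seen with zlib from the Python standard library: repetitive data shrinks dramatically, but every read back pays a CPU cost to decompress.

```python
import zlib

original = b"AB" * 10_000               # 20,000 bytes of highly repetitive data
compressed = zlib.compress(original)    # pay CPU now to save storage

print(len(original), len(compressed))   # compressed form is far smaller
restored = zlib.decompress(compressed)  # pay CPU again on each access
assert restored == original             # lossless: the exact bits come back
```

Real data compresses less dramatically than this repetitive example, which is why the analysis described above is done per data type.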
For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.
Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit.
In contemporary usage, memory is usually fast but temporary semiconductor read-write memory, typically DRAM (dynamic RAM) or other such devices. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).[2]
Historically, memory has, depending on technology, been called central memory, core memory, core storage, drum, main memory, real storage, or internal memory. Meanwhile, slower persistent storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage.
Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
This led to modern random-access memory (RAM). It is small and light, but also quite expensive. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, RAM serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software.[3] Spare memory can be utilized as a RAM drive for temporary high-speed data storage.
As shown in the diagram, traditionally there are two more sub-layers of primary storage, besides the main large-capacity RAM:
Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU first sends a number through the address bus, called the memory address, that indicates the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM that recalculates the actual memory address, for example to provide an abstraction of virtual memory or for other tasks.
As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates to them are possible; however, such updates are slow, and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; rather, they use large capacities of secondary storage, which is non-volatile as well, and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.[4]
Primary storage, including ROM, EEPROM, NOR flash, and RAM,[5] is usually byte-addressable.
Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs is typically measured in milliseconds (thousandths of a second), while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks.
Once the disk read/write head on an HDD reaches the proper placement and the data of interest, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based on sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel to increase the bandwidth between primary and secondary memory.[6]
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.
Most computer operating systems use the concept of virtual memory, allowing the utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, system performance is degraded.
Secondary storage, including HDDs, ODDs, and SSDs, is usually block-addressable.
Tertiary storage or tertiary memory[7] is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is:[8]
For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.
Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit.[9] The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, it is useful in cases of disaster, where, for example, a fire destroys the original data: a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and to a much lesser extent removable hard disk drives; older examples include floppy disks and Zip disks. In enterprise uses, magnetic tape cartridges are predominant; older examples include open-reel magnetic tape and punched cards.
Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.
Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since primary storage is required to be very fast, it predominantly uses volatile memory.
Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost.
An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes.
Utilities such as hdparm and sar can be used to measure I/O performance in Linux.
Full disk encryption, volume and virtual disk encryption, and file/folder encryption are readily available for most storage devices.[17]
Hardware memory encryption is available in Intel architecture, supporting Total Memory Encryption (TME) and page-granular memory encryption with multiple keys (MKTME),[18][19] and in the SPARC M7 generation since October 2015.[20]
Distinct types of data storage have different points of failure and various methods of predictive failure analysis.
Vulnerabilities that can instantly lead to total loss are head crashes on mechanical hard drives and failure of electronic components on flash storage.
Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed.[21]
Flash storage may experience sharp drops in transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct.
The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning.[22]
As of 2011, the most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), have been proposed for development.
Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both volatile and non-volatile forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs.
In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them.
As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage, either in addition to or instead of the more traditional HDD.[23][24][25][26][27]
Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head covers only a part of the surface, so that the head or medium or both must be moved relative to one another in order to access data. In modern computers, magnetic storage takes these forms:
In early computers, magnetic storage was also used as:
Magnetic storage does not have a definite limit of rewriting cycles like flash storage and re-writeable optical media, as altering magnetic fields causes no physical wear. Rather, their life span is limited by mechanical parts.[28][29]
Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The following forms were in common use as of 2009:[30]
Magneto-optical disc storage is optical disc storage where the magnetic state of a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential-access, slow-write, fast-read storage used for tertiary and off-line storage.
3D optical data storage has also been proposed.
Light-induced magnetization melting in magnetic photoconductors has also been proposed for high-speed, low-energy-consumption magneto-optical storage.[31]
Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached.
Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even that of magnetic data storage.[32][33]
While a group of bits malfunction may be resolved by error detection and correction mechanisms (see above), storage device malfunction requires different solutions. The following solutions are commonly used and valid for most storage devices:
Device mirroring and typical RAID are designed to handle a single device failure in the RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, then data can be lost. The probability of a single failure is typically small. Thus the probability of two failures in the same RAID group in time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even such a smaller probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, to handle recovery from disasters (see disaster recovery above).
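The "probability squared" reasoning above can be made concrete. If each device in a mirrored pair independently fails during a given repair window with small probability p, the chance that both fail in that window (losing data) is approximately p². The figure below uses a purely hypothetical per-device failure probability:

```python
p = 0.001        # assumed per-device failure probability within one repair window

single = p       # probability one copy fails
double = p * p   # probability both copies fail before repair (independence assumed)

# The residual risk is three orders of magnitude smaller than a single failure;
# geographically mirroring the whole RAID group shrinks it again in the same way.
print(single, double)
```

The independence assumption is the weak point in practice: correlated failures (same batch of drives, shared power, site-wide disaster) are exactly why remote mirroring is used.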
A secondary or tertiary storage may connect to a computer utilizing computer networks. This concept does not pertain to primary storage, which is shared between multiple processors to a lesser degree.
Large quantities of individual magnetic tapes and optical or magneto-optical discs may be stored in robotic tertiary storage devices. In the tape storage field they are known as tape libraries, and in the optical storage field as optical jukeboxes or optical disk libraries, per analogy. The smallest forms of either technology containing just one drive device are referred to as autoloaders or autochangers.
Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media into built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
Robotic storage is used for backups, and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy of automatically migrating long-unused files from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk.
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 22 January 2022.
https://en.wikipedia.org/wiki/Physical_memory
In computing, virtual memory, or virtual storage,[b] is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine"[3] which "creates the illusion to users of a very large (main) memory".[4]
The computer's operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory.[5] Address translation hardware in the CPU, often referred to as a memory management unit (MMU), automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities, utilizing, e.g., disk storage, to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.
The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, the ability to share memory used by libraries between processes, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging or segmentation.
Virtual memory makes application programming easier by hiding fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing.
Memory virtualization can be considered a generalization of the concept of virtual memory.
Virtual memory is an integral part of a modern computer architecture; implementations usually require hardware support, typically in the form of a memory management unit built into the CPU. While not strictly necessary, emulators and virtual machines can employ hardware support to increase the performance of their virtual memory implementations.[6] Older operating systems, such as those for the mainframes of the 1960s, and those for personal computers of the early to mid-1980s (e.g., DOS),[7] generally have no virtual memory functionality, though notable exceptions for mainframes of the 1960s include:
During the 1960s and early '70s, computer memory was very expensive. The introduction of virtual memory provided an ability for software systems with large memory demands to run on computers with less real memory. The savings from this provided a strong incentive to switch to virtual memory for all systems. The additional capability of providing virtual address spaces added another level of security and reliability, thus making virtual memory even more attractive to the marketplace.
Most modern operating systems that support virtual memory also run each process in its own dedicated address space. Each program thus appears to have sole access to the virtual memory. However, some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even modern ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory.
Embedded systems and other special-purpose computer systems that require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism; virtual memory systems trigger unpredictable traps that may produce unwanted and unpredictable delays in response to input, especially if the trap requires that data be read into main memory from secondary memory. The hardware to translate virtual addresses to physical addresses typically requires a significant chip area to implement, and not all chips used in embedded systems include that hardware, which is another reason some of those systems do not use virtual memory.
In the 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as overlaying. Virtual memory was therefore introduced not only to extend primary memory, but to make such an extension as easy as possible for programmers to use.[8] To allow for multiprogramming and multitasking, many early systems divided memory between multiple programs without virtual memory, such as early models of the PDP-10 via registers.
A claim that the concept of virtual memory was first developed by German physicist Fritz-Rudolf Güntsch at the Technische Universität Berlin in 1956 in his doctoral thesis, Logical Design of a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High Speed Memory Operation,[9][10] does not stand up to careful scrutiny. The computer proposed by Güntsch (but never built) had an address space of 10⁵ words which mapped exactly onto the 10⁵ words of the drums, i.e. the addresses were real addresses and there was no form of indirect mapping, a key feature of virtual memory. What Güntsch did invent was a form of cache memory, since his high-speed memory was intended to contain a copy of some blocks of code or data taken from the drums. Indeed, he wrote (as quoted in translation[11]): "The programmer need not respect the existence of the primary memory (he need not even know that it exists), for there is only one sort of addresses [sic] by which one can program as if there were only one storage." This is exactly the situation in computers with cache memory, one of the earliest commercial examples of which was the IBM System/360 Model 85.[12] In the Model 85 all addresses were real addresses referring to the main core store. A semiconductor cache store, invisible to the user, held the contents of parts of the main store in use by the currently executing program. This is exactly analogous to Güntsch's system, designed as a means to improve performance, rather than to solve the problems involved in multi-programming.
The first true virtual memory system was that implemented at the University of Manchester to create a one-level storage system[13] as part of the Atlas Computer. It used a paging mechanism to map the virtual addresses available to the programmer onto the real memory that consisted of 16,384 words of primary core memory with an additional 98,304 words of secondary drum memory.[14] The addition of virtual memory into the Atlas also eliminated a looming programming problem: planning and scheduling data transfers between main and secondary memory and recompiling programs for each change of size of main memory.[15] The first Atlas was commissioned in 1962, but working prototypes of paging had been developed by 1959.[8]: 2 [16][17]
As early as 1958, Robert S. Barton, working at Shell Research, suggested that main storage should be allocated automatically rather than have the programmer be concerned with overlays from secondary memory, in effect virtual memory.[18]: 49 [19] By 1960, Barton was lead architect on the Burroughs B5000 project. From 1959 to 1961, W. R. Lonergan was manager of the Burroughs Product Planning Group, which included Barton, Donald Knuth as consultant, and Paul King. In May 1960, UCLA ran a two-week seminar "Using and Exploiting Giant Computers" to which Paul King and two others were sent. Stan Gill gave a presentation on virtual memory in the Atlas I computer. Paul King took the ideas back to Burroughs and it was determined that virtual memory should be designed into the core of the B5000.[18]: 3 Burroughs Corporation released the B5000 in 1964 as the first commercial computer with virtual memory.[20]
IBM developed[c] the concept of hypervisors in their CP-40 and CP-67, and in 1972 provided it for the S/370 as Virtual Machine Facility/370.[22] IBM introduced the Start Interpretive Execution (SIE) instruction as part of 370-XA on the 3081, and VM/XA versions of VM to exploit it.
Before virtual memory could be implemented in mainstream operating systems, many problems had to be addressed. Dynamic address translation required expensive and difficult-to-build specialized hardware; initial implementations slowed down access to memory slightly.[8] There were worries that new system-wide algorithms utilizing secondary storage would be less effective than previously used application-specific algorithms. By 1969, the debate over virtual memory for commercial computers was over;[8] an IBM research team led by David Sayre showed that their virtual memory overlay system consistently worked better than the best manually controlled systems.[23] Throughout the 1970s, the IBM 370 series running their virtual-storage based operating systems provided a means for business users to migrate multiple older systems into fewer, more powerful, mainframes that had improved price/performance. The first minicomputer to introduce virtual memory was the Norwegian NORD-1; during the 1970s, other minicomputers implemented virtual memory, notably VAX models running VMS.
Virtual memory was introduced to the x86 architecture with the protected mode of the Intel 80286 processor, but its segment swapping technique scaled poorly to larger segment sizes. The Intel 80386 introduced paging support underneath the existing segmentation layer, enabling the page fault exception to chain with other exceptions without a double fault. However, loading segment descriptors was an expensive operation, causing operating system designers to rely strictly on paging rather than a combination of paging and segmentation.[24]
Nearly all current implementations of virtual memory divide avirtual address spaceintopages, blocks of contiguous virtual memory addresses. Pages on contemporary[d]systems are usually at least 4kilobytesin size; systems with large virtual address ranges or amounts of real memory generally use larger page sizes.[25]
Page tablesare used to translate the virtual addresses seen by the application intophysical addressesused by thehardwareto process instructions;[26]such hardware that handles this specific translation is often known as thememory management unit. Each entry in the page table holds a flag indicating whether the corresponding page is in real memory or not. If it is in real memory, the page table entry will contain the real memory address at which the page is stored. When a reference is made to a page by the hardware, if the page table entry for the page indicates that it is not currently in real memory, the hardware raises apage faultexception, invoking the paging supervisor component of theoperating system.
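As a minimal sketch of this mechanism (the page table contents are hypothetical, and 4 KiB pages are assumed), translation splits a virtual address into a page number and an offset, and a non-resident page raises a fault:

```python
PAGE_SIZE = 4096  # 4 KiB, a common size on contemporary systems

# Hypothetical page table: virtual page number -> physical frame number,
# with None marking a page that is not currently in real memory.
page_table = {0: 7, 1: None, 2: 3}

class PageFault(Exception):
    """Raised when the referenced page is not in real memory;
    a real system would invoke the paging supervisor here."""

def translate(virtual_address):
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table.get(vpn)
    if frame is None:
        raise PageFault(f"virtual page {vpn} is not resident")
    return frame * PAGE_SIZE + offset

print(translate(2 * PAGE_SIZE + 42))  # page 2 is resident in frame 3 -> 12330
```

Referencing virtual page 1 here would raise `PageFault`, modelling the hardware exception that hands control to the paging supervisor.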
Systems can have, e.g., one page table for the whole system, separate page tables for each address space or process, or separate page tables for each segment; similarly, systems can have, e.g., no segment table, one segment table for the whole system, separate segment tables for each address space or process, or separate segment tables for each region in a tree[e] of region tables for each address space or process. If there is only one page table, different applications running at the same time use different parts of a single range of virtual addresses. If there are multiple page or segment tables, there are multiple virtual address spaces and concurrent applications with separate page tables redirect to different real addresses.
Some earlier systems with smaller real memory sizes, such as the SDS 940, used page registers instead of page tables in memory for address translation.
This part of the operating system creates and manages page tables and lists of free page frames. In order to ensure that there will be enough free page frames to quickly resolve page faults, the system may periodically steal allocated page frames, using a page replacement algorithm, e.g., a least recently used (LRU) algorithm. Stolen page frames that have been modified are written back to auxiliary storage before they are added to the free queue. On some systems the paging supervisor is also responsible for managing translation registers that are not automatically loaded from page tables.
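The LRU policy mentioned above can be sketched with a short simulation (the reference string and frame count are illustrative): on each reference, a hit refreshes the page's recency, while a miss steals the least recently used frame when none are free.

```python
from collections import OrderedDict

def simulate_lru(reference_string, num_frames):
    """Count page faults for a reference string under LRU replacement."""
    frames = OrderedDict()  # page -> None, kept in recency order (oldest first)
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # page fault: load the page
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # steal the least recently used frame
            frames[page] = None
    return faults

print(simulate_lru([1, 2, 3, 1, 4, 5], 3))  # -> 5 faults
```

With three frames, the reference to page 1 is the only hit; pages 4 and 5 each evict the least recently used resident page.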
Typically, a page fault that cannot be resolved results in an abnormal termination of the application. However, some systems allow the application to have exception handlers for such errors. The paging supervisor may handle a page fault exception in several different ways, depending on the details:
In most cases, there will be an update to the page table, possibly followed by purging the Translation Lookaside Buffer (TLB), and the system restarts the instruction that caused the exception.
If the free page frame queue is empty then the paging supervisor must free a page frame using the same page replacement algorithm used for page stealing.
Operating systems have memory areas that are pinned (never swapped to secondary storage). Other terms used are locked, fixed, or wired pages. For example, interrupt mechanisms rely on an array of pointers to their handlers, such as I/O completion and page fault. If the pages containing these pointers or the code that they invoke were pageable, interrupt-handling would become far more complex and time-consuming, particularly in the case of page fault interruptions. Hence, some part of the page table structures is not pageable.
Some pages may be pinned for short periods of time, others may be pinned for long periods of time, and still others may need to be permanently pinned. For example:
In IBM's operating systems for System/370 and successor systems, the term is "fixed", and such pages may be long-term fixed, or may be short-term fixed, or may be unfixed (i.e., pageable). System control structures are often long-term fixed (measured in wall-clock time, i.e., time measured in seconds, rather than time measured in fractions of one second) whereas I/O buffers are usually short-term fixed (usually measured in significantly less than wall-clock time, possibly for tens of milliseconds). Indeed, the OS has a special facility for "fast fixing" these short-term fixed data buffers (fixing which is performed without resorting to a time-consuming Supervisor Call instruction).
Multics used the term "wired". OpenVMS and Windows refer to pages temporarily made nonpageable (as for I/O buffers) as "locked", and simply "nonpageable" for those that are never pageable. The Single UNIX Specification also uses the term "locked" in the specification for mlock(), as do the mlock() man pages on many Unix-like systems.
In OS/VS1 and similar OSes, some parts of systems memory are managed in "virtual-real" mode, called "V=R". In this mode every virtual address corresponds to the same real address. This mode is used for interrupt mechanisms, for the paging supervisor and page tables in older systems, and for application programs using non-standard I/O management. For example, IBM's z/OS has 3 modes (virtual-virtual, virtual-real and virtual-fixed).[citation needed]
When paging and page stealing are used, a problem called "thrashing"[28] can occur, in which the computer spends an unsuitably large amount of time transferring pages to and from a backing store, hence slowing down useful work. A task's working set is the minimum set of pages that should be in memory in order for it to make useful progress. Thrashing occurs when there is insufficient memory available to store the working sets of all active programs. Adding real memory is the simplest response, but improving application design, scheduling, and memory usage can help. Another solution is to reduce the number of active tasks on the system. This reduces demand on real memory by swapping out the entire working set of one or more processes.
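The working-set idea can be made concrete with a small sketch (the window size and reference string are illustrative): under the common windowed approximation, the working set at time t is the set of distinct pages touched in the last `window` references.

```python
def working_set(reference_string, t, window):
    """Pages referenced in the last `window` references ending at time t."""
    start = max(0, t - window + 1)
    return set(reference_string[start:t + 1])

refs = [1, 2, 1, 3, 2, 4, 4, 4]
print(working_set(refs, 7, 4))  # pages touched in the last 4 references -> {2, 4}
```

If the sum of such working sets across all active tasks exceeds real memory, pages are stolen from sets that are still needed, and the system thrashes.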
A system that is thrashing is often the result of a sudden spike in page demand from a small number of running programs. Swap-token[29] is a lightweight and dynamic thrashing protection mechanism. The basic idea is to set a token in the system, which is randomly given to a process that has page faults when thrashing happens. The process that holds the token is given the privilege to allocate more physical memory pages to build its working set, and is expected to finish its execution quickly and release the memory pages to other processes. A time stamp is used to hand over the token one by one. The first version of swap-token was implemented in Linux 2.6.[30] The second version, called preempt swap-token, is also in Linux 2.6.[30] In this updated swap-token implementation, a priority counter is set for each process to track the number of swapped-out pages. The token is always given to the process with the highest priority, i.e., the highest number of swapped-out pages. The length of the time stamp is not a constant but is determined by the priority: the higher the number of swapped-out pages of a process, the longer its time stamp will be.
Some systems, such as the Burroughs B5500[31] and the current Unisys MCP systems,[32] use segmentation instead of paging, dividing virtual address spaces into variable-length segments. Using segmentation matches the allocated memory blocks to the logical needs and requests of the programs, rather than the physical view of a computer, although pages themselves are an artificial division of memory. The designers of the B5000 would have found the artificial size of pages to be Procrustean in nature, a story they would later use for the exact data sizes in the B1000.[33]
In the Burroughs and Unisys systems, each memory segment is described by a master descriptor, a single absolute descriptor which may be referenced by other relative (copy) descriptors, effecting sharing either within a process or between processes. Descriptors are central to the working of virtual memory in MCP systems. Descriptors contain not only the address of a segment, but also the segment length and its status in virtual memory, indicated by the 'p-bit' or 'presence bit', which records whether the address refers to a segment in main memory or to a secondary-storage block. When a non-resident segment (p-bit off) is accessed, an interrupt occurs to load the segment from secondary storage at the given address, or, if the address itself is 0, to allocate a new block. In the latter case, the length field in the descriptor is used to allocate a segment of that length.
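A toy sketch of presence-bit handling (the field names and the loader callback are illustrative, not the actual MCP descriptor layout): access through a descriptor with the p-bit off triggers a load, after which the descriptor points at main memory.

```python
class SegmentDescriptor:
    """Toy master descriptor in the style of Burroughs/Unisys MCP systems."""
    def __init__(self, address, length, present):
        self.address = address   # main-memory address, or secondary-storage block
        self.length = length     # used to size a fresh allocation
        self.present = present   # the 'p-bit'

def access(descriptor, load_segment):
    """Return the main-memory address, loading the segment on a p-bit fault."""
    if not descriptor.present:                      # presence interrupt
        descriptor.address = load_segment(descriptor)
        descriptor.present = True
    return descriptor.address

d = SegmentDescriptor(address=0, length=512, present=False)
print(hex(access(d, lambda desc: 0x4000)))  # loaded at a made-up address: 0x4000
```

Because copy descriptors refer back to the single master descriptor, only this one object needs updating when the segment moves.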
A further problem, beyond thrashing, in using a segmented scheme is checkerboarding,[34] where all free segments become too small to satisfy requests for new segments. The solution is to perform memory compaction to pack all used segments together and create a large free block from which further segments may be allocated. Since there is a single master descriptor for each segment, the new block address only needs to be updated in that single descriptor, since all copies refer to the master descriptor.
Paging is not free from fragmentation either – the fragmentation is internal to pages (internal fragmentation). If a requested block is smaller than a page, then some space in the page will be wasted; if a block requires slightly more than a page, a small area in another page is required, again resulting in wasted space. The fragmentation thus becomes a problem passed on to programmers, who may well distort their programs to match certain page sizes. With segmentation, the fragmentation is external to segments (external fragmentation) and thus a system problem, which was the aim of virtual memory in the first place: to relieve programmers of such memory considerations. In multi-processing systems, optimal operation of the system depends on the mix of independent processes at any time. Hybrid schemes of segmentation and paging may be used.
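The internal fragmentation from rounding a request up to whole pages is easy to quantify (4 KiB pages assumed here for illustration):

```python
import math

def internal_fragmentation(request_size, page_size=4096):
    """Bytes wasted when a request is rounded up to whole pages."""
    pages = math.ceil(request_size / page_size)
    return pages * page_size - request_size

print(internal_fragmentation(10000))  # 3 pages allocated, 2288 bytes wasted
```

On average, each allocation wastes about half a page, which is one reason page sizes are kept modest relative to typical allocations.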
The Intel 80286 supports a similar segmentation scheme as an option, but it is rarely used.
Segmentation and paging can be used together by dividing each segment into pages; systems with this memory structure, such as Multics and the IBM System/38, are usually paging-predominant, with segmentation providing memory protection.[35][36][37]
In the Intel 80386 and later IA-32 processors, the segments reside in a 32-bit linear, paged address space. Segments can be moved in and out of that space, and pages there can "page" in and out of main memory, providing two levels of virtual memory; few if any operating systems do so, instead using only paging. Early non-hardware-assisted x86 virtualization solutions combined paging and segmentation because x86 paging offers only two protection domains whereas a VMM, guest OS or guest application stack needs three.[38]: 22 The difference between paging and segmentation systems is not only about memory division; segmentation is visible to user processes, as part of memory model semantics. Hence, instead of memory that looks like a single large space, it is structured into multiple spaces.
This difference has important consequences; a segment is not a page with variable length, nor a simple way to lengthen the address space. Segmentation can provide a single-level memory model in which there is no differentiation between process memory and the file system: a process's potential address space consists only of a list of segments (files) mapped into it.[39]
This is not the same as the mechanisms provided by calls such as mmap and Win32's MapViewOfFile, because inter-file pointers do not work when mapping files into semi-arbitrary places. In Multics, a file (or a segment from a multi-segment file) is mapped into a segment in the address space, so files are always mapped at a segment boundary. A file's linkage section can contain pointers for which an attempt to load the pointer into a register or make an indirect reference through it causes a trap. The unresolved pointer contains an indication of the name of the segment to which the pointer refers and an offset within the segment; the handler for the trap maps the segment into the address space, puts the segment number into the pointer, changes the tag field in the pointer so that it no longer causes a trap, and returns to the code where the trap occurred, re-executing the instruction that caused the trap.[40] This eliminates the need for a linker completely[8] and works when different processes map the same file into different places in their private address spaces.[41]
Some operating systems provide for swapping entire address spaces, in addition to whatever facilities they have for paging and segmentation. When this occurs, the OS writes those pages and segments currently in real memory to swap files. In a swap-in, the OS reads back the data from the swap files but does not automatically read back pages that had been paged out at the time of the swap-out operation.
IBM's MVS, from OS/VS2 Release 2 through z/OS, provides for marking an address space as unswappable; doing so does not pin any pages in the address space. This can be done for the duration of a job by entering the name of an eligible[42] main program in the Program Properties Table with an unswappable flag. In addition, privileged code can temporarily make an address space unswappable using a SYSEVENT Supervisor Call instruction (SVC); certain changes[43] in the address space properties require that the OS swap it out and then swap it back in, using SYSEVENT TRANSWAP.[44]
Swapping does not necessarily require memory management hardware, if, for example, multiple jobs are swapped in and out of the same area of storage.
|
https://en.wikipedia.org/wiki/Virtual_memory
|
Chu spaces generalize the notion of topological space by dropping the requirements that the set of open sets be closed under union and finite intersection, that the open sets be extensional, and that the membership predicate (of points in open sets) be two-valued. The definition of continuous function remains unchanged other than having to be worded carefully to continue to make sense after these generalizations.
The name is due to Po-Hsiang Chu, who originally constructed a verification of *-autonomous categories as a graduate student under the direction of Michael Barr in 1979.[1]
Understood statically, a Chu space (A, r, X) over a set K consists of a set A of points, a set X of states, and a function r : A × X → K. This makes it an A × X matrix with entries drawn from K, or equivalently a K-valued binary relation between A and X (ordinary binary relations being 2-valued).
Understood dynamically, Chu spaces transform in the manner of topological spaces, with A as the set of points, X as the set of open sets, and r as the membership relation between them, where K is the set of all possible degrees of membership of a point in an open set. The counterpart of a continuous function from (A, r, X) to (B, s, Y) is a pair (f, g) of functions f : A → B, g : Y → X satisfying the adjointness condition s(f(a), y) = r(a, g(y)) for all a ∈ A and y ∈ Y. That is, f maps points forwards at the same time as g maps states backwards. The adjointness condition makes g the inverse image function f−1, while the choice of X for the codomain of g corresponds to the requirement for continuous functions that the inverse image of open sets be open. Such a pair is called a Chu transform or morphism of Chu spaces.
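The adjointness condition is easy to check mechanically. A small sketch with two Chu spaces over K = {0, 1} given as dicts (the particular spaces and maps are made up for illustration):

```python
def is_chu_transform(A, X, r, B, Y, s, f, g):
    """Check the adjointness condition s(f(a), y) == r(a, g(y))
    for all a in A and y in Y."""
    return all(s[(f[a], y)] == r[(a, g[y])] for a in A for y in Y)

# (A, r, X): two points, two states; entries drawn from K = {0, 1}.
A, X = [0, 1], ['p', 'q']
r = {(0, 'p'): 1, (0, 'q'): 0, (1, 'p'): 1, (1, 'q'): 1}
# (B, s, Y): one point, one state.
B, Y = ['u'], ['v']
s = {('u', 'v'): 1}
f = {0: 'u', 1: 'u'}   # maps points forwards
g = {'v': 'p'}         # maps states backwards
print(is_chu_transform(A, X, r, B, Y, s, f, g))  # True
```

Replacing g with {'v': 'q'} breaks the condition at a = 0, so that pair would not be a Chu transform.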
A topological space (X, T), where X is the set of points and T the set of open sets, can be understood as a Chu space (X, ∈, T) over {0, 1}. That is, the points of the topological space become those of the Chu space while the open sets become states, and the membership relation " ∈ " between points and open sets is made explicit in the Chu space. The condition that the set of open sets be closed under arbitrary (including empty) union and finite (including empty) intersection becomes the corresponding condition on the columns of the matrix. A continuous function f : X → X′ between two topological spaces becomes an adjoint pair (f, g) in which f is now paired with a realization of the continuity condition constructed as an explicit witness function g exhibiting the requisite open sets in the domain of f.
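This representation can be sketched concretely for a small finite space (the space below is illustrative): points index the rows, open sets index the columns, and each entry records membership.

```python
# A finite topological space (X, T) as a Chu space (X, ∈, T) over {0, 1}.
X = ['x', 'y']
T = [frozenset(), frozenset({'x'}), frozenset({'x', 'y'})]  # a topology on X

# The membership matrix r : X × T -> {0, 1}.
r = {(p, U): int(p in U) for p in X for U in T}

# One column of the matrix per open set (state):
columns = [[r[(p, U)] for p in X] for U in T]
print(columns)  # [[0, 0], [1, 0], [1, 1]]
```

Closure of T under union and intersection then amounts to a closure condition on the set of 0/1 columns, as described above.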
The category of Chu spaces over K and their maps is denoted by Chu(Set, K). As is clear from the symmetry of the definitions, it is a self-dual category: it is equivalent (in fact isomorphic) to its dual, the category obtained by reversing all the maps. It is furthermore a *-autonomous category with dualizing object (K, λ, {*}) where λ : K × {*} → K is defined by λ(k, *) = k (Barr 1979). As such it is a model of Jean-Yves Girard's linear logic (Girard 1987).
The more general enriched category Chu(V, k) originally appeared in an appendix to Barr (1979). The Chu space concept originated with Michael Barr and the details were developed by his student Po-Hsiang Chu, whose master's thesis formed the appendix. Ordinary Chu spaces arise as the case V = Set, that is, when the monoidal category V is specialized to the cartesian closed category Set of sets and their functions, but were not studied in their own right until more than a decade after the appearance of the more general enriched notion. A variant of Chu spaces, called dialectica spaces, due to de Paiva (1989), replaces the map condition (1) with the map condition (2):
The category Top of topological spaces and their continuous functions embeds in Chu(Set, 2) in the sense that there exists a full and faithful functor F : Top → Chu(Set, 2) providing for each topological space (X, T) its representation F((X, T)) = (X, ∈, T) as noted above. This representation is moreover a realization in the sense of Pultr and Trnková (1980), namely that the representing Chu space has the same set of points as the represented topological space and transforms in the same way via the same functions.
Chu spaces are remarkable for the wide variety of familiar structures they realize. Lafont and Streicher (1991) point out that Chu spaces over 2 realize both topological spaces and coherent spaces (introduced by J.-Y. Girard (1987) to model linear logic), while Chu spaces over K realize any category of vector spaces over a field whose cardinality is at most that of K. This was extended by Vaughan Pratt (1995) to the realization of k-ary relational structures by Chu spaces over 2^k. For example, the category Grp of groups and their homomorphisms is realized by Chu(Set, 8) since the group multiplication can be organized as a ternary relation. Chu(Set, 2) realizes a wide range of "logical" structures such as semilattices, distributive lattices, complete and completely distributive lattices, Boolean algebras, complete atomic Boolean algebras, etc. Further information on this and other aspects of Chu spaces, including their application to the modeling of concurrent behavior, may be found at Chu Spaces.
Chu spaces can serve as a model of concurrent computation in automata theory to express branching time and true concurrency. Chu spaces exhibit the quantum mechanical phenomena of complementarity and uncertainty. The complementarity arises as the duality of information and time, automata and schedules, and states and events. Uncertainty arises when a measurement is defined to be a morphism such that increasing structure in the observed object reduces the clarity of observation. This uncertainty can be calculated numerically from its form factor to yield the usual Heisenberg uncertainty relation. Chu spaces correspond to wavefunctions as vectors of Hilbert space.[2]
|
https://en.wikipedia.org/wiki/Chu_space
|
The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.[1] Often clients and servers communicate over a computer network on separate hardware, but both client and server may be on the same device. A server host runs one or more server programs, which share their resources with clients. A client usually does not share its computing resources, but it requests content or service from a server and may share its own content as part of the request. Clients, therefore, initiate communication sessions with servers, which await incoming requests.
Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
The server component provides a function or service to one or many clients, which initiate requests for such services.
Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a service.
Whether a computer is a client, a server, or both is determined by the nature of the application that requires the service functions. For example, a single computer can run web server and file server software at the same time to serve different data to clients making different kinds of requests. The client software can also communicate with server software within the same computer.[2] Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication.
Generally, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the relevant application protocol, i.e. the content and the formatting of the data for the requested service.
Clients and servers exchange messages in a request–response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. These protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API).[3] The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange.[4]
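A minimal request–response exchange over TCP can be sketched in a few lines (the trivial "protocol" here, which simply uppercases the request, is invented for illustration; a real protocol would also frame messages rather than rely on a single recv):

```python
import socket
import threading

def serve_one(listener):
    """Server side: await one request, send back a response."""
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024)       # read the client's request
        conn.sendall(request.upper())   # reply per the toy protocol

listener = socket.socket()
listener.bind(('127.0.0.1', 0))         # let the OS pick a free port
listener.listen(1)
threading.Thread(target=serve_one, args=(listener,), daemon=True).start()

client = socket.socket()                # the client initiates the session
client.connect(listener.getsockname())
client.sendall(b'hello, server')
response = client.recv(1024)
client.close()
listener.close()
print(response)  # b'HELLO, SERVER'
```

The server waits passively for the connection; the client opens the session, sends the request, and interprets the response, exactly the division of roles described above.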
A server may receive requests from many distinct clients in a short period. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, the server software may limit the availability to clients. Denial of service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates.
Encryption should be applied if sensitive information is to be communicated between the client and the server.
When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank's web server. The customer's login credentials are compared against a database, and the web server accesses that database server as a client. An application server interprets the returned data by applying the bank's business logic and provides the output to the web server. Finally, the web server returns the result to the client web browser for display.
In each step of this sequence of client–server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete.
This example illustrates a design pattern applicable to the client–server model: separation of concerns.
Server-side refers to programs and operations that run on the server. This is in contrast to client-side programs and operations, which run on the client.
"Server-side software" refers to a computer application, such as a web server, that runs on remote server hardware, reachable from a user's local computer, smartphone, or other device.[5] Operations may be performed server-side because they require access to information or functionality that is not available on the client, or because performing such operations on the client side would be slow, unreliable, or insecure.
Client and server programs may be commonly available ones, such as free or commercial web servers and web browsers, communicating with each other using standardized protocols. Or, programmers may write their own server, client, and communications protocol which can only be used with one another.
Server-side operations include both those that are carried out in response to client requests, and non-client-oriented operations such as maintenance tasks.[6][7]
In a computer security context, server-side vulnerabilities or attacks refer to those that occur on a server computer system, rather than on the client side, or in between the two. For example, an attacker might exploit an SQL injection vulnerability in a web application in order to maliciously change or gain unauthorized access to data in the server's database. Alternatively, an attacker might break into a server system using vulnerabilities in the underlying operating system and then be able to access the database and other files in the same manner as authorized administrators of the server.[8][9][10]
In the case of distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, while the bulk of the operations occur on the client side, the servers are responsible for coordinating the clients, sending them data to analyze, receiving and storing results, providing reporting functionality to project administrators, etc. In the case of an Internet-dependent user application like Google Earth, while querying and display of map data takes place on the client side, the server is responsible for permanent storage of map data, resolving user queries into map data to be returned to the client, etc.
Web applications and services can be implemented in almost any language, as long as they can return data to standards-based web browsers (possibly via intermediary programs) in formats which they can use.
Client-side refers to operations that are performed by the client in a computer network.
Typically, a client is a computer application, such as a web browser, that runs on a user's local computer, smartphone, or other device, and connects to a server as necessary. Operations may be performed client-side because they require access to information or functionality that is available on the client but not on the server, because the user needs to observe the operations or provide input, or because the server lacks the processing power to perform the operations in a timely manner for all of the clients it serves. Additionally, if operations can be performed by the client without sending data over the network, they may take less time, use less bandwidth, and incur a lesser security risk.
When the server serves data in a commonly used manner, for example according to standard protocols such as HTTP or FTP, users may have their choice of a number of client programs (e.g. most modern web browsers can request and receive data using both HTTP and FTP). In the case of more specialized applications, programmers may write their own server, client, and communications protocol which can only be used with one another.
Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be termed client-side operations.
In a computer security context, client-side vulnerabilities or attacks refer to those that occur on the client / user's computer system, rather than on the server side, or in between the two. As an example, if a server contained an encrypted file or message which could only be decrypted using a key housed on the user's computer system, a client-side attack would normally be an attacker's only opportunity to gain access to the decrypted contents. For instance, the attacker might cause malware to be installed on the client system, allowing the attacker to view the user's screen, record the user's keystrokes, and steal copies of the user's encryption keys, etc. Alternatively, an attacker might employ cross-site scripting vulnerabilities to execute malicious code on the client's system without needing to install any permanently resident malware.[8][9][10]
Distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, as well as Internet-dependent applications like Google Earth, rely primarily on client-side operations. They initiate a connection with the server (either in response to a user query, as with Google Earth, or in an automated fashion, as with SETI@home), and request some data. The server selects a data set (a server-side operation) and sends it back to the client. The client then analyzes the data (a client-side operation), and, when the analysis is complete, displays it to the user (as with Google Earth) and/or transmits the results of calculations back to the server (as with SETI@home).
An early form of client–server architecture is remote job entry, dating at least to OS/360 (announced 1964), where the request was to run a job, and the response was the output.
While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and user-host (or using-host), and these appear in the early documents RFC 5[11] and RFC 4.[12] This usage was continued at Xerox PARC in the mid-1970s.
One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL).[11] The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client–server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (predecessor of the Internet).
Client-host and server-host have subtly different meanings than client and server. A host is any computer connected to a network. Whereas the words server and client may refer either to a computer or to a computer program, server-host and client-host always refer to computers. The host is a versatile, multifunction computer; clients and servers are just programs that run on a host. In the client–server model, a server is more likely to be devoted to the task of serving.
An early use of the word client occurs in "Separating Data from Function in a Distributed File System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client).[13] By 1992, the word server had entered into general parlance.[14][15]
The client–server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. Centralized computing, however, specifically allocates a large number of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be.[16] It relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a rich client, such as a personal computer, has many resources and does not rely on a server for essential functions.
As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers, such as mainframes and minicomputers, to rich clients.[17] This afforded greater, more individualized dominion over computer resources, but complicated information technology management.[16][18][19] During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, more affordable mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s.[20][failed verification]
In addition to the client-server model, distributed computing applications often use the peer-to-peer (P2P) application architecture.
In the client-server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory and storage requirements of a server must be scaled appropriately to the expected workload. Load-balancing and failover systems are often employed to scale the server beyond a single physical machine.[21][22]
Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving and then distributing incoming requests to any available server capable of fulfilling them.
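The distribution step described above can be sketched with a simple round-robin policy (the backend names and the itertools-based rotation are illustrative, not any particular load balancer's implementation):

```python
import itertools

class RoundRobinBalancer:
    """Distributes incoming requests across backend servers in rotation."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, request):
        backend = next(self._cycle)  # pick the next server in the rotation
        return backend, request

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
assignments = [lb.route(f"req-{i}")[0] for i in range(6)]
print(assignments)  # each backend receives every third request
```

Real load balancers add health checks, weighting, and session affinity on top of a base policy like this one.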
In a peer-to-peer network, two or more computers (peers) pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent nodes in a non-hierarchical network. Unlike clients in a client-server or client-queue-client network, peers communicate with each other directly.[23] In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load.[24] If a node becomes unavailable, its shared resources remain available as long as other peers offer them. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests.
Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.[25]
https://en.wikipedia.org/wiki/Client%E2%80%93server
Clojure (/ˈkloʊʒər/, like closure)[17][18] is a dynamic and functional dialect of the programming language Lisp on the Java platform.[19][20]
Like most other Lisps, Clojure's syntax is built on S-expressions that are first parsed into data structures by a Lisp reader before being compiled.[21][17] Clojure's reader supports literal syntax for maps, sets, and vectors along with lists, and these are compiled to the mentioned structures directly.[21] Clojure treats code as data and has a Lisp macro system.[22] Clojure is a Lisp-1 and is not intended to be code-compatible with other dialects of Lisp, since it uses its own set of data structures incompatible with other Lisps.[22]
Clojure advocates immutability and immutable data structures and encourages programmers to be explicit about managing identity and its states.[23] This focus on programming with immutable values and explicit progression-of-time constructs is intended to facilitate developing more robust, especially concurrent, programs that are simple and fast.[24][25][17] While its type system is entirely dynamic, recent efforts have also sought the implementation of a dependent type system.[26]
The language was created by Rich Hickey in the mid-2000s, originally for the Java platform; the language has since been ported to other platforms, such as the Common Language Runtime (.NET). Hickey continues to lead development of the language as its benevolent dictator for life.
Rich Hickey is the creator of the Clojure language.[19] Before Clojure, he developed dotLisp, a similar project based on the .NET platform,[27] and three earlier attempts to provide interoperability between Lisp and Java: a Java foreign language interface for Common Lisp (jfli),[28] a Foreign Object Interface for Lisp (FOIL),[29] and a Lisp-friendly interface to Java Servlets (Lisplets).[30]
Hickey spent about two and a half years working on Clojure before releasing it publicly in October 2007,[31] much of that time working exclusively on Clojure with no outside funding. At the end of this time, Hickey sent an email announcing the language to some friends in the Common Lisp community.
Clojure's name, according to Hickey, is a word play on the programming concept "closure", incorporating the letters C, L, and J for C#, Lisp, and Java respectively: three languages which had a major influence on Clojure's design.[18]
Rich Hickey developed Clojure because he wanted a modern Lisp for functional programming, symbiotic with the established Java platform, and designed for concurrency.[24][25][32][17] He has also stressed the importance of simplicity in programming language design and software architecture, advocating for loose coupling, polymorphism via protocols and type classes instead of inheritance, stateless functions that are namespaced instead of methods, and replacing syntax with data.[33][34][35]
Clojure's approach to state is characterized by the concept of identities,[23] which are represented as a series of immutable states over time. Since states are immutable values, any number of workers can operate on them in parallel, and concurrency becomes a question of managing changes from one state to another. For this purpose, Clojure provides several mutable reference types, each having well-defined semantics for the transition between states.[23]
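The simplest of these reference types, the atom, advances an identity by applying a pure function to the current state. A rough Python analogue of that update discipline (illustrative only; Clojure's actual atoms are built on the JVM's atomic references, not a lock) is:

```python
import threading

class Atom:
    """A toy analogue of a Clojure atom: an identity holding immutable states."""
    def __init__(self, state):
        self._state = state
        self._lock = threading.Lock()

    def deref(self):
        return self._state

    def swap(self, fn, *args):
        # Atomically replace the current state with fn(state, *args),
        # mirroring Clojure's (swap! a fn args...).
        with self._lock:
            self._state = fn(self._state, *args)
            return self._state

counter = Atom(0)
threads = [
    threading.Thread(
        target=lambda: [counter.swap(lambda n: n + 1) for _ in range(1000)])
    for _ in range(4)
]
for t in threads: t.start()
for t in threads: t.join()
print(counter.deref())  # 4000: every state transition was applied exactly once
```

Because each state is an immutable value, workers never observe a half-updated identity; they only race on which transition is applied next.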
Clojure runs on the Java platform and, as a result, integrates with Java and fully supports calling Java code from Clojure,[36][17] and Clojure code can be called from Java, too.[37] The community uses tools such as the Clojure command-line interface (CLI)[38] or Leiningen for project automation, providing support for Maven integration. These tools handle project package management and dependencies and are configured using Clojure syntax.
As a Lisp dialect, Clojure supports functions as first-class objects, a read–eval–print loop (REPL), and a macro system.[6] Clojure's Lisp macro system is very similar to that of Common Lisp, with the exception that Clojure's version of the backquote (termed "syntax quote") qualifies symbols with their namespace. This helps prevent unintended name capture, as binding to namespace-qualified names is forbidden. It is possible to force a capturing macro expansion, but it must be done explicitly. Clojure does not allow user-defined reader macros, but the reader supports a more constrained form of syntactic extension.[39] Clojure supports multimethods[40] and, for interface-like abstractions, has protocol-based[41] polymorphism and a data type system using records,[42] providing high-performance and dynamic polymorphism designed to avoid the expression problem.
Clojure has support for lazy sequences and encourages the principle of immutability and persistent data structures. As a functional language, emphasis is placed on recursion and higher-order functions instead of side-effect-based looping. Automatic tail call optimization is not supported as the JVM does not support it natively;[43][44][45] it is possible to do so explicitly by using the recur keyword.[46] For parallel and concurrent programming Clojure provides software transactional memory,[47] a reactive agent system,[1] and channel-based concurrent programming.[48]
Clojure 1.7 introduced reader conditionals by allowing the embedding of Clojure, ClojureScript and ClojureCLR code in the same namespace.[49][21] Transducers were added as a method for composing transformations. Transducers enable higher-order functions such as map and fold to generalize over any source of input data. While traditionally these functions operate on sequences, transducers allow them to work on channels and let the user define their own models for transduction.[50][51][52]
Extensible Data Notation, or edn,[53] is a subset of the Clojure language intended as a data transfer format. It can be used to serialize and deserialize Clojure data structures, and Clojure itself uses a superset of edn to represent programs.
edn is used in a similar way to JSON or XML, but has a relatively large list of built-in elements, shown here with examples:
In addition to those elements, it supports extensibility through the use of tags, which consist of the character # followed by a symbol. When encountering a tag, the reader passes the value of the next element to the corresponding handler, which returns a data value. For example, this could be a tagged element: #myapp/Person {:first "Fred" :last "Mertz"}, whose interpretation will depend on the appropriate handler of the reader.
This definition of extension elements in terms of the others avoids relying on either convention or context to convey elements not included in the base set.
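A small edn document combining several built-in elements with tagged elements might look like this (#inst is one of edn's built-in tags for instants; #myapp/Person is the hypothetical user-defined tag from the example above):

```clojure
;; edn data: maps, keywords, strings, numbers, vectors, sets, and tags
{:name    "Fred Mertz"            ; string value under a keyword key
 :age     57                      ; integer
 :scores  [98.6 42 nil true]      ; vector mixing floats, integers, nil, booleans
 :roles   #{:neighbor :landlord}  ; set of keywords
 :born    #inst "1923-01-01T00:00:00Z"            ; built-in tagged element
 :contact #myapp/Person {:first "Fred" :last "Mertz"}} ; user-defined tag
```

A reader without a handler for #myapp/Person can still parse the document structurally, which is what makes the extension mechanism independent of convention or context.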
The primary platform of Clojure is Java,[20][36] but other target implementations exist. The most notable of these are ClojureScript,[54] which compiles to ECMAScript 3,[55] and ClojureCLR,[56] a full port on the .NET platform, interoperable with its ecosystem.
Other implementations of Clojure on different platforms include:
Tooling for Clojure development has seen significant improvement over the years. The following is a list of some popular IDEs and text editors with plug-ins that add support for programming in Clojure:[69]
In addition to the tools provided by the community, the official Clojure command-line interface (CLI) tools[38] have also become available on Linux, macOS, and Windows since Clojure 1.9.[83]
The development process is restricted to the Clojure core team, though issues are publicly visible at the Clojure JIRA project page.[84] Anyone can ask questions or submit issues and ideas at ask.clojure.org.[85] If it is determined that a new issue warrants a JIRA ticket, a core team member will triage it and add it. JIRA issues are processed by a team of screeners and finally approved by Rich Hickey.[86][87]
With continued interest in functional programming, Clojure's adoption by software developers using the Java platform has continued to increase.[88] The language has also been recommended by software developers such as Brian Goetz,[89][90][91] Eric Evans,[92][93] James Gosling,[94] Paul Graham,[95] and Robert C. Martin.[96][97][98][99] ThoughtWorks, while assessing functional programming languages for their Technology Radar,[100] described Clojure as "a simple, elegant implementation of Lisp on the JVM" in 2010 and promoted its status to "ADOPT" in 2012.[101]
The "JVM Ecosystem Report 2018" (claimed to be "the largest survey ever of Java developers"), prepared in collaboration by Snyk and Java Magazine, ranked Clojure as the second most used programming language on the JVM for "main applications".[102] Clojure is used in industry by firms[103] such as Apple,[104][105] Atlassian,[106] Funding Circle,[107] Netflix,[108] Nubank,[109] Puppet,[110] and Walmart,[111] as well as by government agencies such as NASA.[112] It has also been used for creative computing, including visual art, music, games, and poetry.[113]
In the 2023 edition of the Stack Overflow Developer Survey, Clojure was the fourth most admired language in the category of programming and scripting languages: 68.51% of the respondents who had worked with it in the past year said they would like to continue using it. In the desired category, however, it was chosen by only 2.2% of those surveyed, whereas the highest-scoring language, JavaScript, was desired by 40.15% of the developers participating in the survey.[114]
https://en.wikipedia.org/wiki/Clojure
A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.
The components of a cluster are usually connected to each other through fast local area networks, with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware[1][better source needed] and the same operating system, although in some setups (e.g. using Open Source Cluster Application Resources (OSCAR)), different operating systems can be used on each computer, or different hardware.[2]
Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.[3]
Computer clusters emerged as a result of the convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing.[citation needed] They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world such as IBM's Sequoia.[4] Prior to the advent of clusters, single-unit fault-tolerant mainframes with modular redundancy were employed; but the lower upfront cost of clusters, and increased speed of network fabric, has favoured the adoption of clusters. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but also have increased complexity in error handling, as in clusters error modes are not opaque to running programs.[5]
The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations.
The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast local area network.[6] The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.[6]
Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer-to-peer or grid computing which also use many nodes, but with a far more distributed nature.[6]
A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster, which may be built with a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer.[7] The developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost.[8]
Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters; e.g. the world's fastest machine in 2011 was the K computer, which has a distributed memory, cluster architecture.[9]
Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup.[10] Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law.
The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.
The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation.
The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977, and using ARCnet as the cluster interface. Clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem NonStop (a 1976 high-availability commercial product)[11][12] and the IBM S/390 Parallel Sysplex (circa 1994, primarily for business use).
Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use them within the same computer. Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976, and introduced internal parallelism via vector processing.[13] While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.
Computer clusters may be configured for different purposes ranging from general purpose business needs such as web-service support, to computation-intensive scientific calculations. In either case, the cluster may use a high-availability approach. Note that the attributes described below are not exclusive and a "computer cluster" may also use a high-availability approach, etc.
"Load-balancing" clusters are configurations in which cluster-nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized.[14] However, approaches to load-balancing may significantly differ among applications; e.g. a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster, which may just use a simple round-robin method by assigning each new request to a different node.[14]
Computer clusters are used for computation-intensive purposes, rather than handling IO-oriented operations such as web service or databases.[15] For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing".
"High-availability clusters" (also known as failover clusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of high-availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.
Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (the ability of a system to continue operating despite a malfunctioning node) enables scalability, and in high-performance situations, allows for a low frequency of maintenance routines, resource consolidation (e.g., RAID), and centralized management. Advantages include enabling data recovery in the event of a disaster and providing parallel data processing and high processing capacity.[16][17]
Clusters provide scalability through the ability to add nodes horizontally: more computers may be added to the cluster to improve its performance, redundancy and fault tolerance. This can be an inexpensive alternative to scaling up a single node in the cluster, and it allows larger computational loads to be executed by a larger number of lower-performing computers.
When adding a new node to a cluster, reliability increases because the entire cluster does not need to be taken down. A single node can be taken down for maintenance, while the rest of the cluster takes on the load of that individual node.
Clustering a large number of computers also lends itself to the use of distributed file systems and RAID, both of which can increase the reliability and speed of a cluster.
One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approaching grid computing.
In a Beowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "Master", which is a specific computer handling the scheduling and management of the slaves.[15] In a typical implementation the Master has two network interfaces, one that communicates with the private Beowulf network for the slaves, the other for the general purpose network of the organization.[15] The slave computers typically have their own version of the same operating system, and local memory and disk space. However, the private slave network may also have a large and shared file server that stores global persistent data, accessed by the slaves as needed.[15]
A special purpose 144-node DEGIMA cluster is tuned to running astrophysical N-body simulations using the Multiple-Walk parallel tree code, rather than general purpose scientific computations.[18]
Due to the increasing computing power of each generation of game consoles, a novel use has emerged where they are repurposed into high-performance computing (HPC) clusters. Some examples of game console clusters are Sony PlayStation clusters and Microsoft Xbox clusters. Another example of a consumer game product is the Nvidia Tesla Personal Supercomputer workstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards can also be used. The use of graphics cards (or rather their GPUs) to do calculations for grid computing is vastly more economical than using CPUs, despite being less precise. However, when using double-precision values, they become as precise to work with as CPUs and are still much less costly (purchase cost).[2]
Computer clusters have historically run on separate physical computers with the same operating system. With the advent of virtualization, the cluster nodes may run on separate physical computers with different operating systems that are overlaid with a virtual layer to look similar.[19][citation needed] The cluster may also be virtualized on various configurations as maintenance takes place; an example implementation is Xen as the virtualization manager with Linux-HA.[19]
As the computer clusters were appearing during the 1980s, so were supercomputers. One of the elements that distinguished the three classes at that time was that the early supercomputers relied on shared memory. Clusters do not typically use physically shared memory, while many supercomputer architectures have also abandoned it.
However, the use of a clustered file system is essential in modern computer clusters.[citation needed] Examples include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes or the Oracle Cluster File System.
Two widely used approaches for communication between cluster nodes are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine).[20]
PVM was developed at the Oak Ridge National Laboratory around 1989 before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that paint the node as a "parallel virtual machine". PVM provides a run-time environment for message-passing, task and resource management, and fault notification. PVM can be used by user programs written in C, C++, or Fortran, etc.[20][21]
MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by ARPA and the National Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations. MPI implementations typically use TCP/IP and socket connections.[20] MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran, Python, etc.[21] Thus, unlike PVM, which provides a concrete implementation, MPI is a specification which has been implemented in systems such as MPICH and Open MPI.[21][22]
One of the challenges in the use of a computer cluster is the cost of administrating it, which can at times be as high as the cost of administrating N independent machines if the cluster has N nodes.[23] In some cases this provides an advantage to shared memory architectures with lower administration costs.[23] This has also made virtual machines popular, due to the ease of administration.[23]
When a large multi-user cluster needs to access very large amounts of data, task scheduling becomes a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster. Therefore, mapping tasks onto CPU cores and GPU devices provides significant challenges.[24] This is an area of ongoing research; algorithms that combine and extend MapReduce and Hadoop have been proposed and studied.[24]
When a node in a cluster fails, strategies such as "fencing" may be employed to keep the rest of the system operational.[25][26]Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods; one disables a node itself, and the other disallows access to resources such as shared disks.[25]
The STONITH method stands for "Shoot The Other Node In The Head", meaning that the suspected node is disabled or powered off. For instance, power fencing uses a power controller to turn off an inoperable node.[25]
The resources fencing approach disallows access to resources without powering off the node. This may include persistent reservation fencing via SCSI-3, fibre channel fencing to disable the fibre channel port, or global network block device (GNBD) fencing to disable access to the GNBD server.
Load balancing clusters such as web servers use cluster architectures to support a large number of users; typically each user request is routed to a specific node, achieving task parallelism without multi-node cooperation, given that the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for a small number of users need to take advantage of the parallel processing capabilities of the cluster and partition "the same computation" among several nodes.[27]
Automatic parallelization of programs remains a technical challenge, but parallel programming models can be used to effectuate a higher degree of parallelism via the simultaneous execution of separate portions of a program on different processors.[27][28]
Developing and debugging parallel programs on a cluster requires parallel language primitives and suitable tools such as those discussed by the High Performance Debugging Forum (HPDF), which resulted in the HPD specifications.[21][29] Tools such as TotalView were then developed to debug parallel implementations on computer clusters which use Message Passing Interface (MPI) or Parallel Virtual Machine (PVM) for message passing.
The University of California, Berkeley Network of Workstations (NOW) system gathers cluster data and stores them in a database, while a system such as PARMON, developed in India, allows visually observing and managing large clusters.[21]
Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation.[30] This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore the system to a stable state so that processing can resume without needing to recompute results.[30]
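A minimal sketch of the idea (the checkpoint file name and the work loop are hypothetical; Python's standard pickle module stands in for a real checkpointing library):

```python
import os
import pickle

CHECKPOINT = "state.ckpt"  # hypothetical checkpoint file

def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "total": 0}

def save_state(state):
    # Write to a temporary file first, then rename atomically, so a crash
    # mid-write never corrupts the previous checkpoint.
    with open(CHECKPOINT + ".tmp", "wb") as f:
        pickle.dump(state, f)
    os.replace(CHECKPOINT + ".tmp", CHECKPOINT)

state = load_state()
while state["step"] < 10:      # stands in for a long-running computation
    state["total"] += state["step"]
    state["step"] += 1
    save_state(state)          # checkpoint after every completed step

print(state["total"])
os.remove(CHECKPOINT)          # clean up once the computation finishes
```

If the process is killed mid-run, restarting it resumes from the last saved step instead of recomputing from zero, which is the property large clusters rely on.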
The Linux world supports various cluster software. For application clustering, there are distcc and MPICH. Linux Virtual Server and Linux-HA are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. MOSIX, LinuxPMI, Kerrighed, and OpenSSI are full-blown clusters integrated into the kernel that provide for automatic process migration among homogeneous nodes. OpenSSI, openMosix and Kerrighed are single-system image implementations.
Microsoft Windows Compute Cluster Server 2003, based on the Windows Server platform, provides pieces for high-performance computing like the job scheduler, MSMPI library and management tools.
gLite is a set of middleware technologies created by the Enabling Grids for E-sciencE (EGEE) project.
Slurm is also used to schedule and manage some of the largest supercomputer clusters (see the TOP500 list).
Although most computer clusters are permanent fixtures, attempts at flash mob computing have been made to build short-lived clusters for specific computations. However, larger-scale volunteer computing systems such as BOINC-based systems have had more followers.
https://en.wikipedia.org/wiki/Cluster_computing
In information technology and computer science, especially in the fields of computer programming, operating systems, multiprocessors, and databases, concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible.
Computer systems, both software and hardware, consist of modules, or components. Each component is designed to operate correctly, i.e., to obey or to meet certain consistency rules. When components that operate concurrently interact by messaging or by sharing accessed data (in memory or storage), a certain component's consistency may be violated by another component. The general area of concurrency control provides rules, methods, design methodologies, and theories to maintain the consistency of components operating concurrently while interacting, and thus the consistency and correctness of the whole system. Introducing concurrency control into a system means applying operation constraints which typically result in some performance reduction. Operation consistency and correctness should be achieved with as much efficiency as possible, without reducing performance below reasonable levels. Concurrency control can require significant additional complexity and overhead in a concurrent algorithm compared to the simpler sequential algorithm.
For example, a failure in concurrency control can result in data corruption from torn read or write operations.
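A classic failure of this kind is the lost update: two threads read the same value, both modify it, and one modification is overwritten. A minimal sketch of the hazard and its lock-based fix (Python's standard threading module; the shared counter is illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Read-modify-write with no coordination: two threads can read the same
    # value of counter and one of their increments is then lost.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # The lock makes each read-modify-write atomic with respect to other
    # threads, so no update can be lost.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

counter = 0
threads = [threading.Thread(target=safe_increment, args=(10000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000: all updates applied
```

Database concurrency control generalizes this idea from one counter to whole schedules of reads and writes over shared data.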
Concurrency control in database management systems (DBMS; e.g., Bernstein et al. 1987, Weikum and Vossen 2001), other transactional objects, and related distributed applications (e.g., grid computing and cloud computing) ensures that database transactions are performed concurrently without violating the data integrity of the respective databases. Thus concurrency control is an essential element for correctness in any system where two or more database transactions, executed with time overlap, can access the same data, e.g., virtually in any general-purpose database system. Consequently, a vast body of related research has been accumulated since database systems emerged in the early 1970s. A well established concurrency control theory for database systems is outlined in the references mentioned above: serializability theory, which allows one to effectively design and analyze concurrency control methods and mechanisms. An alternative theory for concurrency control of atomic transactions over abstract data types is presented in (Lynch et al. 1993), and is not utilized below. This theory is more refined and complex, with a wider scope, and has been less utilized in the database literature than the classical theory above. Each theory has its pros and cons, emphasis and insight. To some extent they are complementary, and their merging may be useful.
To ensure correctness, a DBMS usually guarantees that onlyserializabletransaction schedulesare generated, unlessserializabilityisintentionally relaxedto increase performance, but only in cases where application correctness is not harmed. For maintaining correctness in cases of failed (aborted) transactions (which can always happen for many reasons) schedules also need to have therecoverability(from abort) property. A DBMS also guarantees that no effect ofcommittedtransactions is lost, and no effect ofaborted(rolled back) transactions remains in the related database. Overall transaction characterization is usually summarized by theACIDrules below. As databases have becomedistributed, or needed to cooperate in distributed environments (e.g.,Federated databasesin the early 1990, andCloud computingcurrently), the effective distribution of concurrency control mechanisms has received special attention.
The concept of adatabase transaction(oratomic transaction) has evolved in order to enable both a well understood database system behavior in a faulty environment where crashes can happen any time, andrecoveryfrom a crash to a well understood database state. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading adatabase object, writing, acquiring lock, etc.), an abstraction supported in database and also other systems. Each transaction has well defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands). Every database transaction obeys the following rules (by support in the database system; i.e., a database system is designed to guarantee them for the transactions it runs):
The concept of atomic transaction has been extended during the years to what has becomeBusiness transactionswhich actually implement types ofWorkflowand are not atomic. However also such enhanced transactions typically utilize atomic transactions as components.
If transactions are executedserially, i.e., sequentially with no overlap in time, no transaction concurrency exists. However, if concurrent transactions with interleaving operations are allowed in an uncontrolled manner, some unexpected, undesirable results may occur, such as:
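One such anomaly is the lost update. The following minimal sketch (a hypothetical two-transaction interleaving, simulated sequentially in Python rather than with real threads) illustrates it:

```python
# Simulated interleaving of two uncontrolled transactions, T1 and T2,
# that both read the shared value before either writes it back.
balance = 100

t1_read = balance            # T1 reads 100
t2_read = balance            # T2 reads 100 (before T1 writes)
balance = t1_read + 50       # T1 writes 150 (deposit of 50)
balance = t2_read - 30       # T2 writes 70: T1's deposit is lost

print(balance)  # 70, not the expected 120
```

With serial execution, either order of the two transactions would leave the balance at 120; the interleaving silently discards T1's update.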
Most high-performance transactional systems need to run transactions concurrently to meet their performance requirements. Thus, without concurrency control such systems can neither provide correct results nor maintain their databases consistently.
The main categories of concurrency control mechanisms are:
Different categories provide different performance, i.e., different average transaction completion rates (throughput), depending on the transaction type mix, the level of computing parallelism, and other factors. If selection and knowledge about trade-offs are available, then the category and method should be chosen to provide the highest performance.
The mutual blocking between two or more transactions (where each one blocks the other) results in a deadlock, where the transactions involved are stalled and cannot reach completion. Most non-optimistic mechanisms (with blocking) are prone to deadlocks, which are resolved by an intentional abort of a stalled transaction (which releases the other transactions in that deadlock) and its immediate restart and re-execution. The likelihood of a deadlock is typically low.
Blocking, deadlocks, and aborts all result in performance reduction, and hence the trade-offs between the categories.
Many methods for concurrency control exist. Most of them can be implemented within either main category above. The major methods,[1] each of which has many variants, and which in some cases may overlap or be combined, are:
Other major concurrency control types that are utilized in conjunction with the methods above include:
The most common mechanism type in database systems since their early days in the 1970s has been strong strict two-phase locking (SS2PL; also called rigorous scheduling or rigorous 2PL), which is a special case (variant) of two-phase locking (2PL). It is pessimistic. In spite of its long name (for historical reasons) the idea of the SS2PL mechanism is simple: "Release all locks applied by a transaction only after the transaction has ended." SS2PL (or rigorousness) is also the name of the set of all schedules that can be generated by this mechanism, i.e., these SS2PL (or rigorous) schedules have the SS2PL (or rigorousness) property.
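The SS2PL rule can be sketched in a few lines of Python. This is a hypothetical illustration, not a real lock manager: each data item has a lock that a transaction acquires on first access and releases only when the transaction ends.

```python
import threading

class Transaction:
    """Hypothetical SS2PL sketch: acquire a lock on each data item when
    it is first accessed, and release all held locks only at end()."""

    def __init__(self, lock_table):
        self.lock_table = lock_table   # maps item name -> threading.Lock
        self.held = []

    def access(self, item):
        lock = self.lock_table[item]
        if lock not in self.held:
            lock.acquire()             # blocks if another transaction holds it
            self.held.append(lock)

    def end(self):                     # commit or abort: only now release
        for lock in reversed(self.held):
            lock.release()
        self.held.clear()

# Single-threaded demonstration:
table = {"x": threading.Lock(), "y": threading.Lock()}
t = Transaction(table)
t.access("x")
t.access("y")
assert table["x"].locked() and table["y"].locked()
t.end()
assert not table["x"].locked() and not table["y"].locked()
```

Because no lock is released before the transaction ends, no other transaction can observe its intermediate state, which is what yields the SS2PL schedule property.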
Concurrency control mechanisms firstly need to operate correctly, i.e., to maintain each transaction's integrity rules (as related to concurrency; application-specific integrity rules are out of scope here) while transactions are running concurrently, and thus the integrity of the entire transactional system. Correctness needs to be achieved with as good performance as possible. In addition, there is an increasing need to operate effectively while transactions are distributed over processes, computers, and computer networks. Other subjects that may affect concurrency control are recovery and replication.
For correctness, a common major goal of most concurrency control mechanisms is generating schedules with the serializability property. Without serializability undesirable phenomena may occur, e.g., money may disappear from accounts, or be generated from nowhere. Serializability of a schedule means equivalence (in the resulting database values) to some serial schedule with the same transactions (i.e., in which transactions are sequential with no overlap in time, and thus completely isolated from each other: no concurrent access by any two transactions to the same data is possible). Serializability is considered the highest level of isolation among database transactions, and the major correctness criterion for concurrent transactions. In some cases compromised, relaxed forms of serializability are allowed for better performance (e.g., the popular snapshot isolation mechanism) or to meet availability requirements in highly distributed systems (see eventual consistency), but only if the application's correctness is not violated by the relaxation (e.g., no relaxation is allowed for money transactions, since by relaxation money can disappear, or appear from nowhere).
Almost all implemented concurrency control mechanisms achieve serializability by providing conflict serializability, a broad special case of serializability (i.e., it covers and enables most serializable schedules, and does not impose significant additional delay-causing constraints) which can be implemented efficiently.
Concurrency control typically also ensures the recoverability property of schedules for maintaining correctness in cases of aborted transactions (which can always happen for many reasons). Recoverability (from abort) means that no committed transaction in a schedule has read data written by an aborted transaction. Such data disappear from the database (upon the abort) and are parts of an incorrect database state. Reading such data violates the consistency rule of ACID. Unlike serializability, recoverability cannot be compromised or relaxed in any case, since any relaxation results in quick database integrity violation upon aborts. The major methods listed above provide serializability mechanisms. None of them in its general form automatically provides recoverability, and special considerations and mechanism enhancements are needed to support recoverability. A commonly utilized special case of recoverability is strictness, which allows efficient database recovery from failure (but excludes optimistic implementations).
With the fast technological development of computing, the difference between local and distributed computing over low-latency networks or buses is blurring. Thus the quite effective utilization of local techniques in such distributed environments is common, e.g., in computer clusters and multi-core processors. However, the local techniques have their limitations and use multi-processes (or threads) supported by multi-processors (or multi-cores) to scale. This often turns transactions into distributed ones, if they themselves need to span multi-processes. In these cases most local concurrency control techniques do not scale well.
All systems are prone to failures, and handling recovery from failure is a must. The properties of the generated schedules, which are dictated by the concurrency control mechanism, may affect the effectiveness and efficiency of recovery. For example, the strictness property (mentioned in the section Recoverability above) is often desirable for an efficient recovery.
For high availability, database objects are often replicated. Updates of replicas of the same database object need to be kept synchronized. This may affect the way concurrency control is done (e.g., Gray et al. 1996[2]).
Multitasking operating systems, especially real-time operating systems, need to maintain the illusion that all tasks running on top of them are running at the same time, even though only one or a few tasks really are running at any given moment due to the limitations of the hardware the operating system is running on. Such multitasking is fairly simple when all tasks are independent of each other. However, when several tasks try to use the same resource, or when tasks try to share information, it can lead to confusion and inconsistency. The task of concurrent computing is to solve that problem. Some solutions involve "locks" similar to the locks used in databases, but they risk causing problems of their own, such as deadlock. Other solutions are non-blocking algorithms and read-copy-update.
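Such a lock protecting a shared resource between tasks can be sketched with Python's threading module (the counter, thread count, and iteration count here are arbitrary illustration values):

```python
import threading

counter = 0
lock = threading.Lock()

def task(n):
    global counter
    for _ in range(n):
        with lock:          # mutual exclusion around the shared update
            counter += 1

threads = [threading.Thread(target=task, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no increments are lost under the lock
```

Without the lock, the read-modify-write of `counter` could interleave between threads and lose updates; with it, each increment is atomic with respect to the other tasks.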
|
https://en.wikipedia.org/wiki/Concurrency_control
|
Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts.
This is a property of a system—whether a program, computer, or a network—where there is a separate execution point or "thread of control" for each process. A concurrent system is one where a computation can advance without waiting for all other computations to complete.[1]
Concurrent computing is a form of modular programming. In its paradigm an overall computation is factored into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare.[2]
The concept of concurrent computing is frequently confused with the related but distinct concept of parallel computing,[3][4] although both can be described as "multiple processes executing during the same period of time". In parallel computing, execution occurs at the same physical instant: for example, on separate processors of a multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle).[a] By contrast, concurrent computing consists of process lifetimes overlapping, but execution does not happen at the same instant. The goal here is to model processes that happen concurrently, like multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.[5]: 1
For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via time-sharing slices: only one process runs at a time, and if it does not complete during its time slice, it is paused, another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.[citation needed]
Concurrent computations may be executed in parallel,[3][6] for example, by assigning each process to a separate processor or processor core, or distributing a computation across a network.
The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2:[citation needed]
The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, concurrent/sequential and parallel/serial are used as opposing pairs.[7] A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called a serial schedule. A set of tasks that can be scheduled serially is serializable, which simplifies concurrency control.[citation needed]
The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions.[6] Potential problems include race conditions, deadlocks, and resource starvation. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resource balance:
Suppose balance = 500, and two concurrent threads make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, or non-blocking algorithms.
There are advantages of concurrent computing:
Introduced in 1962, Petri nets were an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, and dataflow architectures were created to physically implement the ideas of dataflow theory. Beginning in the late 1970s, process calculi such as the Calculus of Communicating Systems (CCS) and Communicating Sequential Processes (CSP) were developed to permit algebraic reasoning about systems composed of interacting components. The π-calculus added the capability for reasoning about dynamic topologies.
Input/output automata were introduced in 1987.
Logics such as Lamport's TLA+, and mathematical models such as traces and actor event diagrams, have also been developed to describe the behavior of concurrent systems.
Software transactional memory borrows from database theory the concept of atomic transactions and applies it to memory accesses.
Concurrent programming languages and multiprocessor programs must have a consistency model (also known as a memory model). The consistency model defines rules for how operations on computer memory occur and how results are produced.
One of the first consistency models was Leslie Lamport's sequential consistency model. Sequential consistency is the property of a program that its execution produces the same results as a sequential program. Specifically, a program is sequentially consistent if "the results of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program".[10]
A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.
In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes:
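The message-passing class can be sketched with a thread-safe queue (the producer/consumer names are hypothetical; `queue.Queue` serves as the channel):

```python
import queue
import threading

mailbox = queue.Queue()   # channel: components communicate by messages
results = []

def producer():
    for i in range(3):
        mailbox.put(i)    # send a message
    mailbox.put(None)     # sentinel: end of stream

def consumer():
    while True:
        msg = mailbox.get()   # receive (blocks until a message arrives)
        if msg is None:
            break
        results.append(msg)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

print(results)  # [0, 1, 2]
```

Note that the two threads never touch each other's variables directly; all coordination goes through the queue, which is the defining trait of message passing.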
Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead are lower in a message passing system, but the overhead of message passing itself is greater than for a procedure call. These differences are often overwhelmed by other performance factors.
Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s).
The academic study of concurrent algorithms started in the 1960s, with Dijkstra (1965) credited as the first paper in this field, identifying and solving mutual exclusion.[11]
Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks. Examples follow.
At the programming language level:
At the operating system level:
At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices.
Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory), or futures and promises. Such languages are sometimes described as concurrency-oriented languages or concurrency-oriented programming languages (COPL).[12]
Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can be and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.[citation needed]
Many concurrent programming languages have been developed more as research languages (e.g., Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times over the last 20 years. A non-exhaustive list of languages which use or provide concurrent programming facilities:
Many other languages provide support for concurrency in the form of libraries, at levels roughly comparable with the above list.
|
https://en.wikipedia.org/wiki/Concurrent_computing
|
Concurrent object-oriented programming is a programming paradigm which combines object-oriented programming (OOP) with concurrency. While numerous programming languages, such as Java, combine OOP with concurrency mechanisms like threads, the phrase "concurrent object-oriented programming" primarily refers to systems where objects themselves are a concurrency primitive, such as when objects are combined with the actor model.
|
https://en.wikipedia.org/wiki/Concurrent_object-oriented_programming
|
In software engineering, concurrency patterns are those types of design patterns that deal with the multi-threaded programming paradigm.
Examples of this class of patterns include:
Recordings about concurrency patterns from Software Engineering Radio:
|
https://en.wikipedia.org/wiki/Concurrency_pattern
|
CADP[1] (Construction and Analysis of Distributed Processes) is a toolbox for the design of communication protocols and distributed systems. CADP is developed by the CONVECS team (formerly by the VASY team) at INRIA Rhône-Alpes and connected to various complementary tools. CADP is maintained, regularly improved, and used in many industrial projects.
The purpose of the CADP toolkit is to facilitate the design of reliable systems by use of formal description techniques together with software tools for simulation, rapid application development, verification, and test generation.
CADP can be applied to any system that comprises asynchronous concurrency, i.e., any system whose behavior can be modeled as a set of parallel processes governed by interleaving semantics. Therefore, CADP can be used to design hardware architecture, distributed algorithms, telecommunications protocols, etc.
The enumerative verification (also known as explicit state verification) techniques implemented in CADP, though less general than theorem proving, enable an automatic, cost-efficient detection of design errors in complex systems.
CADP includes tools to support use of two approaches in formal methods, both of which are needed for reliable systems design:
Work began on CADP in 1986, when the development of the first two tools, CAESAR and ALDEBARAN, was undertaken. In 1989, the CADP acronym was coined, which stood for CAESAR/ALDEBARAN Distribution Package. Over time, several tools were added, including programming interfaces that enabled tools to be contributed: the CADP acronym then became the CAESAR/ALDEBARAN Development Package. Currently CADP contains over 50 tools. While keeping the same acronym, the name of the toolbox has been changed to better indicate its purpose: Construction and Analysis of Distributed Processes.
The releases of CADP have been successively named with alphabetic letters (from "A" to "Z"), then with the names of cities hosting academic research groups actively working on the LOTOS language and, more generally, the names of cities in which major contributions to concurrency theory have been made.
Between major releases, minor releases are often available, providing early access to new features and improvements. For more information, see the change list page on the CADP website.
CADP offers a wide set of functionalities, ranging from step-by-step simulation to massively parallel model checking. It includes:
CADP is designed in a modular way and puts the emphasis on intermediate formats and programming interfaces (such as the BCG and OPEN/CAESAR software environments), which allow the CADP tools to be combined with other tools and adapted to various specification languages.
Verification is the comparison of a complex system against a set of properties characterizing the intended functioning of the system (for instance, deadlock freedom, mutual exclusion, fairness, etc.).
Most of the verification algorithms in CADP are based on the labeled transition systems (or, simply, automata or graphs) model, which consists of a set of states, an initial state, and a transition relation between states. This model is often generated automatically from high level descriptions of the system under study, then compared against the system properties using various decision procedures. Depending on the formalism used to express the properties, two approaches are possible:
Although these techniques are efficient and automated, their main limitation is the state explosion problem, which occurs when models are too large to fit in computer memory. CADP provides software technologies for handling models in two complementary ways:
Accurate specification of reliable, complex systems requires a language that is executable (for enumerative verification) and has formal semantics (to avoid language ambiguities that could lead to interpretation divergences between designers and implementors). Formal semantics are also required when it is necessary to establish the correctness of an infinite system; this cannot be done using enumerative techniques, because they deal only with finite abstractions, so it must be done using theorem proving techniques, which only apply to languages with a formal semantics.
CADP acts on a LOTOS description of the system. LOTOS is an international standard for protocol description (ISO/IEC standard 8807:1989), which combines the concepts of process algebras (in particular CCS and CSP) and algebraic abstract data types. Thus, LOTOS can describe both asynchronous concurrent processes and complex data structures.
LOTOS was heavily revised in 2001, leading to the publication of E-LOTOS (Enhanced-Lotos, ISO/IEC standard 15437:2001), which tries to provide a greater expressiveness (for instance, by introducing quantitative time to describe systems with real-time constraints) together with a better user friendliness.
Several tools exist to convert descriptions in other process calculi or intermediate format into LOTOS, so that the CADP tools can then be used for verification.
CADP is distributed free of charge to universities and public research centers. Users in industry can obtain an evaluation license for non-commercial use during a limited period of time, after which a full license is required. To request a copy of CADP, complete the registration form.[3] After the license agreement has been signed, you will receive details of how to download and install CADP.
The toolbox contains several tools:
A number of tools have been developed within the OPEN/CAESAR environment:
The connection between explicit models (such as BCG graphs) and implicit models (explored on the fly) is ensured by OPEN/CAESAR-compliant compilers including:
The CADP toolbox also includes additional tools, such as ALDEBARAN and TGV (Test Generation based on Verification) developed by the Verimag laboratory (Grenoble) and the Vertecs project-team of INRIA Rennes.
The CADP tools are well-integrated and can be accessed easily using either the EUCALYPTUS graphical interface or the SVL[10]scripting language. Both EUCALYPTUS and SVL provide users with an easy, uniform access to the CADP tools by performing file format conversions automatically whenever needed and by supplying appropriate command-line options as the tools are invoked.
|
https://en.wikipedia.org/wiki/Construction_and_Analysis_of_Distributed_Processes
|
D, also known as dlang, is a multi-paradigm system programming language created by Walter Bright at Digital Mars and released in 2001. Andrei Alexandrescu joined the design and development effort in 2007. Though it originated as a re-engineering of C++, D is now a very different language. As it has developed, it has drawn inspiration from other high-level programming languages. Notably, it has been influenced by Java, Python, Ruby, C#, and Eiffel.
The D language reference describes it as follows:
D is a general-purpose systems programming language with a C-like syntax that compiles to native code. It is statically typed and supports both automatic (garbage collected) and manual memory management. D programs are structured as modules that can be compiled separately and linked with external libraries to create native libraries or executables.[11]
D is not source-compatible with C and C++ source code in general. However, any code that is legal in both C/C++ and D should behave in the same way.
Like C++, D has closures, anonymous functions, compile-time function execution, design by contract, ranges, built-in container iteration concepts, and type inference. D's declaration, statement and expression syntaxes also closely match those of C++.
Unlike C++, D also implements garbage collection, first-class arrays (std::array in C++ is technically not first class), array slicing, nested functions and lazy evaluation. D uses Java-style single inheritance with interfaces and mixins rather than C++-style multiple inheritance.
D is a systems programming language. Like C++, and unlike application languages such as Java and C#, D supports low-level programming, including inline assembler. Inline assembler allows programmers to enter machine-specific assembly code within standard D code. System programmers use this method to access the low-level features of the processor that are needed to run programs that interface directly with the underlying hardware, such as operating systems and device drivers. Low-level programming is also used to write higher-performance code than would be produced by a compiler.
D supports function overloading and operator overloading. Symbols (functions, variables, classes) can be declared in any order; forward declarations are not needed.
In D, text character strings are arrays of characters, and arrays in D are bounds-checked.[12] D has first-class types for complex and imaginary numbers.[13]
D supports five main programming paradigms:
Imperative programming in D is almost identical to that in C. Functions, data, statements, declarations and expressions work just as they do in C, and the C runtime library may be accessed directly. On the other hand, unlike C, D's foreach loop construct allows looping over a collection. D also allows nested functions, which are functions that are declared inside another function, and which may access the enclosing function's local variables.
Object-oriented programming in D is based on a single inheritance hierarchy, with all classes derived from class Object. D does not support multiple inheritance; instead, it uses Java-style interfaces, which are comparable to C++'s pure abstract classes, and mixins, which separate common functionality from the inheritance hierarchy. D also allows the defining of static and final (non-virtual) methods in interfaces.
Interfaces and inheritance in D support covariant types for return types of overridden methods.
D supports type forwarding, as well as optional custom dynamic dispatch.
Classes (and interfaces) in D can contain invariants which are automatically checked before and after entry to public methods, in accordance with the design by contract methodology.
Many aspects of classes (and structs) can be introspected automatically at compile time (a form of reflective programming (reflection) using type traits) and at run time (RTTI / TypeInfo), to facilitate generic code or automatic code generation (usually using compile-time techniques).
D supports functional programming features such as function literals, closures, recursively immutable objects and the use of higher-order functions. There are two syntaxes for anonymous functions, including a multiple-statement form and a "shorthand" single-expression notation:[14]
There are two built-in types for function literals: function, which is simply a pointer to a stack-allocated function, and delegate, which also includes a pointer to the relevant stack frame, the surrounding 'environment', which contains the current local variables. Type inference may be used with an anonymous function, in which case the compiler creates a delegate unless it can prove that an environment pointer is not necessary. Likewise, to implement a closure, the compiler places enclosed local variables on the heap only if necessary (for example, if a closure is returned by another function, and exits that function's scope). When using type inference, the compiler will also add attributes such as pure and nothrow to a function's type, if it can prove that they apply.
Other functional features such as currying and common higher-order functions such as map, filter, and reduce are available through the standard library modules std.functional and std.algorithm.
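As a rough Python analogue (not D; `functools` stands in for std.functional here), currying via partial application and map/filter/reduce compose like this:

```python
from functools import partial, reduce

add = lambda a, b: a + b
inc = partial(add, 1)      # currying via partial application

nums = range(1, 6)
evens = filter(lambda n: n % 2 == 0, nums)   # keeps 2, 4
total = reduce(add, map(inc, evens))         # (2+1) + (4+1)

print(total)  # 8
```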
Alternatively, the above function compositions can be expressed using Uniform function call syntax (UFCS) for more natural left-to-right reading:
Parallel programming concepts are implemented in the library, and do not require extra support from the compiler. However, the D type system and compiler ensure that data sharing can be detected and managed transparently.
iota(11).parallel is equivalent to std.parallelism.parallel(iota(11)) by using UFCS.
The same module also supports taskPool, which can be used for dynamic creation of parallel tasks, as well as map-filter-reduce and fold style operations on ranges (and arrays), which is useful when combined with functional operations. std.algorithm.map returns a lazily evaluated range rather than an array. This way, the elements are computed by each worker task in parallel automatically.
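A rough Python analogue of such a pool-based parallel map (using concurrent.futures rather than D's std.parallelism; the worker function and pool size are illustration choices) is:

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

# The pool distributes the map over worker threads; result order
# matches the input order, as with a sequential map.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```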
Concurrency is fully implemented in the library, and does not require support from the compiler. Alternative implementations and methodologies of writing concurrent code are possible. The D type system helps ensure memory safety.
Metaprogramming is supported through templates, compile-time function execution, tuples, and string mixins. The following examples demonstrate some of D's compile-time features.
Templates in D can be written in a more imperative style compared to the C++ functional style for templates. This is a regular function that calculates the factorial of a number:
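The function itself is missing from this copy; a reconstruction along the usual lines:

```d
// An ordinary, recursively defined factorial function.
ulong factorial(ulong n)
{
    if (n < 2)
        return 1;
    else
        return n * factorial(n - 1);
}
```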
Here, the use of static if, D's compile-time conditional construct, is demonstrated to construct a template that performs the same calculation using code that is similar to that of the function above:
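A sketch of such a template:

```d
// The same computation, expressed as a template evaluated
// entirely at compile time via `static if`.
template Factorial(ulong n)
{
    static if (n < 2)
        enum Factorial = 1;
    else
        enum Factorial = n * Factorial!(n - 1);
}
```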
In the following two examples, the template and function defined above are used to compute factorials. The types of constants need not be specified explicitly, as the compiler infers their types from the right-hand sides of assignments:
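The usage examples are missing from this copy; a sketch of both (definitions repeated so the snippet stands alone):

```d
ulong factorial(ulong n) { return n < 2 ? 1 : n * factorial(n - 1); }

template Factorial(ulong n)
{
    static if (n < 2)
        enum Factorial = 1;
    else
        enum Factorial = n * Factorial!(n - 1);
}

// The compiler infers the types of these constants from the
// right-hand sides; both are evaluated at compile time.
enum fact1 = factorial(7);   // CTFE of the ordinary function
enum fact2 = Factorial!(7);  // template instantiation

static assert(fact1 == 5040 && fact2 == 5040);
```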
This is an example of compile-time function execution (CTFE). Ordinary functions may be used in constant, compile-time expressions provided they meet certain criteria:
The std.string.format function performs printf-like data formatting (also at compile time, through CTFE), and the "msg" pragma displays the result at compile time:
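A sketch of the combination (the factorial function is repeated so the snippet stands alone):

```d
import std.string : format;

ulong factorial(ulong n)
{
    return n < 2 ? 1 : n * factorial(n - 1);
}

// Both the formatting and the factorial call run at compile time;
// the compiler itself prints "7! = 5040" during compilation.
pragma(msg, format("7! = %s", factorial(7)));
```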
String mixins, combined with compile-time function execution, allow for the generation of D code using string operations at compile time. This can be used to parse domain-specific languages, which will be compiled as part of the program:
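A minimal sketch (the helper function `declare` is illustrative):

```d
import std.conv : to;
import std.stdio : writeln;

// Builds a declaration from strings; when used inside mixin(),
// it runs via CTFE and the result is compiled in place.
string declare(string name, int value)
{
    return "int " ~ name ~ " = " ~ value.to!string ~ ";";
}

void main()
{
    mixin(declare("answer", 42));  // becomes: int answer = 42;
    writeln(answer); // 42
}
```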
Memory is usually managed with garbage collection, but specific objects may be finalized immediately when they go out of scope. This is what the majority of programs and libraries written in D use.
In case more control over memory layout and better performance is needed, explicit memory management is possible: using the overloaded operator new, by calling C's malloc and free directly, or by implementing custom allocator schemes (e.g. on-stack allocation with fallback, RAII-style allocation, reference counting, shared reference counting). Garbage collection can be controlled: programmers may add and exclude memory ranges from being observed by the collector, can disable and enable the collector, and can force either a generational or a full collection cycle.[15] The manual gives many examples of how to implement different highly optimized memory management schemes for when garbage collection is inadequate in a program.[16]
In functions, struct instances are by default allocated on the stack, while class instances are by default allocated on the heap (with only a reference to the class instance being on the stack). However, this can be changed for classes, for example by using the standard library template std.typecons.scoped, or by using new for structs and assigning to a pointer instead of a value-based variable.[17]
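A short sketch of stack-allocating a class instance with scoped (the class name is illustrative):

```d
import std.typecons : scoped;

class Resource
{
    ~this() { /* released deterministically */ }
}

void use()
{
    // The class instance lives on the stack instead of the heap.
    auto r = scoped!Resource();
    // ... use r like an ordinary class reference ...
}   // destructor runs here, on scope exit
```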
In functions, static arrays (of known size) are allocated on the stack. For dynamic arrays, one can use the core.stdc.stdlib.alloca function (similar to alloca in C) to allocate memory on the stack. The returned pointer can be recast into a (typed) dynamic array by means of a slice (however, resizing the array, including appending, must be avoided, and for obvious reasons it must not be returned from the function).[17]
The scope keyword can be applied both to parts of code and to variables and classes/structs, to indicate that they should be destroyed (their destructor called) immediately on scope exit. Whether the memory is also deallocated depends on the implementation and on class-vs-struct differences.[18]
std.experimental.allocator contains modular and composable allocator templates, for creating custom high-performance allocators for special use cases.[19]
SafeD[20] is the name given to the subset of D that can be guaranteed to be memory safe. Functions marked @safe are checked at compile time to ensure that they do not use any features, such as pointer arithmetic and unchecked casts, that could result in corruption of memory. Any other functions called must also be marked as @safe or @trusted. Functions can be marked @trusted for the cases where the compiler cannot distinguish between safe use of a feature that is disabled in SafeD and a potential case of memory corruption.[21]
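A small illustration (the function names are illustrative):

```d
@safe int sumFirst(int[] a)
{
    // Bounds-checked indexing and slices are fine in @safe code.
    return a.length ? a[0] : 0;
}

int unchecked(int* p) @system
{
    return *(p + 1);            // pointer arithmetic: @system only
}

@trusted int vetted(int[] a)
{
    // Manually verified as safe, so it may be called from @safe
    // code even though it uses an @system helper internally.
    return a.length > 1 ? unchecked(a.ptr) : 0;
}
```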
Initially under the banners of DIP1000[22] and DIP25[23] (now part of the language specification[24]), D provides protections against certain ill-formed constructions involving the lifetimes of data.
The current mechanisms in place primarily deal with function parameters and stack memory; however, it is a stated ambition of the leadership of the programming language to provide a more thorough treatment of lifetimes within the D programming language[25] (influenced by ideas from the Rust programming language).
Within @safe code, the lifetime of an assignment involving a reference type is checked to ensure that the lifetime of the assignee is longer than that of the assigned.
For example:
When applied to function parameters that are of pointer or reference type, the keywords return and scope constrain the lifetime and use of that parameter.
The language standard dictates the following behaviour:[26]
An annotated example is given below.
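The annotated example is not reproduced in this copy; a condensed reconstruction in the spirit of the specification (the names are illustrative):

```d
@safe:

int* global;

int* demo(return int* p, scope int* q, int* r)
{
    global = r;     // ok: r carries no lifetime restriction
    // global = p;  // error: a `return` parameter may only be returned
    // global = q;  // error: a `scope` parameter must not escape
    // return q;    // error: cannot return a `scope` parameter
    return p;       // ok: `return` permits exactly this escape
}
```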
C's application binary interface (ABI) is supported, as well as all of C's fundamental and derived types, enabling direct access to existing C code and libraries. D bindings are available for many popular C libraries. Additionally, C's standard library is part of standard D.
On Microsoft Windows, D can access Component Object Model (COM) code.
As long as memory management is properly taken care of, many other languages can be mixed with D in a single binary. For example, the GDC compiler allows linking and intermixing C, C++, and code in other supported languages such as Objective-C. D code (functions) can also be marked as using the C, C++, or Pascal ABIs, and thus be passed to libraries written in these languages as callbacks. Similarly, data can be interchanged between code written in these languages in both directions. This usually restricts use to primitive types, pointers, some forms of arrays, unions, structs, and only some types of function pointers.
Because many other programming languages often provide a C API for writing extensions or running the interpreter of the language, D can interface directly with these languages as well, using standard C bindings (with a thin D interface file). For example, there are bi-directional bindings for languages like Python,[27] Lua,[28][29] and other languages, often using compile-time code generation and compile-time type reflection methods.
For D code marked asextern(C++), the following features are specified:
C++ namespaces are used via the syntax extern(C++, namespace), where namespace is the name of the C++ namespace.
The C++ side
The D side
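The code for the two sides is missing from this copy; a minimal reconstruction (the namespace calc and function add are illustrative):

```cpp
// The C++ side: an ordinary function inside a namespace.
namespace calc {
    int add(int a, int b) { return a + b; }
}
```

```d
// The D side: declares the same function with C++ mangling,
// so calls go straight to calc::add via the C++ ABI.
extern (C++, calc) int add(int a, int b);

void main()
{
    auto r = add(2, 3); // calls the C++ implementation
}
```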
The D programming language has an official subset known as "Better C".[30] This subset forbids access to D features requiring use of runtime libraries other than that of C.
Enabled via the compiler flags "-betterC" on DMD and LDC, and "-fno-druntime" on GDC, Better C may only call into D code compiled under the same flag (and linked code other than D), but code compiled without the Better C option may call into code compiled with it. This will, however, lead to slightly different behaviours due to differences in how C and D handle asserts.
Walter Bright started working on a new language in 1999. D was first released in December 2001[1] and reached version 1.0 in January 2007.[31] The first version of the language (D1) concentrated on the imperative, object-oriented and metaprogramming paradigms,[32] similar to C++.
Some members of the D community dissatisfied with Phobos, D's official runtime and standard library, created an alternative runtime and standard library named Tango. The first public Tango announcement came within days of D 1.0's release.[33] Tango adopted a different programming style, embracing OOP and high modularity. Being a community-led project, Tango was more open to contributions, which allowed it to progress faster than the official standard library. At that time, Tango and Phobos were incompatible due to different runtime support APIs (the garbage collector, threading support, etc.). This made it impossible to use both libraries in the same project. The existence of two libraries, both widely in use, led to significant dispute due to some packages using Phobos and others using Tango.[34]
In June 2007, the first version of D2 was released.[35] The beginning of D2's development signaled D1's stabilization. The first version of the language was placed in maintenance, only receiving corrections and implementation bugfixes. D2 introduced breaking changes to the language, beginning with its first experimental const system. D2 later added numerous other language features, such as closures, purity, and support for the functional and concurrent programming paradigms. D2 also solved standard library problems by separating the runtime from the standard library. The completion of a D2 Tango port was announced in February 2012.[36]
The release of Andrei Alexandrescu's book The D Programming Language on 12 June 2010 marked the stabilization of D2, which today is commonly referred to as just "D".
In January 2011, D development moved from a bugtracker/patch-submission basis to GitHub. This led to a significant increase in contributions to the compiler, runtime and standard library.[37]
In December 2011, Andrei Alexandrescu announced that D1, the first version of the language, would be discontinued on 31 December 2012.[38] The final D1 release, D v1.076, was on 31 December 2012.[39]
Code for the official D compiler, the Digital Mars D compiler by Walter Bright, was originally released under a custom license, qualifying as source available but not conforming to the Open Source Definition.[40] In 2014, the compiler front-end was re-licensed as open source under the Boost Software License.[3] This re-licensed code excluded the back-end, which had been partially developed at Symantec. On 7 April 2017, the whole compiler was made available under the Boost license after Symantec gave permission to re-license the back-end, too.[4][41][42][43] On 21 June 2017, the D language was accepted for inclusion in GCC.[44]
Most current D implementations compile directly into machine code.
Production ready compilers:
Toy and proof-of-concept compilers:
Using the above compilers and toolchains, it is possible to compile D programs to target many different architectures, including IA-32, amd64, AArch64, PowerPC, MIPS64, DEC Alpha, Motorola m68k, SPARC, s390, and WebAssembly. The primary supported operating systems are Windows and Linux, but various compilers also support Mac OS X, FreeBSD, NetBSD, AIX, Solaris/OpenSolaris and Android, either as a host or target, or both. The WebAssembly target (supported via LDC and LLVM) can operate in any WebAssembly environment, such as a modern web browser (Google Chrome, Mozilla Firefox, Microsoft Edge, Apple Safari) or dedicated Wasm virtual machines.
Editors and integrated development environments (IDEs) supporting syntax highlighting and partial code completion for the language include SlickEdit, Emacs, vim, SciTE, Smultron, Zeus,[57] and Geany, among others.[58]
Open source D IDEs for Windows exist, some written in D, such as Poseidon,[71] D-IDE,[72] and Entice Designer.[73]
D applications can be debugged using any C/C++ debugger, like GNU Debugger (GDB) or WinDbg, although support for various D-specific language features is extremely limited. On Windows, D programs can be debugged using Ddbg, or Microsoft debugging tools (WinDbg and Visual Studio), after having converted the debug information using cv2pdb. The ZeroBUGS debugger for Linux has experimental support for the D language. Ddbg can be used with various IDEs or from the command line; ZeroBUGS has its own graphical user interface (GUI).
DustMite is a tool for minimizing D source code, useful when tracking down compiler or test issues.[74]
dub is a popular package and build manager for D applications and libraries, and is often integrated into IDE support.[75]
This example program prints its command line arguments. The main function is the entry point of a D program, and args is an array of strings representing the command line arguments. A string in D is an array of characters, represented by immutable(char)[].
The foreach statement can iterate over any collection. In this case, it is producing a sequence of indexes (i) and values (arg) from the array args. The index i and the value arg have their types inferred from the type of the array args.
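The program itself is missing from this copy; its canonical form is along these lines:

```d
import std.stdio : writefln;

void main(string[] args)
{
    // Prints each command line argument with its index.
    foreach (i, arg; args)
        writefln("args[%d] = '%s'", i, arg);
}
```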
The following shows several D capabilities and D design trade-offs in a short program. It iterates over the lines of a text file named words.txt, which contains a different word on each line, and prints all the words that are anagrams of other words.
Notable organisations that use the D programming language for projects include Facebook,[76] eBay,[77] and Netflix.[78]
D has been successfully used for AAA games,[79] language interpreters, virtual machines,[80][81] an operating system kernel,[82] GPU programming,[83] web development,[84][85] numerical analysis,[86] GUI applications,[87][88] a passenger information system,[89] machine learning,[90] text processing, web and application servers, and research.
The notorious North Korean hacking group known as Lazarus exploited CVE-2021-44228, aka "Log4Shell", to deploy three malware families written in DLang.[91]
The lack of transparency, agility and predictability in the process of getting corrections of known flaws and errors incorporated, and the difficulty of introducing minor and major changes to the D language, are described in a blog post[92] by a former contributor. The frustration described there led to the OpenD fork[93] on January 1, 2024.
https://en.wikipedia.org/wiki/D_(programming_language)
Distributed computing is a field of computer science that studies distributed systems, defined as computer systems whose inter-communicating components are located on different networked computers.[1][2]
The components of a distributed system communicate and coordinate their actions by passing messages to one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components.[1] When a component of one system fails, the entire system does not fail.[3] Examples of distributed systems vary from SOA-based systems to microservices to massively multiplayer online games to peer-to-peer applications. Distributed systems cost significantly more than monolithic architectures, primarily due to increased needs for additional hardware, servers, gateways, firewalls, new subnets, proxies, and so on.[4] Also, distributed systems are prone to the fallacies of distributed computing. On the other hand, a well-designed distributed system is more scalable, more durable, more changeable and more fine-tuned than a monolithic application deployed on a single machine.[5] According to Marc Brooker, "a system is scalable in the range where marginal cost of additional workload is nearly constant." Serverless technologies fit this definition, but the total cost of ownership, and not just the infrastructure cost, must be considered.[6]
A computer program that runs within a distributed system is called a distributed program,[7] and distributed programming is the process of writing such programs.[8] There are many different types of implementations for the message passing mechanism, including pure HTTP, RPC-like connectors and message queues.[9]
Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers,[10] which communicate with each other via message passing.[11]
The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area.[12] The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing.[11]
While there is no single definition of a distributed system,[13] the following defining properties are commonly used:
A distributed system may have a common goal, such as solving a large computational problem;[16] the user then perceives the collection of autonomous processors as a unit. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.[17]
Other typical properties of distributed systems include the following:
Here are common architectural patterns used for distributed computing:[21]
In distributed systems, events represent a fact or state change (e.g., OrderPlaced) and are typically broadcast asynchronously to multiple consumers, promoting loose coupling and scalability. While events generally don't expect an immediate response, acknowledgment mechanisms are often implemented at the infrastructure level (e.g., Kafka commit offsets, SNS delivery statuses) rather than being an inherent part of the event pattern itself.[22][23]
In contrast, messages serve a broader role, encompassing commands (e.g., ProcessPayment), events (e.g., PaymentProcessed), and documents (e.g., DataPayload). Both events and messages can support various delivery guarantees, including at-least-once, at-most-once, and exactly-once, depending on the technology stack and implementation. However, exactly-once delivery is often achieved through idempotency mechanisms rather than true, infrastructure-level exactly-once semantics.[22][23]
Delivery patterns for both events and messages include publish/subscribe (one-to-many) and point-to-point (one-to-one). While request/reply is technically possible, it is more commonly associated with messaging patterns rather than pure event-driven systems. Events excel at state propagation and decoupled notifications, while messages are better suited for command execution, workflow orchestration, and explicit coordination.[22][23]
Modern architectures commonly combine both approaches, leveraging events for distributed state change notifications and messages for targeted command execution and structured workflows based on specific timing, ordering, and delivery requirements.[22][23]
Distributed systems are groups of networked computers which share a common goal for their work.
The terms "concurrent computing", "parallel computing", and "distributed computing" have much overlap, and no clear distinction exists between them.[24] The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel.[25] Parallel computing may be seen as a particularly tightly coupled form of distributed computing,[26] and distributed computing may be seen as a loosely coupled form of parallel computing.[13] Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria:
The figure on the right illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another by using the available communication links. Figure (c) shows a parallel system in which each processor has a direct access to a shared memory.
The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems (see below for more detailed discussion). Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms.[29]
The use of concurrent processes which communicate through message-passing has its roots in operating system architectures studied in the 1960s.[30] The first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s.[31]
ARPANET, one of the predecessors of the Internet, was introduced in the late 1960s, and ARPANET e-mail was invented in the early 1970s. E-mail became the most successful application of ARPANET,[32] and it is probably the earliest example of a large-scale distributed application. In addition to ARPANET (and its successor, the global Internet), other early worldwide computer networks included Usenet and FidoNet from the 1980s, both of which were used to support distributed discussion systems.[33]
The study of distributed computing became its own branch of computer science in the late 1970s and early 1980s. The first conference in the field, the Symposium on Principles of Distributed Computing (PODC), dates back to 1982, and its counterpart, the International Symposium on Distributed Computing (DISC), was first held in Ottawa in 1985 as the International Workshop on Distributed Algorithms on Graphs.[34]
Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.[35]
Whether these CPUs share resources or not determines a first distinction between three types of architecture:
Distributed programming typically falls into one of several basic architectures: client–server, three-tier, n-tier, or peer-to-peer; or categories: loose coupling, or tight coupling.[36]
Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a main/sub relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database.[39] Database-centric architecture in particular provides relational processing analytics in a schematic architecture allowing for live environment relay. This enables distributed computing functions both within and beyond the parameters of a networked database.[40]
Cell-based architecture is a distributed computing approach in which computational resources are organized into self-contained units called cells. Each cell operates independently, processing requests while maintaining scalability, fault isolation, and availability.[41][42][43]
A cell typically consists of multiple services or application components and functions as an autonomous unit. Some implementations replicate entire sets of services across multiple cells, while others partition workloads between cells. In replicated models, requests may be rerouted to an operational cell if another experiences a failure. This design is intended to enhance system resilience by reducing the impact of localized failures.[44][45][46]
Some implementations employ circuit breakers within and between cells. Within a cell, circuit breakers may be used to prevent cascading failures among services, while inter-cell circuit breakers can isolate failing cells and redirect traffic to those that remain operational.[47][48][49]
Cell-based architecture has been adopted in some large-scale distributed systems, particularly in cloud-native and high-availability environments, where fault isolation and redundancy are key design considerations. Its implementation varies depending on system requirements, infrastructure constraints, and operational objectives.[50][51][52]
Reasons for using distributed systems and distributed computing may include:
Examples of distributed systems and applications of distributed computing include the following:[54]
According to the Reactive Manifesto, reactive distributed systems are responsive, resilient, elastic and message-driven, and are consequently more flexible, loosely coupled and scalable. The Reactive Principles are a set of principles and patterns which help make cloud-native as well as edge-native applications more reactive.[56]
Many tasks that we would like to automate by using a computer are of question–answer type: we would like to ask a question and the computer should produce an answer. In theoretical computer science, such tasks are called computational problems. Formally, a computational problem consists of instances together with a solution for each instance. Instances are questions that we can ask, and solutions are desired answers to these questions.
Theoretical computer science seeks to understand which computational problems can be solved by using a computer (computability theory) and how efficiently (computational complexity theory). Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output. Formalisms such as random-access machines or universal Turing machines can be used as abstract models of a sequential general-purpose computer executing such an algorithm.[57][58]
The field of concurrent and distributed computing studies similar questions in the case of either multiple computers, or a computer that executes a network of interacting processes: which computational problems can be solved in such a network and how efficiently? However, it is not at all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system: for example, what is the task of the algorithm designer, and what is the concurrent or distributed equivalent of a sequential general-purpose computer?[citation needed]
The discussion below focuses on the case of multiple computers, although many of the issues are the same for concurrent processes running on a single computer.
Three viewpoints are commonly used:
In the case of distributed algorithms, computational problems are typically related to graphs. Often the graph that describes the structure of the computer network is the problem instance. This is illustrated in the following example.[63]
Consider the computational problem of finding a coloring of a given graph G. Different fields might take the following approaches:
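As a concrete point of reference, the sequential greedy strategy that a centralized algorithm might use can be sketched as follows (the example graph is illustrative):

```python
def greedy_coloring(adj):
    """Color vertices one by one, giving each vertex the smallest
    color not already used by a previously colored neighbour."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A triangle plus a pendant vertex: the triangle forces three colors.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
coloring = greedy_coloring(graph)
```

Note that greedy coloring gives a proper coloring, but not necessarily one with the minimum number of colors; finding that is NP-hard in general.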
While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is much interaction between the two fields. For example, the Cole–Vishkin algorithm for graph coloring[64] was originally presented as a parallel algorithm, but the same technique can also be used directly as a distributed algorithm.
Moreover, a parallel algorithm can be implemented either in a parallel system (using shared memory) or in a distributed system (using message passing).[65] The traditional boundary between parallel and distributed algorithms (choose a suitable network vs. run in any given network) does not lie in the same place as the boundary between parallel and distributed systems (shared memory vs. message passing).
In parallel algorithms, yet another resource in addition to time and space is the number of computers. Indeed, often there is a trade-off between the running time and the number of computers: the problem can be solved faster if there are more computers running in parallel (see speedup). If a decision problem can be solved in polylogarithmic time by using a polynomial number of processors, then the problem is said to be in the class NC.[66] The class NC can be defined equally well by using the PRAM formalism or Boolean circuits—PRAM machines can simulate Boolean circuits efficiently and vice versa.[67]
In the analysis of distributed algorithms, more attention is usually paid to communication operations than to computational steps. Perhaps the simplest model of distributed computing is a synchronous system where all nodes operate in lockstep. This model is commonly known as the LOCAL model. During each communication round, all nodes in parallel (1) receive the latest messages from their neighbours, (2) perform arbitrary local computation, and (3) send new messages to their neighbours. In such systems, a central complexity measure is the number of synchronous communication rounds required to complete the task.[68]
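The round structure can be illustrated with a toy simulator; here the task, computing hop distances from a root node, is only an example:

```python
def synchronous_rounds(adj, root):
    """Toy LOCAL-model simulation: in every round each node sends its
    current distance estimate to all neighbours, then updates it."""
    INF = float("inf")
    dist = {v: 0 if v == root else INF for v in adj}
    rounds = 0
    while True:
        # (1) receive: a snapshot of neighbours' estimates from the
        # previous round (all nodes act in lockstep).
        inbox = {v: [dist[u] for u in adj[v]] for v in adj}
        # (2) compute and (3) send (implicitly, via the next snapshot).
        new = {v: min([dist[v]] + [m + 1 for m in inbox[v]]) for v in adj}
        rounds += 1
        if new == dist:          # no estimate changed: done
            return dist, rounds
        dist = new

# A path 0 - 1 - 2 rooted at node 0.
path = {0: [1], 1: [0, 2], 2: [1]}
dist, rounds = synchronous_rounds(path, 0)
```

The number of rounds needed grows with how far information must travel, which is exactly why the diameter of the network matters in the next paragraph.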
This complexity measure is closely related to the diameter of the network. Let D be the diameter of the network. On the one hand, any computable problem can be solved trivially in a synchronous distributed system in approximately 2D communication rounds: simply gather all information in one location (D rounds), solve the problem, and inform each node about the solution (D rounds).
On the other hand, if the running time of the algorithm is much smaller than D communication rounds, then the nodes in the network must produce their output without having the possibility to obtain information about distant parts of the network. In other words, the nodes must make globally consistent decisions based on information that is available in their local D-neighbourhood. Many distributed algorithms are known with a running time much smaller than D rounds, and understanding which problems can be solved by such algorithms is one of the central research questions of the field.[69] Typically an algorithm which solves a problem in polylogarithmic time in the network size is considered efficient in this model.
Another commonly used measure is the total number of bits transmitted in the network (cf. communication complexity).[70] The features of this concept are typically captured with the CONGEST(B) model, which is similarly defined as the LOCAL model, but where single messages can only contain B bits.
Traditional computational problems take the perspective that the user asks a question, a computer (or a distributed system) processes the question, then produces an answer and stops. However, there are also problems where the system is required not to stop, including the dining philosophers problem and other similar mutual exclusion problems. In these problems, the distributed system is supposed to continuously coordinate the use of shared resources so that no conflicts or deadlocks occur.
There are also fundamental challenges that are unique to distributed computing, for example those related to fault tolerance. Examples of related problems include consensus problems,[71] Byzantine fault tolerance,[72] and self-stabilisation.[73]
Much research is also focused on understanding the asynchronous nature of distributed systems:
Note that in distributed systems, latency should be measured through the 99th percentile, because the median and the average can be misleading.[77]
Coordinator election (or leader election) is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are either unaware which node will serve as the "coordinator" (or leader) of the task, or unable to communicate with the current coordinator. After a coordinator election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task coordinator.[78]
The network nodes communicate among themselves in order to decide which of them will get into the "coordinator" state. For that, they need some method in order to break the symmetry among them. For example, if each node has unique and comparable identities, then the nodes can compare their identities, and decide that the node with the highest identity is the coordinator.[78]
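The highest-identity rule mentioned above can be sketched as a synchronous flooding algorithm; the network and identities below are illustrative:

```python
def elect_leader(adj, ids):
    """Each node repeatedly adopts the largest identity it has heard;
    after enough rounds, every node agrees on the global maximum."""
    known = dict(ids)                 # best identity seen so far, per node
    for _ in range(len(adj) - 1):     # diameter <= n - 1 rounds suffice
        known = {v: max([known[v]] + [known[u] for u in adj[v]])
                 for v in adj}
    return known                      # every node names the same leader

# A four-node ring with unique, comparable identities.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
ids = {0: 17, 1: 42, 2: 8, 3: 23}
result = elect_leader(ring, ids)      # all nodes converge on identity 42
```

This toy version ignores the message-complexity concerns that real election algorithms (such as Gallager–Humblet–Spira, mentioned below) are designed to address.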
The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new token in a token ring network in which the token has been lost.[79]
Coordinator election algorithms are designed to be economical in terms of total bytes transmitted, and time. The algorithm suggested by Gallager, Humblet, and Spira[80] for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing.
Many other algorithms were suggested for different kinds of network graphs, such as undirected rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general method that decouples the issue of the graph family from the design of the coordinator election algorithm was suggested by Korach, Kutten, and Moran.[81]
In order to perform coordination, distributed systems employ the concept of coordinators. The coordinator election problem is to choose a process from among a group of processes on different processors in a distributed system to act as the central coordinator. Several central coordinator election algorithms exist.[82]
So far the focus has been on designing a distributed system that solves a given problem. A complementary research problem is studying the properties of a given distributed system.[83][84]
The halting problem is an analogous example from the field of centralised computation: we are given a computer program and the task is to decide whether it halts or runs forever. The halting problem is undecidable in the general case, and naturally understanding the behaviour of a computer network is at least as hard as understanding the behaviour of one computer.[85]
However, there are many interesting special cases that are decidable. In particular, it is possible to reason about the behaviour of a network of finite-state machines. One example is telling whether a given network of interacting (asynchronous and non-deterministic) finite-state machines can reach a deadlock. This problem is PSPACE-complete,[86] i.e., it is decidable, but it is not likely that there is an efficient (centralised, parallel or distributed) algorithm that solves the problem in the case of large networks.
|
https://en.wikipedia.org/wiki/Distributed_computing
|
Elixir is a functional, concurrent, high-level general-purpose programming language that runs on the BEAM virtual machine, which is also used to implement the Erlang programming language.[3] Elixir builds on top of Erlang and shares the same abstractions for building distributed, fault-tolerant applications. Elixir also provides tooling and an extensible design. The latter is supported by compile-time metaprogramming with macros and polymorphism via protocols.[4]
The community organizes yearly events in the United States,[5]Europe,[6]and Japan,[7]as well as minor local events and conferences.[8][9]
José Valim created the Elixir programming language as a research and development project at Plataformatec. His goals were to enable higher extensibility and productivity in the Erlang VM while maintaining compatibility with Erlang's ecosystem.[10][11]
Elixir is aimed at large-scale sites and apps. It uses features of Ruby, Erlang, and Clojure to develop a high-concurrency and low-latency language. It was designed to handle large data volumes. Elixir is also used in telecommunications, e-commerce, and finance.[12]
In 2021, the Numerical Elixir effort was announced with the goal of bringing machine learning, neural networks, GPU compilation, data processing, and computational notebooks to the Elixir ecosystem.[13]
Each of the minor versions supports a specific range of Erlang/OTP versions.[14] The current stable release version is 1.18.3.[1]
The following examples can be run in an iex shell or saved in a file and run from the command line by typing elixir <filename>.
Classic Hello world example:
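A minimal version of this program:

```elixir
IO.puts("Hello, World!")
```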
Pipe operator:
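The pipe operator passes the result of the expression on its left as the first argument of the function call on its right, so nested calls read top to bottom; for example:

```elixir
# Equivalent to IO.puts(String.upcase(String.reverse("hello")))
"hello"
|> String.reverse()
|> String.upcase()
|> IO.puts()
# prints OLLEH
```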
Pattern matching (a.k.a. destructuring):
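For example, matching against the shape of a tuple or a list binds its parts to variables:

```elixir
{a, b, c} = {:hello, "world", 42}
# a == :hello, b == "world", c == 42

[head | tail] = [1, 2, 3]
# head == 1, tail == [2, 3]
```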
Pattern matching with multiple clauses:
List comprehension:
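For example, collecting the squares of the even numbers in a range:

```elixir
for n <- 1..10, rem(n, 2) == 0, do: n * n
# => [4, 16, 36, 64, 100]
```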
Asynchronously reading files with streams:
Multiple function bodies with guards:
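A sketch (the module and function names are illustrative):

```elixir
defmodule Temperature do
  # The first clause whose guard succeeds is the one that runs.
  def describe(t) when t < 0, do: "freezing"
  def describe(t) when t < 25, do: "mild"
  def describe(_t), do: "hot"
end

Temperature.describe(30)  # => "hot"
```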
Relational databases with the Ecto library:
Sequentially spawning a thousand processes:
Asynchronously performing a task:
|
https://en.wikipedia.org/wiki/Elixir_(programming_language)
|
Erlang (/ˈɜːrlæŋ/ UR-lang) is a general-purpose, concurrent, functional high-level programming language, and a garbage-collected runtime system. The term Erlang is used interchangeably with Erlang/OTP, or Open Telecom Platform (OTP), which consists of the Erlang runtime system, several ready-to-use components (OTP) mainly written in Erlang, and a set of design principles for Erlang programs.[5]
The Erlang runtime system is designed for systems that are distributed, fault-tolerant, soft real-time, and highly available, and that support hot swapping, so that code can be changed without stopping a system.[6]
The Erlang programming language has immutable data, pattern matching, and functional programming.[7] The sequential subset of the Erlang language supports eager evaluation, single assignment, and dynamic typing.
A normal Erlang application is built out of hundreds of small Erlang processes.
It was originally proprietary software within Ericsson, developed by Joe Armstrong, Robert Virding, and Mike Williams in 1986,[8] but was released as free and open-source software in 1998.[9][10] Erlang/OTP is supported and maintained by the Open Telecom Platform (OTP) product unit at Ericsson.
The name Erlang, attributed to Bjarne Däcker, has been presumed by those working on the telephony switches (for whom the language was designed) to be a reference to Danish mathematician and engineer Agner Krarup Erlang and a syllabic abbreviation of "Ericsson Language".[8][11][12] Erlang was designed with the aim of improving the development of telephony applications.[13] The initial version of Erlang was implemented in Prolog and was influenced by the programming language PLEX used in earlier Ericsson exchanges. By 1988 Erlang had proven that it was suitable for prototyping telephone exchanges, but the Prolog interpreter was far too slow. One group within Ericsson estimated that it would need to be 40 times faster to be suitable for production use. In 1992, work began on the BEAM virtual machine (VM), which compiles Erlang to C using a mix of natively compiled code and threaded code to strike a balance between performance and disk space.[14] According to co-inventor Joe Armstrong, the language went from laboratory product to real applications following the collapse of the next-generation AXE telephone exchange named AXE-N in 1995. As a result, Erlang was chosen for the next Asynchronous Transfer Mode (ATM) exchange AXD.[8]
In February 1998, Ericsson Radio Systems banned the in-house use of Erlang for new products, citing a preference for non-proprietary languages.[15] The ban caused Armstrong and others to make plans to leave Ericsson.[16] In March 1998 Ericsson announced the AXD301 switch,[8] containing over a million lines of Erlang and reported to achieve a high availability of nine "9"s.[17] In December 1998, the implementation of Erlang was open-sourced and most of the Erlang team resigned to form a new company, Bluetail AB.[8] Ericsson eventually relaxed the ban and re-hired Armstrong in 2004.[16]
In 2006, native symmetric multiprocessing support was added to the runtime system and VM.[8]
Erlang applications are built of very lightweight Erlang processes in the Erlang runtime system. Erlang processes can be seen as "living" objects (object-oriented programming), with data encapsulation and message passing, but capable of changing behavior during runtime. The Erlang runtime system provides strict process isolation between Erlang processes (this includes data and garbage collection, separated individually by each Erlang process) and transparent communication between processes (see Location transparency) on different Erlang nodes (on different hosts).
Joe Armstrong, co-inventor of Erlang, summarized the principles of processes in his PhD thesis:[18]
Joe Armstrong remarked in an interview with Rackspace in 2013: "If Java is 'write once, run anywhere', then Erlang is 'write once, run forever'."[19]
In 2014, Ericsson reported Erlang was being used in its support nodes, and in GPRS, 3G and LTE mobile networks worldwide, and also by Nortel and Deutsche Telekom.[20]
Erlang is used in RabbitMQ. As Tim Bray, director of Web Technologies at Sun Microsystems, expressed in his keynote at O'Reilly Open Source Convention (OSCON) in July 2008:
If somebody came to me and wanted to pay me a lot of money to build a large scale message handling system that really had to be up all the time, could never afford to go down for years at a time, I would unhesitatingly choose Erlang to build it in.
Erlang is the programming language used to code WhatsApp.[21]
It is also the language of choice for Ejabberd, an XMPP messaging server.
Elixir is a programming language that compiles into BEAM byte code (via Erlang Abstract Format).[22]
Since being released as open source, Erlang has been spreading beyond telecoms, establishing itself in other vertical markets such as FinTech, gaming, healthcare, automotive, Internet of Things and blockchain. Apart from WhatsApp, there are other companies listed as Erlang's success stories, including Vocalink (a MasterCard company), Goldman Sachs, Nintendo, AdRoll, Grindr, BT Mobile, Samsung, OpenX, and SITA.[23][24]
A factorial algorithm implemented in Erlang:
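A typical definition, using a guarded recursive clause:

```erlang
-module(fact).
-export([fac/1]).

fac(0) -> 1;                                        % base case: 0! = 1
fac(N) when N > 0, is_integer(N) -> N * fac(N - 1).
```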
A tail recursive algorithm that produces the Fibonacci sequence:
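One such definition, carrying two accumulators so that no work remains after the recursive call:

```erlang
-module(fib).
-export([fib/1]).

%% Public entry point: start the accumulators at fib(0) = 0 and fib(1) = 1.
fib(N) when N >= 0 -> fib(N, 0, 1).

%% fib(Counter, Current, Next): count down while sliding the pair forward.
fib(0, Current, _Next) -> Current;
fib(Counter, Current, Next) -> fib(Counter - 1, Next, Current + Next).
```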
Omitting the comments gives a much shorter program.
Quicksort in Erlang, using list comprehension:[25]
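A standard formulation consistent with the discussion that follows:

```erlang
%% qsort(List): sort a list of items.
-module(qsort).
-export([qsort/1]).

qsort([]) -> [];
qsort([Pivot|Rest]) ->
    qsort([Front || Front <- Rest, Front < Pivot])
    ++ [Pivot] ++
    qsort([Back || Back <- Rest, Back >= Pivot]).
```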
The above example recursively invokes the function qsort until nothing remains to be sorted. The expression [Front || Front <- Rest, Front < Pivot] is a list comprehension, meaning "Construct a list of elements Front such that Front is a member of Rest, and Front is less than Pivot." ++ is the list concatenation operator.
A comparison function can be used for more complicated structures for the sake of readability.
The following code would sort lists according to length:
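One way to write this, passing the comparison as an anonymous function (the module name is illustrative):

```erlang
-module(qsort2).
-export([sort_by_length/1]).

qsort([], _) -> [];
qsort([Pivot|Rest], Smaller) ->
    qsort([X || X <- Rest, Smaller(X, Pivot)], Smaller)
    ++ [Pivot] ++
    qsort([Y || Y <- Rest, not(Smaller(Y, Pivot))], Smaller).

sort_by_length(Lists) ->
    qsort(Lists, fun(A, B) -> length(A) < length(B) end).
```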
A Pivot is taken from the first parameter given to qsort() and the rest of Lists is named Rest. Note that the expression
[X || X <- Rest, Smaller(X, Pivot)]
is no different in form from
[Front || Front <- Rest, Front < Pivot]
(in the previous example) except for the use of a comparison function in the last part, saying "Construct a list of elements X such that X is a member of Rest, and Smaller is true", with Smaller being defined earlier as
fun(A, B) -> length(A) < length(B) end
The anonymous function is named Smaller in the parameter list of the second definition of qsort so that it can be referenced by that name within that function. It is not named in the first definition of qsort, which deals with the base case of an empty list and thus has no need of this function, let alone a name for it.
Erlang has eight primitive data types: integers, atoms, floats, references, binaries, process identifiers (pids), ports, and funs.
And three compound data types: tuples, lists, and maps.
Two forms of syntactic sugar are provided: strings and records.
Erlang has no method to define classes, although there are external libraries available.[27]
Erlang is designed with a mechanism that makes it easy for external processes to monitor for crashes (or hardware failures), rather than an in-process mechanism like exception handling used in many other programming languages. Crashes are reported like other messages, which is the only way processes can communicate with each other,[28] and subprocesses can be spawned cheaply (see below). The "let it crash" philosophy prefers that a process be completely restarted rather than trying to recover from a serious failure.[29] Though it still requires handling of errors, this philosophy results in less code devoted to defensive programming, where error-handling code is highly contextual and specific.[28]
A typical Erlang application is written in the form of a supervisor tree. This architecture is based on a hierarchy of processes in which the top-level process is known as a "supervisor". The supervisor then spawns multiple child processes that act either as workers or as further, lower-level supervisors. Such hierarchies can exist to arbitrary depths and have proven to provide a highly scalable and fault-tolerant environment within which application functionality can be implemented.
Within a supervisor tree, all supervisor processes are responsible for managing the lifecycle of their child processes, and this includes handling situations in which those child processes crash. Any process can become a supervisor by first spawning a child process, then calling erlang:monitor/2 on that process. If the monitored process then crashes, the supervisor will receive a message containing a tuple whose first member is the atom 'DOWN'. The supervisor is responsible for listening for such messages and for taking the appropriate action to correct the error condition.
Erlang's main strength is support for concurrency. It has a small but powerful set of primitives to create processes and communicate among them. Erlang is conceptually similar to the language occam, though it recasts the ideas of communicating sequential processes (CSP) in a functional framework and uses asynchronous message passing.[30] Processes are the primary means to structure an Erlang application. They are neither operating system processes nor threads, but lightweight processes that are scheduled by BEAM. Like operating system processes (but unlike operating system threads), they share no state with each other. The estimated minimal overhead for each is 300 words.[31] Thus, many processes can be created without degrading performance. In 2005, a benchmark with 20 million processes was successfully performed with 64-bit Erlang on a machine with 16 GB random-access memory (RAM; total 800 bytes/process).[32] Erlang has supported symmetric multiprocessing since release R11B of May 2006.
While threads need external library support in most languages, Erlang provides language-level features to create and manage processes with the goal of simplifying concurrent programming. Though all concurrency is explicit in Erlang, processes communicate using message passing instead of shared variables, which removes the need for explicit locks (a locking scheme is still used internally by the VM).[33]
Inter-process communication works via a shared-nothing asynchronous message passing system: every process has a "mailbox", a queue of messages that have been sent by other processes and not yet consumed. A process uses the receive primitive to retrieve messages that match desired patterns. A message-handling routine tests messages in turn against each pattern, until one of them matches. When the message is consumed and removed from the mailbox, the process resumes execution. A message may comprise any Erlang structure, including primitives (integers, floats, characters, atoms), tuples, lists, and functions.
The code example below shows the built-in support for distributed processes:
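A sketch of such an example (the web module, Port, MaxConn, and RemoteNode are illustrative placeholders):

```erlang
% Create a process on the local node running web:start_server(Port, MaxConn)
ServerProcess = spawn(web, start_server, [Port, MaxConn]),

% Create a process on the remote node RemoteNode running the same function;
% only the extra first argument differs from the local spawn
RemoteProcess = spawn(RemoteNode, web, start_server, [Port, MaxConn]),

% Send a message to ServerProcess asynchronously
ServerProcess ! {pause, 10},

% Receive messages sent to this process
receive
    a_message -> do_something;
    {data, DataContent} -> handle(DataContent);
    {hello, Text} -> io:format("Got hello message: ~s", [Text]);
    {goodbye, Text} -> io:format("Got goodbye message: ~s", [Text])
end.
```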
As the example shows, processes may be created on remote nodes, and communication with them is transparent in the sense that communication with remote processes works exactly as communication with local processes.
Concurrency supports the primary method of error-handling in Erlang. When a process crashes, it neatly exits and sends a message to the controlling process which can then take action, such as starting a new process that takes over the old process's task.[34][35]
The official reference implementation of Erlang uses BEAM.[36] BEAM is included in the official distribution of Erlang, called Erlang/OTP. BEAM executes bytecode which is converted to threaded code at load time. It also includes a native code compiler on most platforms, developed by the High Performance Erlang Project (HiPE) at Uppsala University. Since October 2001 the HiPE system has been fully integrated in Ericsson's Open Source Erlang/OTP system.[37] Erlang also supports interpretation directly from source code via the abstract syntax tree, via script, as of the R11B-5 release of Erlang.
Erlang supports language-level Dynamic Software Updating. To implement this, code is loaded and managed as "module" units; the module is a compilation unit. The system can keep two versions of a module in memory at the same time, and processes can concurrently run code from each. The versions are referred to as the "new" and the "old" version. A process will not move into the new version until it makes an external call to its module.
An example of the mechanism of hot code loading:
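A first version of a counter server might look as follows (the module and message names are illustrative):

```erlang
-module(counter).
-export([start/0, codeswitch/1]).

start() -> loop(0).

loop(Sum) ->
    receive
        {increment, Count} ->
            loop(Sum + Count);
        {counter, Pid} ->
            Pid ! {counter, Sum},
            loop(Sum);
        code_switch ->
            % A fully qualified call, so execution continues in the
            % newest loaded version of the module
            ?MODULE:codeswitch(Sum)
    end.

codeswitch(Sum) -> loop(Sum).
```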
For the second version, we add the possibility to reset the count to zero.
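The second version of the same illustrative module, with the added reset message:

```erlang
-module(counter).
-export([start/0, codeswitch/1]).

start() -> loop(0).

loop(Sum) ->
    receive
        {increment, Count} ->
            loop(Sum + Count);
        reset ->
            loop(0);            % new in this version
        {counter, Pid} ->
            Pid ! {counter, Sum},
            loop(Sum);
        code_switch ->
            ?MODULE:codeswitch(Sum)
    end.

codeswitch(Sum) -> loop(Sum).
```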
Only when receiving a message consisting of the atom code_switch will the loop execute an external call to codeswitch/1 (?MODULE is a preprocessor macro for the current module). If there is a new version of the counter module in memory, then its codeswitch/1 function will be called. The practice of having a specific entry-point into a new version allows the programmer to transform state to what is needed in the newer version. In the example, the state is kept as an integer.
In practice, systems are built up using design principles from the Open Telecom Platform, which lead to more upgradable designs. Successful hot code loading is exacting: code must be written with care to make use of Erlang's facilities.
In 1998, Ericsson released Erlang as free and open-source software to ensure its independence from a single vendor and to increase awareness of the language. Erlang, together with libraries and the real-time distributed database Mnesia, forms the OTP collection of libraries. Ericsson and a few other companies support Erlang commercially.
Since the open source release, Erlang has been used by several firms worldwide, including Nortel and Deutsche Telekom.[38] Although Erlang was designed to fill a niche and has remained an obscure language for most of its existence, its popularity is growing due to demand for concurrent services.[39][40] Erlang has found some use in fielding massively multiplayer online role-playing game (MMORPG) servers.[41]
|
https://en.wikipedia.org/wiki/Erlang_(programming_language)
|
Go is a high-level general-purpose programming language that is statically typed and compiled. It is known for the simplicity of its syntax and the efficiency of development that it enables by the inclusion of a large standard library supplying many needs for common projects.[12] It was designed at Google[13] in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson, and publicly announced in November 2009.[4] It is syntactically similar to C, but also has memory safety, garbage collection, structural typing,[7] and CSP-style concurrency.[14] It is often referred to as Golang to avoid ambiguity and because of its former domain name, golang.org, but its proper name is Go.[15]
There are two major implementations: gc, the self-hosting compiler toolchain distributed with the language, and gofrontend, a frontend used with other compilers, as in gccgo and gollvm.
A third-party source-to-source compiler, GopherJS,[21] transpiles Go to JavaScript for front-end web development.
Go was designed at Google in 2007 to improve programming productivity in an era of multicore, networked machines and large codebases.[22] The designers wanted to address criticisms of other languages in use at Google, but keep their useful characteristics:[23]
Its designers were primarily motivated by their shared dislike of C++.[25][26][27]
Go was publicly announced in November 2009,[28] and version 1.0 was released in March 2012.[29][30] Go is widely used in production at Google[31] and in many other organizations and open-source projects.
In retrospect, the Go authors judged Go to be successful due to the overall engineering work around the language, including the runtime support for the language's concurrency feature.
Although the design of most languages concentrates on innovations in syntax, semantics, or typing, Go is focused on the software development process itself. ... The principal unusual property of the language itself—concurrency—addressed problems that arose with the proliferation of multicore CPUs in the 2010s. But more significant was the early work that established fundamentals for packaging, dependencies, build, test, deployment, and other workaday tasks of the software development world, aspects that are not usually foremost in language design.[32]
The Gopher mascot was introduced in 2009 for the open source launch of the language. The design, by Renée French, borrowed from a c. 2000 WFMU promotion.[33]
In November 2016, the Go and Go Mono fonts were released by type designers Charles Bigelow and Kris Holmes specifically for use by the Go project. Go is a humanist sans-serif resembling Lucida Grande, and Go Mono is monospaced. Both fonts adhere to the WGL4 character set and were designed to be legible with a large x-height and distinct letterforms. Both Go and Go Mono adhere to the DIN 1450 standard by having a slashed zero, a lowercase l with a tail, and an uppercase I with serifs.[34][35]
In April 2018, the original logo was redesigned by brand designer Adam Smith. The new logo is a modern, stylized GO slanting right with trailing streamlines. (The Gopher mascot remained the same.[36])
The lack of support for generic programming in initial versions of Go drew considerable criticism.[37] The designers expressed an openness to generic programming and noted that built-in functions were in fact type-generic, but are treated as special cases; Pike called this a weakness that might be changed at some point.[38] The Google team built at least one compiler for an experimental Go dialect with generics, but did not release it.[39]
In August 2018, the Go principal contributors published draft designs for generic programming and error handling and asked users to submit feedback.[40][41] However, the error handling proposal was eventually abandoned.[42]
In June 2020, a new draft design document[43] was published that would add the necessary syntax to Go for declaring generic functions and types. A code translation tool, go2go, was provided to allow users to try the new syntax, along with a generics-enabled version of the online Go Playground.[44]
Generics were finally added to Go in version 1.18 on March 15, 2022.[45]
Go 1 guarantees compatibility[46] for the language specification and major parts of the standard library. All versions up through the current Go 1.24 release[47] have maintained this promise.
Go uses a go1.[major].[patch] versioning format, such as go1.24.0, and each major Go release is supported until there are two newer major releases. Unlike most software, Go calls the second number in a version the major version, i.e., in go1.24.0 the 24 is the major version.[48] This is because Go plans never to reach 2.0, prioritizing backwards compatibility over potential breaking changes.[49]
Go is influenced by C (especially the Plan 9 dialect[50][failed verification – see discussion]), but with an emphasis on greater simplicity and safety. It consists of:
Go's syntax includes changes from C aimed at keeping code concise and readable. A combined declaration/initialization operator was introduced that allows the programmer to write i := 3 or s := "Hello, world!", without specifying the types of variables used. This contrasts with C's int i = 3; and const char *s = "Hello, world!";. Go also removes the requirement to use parentheses in if statement conditions.
Semicolons still terminate statements,[a] but are implicit when the end of a line occurs.[b]
Methods may return multiple values, and returning a result, err pair is the conventional way a method indicates an error to its caller in Go.[c] Go adds literal syntaxes for initializing struct parameters by name and for initializing maps and slices. As an alternative to C's three-statement for loop, Go's range expressions allow concise iteration over arrays, slices, strings, maps, and channels.[58]
fmt.Println("Hello World!") is a statement.
In Go, statements are separated by ending a line (hitting the Enter key) or by a semicolon ";".
Hitting the Enter key adds ";" to the end of the line implicitly (it does not show up in the source code).
The left curly bracket { cannot come at the start of a line.[59]
Go has a number of built-in types, including numeric ones (byte, int64, float32, etc.), Booleans, and byte strings (string). Strings are immutable; built-in operators and keywords (rather than functions) provide concatenation, comparison, and UTF-8 encoding/decoding.[60] Record types can be defined with the struct keyword.[61]
For each type T and each non-negative integer constant n, there is an array type denoted [n]T; arrays of differing lengths are thus of different types. Dynamic arrays are available as "slices", denoted []T for some type T. These have a length and a capacity specifying when new memory needs to be allocated to expand the array. Several slices may share their underlying memory.[38][62][63]
Pointers are available for all types, and the pointer-to-T type is denoted *T. Address-taking and indirection use the & and * operators, as in C, or happen implicitly through the method call or attribute access syntax.[64][65] There is no pointer arithmetic,[d] except via the special unsafe.Pointer type in the standard library.[66]
For a pair of types K, V, the type map[K]V is the type of maps from type-K keys to type-V values, though the Go Programming Language specification does not give any performance guarantees or implementation requirements for map types. Hash tables are built into the language, with special syntax and built-in functions. chan T is a channel that allows sending values of type T between concurrent Go processes.[67]
Aside from its support for interfaces, Go's type system is nominal: the type keyword can be used to define a new named type, which is distinct from other named types that have the same layout (in the case of a struct, the same members in the same order). Some conversions between types (e.g., between the various integer types) are pre-defined, and adding a new type may define additional conversions, but conversions between named types must always be invoked explicitly.[68] For example, the type keyword can be used to define a type for IPv4 addresses, based on 32-bit unsigned integers, as follows:
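The declaration itself is a single line; the demonstration wrapped around it here is illustrative:

```go
package main

import "fmt"

// ipv4addr is a distinct named type whose underlying representation is uint32.
type ipv4addr uint32

func main() {
	var x uint32 = 0x7f000001
	// The conversion must be written explicitly; plain assignment of x to
	// an ipv4addr variable would be a compile-time type error.
	addr := ipv4addr(x)
	fmt.Printf("%d.%d.%d.%d\n", byte(addr>>24), byte(addr>>16), byte(addr>>8), byte(addr)) // 127.0.0.1
}
```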
With this type definition, ipv4addr(x) interprets the uint32 value x as an IP address. Simply assigning x to a variable of type ipv4addr is a type error.[69]
Constant expressions may be either typed or "untyped"; they are given a type when assigned to a typed variable if the value they represent passes a compile-time check.[70]
Function types are indicated by the func keyword; they take zero or more parameters and return zero or more values, all of which are typed. The parameter and return values determine a function type; thus, func(string, int32) (int, error) is the type of functions that take a string and a 32-bit signed integer, and return a signed integer (of default width) and a value of the built-in interface type error.[71]
Any named type has a method set associated with it. The IP address example above can be extended with a method for checking whether its value is a known standard:
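For instance, a method testing for the limited broadcast address (the method name and the choice of "known standard" are illustrative):

```go
package main

import "fmt"

type ipv4addr uint32

// ZeroBroadcast reports whether addr is the limited broadcast address
// 255.255.255.255.
func (addr ipv4addr) ZeroBroadcast() bool {
	return addr == 0xFFFFFFFF
}

func main() {
	fmt.Println(ipv4addr(0xFFFFFFFF).ZeroBroadcast()) // true
	fmt.Println(ipv4addr(0x7f000001).ZeroBroadcast()) // false
}
```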
Due to nominal typing, this method definition adds a method to ipv4addr, but not to uint32. While methods have special definition and call syntax, there is no distinct method type.[72]
Go provides two features that replace class inheritance.[citation needed]
The first is embedding, which can be viewed as an automated form of composition.[73]
The second is its interfaces, which provide runtime polymorphism.[74]: 266 Interfaces are a class of types and provide a limited form of structural typing in the otherwise nominal type system of Go. An object which is of an interface type is also of another type, much like C++ objects being simultaneously of a base and derived class. The design of Go interfaces was inspired by protocols from the Smalltalk programming language.[75] Multiple sources use the term duck typing when describing Go interfaces.[76][77] Although the term duck typing is not precisely defined and therefore not wrong, it usually implies that type conformance is not statically checked. Because conformance to a Go interface is checked statically by the Go compiler (except when performing a type assertion), the Go authors prefer the term structural typing.[78]
The definition of an interface type lists required methods by name and type. Any object of type T for which functions exist matching all the required methods of interface type I is an object of type I as well. The definition of type T need not (and cannot) identify type I. For example, if Shape, Square and Circle are defined as
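A sketch of such definitions (the field names are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

type Shape interface {
	Area() float64
}

// Square satisfies Shape implicitly: no "implements" declaration appears.
type Square struct {
	side float64
}

func (sq Square) Area() float64 { return sq.side * sq.side }

type Circle struct {
	radius float64
}

func (c Circle) Area() float64 { return math.Pi * c.radius * c.radius }

func main() {
	var s Shape = Square{side: 2}
	fmt.Println(s.Area()) // 4
	s = Circle{radius: 1}
	fmt.Println(s.Area()) // 3.141592653589793
}
```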
then both a Square and a Circle are implicitly a Shape and can be assigned to a Shape-typed variable.[74]: 263–268 In formal language, Go's interface system provides structural rather than nominal typing. Interfaces can embed other interfaces with the effect of creating a combined interface that is satisfied by exactly the types that implement the embedded interface and any methods that the newly defined interface adds.[74]: 270
The Go standard library uses interfaces to provide genericity in several places, including the input/output system that is based on the concepts of Reader and Writer.[74]: 282–283
Besides calling methods via interfaces, Go allows converting interface values to other types with a run-time type check. The language constructs to do so are the type assertion,[79] which checks against a single potential type:
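For example, using the two-result form, whose boolean reports a mismatch instead of panicking:

```go
package main

import "fmt"

func main() {
	var val interface{} = "hello"

	s, ok := val.(string) // assertion succeeds
	fmt.Println(s, ok)    // hello true

	n, ok := val.(int) // assertion fails, but ok reports it rather than panicking
	fmt.Println(n, ok) // 0 false
}
```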
and the type switch,[80] which checks against multiple types:[citation needed]
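For example (the describe helper is illustrative, not a standard function):

```go
package main

import "fmt"

func describe(i interface{}) string {
	// The type switch binds v to the value with its concrete type in each case.
	switch v := i.(type) {
	case int:
		return fmt.Sprintf("int: %d", v)
	case string:
		return fmt.Sprintf("string: %q", v)
	default:
		return fmt.Sprintf("unhandled type %T", v)
	}
}

func main() {
	fmt.Println(describe(42))
	fmt.Println(describe("go"))
	fmt.Println(describe(3.14))
}
```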
The empty interface interface{} is an important base case because it can refer to an item of any concrete type. It is similar to the Object class in Java or C# and is satisfied by any type, including built-in types like int.[74]: 284 Code using the empty interface cannot simply call methods (or built-in operators) on the referred-to object, but it can store the interface{} value, try to convert it to a more useful type via a type assertion or type switch, or inspect it with Go's reflect package.[81] Because interface{} can refer to any value, it is a limited way to escape the restrictions of static typing, like void* in C but with additional run-time type checks.[citation needed]
The interface{} type can be used to model structured data of any arbitrary schema in Go, such as JSON or YAML data, by representing it as a map[string]interface{} (a map of string keys to empty-interface values). This recursively describes data in the form of a dictionary with string keys and values of any type.[82]
Interface values are implemented using a pointer to data and a second pointer to run-time type information.[83] Like some other types implemented using pointers in Go, interface values are nil if uninitialized.[84]
Since version 1.18, Go supports generic code using parameterized types.[85]
Functions and types now have the ability to be generic using type parameters. These type parameters are specified within square brackets, right after the function or type name.[86] The compiler transforms the generic function or type into a non-generic one by substituting type arguments for the type parameters, provided either explicitly by the user or via type inference by the compiler.[87] This transformation process is referred to as type instantiation.[88]
Interfaces can now define a set of types (known as a type set) using the | (union) operator, as well as a set of methods. These changes were made to support type constraints in generic code. For a generic function or type, a constraint can be thought of as the type of the type argument: a meta-type. The new ~T syntax is the first use of ~ as a token in Go; ~T means the set of all types whose underlying type is T.[89]
Go uses the iota keyword to create enumerated constants.[90][91]
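For example:

```go
package main

import "fmt"

// iota resets to 0 at the start of each const block and increments by one
// for each successive constant specification; omitted expressions repeat
// the previous one.
const (
	Red   = iota // 0
	Green        // 1
	Blue         // 2
)

func main() {
	fmt.Println(Red, Green, Blue) // 0 1 2
}
```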
In Go's package system, each package has a path (e.g.,"compress/bzip2"or"golang.org/x/net/html") and a name (e.g.,bzip2orhtml). By default other packages' definitions mustalwaysbe prefixed with the other package's name. However the name used can be changed from the package name, and if imported as_, then no package prefix is required. Only thecapitalizednames from other packages are accessible:io.Readeris public butbzip2.readeris not.[92]Thego getcommand can retrieve packages stored in a remote repository[93]and developers are encouraged to develop packages inside a base path corresponding to a source repository (such as example.com/user_name/package_name) to reduce the likelihood of name collision with future additions to the standard library or other external libraries.[94]
The Go language has built-in facilities, as well as library support, for writingconcurrent programs. The runtime isasynchronous: program execution that performs for example a network read will be suspended until data is available to process, allowing other parts of the program to perform other work. This is built into the runtime and does not require any changes in program code. The go runtime also automatically schedules concurrent operations (goroutines) across multiple CPUs; this can achieve parallelism for a properly written program.[95]
The primary concurrency construct is the goroutine, a type of green thread.[96]: 280–281 A function call prefixed with the go keyword starts a function in a new goroutine. The language specification does not specify how goroutines should be implemented, but current implementations multiplex a Go process's goroutines onto a smaller set of operating-system threads, similar to the scheduling performed in Erlang and Haskell's GHC runtime implementation.[97]: 10
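A goroutine is started by prefixing a call with go; the following sketch (the `squares` helper is illustrative) uses a sync.WaitGroup from the standard library to wait for all goroutines to finish:

```go
package main

import (
	"fmt"
	"sync"
)

// squares computes n squares concurrently, one goroutine per element.
func squares(n int) []int {
	var wg sync.WaitGroup
	out := make([]int, n)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(k int) { // the go keyword starts a new goroutine
			defer wg.Done()
			out[k] = k * k // each goroutine writes a distinct index
		}(i)
	}
	wg.Wait() // block until every goroutine has called Done
	return out
}

func main() {
	fmt.Println(squares(4)) // [0 1 4 9]
}
```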
While a standard library package featuring most of the classical concurrency control structures (mutex locks, etc.) is available,[97]: 151–152 idiomatic concurrent programs instead prefer channels, which send messages between goroutines.[98] Optional buffers store messages in FIFO order[99]: 43 and allow sending goroutines to proceed before their messages are received.[96]: 233
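A buffered channel's FIFO behavior can be sketched as follows (illustrative names):

```go
package main

import "fmt"

// fifo sends into a buffered channel, then drains it: the buffer lets
// both sends complete before any receive, and messages come out in
// first-in, first-out order.
func fifo() []string {
	ch := make(chan string, 2) // capacity 2: sends below do not block
	ch <- "first"
	ch <- "second"
	return []string{<-ch, <-ch}
}

func main() {
	fmt.Println(fifo()) // [first second]
}
```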
Channels are typed, so that a channel of type chan T can only be used to transfer messages of type T. Special syntax is used to operate on them; <-ch is an expression that causes the executing goroutine to block until a value comes in over the channel ch, while ch <- x sends the value x (possibly blocking until another goroutine receives the value). The built-in switch-like select statement can be used to implement non-blocking communication on multiple channels; see below for an example. Go has a memory model describing how goroutines must use channels or other operations to safely share data.[100]
The existence of channels does not by itself set Go apart from actor model-style concurrent languages like Erlang, where messages are addressed directly to actors (corresponding to goroutines); in the actor model, channels are themselves actors, so addressing a channel just means addressing an actor. The actor style can be simulated in Go by maintaining a one-to-one correspondence between goroutines and channels, but the language allows multiple goroutines to share a channel, or a single goroutine to send and receive on multiple channels.[97]: 147
From these tools one can build concurrent constructs like worker pools, pipelines (in which, say, a file is decompressed and parsed as it downloads), background calls with timeout, "fan-out" parallel calls to a set of services, and others.[101] Channels have also found uses further from the usual notion of interprocess communication, like serving as a concurrency-safe list of recycled buffers,[102] implementing coroutines (which helped inspire the name goroutine),[103] and implementing iterators.[104]
Concurrency-related structural conventions of Go (channels and alternative channel inputs) are derived from Tony Hoare's communicating sequential processes model. Unlike previous concurrent programming languages such as Occam or Limbo (a language on which Go co-designer Rob Pike worked),[105] Go does not provide any built-in notion of safe or verifiable concurrency.[106] While the communicating-processes model is favored in Go, it is not the only one: all goroutines in a program share a single address space. This means that mutable objects and pointers can be shared between goroutines; see § Lack of data race safety, below.
Although Go's concurrency features are not aimed primarily at parallel processing,[95] they can be used to program shared-memory multi-processor machines. Various studies have been done into the effectiveness of this approach.[107] One of these studies compared the size (in lines of code) and speed of programs written by a seasoned programmer not familiar with the language and corrections to these programs by a Go expert (from Google's development team), doing the same for Chapel, Cilk and Intel TBB. The study found that the non-expert tended to write divide-and-conquer algorithms with one go statement per recursion, while the expert wrote distribute-work-synchronize programs using one goroutine per processor core. The expert's programs were usually faster, but also longer.[108]
Go's approach to concurrency can be summarized as "don't communicate by sharing memory; share memory by communicating".[109] There are no restrictions on how goroutines access shared data, making data races possible. Specifically, unless a program explicitly synchronizes via channels or other means, writes from one goroutine might be partly, entirely, or not at all visible to another, often with no guarantees about ordering of writes.[106] Furthermore, Go's internal data structures like interface values, slice headers, hash tables, and string headers are not immune to data races, so type and memory safety can be violated in multithreaded programs that modify shared instances of those types without synchronization.[110][111] Instead of language support, safe concurrent programming thus relies on conventions; for example, Chisnall recommends an idiom called "aliases xor mutable", meaning that passing a mutable value (or pointer) over a channel signals a transfer of ownership over the value to its receiver.[97]: 155 The gc toolchain has included an optional data race detector, which checks for unsynchronized access to shared memory at runtime, since version 1.1;[112] additionally, a best-effort race detector for access to the map data type has been included by default since version 1.6 of the gc runtime.[113]
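As a minimal sketch of such convention-based synchronization (not code from the article), a shared counter can be guarded with a mutex; removing the lock would introduce a data race that the optional detector (go run -race) can report:

```go
package main

import (
	"fmt"
	"sync"
)

// counter guards its state with a mutex; deleting the Lock/Unlock pair
// would make the concurrent increments below a data race.
type counter struct {
	mu sync.Mutex
	n  int
}

func (c *counter) inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

// count increments one shared counter from many goroutines.
func count(workers int) int {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.inc()
		}()
	}
	wg.Wait()
	return c.n
}

func main() {
	fmt.Println(count(1000)) // always 1000 with the mutex in place
}
```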
The linker in the gc toolchain creates statically linked binaries by default; therefore all Go binaries include the Go runtime.[114][115]
Go deliberately omits certain features common in other languages, including (implementation) inheritance, assertions,[e] pointer arithmetic,[d] implicit type conversions, untagged unions,[f] and tagged unions.[g] The designers added only those facilities that all three agreed on.[118]
Of the omitted language features, the designers explicitly argue against assertions and pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful language, encouraging instead the use of interfaces to achieve dynamic dispatch[h] and composition to reuse code. Composition and delegation are in fact largely automated by struct embedding; according to researchers Schmager et al., this feature "has many of the drawbacks of inheritance: it affects the public interface of objects, it is not fine-grained (i.e., no method-level control over embedding), methods of embedded objects cannot be hidden, and it is static", making it "not obvious" whether programmers will overuse it to the extent that programmers in other languages are reputed to overuse inheritance.[73]
Exception handling was initially omitted in Go due to lack of a "design that gives value proportionate to the complexity".[119] An exception-like panic/recover mechanism that avoids the usual try-catch control structure was proposed[120] and released in the March 30, 2010 snapshot.[121] The Go authors advise using it for unrecoverable errors such as those that should halt an entire program or server request, or as a shortcut to propagate errors up the stack within a package.[122][123] Across package boundaries, Go includes a canonical error type, and multi-value returns using this type are the standard idiom.[4]
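A sketch of the panic/recover mechanism (the `safeDiv` function is illustrative): a deferred call to recover converts a runtime panic into an ordinary error value, the canonical cross-package idiom.

```go
package main

import "fmt"

// safeDiv converts a runtime panic (integer division by zero) into an
// ordinary error value via a deferred recover.
func safeDiv(a, b int) (q int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return a / b, nil // panics when b == 0
}

func main() {
	if _, err := safeDiv(1, 0); err != nil {
		fmt.Println(err) // the panic was caught, not a crash
	}
	q, _ := safeDiv(10, 2)
	fmt.Println(q) // 5
}
```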
The Go authors put substantial effort into influencing the style of Go programs:
The main Go distribution includes tools for building, testing, and analyzing code:
It also includes profiling and debugging support, fuzzing capabilities to detect bugs, runtime instrumentation (for example, to track garbage collection pauses), and a data race detector.
Another tool maintained by the Go team, but not included in Go distributions, is gopls, a language server that provides IDE features such as intelligent code completion to Language Server Protocol-compatible editors.[132]
An ecosystem of third-party tools adds to the standard distribution, such as gocode, which enables code autocompletion in many text editors, goimports, which automatically adds/removes package imports as needed, and errcheck, which detects code that might unintentionally ignore errors.
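The canonical "Hello, world!" program, to which the following note refers, is usually written as:

```go
package main

import "fmt"

func main() {
	fmt.Println("Hello, world!")
}
```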
where "fmt" is the package for formatted I/O, similar to C's file input/output.[133]
The following simple program demonstrates Go's concurrency features to implement an asynchronous program. It launches two lightweight threads ("goroutines"): one waits for the user to type some text, while the other implements a timeout. The select statement waits for either of these goroutines to send a message to the main routine, and acts on the first message to arrive (example adapted from David Chisnall's book).[97]: 152
The testing package provides support for automated testing of Go packages.[134] Target function example:
Test code (note that there is no assert keyword in Go; tests live in files named <filename>_test.go, in the same package as the code under test):
It is possible to run tests in parallel.
The net/http[135] package provides support for creating web applications.
This example would show "Hello world!" when localhost:8080 is visited.
Go has found widespread adoption in various domains due to its robust standard library and ease of use.[136]
Popular applications include: Caddy, a web server that automates the process of setting up HTTPS;[137] Docker, which provides a platform for containerization, aiming to ease the complexities of software development and deployment;[138] Kubernetes, which automates the deployment, scaling, and management of containerized applications;[139] CockroachDB, a distributed SQL database engineered for scalability and strong consistency;[140] and Hugo, a static site generator that prioritizes speed and flexibility, allowing developers to create websites efficiently.[141]
The interface system, and the deliberate omission of inheritance, were praised by Michele Simionato, who likened these characteristics to those of Standard ML, calling it "a shame that no popular language has followed [this] particular route".[142]
Dave Astels at Engine Yard wrote in 2009:[143]
Go is extremely easy to dive into. There are a minimal number of fundamental language concepts and the syntax is clean and designed to be clear and unambiguous.
Go is still experimental and still a little rough around the edges.
Go was named Programming Language of the Year by the TIOBE Programming Community Index in its first year, 2009, for having a larger 12-month increase in popularity (in only 2 months, after its introduction in November) than any other language that year, and reached 13th place by January 2010,[144] surpassing established languages like Pascal. By June 2015, its ranking had dropped to below 50th in the index, placing it lower than COBOL and Fortran.[145] But as of January 2017, its ranking had surged to 13th, indicating significant growth in popularity and adoption. Go was again awarded TIOBE Programming Language of the Year in 2016.[146]
Bruce Eckel has stated:[147]
The complexity ofC++(even more complexity has been added in the new C++), and the resulting impact on productivity, is no longer justified. All the hoops that the C++ programmer had to jump through in order to use a C-compatible language make no sense anymore -- they're just a waste of time and effort. Go makes much more sense for the class of problems that C++ was originally intended to solve.
A 2011 evaluation of the language and its gc implementation in comparison to C++ (GCC), Java and Scala by a Google engineer found:
Go offers interesting language features, which also allow for a concise and standardized notation. The compilers for this language are still immature, which reflects in both performance and binary sizes.
The evaluation got a rebuttal from the Go development team. Ian Lance Taylor, who had improved the Go code for Hundt's paper, had not been aware of the intention to publish his code, and says that his version was "never intended to be an example of idiomatic or efficient Go"; Russ Cox then optimized the Go code, as well as the C++ code, and got the Go code to run almost as fast as the C++ version and more than an order of magnitude faster than the code in the paper.[149]
On November 10, 2009, the day of the general release of the language, Francis McCabe, developer of the Go! programming language (note the exclamation point), requested a name change of Google's language to prevent confusion with his language, which he had spent 10 years developing.[156] McCabe raised concerns that "the 'big guy' will end up steam-rollering over" him, and this concern resonated with the more than 120 developers who commented on Google's official issues thread saying they should change the name, with some[157] even saying the issue contradicts Google's motto of Don't be evil.[158]
On October 12, 2010, the filed public issue ticket was closed by Google developer Russ Cox (@rsc) with the custom status "Unfortunate" accompanied by the following comment:
"There are many computing products and services named Go. In the 11 months since our release, there has been minimal confusion of the two languages."[158]
Source: https://en.wikipedia.org/wiki/Go_(programming_language)
Andrew Gordon Speedie Pask (28 June 1928 – 29 March 1996) was a British cybernetician, inventor and polymath who made multiple contributions to cybernetics, educational psychology, educational technology, applied epistemology, chemical computing, architecture, and systems art. During his life, he gained three doctorate degrees. He was an avid writer, with more than two hundred and fifty publications which included a variety of journal articles, books, periodicals, patents, and technical reports (many of which can be found at the main Pask archive at the University of Vienna).[Footnote 1] He worked as an academic and researcher for a variety of educational settings, research institutes, and private stakeholders including but not limited to the University of Illinois, Concordia University, the Open University, Brunel University and the Architectural Association School of Architecture.[1][2] He is known for the development of conversation theory.
Pask was born in Derby, England, on 28 June 1928, to his parents Percy and Mary Pask.[3] His father was a partner in Pask, Cornish and Smart, a wholesale fruit business in Covent Garden.[4] He had two older siblings: Alfred, who trained as an engineer before becoming a Methodist minister, and Edgar, a professor of anesthetics.[5][Footnote 2] His family moved to the Isle of Wight shortly after his birth.[3] He was educated at Rydal Penrhos. According to Andrew Pickering and G. M. Furtado Cardoso Lopes, school taught Pask to "be a gangster", and he was noted for having designed bombs during his time at Rydal Penrhos, which were delivered to a government ministry in relation to the war effort during the Second World War.[6][7] He later went on to complete two diplomas, in Geology and Mining Engineering, from Liverpool Polytechnic and Bangor University respectively.[3]
Pask later attended Cambridge University around 1949 to study for a bachelor's degree,[Footnote 3] where he met his future associate and business partner Robin McKinnon-Wood, then an undergraduate in Maths and Physics.[8][9] At the time, Pask was living in Jordan's Yard, Cambridge, under the supervision of the scientist and engineer John Brickell. During this time, Pask was better known for his work in the arts and musical theatre than for his later pursuits in science and education.[8] He became interested in cybernetics and information theory in the early 1950s when Norbert Wiener was asked to give a presentation on the subject at the university.[10][9][Footnote 4]
He eventually obtained an MA in natural sciences from the university in 1952,[3] and met his future wife Elizabeth Pask (née Poole) around this time at the birthday party of a mutual friend, when she was studying at Liverpool University and he was visiting his father in Wallasey, Mersey.[11] They married in 1956 and later had two daughters together.[3]
In 1953, Pask formally founded, alongside his wife Elizabeth and Robin McKinnon-Wood, the research organization System Research Ltd. in Richmond, Surrey.[3][12] According to McKinnon-Wood, his and Pask's early forays in musical comedy production at Cambridge through their earlier company Sirelelle laid the groundwork for the later company, which they viewed as being "wholly consistent with the development of self-adaptive systems, self-organizing systems, man-machine interactions[,] etc".[8][Footnote 5] After rebranding the company to System Research Ltd., the company became non-profit in 1961, with significant funding derived from the United States Army and Air Force.[3][13]
Throughout the company's existence, it conducted a variety of research and development initiatives on behalf of civil service organizations and research councils in both the United States and the United Kingdom.[3][14] During the active period of System Research Ltd., he and his associates worked on a number of projects, including SAKI (self-adaptive keyboard machine), MusiColour (a light show whose colored lights would reduce their responsiveness to a given keyboard input over time, so as to induce the keyboard player to play a different range of notes),[15] and finally educational technologies such as CASTE (Course Assembly System Tutorial Environment) and Thoughtsticker (both of which were developed in the context of what became conversation theory).[3][16]
During this period, Pask and McKinnon-Wood were asked to demonstrate their proof of concept for MusiColour on behalf of Billy Butlin.[17][18] While the machine initially worked when the duo demonstrated the technology to Butlin's deputy, after his arrival "it exploded in a cloud of white smoke",[17] due to McKinnon-Wood "buying junk electronic capacitors".[17] The duo managed to restart the machine, after which McKinnon-Wood purports Butlin to have remarked that if such a machine could withstand an explosion like that, it must be reliable.[17]
Stafford Beer also claims to have met Pask sometime during this period, at a dinner party in Sheffield,[19][Footnote 6] and notes both his genius and the difficulty of following his thought and of getting hold of him; remarking that "[Pask's] conception of things is not anyone else's perception of things",[20] and that "The man can be quite infuriating".[21] Between the early to mid-1950s, Pask began to develop electrochemical devices designed to find their own "relevance criteria".[22][23] Pask performed experiments utilizing "electrochemical assemblages, passing current through various aqueous solutions of metallic salts (e.g., ferrous sulfate) in order to construct an analog control system".[22] During the late 1950s, Pask managed to get a prototype device working.[24] Oliver Selfridge noted that it was the second such mechanism, whereby "a machine build a machine electronically without any physical motion", that actually worked.[25]
In September 1958, he attended the second International Congress of Cybernetics in Namur, Belgium. Pask was first introduced to Heinz von Foerster during this time; the two were informed by the attendees of the conference that they had submitted similar papers.[26][27] After searching for Pask through the streets of Namur, von Foerster described his first observation of Pask as that of a "leprechaun in a black double-breasted jacket over a white shirt with a black bow tie, puffing a cigarette through a long cigarette holder, and fielding questions, always with a polite smile, that were tossed at him from all directions".[28] Von Foerster later asked Pask to join him at the Biological Computer Laboratory at the University of Illinois,[29][27] subsequently describing him after his death as both difficult and yet a genius.[30] That year he also produced SAKI (self-adaptive keyboard machine) for the instruction and development of keyboard skills, aimed at the commercial marketplace.[1]
His former research assistant Bernard Scott argues that the "Mechanisation of Thought Processes" conference at the National Physical Laboratory in Teddington,[Footnote 7] London, represented a critical point in the development of Pask's thinking.[Footnote 8] It was here that Pask first published his paper "Physical Analogues to the Growth of a Concept" (1959), which contained a theoretical discussion of how the "growth of crystals [through the use of] electrodes suspended in an electronic solution" could be used to represent, in purely physical phenomena, the growth of a concept.[27] Warren McCulloch wrote in relation to the presentation that "[Pask's] gadget does work; it does "take habits" by a mechanism that Charles Peirce proposed".[31][Footnote 9] During the later years of this period, Pask had begun to describe himself as a mechanic philosopher, to emphasize both the theoretical and experimental aspects of his role.[1][Footnote 10]
During the 1960s, Pask worked significantly with psychologist B. N. Lewis and computer scientist G. L. Mallen.[13][Footnote 11] In 1961, Pask published An Approach to Cybernetics.[32] According to Ranulph Glanville, the work argued in favour of the notion that cybernetics was at its heart the art of creating defensible metaphors; this being in reference to the cross-disciplinary nature of the early cybernetics movement, which specifically stressed how analogous forms of control and communication could be found operating between disciplines.[33]
Mallen joined System Research Ltd. in 1964 as a research associate on a project to analyse decision-making in crime investigation. This led to the development of SIMPOL (SIMulation of a POLice system), an information management game. Results from the project were reported back to the Home Office and were believed by Mallen to have had some impact on policy decisions taken by the police.[34] Mallen described Gordon as "a great gadgeteer and had built adaptive teaching machines, for example, to train teleprinter operators, and he used these as a way into understanding human skill learning processes".[35] Mallen suggests that also during this year, Pask presented a lecture at Ealing College of Art on system theory and cybernetics.[36] He writes that this influenced several students there, and represented a general ethos in the 1960s regarding the breaking of disciplinary boundaries, for which Systems Research Ltd. became a central convergence point.[37] One notable project Pask became involved with was the Fun Palace, conceived with the aid of Joan Littlewood and Cedric Price.[38]
Sometime during this period, Pask met George Spencer-Brown, who became a lodger at the Pask family's home while working at Stafford Beer and Roger Eddison's operational research consultancy SIGMA (Science in General Management) via a strong recommendation from Bertrand Russell.[39] It was here that Spencer-Brown is said to have written his Laws of Form for long hours whilst inebriated in the Pask family's bathtub.[15][39] According to Vanilla Beer, Stafford's daughter, Pask is purported to have claimed, while reminiscing about Spencer-Brown's time at his and his wife's household, that "When [Spencer-Brown] bathed, it wasn't often. He used my gin, to wash in".[39] His wife Elizabeth is also purported to have said, in reference to Spencer-Brown having forgotten her name after he ceased to be a lodger, "I wouldn't mind, but I cooked for him for six months".[39]
Pask later earned a PhD in psychology from the University of London in 1964,[3] and joined Brunel University in 1968 as one of the founding Professors of its Cybernetics Department.[40] The department was originally intended to be a research institute, spearheaded by the media proprietor Cecil Harmsworth King, who was influenced by Stafford Beer's work in management consulting. King died, however, shortly before its opening, meaning that the Brunel enterprise mostly became a post-graduate teaching department rather than a research institute.[40] Since Pask could not find a viable way of combining his work at System Research Ltd. with the department, he decided, with the department's permission, to become a part-time Professor there, while Frank George became full-time head of the Cybernetics Department.[40] It was here he recruited Bernard Scott, to whom he was introduced by David Stuart, a newly appointed lecturer in the Department of Psychology at Brunel.[41] Scott went on a six-month internship as a research assistant at System Research Ltd., and would later be a major contributor to the development of conversation theory.[42][43]
Pask later discontinued his work on chemical computers.[44] This may have happened during the early 1960s, or during the mid-1960s.[45] According to Peter Cariani, funding for alternative approaches to artificial intelligence had dried up. This turn in direction was triggered by a greater emphasis on research utilizing symbolic artificial intelligence. Previous approaches to artificial intelligence, which included the use of neural nets, evolutionary programming, cybernetics, bionics, and bio-inspired computing, were side-lined by various funding bodies and interest groups. This placed greater pressure on System Research Ltd. to use more orthodox digital computer approaches to technology-based issues.[46] Cariani has expressed the view that if we were to build physical devices à la Pask, we would replicate a kind of electrochemical assemblage that would "have properties radically different from contemporary neural networks".[47]
Mallen documents that in 1968, Pask arrived to "create an exhibit for Jasia Reichardt's planned Cybernetic Serendipity project at the Institute of Contemporary Arts".[38] It was here that Pask's Colloquy of Mobiles was first exhibited. The figures in the exhibit would dance and rotate when spectators entered their vicinity. The system was built by Mark Dowson and Tony Watts, based on Pask's initial conception and with Mallen helping to install it.[38] According to Mallen, "It proved popular when it worked, but was a mite unreliable".[38]
In 1970, Mallen and others designed Ecogame, a system dynamics model of a hypothetical national economy,[48] which encouraged participants to reflect on their own behavior in the system. The pedagogical function was influenced by Pask's research and activity in cybernetics and media art.[49] According to Claudia Costa Pederson, Pask understood and put emphasis on the view that learning is a self-organized, mutual and participatory process; Ecogame was therefore a pedagogical simulation, meant to engage the viewer through an intuitive interface.[49] It was successfully demonstrated in September 1970 at the Computer '70 trade show at the Olympia conference centre in London. Ecogame was subsequently incorporated into the program of the First European Management Forum in February 1971, which later emerged as the forerunner to the World Economic Forum in Davos.[49] A version of Ecogame was sold to IBM for management education at the Blaricum IBM center. The slide projection technology of Ecogame was incorporated by Stafford Beer into Project Cybersyn, implemented by Salvador Allende in Chile.[49]
During the early 1970s, Pask became heavily involved in joint initiatives between his company and the Centre for the Study of Human Learning (CSHL), alongside Laurie Thomas and Shelia Harri-Augstein at Brunel, on behalf of the Ministry of Defence, to examine conversational approaches to anger; there he exhibited, alongside his associates at his company, his CASTE and BOSS technologies.[50] By 1972, Pask began the process of compiling his work into the form of "a formal theory of conversational processes".[51] Due to the academic environment Pask was working in, with its emphasis on empirical studies and general distrust of grand theory, he decided early on, from 1972 to 1973, to report on the experimental contents of his research.[52] Whilst visiting professor of educational technology, he obtained a DSc in cybernetics from the Open University in 1974.[3]
The collective work on Pask's interest in conversation at this time culminated in three major publications, with the aid of Bernard Scott, Dionysius Kallikourdis, and others. At the same time, Pask, with the assistance of the computer scientist Nick Green and others, had begun to work on military contracts on behalf of the United States Army and the United States Army Air Forces respectively.[53] In 1975, Pask's team at System Research Ltd. had written and published The Cybernetics of Human Learning & Performance and Conversation, Cognition and Learning: A Cybernetic Theory and Methodology.[54][55] In the subsequent year, 1976, they published Conversation Theory: Applications in Education and Epistemology.[56] It has been claimed that due to the prevailing orthodox attitudes of psychological research at the time, his work did not gain widespread acceptance in that area but found more success in educational research.[57][58] Sometime between 1975 and 1978, Pask also received funding from the Science and Engineering Research Council to develop the "Spy Ring" test in relation to his theory of learning styles.[53]
Around 1978, Pask became more heavily involved in Ministry of Defence projects, yet he was struggling to keep his own company viable.[59] The company disbanded in the early 1980s, whereupon he moved on to teach for a time at Concordia University, then the University of Amsterdam (in the Centre for Innovation and Co-operative Technology), and the Architectural Association in London,[60][61] where he acted as a doctoral supervisor for Ranulph Glanville.[62] During the early 1980s, Pask co-authored Calculator Saturnalia (1980) with Ranulph Glanville and Mike Robinson, a collection of games to play on a calculator; he also co-authored Microman: Living and Growing with Computers (1982) with Susan Curran (Macmillan).[61] Edward Barnes asserts that during this period, his work on conversation theory "was further refined during the 1980s and until Pask's death in 1996 by his research group in Amsterdam. This latter refinement is called interaction of actors (IA) theory".[63][Footnote 12]
According to Glanville, Pask semi-retired on 28 June 1993.[62] During the last few years of his life, Pask set up the company Pask Associates, a management consultancy firm whose clients included the Club of Rome, Hydro Aluminium, and the Architectural Association.[53][64] He also provided some preliminary work for a project on behalf of the London Underground, and received initial support from Greenpeace International at Imperial College London's Department of Electronics for a project in quantitative chemical analysis.[53] He obtained an ScD from his college, Downing College, Cambridge, in 1995,[3] and died on 29 March 1996 at the London Clinic.[65]
Pask's primary contributions to cybernetics, educational psychology, learning theory, and systems theory, as well as to numerous other fields, were his emphasis on the personal nature of reality, and on the process of learning as stemming from the consensual agreement of interacting actors in a given environment ("conversation").[citation needed]
In later life, Pask benefited less often from the critical feedback of research peers, reviewers of proposals, or reports to government bodies in the US and UK. Nevertheless, his publications were considered a storehouse of ideas that are not fully theorized.[66]
Ted Nelson, who coined the concept of hypermedia, references Pask in Computer Lib/Dream Machines.[citation needed]
Pask acted as a consultant to Nicholas Negroponte, whose earliest research efforts at the Architecture Machine Group on idiosyncrasy and software-based partners for design have their roots in Pask's work.[citation needed]
Andrew Pickering argues that Pask was a "character" in the traditional British sense of the term, likening him to both Stafford Beer and Grey Walter. His dress sense was eccentric and flamboyant for his time, adopting the dress of an Edwardian dandy with his signature bow tie, double-breasted jacket, and cape.[67] His sleep pattern, later in life, was described as "nocturnal"; he would often begin his work at night and sleep during the day.[68] Mallen meanwhile has suggested: "He ran his life on a 36-hour rhythm which meant sleep times and meal times seldom coincided with those of us on normal 24-hour diurnal rhythms. Nevertheless the theories and ideas which came of the resulting late night conversations were intellectually very stimulating, if physically demanding".[69] Furtado Cardoso Lopes notes that even from an early age, it was "Pask's curiosity, interdisciplinarity and interest in the complex nature of things that fuelled his incursion into cybernetics".[7]
Pask's "power to inspire [others] was evident throughout his working life".[70] He was noted by his former colleagues as being capable of great kindness and generosity,[Footnote 13] yet also of an occasional utter disregard for the individuals he associated himself with.[70][4] Part of this was due to his view that "conflict is a source of cognitive energy and thereby a means for moving a system forward more rapidly".[4] According to Luis Rocha, "Conflict was in fact one of his preferred tools to achieve consensual understanding between participants in a conversation".[71]
This generation of conflict, however, is noted to have sometimes driven those around him further away than he would have preferred.[4] This is evidenced in his own technological pursuits, where "His touch-typing tutor pushed the learner harder and harder, to the point where the rate of learning is greatest but also closest to the brink of system collapse".[4] While his friends and colleagues often recognized his genius, they would also acknowledge him as being at times difficult to get along with,[21][30] as well as "some need[ing] time to recover".[4]
He mellowed in later years and, inspired by his wife Elizabeth, converted to Roman Catholicism,[72] which according to Scott, "deeply satisfied his need for understandings that address the great mysteries of life".[70] Even with this mellowing, however, his innate intensity of character and interests was nonetheless always there.[15]
According to Paul Pangaro, a former collaborator and PhD student of his, Pask was critical of certain interpretations of artificial intelligence which were common during the eras he was active in.[4] Alex Andrew has argued that Pask's interest in what is now labelled "artificial intelligence" came from his general interest "in constructing artefacts with brain-like properties".[73] Pangaro claims that Pask had managed to simulate intelligence-like behaviours with electro-mechanical machines in the 1950s, further arguing: "By realising that intelligence resides in interaction, not inside a head or box, his path was clear. To those who didn't understand his philosophical stance, the value of his work was invisible [to them]".[4] The emphasis for Pask, according to Pangaro, was that human intellectual activity existed as part of a kind of resonance that looped from a human individual through an environment or apparatus, back through to the individual.[4][15][Footnote 14]
Pask took a broad understanding of what cybernetics entailed. Unlike physics, cybernetics had in Pask's mind no necessary commitment to a particular image of what constitutes the environment; the focus is instead on what one comes to know through observation.[74] Pask saw it as mistaken to view cybernetics reductively. For him, cybernetics was not merely a derivative of other disciplines or an applied science.[12] Instead, Pask held true to Norbert Wiener's original vision by acknowledging that cybernetics attempts to provide a unifying framework for various disciplines by establishing "a common language and set of shared principles for understanding the organization of complex systems".[70][12]
Pask participated in the seminal exhibition "Cybernetic Serendipity" (ICA London, 1968) with the interactive installation "Colloquy of Mobiles", continuing his dialogue with the visual and performing arts (cf. Rosen 2008 and Dreher's History of Computer Art).
Pask collaborated with architect Cedric Price and theatre director Joan Littlewood on the radical Fun Palace project during the 1960s, setting up the project's 'Cybernetics Subcommittee'.
Musicolour was an interactive light installation developed by Pask in 1953.[75]It responded to musicians' variations and, if they did not vary their playing, it would become 'bored' and stop responding, prompting the musicians to respond.
Musicolour was influential on Cedric Price's Generator project, via the work of consultants Julia and John Frazer.[76][77]
SAKI (self-adaptive keyboard instructor) was an adaptive keyboard-training machine created by Pask that fostered interactivity between user and machine, adjusting its exercises to the individual learner's performance.
Thoughtsticker (written as THOUGHTSTICKER) was described by Pask and his fellow collaborators in the 1970s as a special type of educational operating system.[Footnote 15][78] In the operating system, a user makes a concrete model or collection of concrete models in the concrete modeling facility of that operating system.[79] The user then sets out to describe why and how the model or collection of models relates to satisfying some overarching goal or thesis, via describing their cognitive model or personal construct of that relation in the cognitive modeling facility of that operating system.[79] In explaining why and how the model or collection of models satisfies the goal or thesis, the user may add to their original concrete model, or provide new descriptions of topics for their cognitive model that had not been sufficiently elaborated upon.[79] Compared to Pask's EXTEND unit, Thoughtsticker was said to exteriorize the innovation of ideas in learning, whereas EXTEND merely permitted and recorded the product of such a process.[80]
Pask wrote extensively and contributed to a variety of institutions, journals, and publishing houses. Many items in the following list of publications have been identified at the Pask archive at the University of Vienna.[Footnote 1]
|
https://en.wikipedia.org/wiki/Gordon_Pask
|
The International Conference on Concurrency Theory (CONCUR) is an academic conference in the field of computer science, with a focus on the theory of concurrency and its applications. It is the flagship conference for concurrency theory according to the International Federation for Information Processing Working Group on Concurrency Theory (WG 1.8).[1] The conference has been organised annually since 1988. Since 2015, papers presented at CONCUR have been published in LIPIcs – Leibniz International Proceedings in Informatics, a "series of high-quality conference proceedings across all fields in informatics established in cooperation with Schloss Dagstuhl – Leibniz Center for Informatics".[2][3] Previously, CONCUR papers were published in the Lecture Notes in Computer Science series.[4]
In 2020, the International Conference on Concurrency Theory (CONCUR) and the IFIP Working Group 1.8 on Concurrency Theory established the CONCUR Test-of-Time Award.
The goal of the award is to recognize important achievements in concurrency theory that have stood the test of time and were published at CONCUR since its first edition in 1990.[16] Starting with CONCUR 2024, an award event will take place every other year and recognize one or two papers presented at CONCUR in the 4-year period from 20 to 17 years earlier. From 2020 to 2023, two such award events are combined each year, in order to also recognize achievements that appeared in the early editions of CONCUR.[17]
|
https://en.wikipedia.org/wiki/International_Conference_on_Concurrency_Theory
|
OpenMP is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran,[3] on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, Windows and OpenHarmony. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.[2][4][5][6]
OpenMP is managed by the nonprofit technology consortium OpenMP Architecture Review Board (or OpenMP ARB), jointly defined by a broad swath of leading computer hardware and software vendors, including Arm, AMD, IBM, Intel, Cray, HP, Fujitsu, Nvidia, NEC, Red Hat, Texas Instruments, and Oracle Corporation.[1]
OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.
An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), such that OpenMP is used for parallelism within a (multi-core) node while MPI is used for parallelism between nodes. There have also been efforts to run OpenMP on software distributed shared memory systems,[7] to translate OpenMP into MPI[8][9] and to extend OpenMP for non-shared memory systems.[10]
OpenMP is an implementation of multithreading, a method of parallelizing whereby a primary thread (a series of instructions executed consecutively) forks a specified number of sub-threads and the system divides a task among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.
The section of code that is meant to run in parallel is marked accordingly, with a compiler directive that will cause the threads to form before the section is executed.[3] Each thread has an ID attached to it which can be obtained using a function (called omp_get_thread_num()). The thread ID is an integer, and the primary thread has an ID of 0. After the execution of the parallelized code, the threads join back into the primary thread, which continues onward to the end of the program.
By default, each thread executes the parallelized section of code independently. Work-sharing constructs can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.
The runtime environment allocates threads to processors depending on usage, machine load and other factors. The runtime environment can assign the number of threads based on environment variables, or the code can do so using functions. The OpenMP functions are included in a header file labelled omp.h in C/C++.
The OpenMP Architecture Review Board (ARB) published its first API specifications, OpenMP for Fortran 1.0, in October 1997. In October the following year, they released the C/C++ standard. Version 2.0 of the Fortran specifications followed in 2000, and version 2.0 of the C/C++ specifications in 2002. Version 2.5, released in 2005, is a combined C/C++/Fortran specification.[citation needed]
Up to version 2.0, OpenMP primarily specified ways to parallelize highly regular loops, as they occur in matrix-oriented numerical programming, where the number of iterations of the loop is known at entry time. This was recognized as a limitation, and various task-parallel extensions were added to implementations. In 2005, an effort to standardize task parallelism was formed, which published a proposal in 2007, taking inspiration from task parallelism features in Cilk, X10 and Chapel.[11]
Version 3.0 was released in May 2008. Included in the new features in 3.0 is the concept of tasks and the task construct,[12] significantly broadening the scope of OpenMP beyond the parallel loop constructs that made up most of OpenMP 2.0.[13]
Version 4.0 of the specification was released in July 2013.[14] It adds or improves the following features: support for accelerators; atomics; error handling; thread affinity; tasking extensions; user-defined reduction; SIMD support; Fortran 2003 support.[15][full citation needed]
Version 5.2 of OpenMP was released in November 2021.[16]
Version 6.0 was released in November 2024.[17]
Note that not all compilers (and operating systems) support the full set of features of the latest versions.
The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data-environment management, thread synchronization, user-level runtime routines and environment variables.
In C/C++, OpenMP uses #pragmas. The OpenMP-specific pragmas are listed below.
The pragma omp parallel is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread is denoted as the master thread, with thread ID 0.
Example (C program): Display "Hello, world." using multiple threads.
Use flag -fopenmp to compile using GCC:
Output on a computer with two cores, and thus two threads:
However, the output may also be garbled because of the race condition caused by the two threads sharing the standard output.
Whether printf is atomic depends on the underlying implementation,[18] unlike C++11's std::cout, which is thread-safe by default.[19]
Used to specify how to assign independent work to one or all of the threads.
Example: initialize the value of a large array in parallel, using each thread to do part of the work
This example is embarrassingly parallel, and depends only on the value of i. The OpenMP parallel for flag tells the OpenMP system to split this task among its working threads. The threads will each receive a unique and private version of the variable.[20] For instance, with two worker threads, one thread might be handed a version of i that runs from 0 to 49999 while the second gets a version running from 50000 to 99999.
Variant directives are one of the major features introduced in the OpenMP 5.0 specification to help programmers improve performance portability. They enable adaptation of OpenMP pragmas and user code at compile time. The specification defines traits to describe active OpenMP constructs, execution devices, and functionality provided by an implementation; context selectors based on the traits and user-defined conditions; and the metadirective and declare variant directives for users to program the same code region with variant directives.
The mechanism provided by the two variant directives for selecting variants is more convenient to use than the C/C++ preprocessing since it directly supports variant selection in OpenMP and allows an OpenMP compiler to analyze and determine the final directive from variants and context.
Since OpenMP is a shared-memory programming model, most variables in OpenMP code are visible to all threads by default. But sometimes private variables are necessary to avoid race conditions, and there is a need to pass values between the sequential part and the parallel region (the code block executed in parallel), so data environment management is introduced as data-sharing attribute clauses by appending them to the OpenMP directive. The different types of clauses are:
Used to modify/check the number of threads, detect whether the execution context is in a parallel region, query how many processors are in the current system, set/unset locks, call timing functions, etc.
A method to alter the execution features of OpenMP applications. Used to control loop iteration scheduling, the default number of threads, etc. For example, OMP_NUM_THREADS is used to specify the number of threads for an application.
OpenMP has been implemented in many commercial compilers. For instance, Visual C++ 2005, 2008, 2010, 2012 and 2013 support it (OpenMP 2.0, in Professional, Team System, Premium and Ultimate editions[21][22][23]), as does Intel Parallel Studio for various processors.[24] Oracle Solaris Studio compilers and tools support the latest OpenMP specifications with productivity enhancements for Solaris OS (UltraSPARC and x86/x64) and Linux platforms. The Fortran, C and C++ compilers from The Portland Group also support OpenMP 2.5. GCC has also supported OpenMP since version 4.2.
Compilers with an implementation of OpenMP 3.0:
Several compilers support OpenMP 3.1:
Compilers supporting OpenMP 4.0:
Several Compilers supporting OpenMP 4.5:
Partial support for OpenMP 5.0:
Auto-parallelizing compilers that generate source code annotated with OpenMP directives:
Several profilers and debuggers expressly support OpenMP:
Pros:
Cons:
One might expect to get an N-times speedup when running a program parallelized using OpenMP on an N-processor platform. However, this seldom occurs, for these reasons:
Some vendors recommend setting the processor affinity on OpenMP threads to associate them with particular processor cores.[46][47][48] This minimizes thread migration and context-switching cost among cores. It also improves the data locality and reduces the cache-coherency traffic among the cores (or processors).
A variety of benchmarks have been developed to demonstrate the use of OpenMP, test its performance and evaluate correctness.
Simple examples
Performance benchmarks include:
Correctness benchmarks include:
|
https://en.wikipedia.org/wiki/OpenMP
|
In computer science, partitioned global address space (PGAS) is a parallel programming model paradigm. PGAS is typified by communication operations involving a global memory address space abstraction that is logically partitioned, where a portion is local to each process, thread, or processing element.[1][2] The novelty of PGAS is that the portions of the shared memory space may have an affinity for a particular process, thereby exploiting locality of reference in order to improve performance. A PGAS memory model is featured in various parallel programming languages and libraries, including: Coarray Fortran, Unified Parallel C, Split-C, Fortress, Chapel, X10, UPC++, Coarray C++, Global Arrays, DASH and SHMEM. The PGAS paradigm is now an integrated part of the Fortran language, as of Fortran 2008, which standardized coarrays.
The various languages and libraries offering a PGAS memory model differ widely in other details, such as the base programming language and the mechanisms used to express parallelism. Many PGAS systems combine the advantages of an SPMD programming style for distributed memory systems (as employed by MPI) with the data referencing semantics of shared memory systems. In contrast to message passing, PGAS programming models frequently offer one-sided communication operations such as Remote Memory Access (RMA), whereby one processing element may directly access memory with affinity to a different (potentially remote) process, without explicit semantic involvement by the passive target process. PGAS offers more efficiency and scalability than traditional shared-memory approaches with a flat address space, because hardware-specific data locality can be explicitly exposed in the semantic partitioning of the address space.
A variant of the PGAS paradigm, asynchronous partitioned global address space (APGAS), augments the programming model with facilities for both local and remote asynchronous task creation.[3] Two programming languages that use this model are Chapel and X10.
|
https://en.wikipedia.org/wiki/Partitioned_global_address_space
|
Pony (also referred to as ponylang) is a free and open-source, object-oriented, actor-model, capabilities-secure, high-performance programming language.[6][7] Pony's reference capabilities allow even mutable data to be safely passed by reference between actors. Garbage collection is performed concurrently, per actor, which eliminates the need to pause program execution or "stop the world".[8][9][10] Sylvan Clebsch is the original creator of the language.[11][12] It is now being maintained and developed by members of the Pony team.[13]
The language was created by Sylvan Clebsch while a PhD student at Imperial College London. His professor at that time was Sophia Drossopoulou, who is also well known for her contributions to computer programming and as a lecturer. According to developers who have talked to Sylvan, he was frustrated with not having a high-performance language that could run concurrent code securely, safely, and more simply.[14]
At its core, Pony is a systems language designed around safety and performance.
In Pony, instead of a main function, there is a main actor. The creation of this actor serves as the entry point into the Pony program.[6][17]
There are no global variables in Pony, meaning everything must be contained within an instance of a class or an actor.[14] As such, even the environment that allows for printing to standard output is passed as a parameter.[14][6]
|
https://en.wikipedia.org/wiki/Pony_(programming_language)
|
In computing, a process is the instance of a computer program that is being executed by one or many threads. There are many different process models, some of which are lightweight, but almost all processes (even entire virtual machines) are rooted in an operating system (OS) process which comprises the program code, assigned system resources, physical and logical access permissions, and data structures to initiate, control and coordinate execution activity. Depending on the OS, a process may be made up of multiple threads of execution that execute instructions concurrently.[1][2]
While a computer program is a passive collection of instructions typically stored in a file on disk, a process is the execution of those instructions after being loaded from the disk into memory. Several processes may be associated with the same program; for example, opening up several instances of the same program often results in more than one process being executed.
Multitasking is a method to allow multiple processes to share processors (CPUs) and other system resources. Each CPU (core) executes a single process at a time. However, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish (preemption). Depending on the operating system implementation, switches could be performed when tasks initiate and wait for completion of input/output operations, when a task voluntarily yields the CPU, on hardware interrupts, and when the operating system scheduler decides that a process has expired its fair share of CPU time (e.g., by the Completely Fair Scheduler of the Linux kernel).
A common form of multitasking is provided by the CPU's time-sharing, a method for interleaving the execution of users' processes and threads, and even of independent kernel tasks, although the latter feature is feasible only in preemptive kernels such as Linux. Preemption has an important side effect for interactive processes: they are given higher priority with respect to CPU-bound processes, so users are immediately assigned computing resources at the simple pressing of a key or when moving a mouse. Furthermore, applications like video and music playback are given some kind of real-time priority, preempting any other lower-priority process. In time-sharing systems, context switches are performed rapidly, which makes it seem like multiple processes are being executed simultaneously on the same processor. This seemingly simultaneous execution of multiple processes is called concurrency.
For security and reliability, most modern operating systems prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication.
In general, a computer system process consists of (or is said to own) the following resources:
The operating system holds most of this information about active processes in data structures called process control blocks. Any subset of the resources, typically at least the processor state, may be associated with each of the process's threads in operating systems that support threads or child processes.
The operating system keeps its processes separate and allocates the resources they need, so that they are less likely to interfere with each other and cause system failures (e.g., deadlock or thrashing). The operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways.
A multitasking operating system may just switch between processes to give the appearance of many processes executing simultaneously (that is, in parallel), though in fact only one process can be executing at any one time on a single CPU (unless the CPU has multiple cores, in which case multithreading or other similar technologies can be used).[a]
It is usual to associate a single process with a main program, and child processes with any spin-off, parallel processes, which behave like asynchronous subroutines. A process is said to own resources, of which an image of its program (in memory) is one such resource. However, in multiprocessing systems many processes may run off of, or share, the same reentrant program at the same location in memory, but each process is said to own its own image of the program.
Processes are often called "tasks" in embedded operating systems. The sense of "process" (or task) is "something that takes up time", as opposed to "memory", which is "something that takes up space".[b]
The above description applies both to processes managed by an operating system and to processes as defined by process calculi.
If a process requests something for which it must wait, it will be blocked. When the process is in the blocked state, it is eligible for swapping to disk, but this is transparent in a virtual memory system, where regions of a process's memory may really be on disk and not in main memory at any time. Even portions of active processes/tasks (executing programs) are eligible for swapping to disk, if the portions have not been used recently. Not all parts of an executing program and its data have to be in physical memory for the associated process to be active.
An operating system kernel that allows multitasking needs processes to have certain states. Names for these states are not standardised, but they have similar functionality.[1]
When processes need to communicate with each other, they must share parts of their address spaces or use other forms of inter-process communication (IPC).
For instance, in a shell pipeline, the output of the first process needs to pass to the second one, and so on. Another example is a task that has been decomposed into cooperating but partially independent processes which can run simultaneously (i.e., using concurrency, or true parallelism – the latter model is a particular case of concurrent execution and is feasible whenever multiple CPU cores are available for the processes that are ready to run).
It is even possible for two or more processes to be running on different machines that may run different operating systems (OS), so some mechanisms for communication and synchronization (called communications protocols for distributed computing) are needed (e.g., the Message Passing Interface (MPI)).
By the early 1960s, computer control software had evolved from monitor control software, for example IBSYS, to executive control software. Over time, computers got faster while computer time was still neither cheap nor fully utilized; such an environment made multiprogramming possible and necessary. Multiprogramming means that several programs run concurrently. At first, more than one program ran on a single processor, as a result of underlying uniprocessor computer architecture, and they shared scarce and limited hardware resources; consequently, the concurrency was of a serial nature. On later systems with multiple processors, multiple programs may run concurrently in parallel.
Programs consist of sequences of instructions for processors. A single processor can run only one instruction at a time: it cannot run multiple programs simultaneously. A program might need some resource, such as an input device, which has a large delay, or a program might start some slow operation, such as sending output to a printer. This would lead to the processor being "idle" (unused). To keep the processor busy at all times, the execution of such a program is halted and the operating system switches the processor to run another program. To the user, it will appear that the programs run at the same time (hence the term "parallel").
Shortly thereafter, the notion of a "program" was expanded to the notion of an "executing program and its context". The concept of a process was born, which also became necessary with the invention of re-entrant code. Threads came somewhat later. However, with the advent of concepts such as time-sharing, computer networks, and multiple-CPU shared-memory computers, the old "multiprogramming" gave way to true multitasking, multiprocessing and, later, multithreading.
|
https://en.wikipedia.org/wiki/Process_(computing)
|
Rust is a general-purpose programming language emphasizing performance, type safety, and concurrency. It enforces memory safety, meaning that all references point to valid memory. It does so without a conventional garbage collector; instead, memory safety errors and data races are prevented by the "borrow checker", which tracks the object lifetime of references at compile time.
Rust does not enforce a programming paradigm, but was influenced by ideas from functional programming, including immutability, higher-order functions, algebraic data types, and pattern matching. It also supports object-oriented programming via structs, enums, traits, and methods. It is popular for systems programming.[13][14][15]
Software developer Graydon Hoare created Rust as a personal project while working at Mozilla Research in 2006. Mozilla officially sponsored the project in 2009. In the years following the first stable release in May 2015, Rust was adopted by companies including Amazon, Discord, Dropbox, Google (Alphabet), Meta, and Microsoft. In December 2022, it became the first language other than C and assembly to be supported in the development of the Linux kernel.
Rust has been noted for its rapid adoption, and has been studied in programming language theory research.
Rust began as a personal project by Mozilla employee Graydon Hoare in 2006.[16] Hoare has stated that Rust was named for the group of fungi that are "over-engineered for survival".[16] Between 2006 and 2009, Rust was not publicized to others at Mozilla and was written in Hoare's free time;[17]: 7:50 Hoare began speaking about the language around 2009 after a small group at Mozilla became interested in the project.[18] Hoare emphasized prioritizing good ideas from old languages over new development, citing languages including CLU (1974), BETA (1975), Mesa (1977), NIL (1981), Erlang (1987), Newsqueak (1988), Napier (1988), Hermes (1990), Sather (1990), Alef (1992), and Limbo (1996) as influences, stating "many older languages [are] better than new ones", and describing the language as "technology from the past come to save the future from itself."[17]: 8:17[18] Early Rust developer Manish Goregaokar similarly described Rust as being based on "mostly decades-old research."[16]
During the early years, the Rust compiler was written in about 38,000 lines of OCaml.[17]: 15:34[19] Early Rust contained features such as explicit object-oriented programming via an obj keyword (later removed),[17]: 10:08 and a typestate system that would allow variables of a type to be tracked along with state changes (such as going from uninitialized to initialized).[17]: 13:12
Mozilla officially sponsored the Rust project in 2009.[16] Brendan Eich and other executives, intrigued by the possibility of using Rust for a safe web browser engine, placed engineers on the project, including Patrick Walton, Niko Matsakis, Felix Klock, and Manish Goregaokar.[16] A conference room taken by the project developers was dubbed "the nerd cave", with a sign placed outside the door.[16]
During this time period, work had shifted from the initial OCaml compiler to a self-hosting compiler, i.e., one written in Rust, based on LLVM.[20][note 4] The Rust ownership system was also in place by 2010.[16] The Rust logo was developed in 2011, based on a bicycle chainring.[22]
The first public release, Rust 0.1 was released on January 20, 2012[23]for Windows, Linux, and MacOS.[24]The early 2010s saw increasing involvement from open source volunteers outside of Mozilla and outside of the United States. At Mozilla, executives would eventually employ over a dozen engineers to work on Rust full time over the next decade.[16]
The years from 2012 to 2015 were marked by substantial changes to the Rusttype system, especially, removal of the typestate system, consolidation of other language features, and the removal of thegarbage collector.[17]: 18:36[16]Memory management through the ownership system was gradually consolidated and expanded to prevent memory-related bugs. By 2013, the garbage collector feature was rarely used, and was removed by the team in favor of the ownership system.[16]Other changes during this time included the removal ofpure functions, which were declared by an explicitpureannotation, in March 2013.[25]Specialized syntax support forchannelsand various pointer types were removed to simplify the language.[17]: 22:32
Rust's expansion and consolidation was influenced by developers coming from C++ (e.g., low-level performance of features), scripting languages (e.g., Cargo and package management), and functional programming (e.g., type systems development).[17]: 30:50
Graydon Hoare stepped down from Rust in 2013.[16] This allowed the language to evolve organically under a more federated governance structure, with a "core team" of initially six people,[17]: 21:45 around 30–40 developers total across various other teams,[17]: 22:22 and a Request for Comments (RFC) process for new language features, added in March 2014.[17]: 33:47 The core team would grow to nine people by 2016,[17]: 21:45 with over 1,600 proposed RFCs.[17]: 34:08
According to Andrew Binstock, writing for Dr. Dobb's Journal in January 2014, while Rust was "widely viewed as a remarkably elegant language", adoption slowed because it changed radically from version to version.[26] Rust development at this time was focused on finalizing language features and moving towards 1.0 so that it could begin promising backward compatibility.[17]: 41:26
Six years after Mozilla sponsored its development, the first stable release, Rust 1.0, was published on May 15, 2015.[16] A year after the release, the Rust compiler had accumulated over 1,400 contributors, and there were over 5,000 third-party libraries published on the Rust package management website Crates.io.[17]: 43:15
The development of the Servo browser engine continued in parallel with Rust, jointly funded by Mozilla and Samsung.[27] The teams behind the two projects worked in close collaboration: new features in Rust were tested by the Servo team, and Servo development provided feedback to the Rust team.[17]: 5:41 The first version of Servo was released in 2016.[16] The Firefox web browser shipped with Rust code as of 2016 (version 45),[17]: 53:30 [28] but components of Servo did not appear in Firefox until September 2017 (version 57), as part of the Gecko and Quantum projects.[29]
Improvements were made to the Rust toolchain ecosystem in the years following 1.0, including Rustfmt, integrated development environment integration,[17]: 44:56 a regular compiler testing and release cycle,[17]: 46:48 a community code of conduct, and community discussion organized through an IRC chat.[17]: 50:36
The earliest adoption outside of Mozilla was by individual projects at Samsung, Facebook (now Meta Platforms), Dropbox, and others, including Tilde, Inc. (the company behind Ember.js).[17]: 55:44 [16] Amazon Web Services followed in 2020.[16] Engineers cited performance, the lack of a garbage collector, safety, and the pleasantness of working in the language as reasons for the adoption, while acknowledging that it was a risky bet, as Rust was new technology. Amazon developers cited the fact that Rust uses half as much electricity as similar code written in Java, behind only C,[16] as found by a study at the University of Minho, NOVA University Lisbon, and the University of Coimbra.[30][note 5]
In August 2020, Mozilla laid off 250 of its 1,000 employees worldwide as part of a corporate restructuring caused by the COVID-19 pandemic.[31][32] The team behind Servo was disbanded. The event raised concerns about the future of Rust, due to the overlap between the two projects.[33] In the following week, the Rust Core Team acknowledged the severe impact of the layoffs and announced that plans for a Rust foundation were underway. The first goal of the foundation would be to take ownership of all trademarks and domain names, and to take financial responsibility for their costs.[34]
On February 8, 2021, the formation of the Rust Foundation was announced by five founding companies: Amazon Web Services, Google, Huawei, Microsoft, and Mozilla.[35][36] The foundation, led by Shane Miller for its first two years, offered $20,000 grants and other support for programmers working on major Rust features.[16] In a blog post published on April 6, 2021, Google announced support for Rust within the Android Open Source Project as an alternative to C/C++.[37]
On November 22, 2021, the Moderation Team, which was responsible for enforcing the community code of conduct, announced their resignation "in protest of the Core Team placing themselves unaccountable to anyone but themselves".[38] In May 2022, the Rust Core Team, other lead programmers, and certain members of the Rust Foundation board implemented governance reforms in response to the incident.[39]
The Rust Foundation posted a draft for a new trademark policy on April 6, 2023, including rules for how the Rust logo and name can be used, which resulted in negative reactions from Rust users and contributors.[40]
On February 26, 2024, the U.S. White House released a 19-page press report urging software development to move to memory-safe programming languages; specifically, moving away from C and C++ and encouraging languages like C#, Go, Java, Ruby, Swift, and Rust.[41][42] The report was widely interpreted as increasing interest in Rust.[43][44] The report was released through the Office of the National Cyber Director.[41][45]
Rust's syntax is similar to that of C and C++,[46][47] although many of its features were influenced by functional programming languages such as OCaml.[48] Hoare has described Rust as targeted at frustrated C++ developers and emphasized features such as safety, control of memory layout, and concurrency.[18] Safety in Rust includes the guarantees of memory safety, type safety, and lack of data races.
Below is a "Hello, World!" program in Rust. The fn keyword denotes a function, and the println! macro (see § Macros) prints the message to standard output.[49] Statements in Rust are separated by semicolons.
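The canonical program looks like this:

```rust
fn main() {
    // println! is a macro, not an ordinary function; the
    // statement is terminated by a semicolon.
    println!("Hello, World!");
}
```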
Variables in Rust are defined through the let keyword.[50] The example below assigns a value to the variable named foo and outputs its value.
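A minimal sketch of that example (the variable name foo comes from the text; the value 10 is an arbitrary choice):

```rust
fn main() {
    let foo = 10; // the type i32 is inferred
    println!("The value of foo is {}", foo);
}
```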
Variables are immutable by default, but adding the mut keyword allows the variable to be mutated.[51] The following example uses //, which denotes the start of a comment.[52]
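A small illustration of mutation (the name bar and the values are arbitrary):

```rust
fn main() {
    let mut bar = 10; // mut makes the binding mutable
    println!("bar starts at {}", bar);
    bar = 20; // reassignment is allowed because of mut
    println!("bar is now {}", bar);
}
```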
Multiple let expressions can define multiple variables with the same name, known as variable shadowing. Variable shadowing allows transforming variables without having to name the variables differently.[53] The example below declares a new variable with the same name that is double the original value:
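A sketch of such doubling via shadowing:

```rust
fn main() {
    let x = 10;
    let x = x * 2; // shadows the earlier x; the old binding is no longer visible
    println!("{}", x); // prints 20
}
```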
Variable shadowing is also possible for values of different types. For example, going from a string to its length:
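A sketch of shadowing across types:

```rust
fn main() {
    let s = "hello";   // s is a &str
    let s = s.len();   // s is now a usize holding the length
    println!("{}", s); // prints 5
}
```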
A block expression is delimited by curly brackets. When the last expression inside a block does not end with a semicolon, the block evaluates to the value of that trailing expression:[54]
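For example:

```rust
fn main() {
    let y = {
        let x = 3;
        x + 1 // no semicolon: the block evaluates to this value
    };
    println!("{}", y); // prints 4
}
```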
Trailing expressions of function bodies are used as the return value:[55]
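A sketch of an implicit return (the function name add_two is an arbitrary choice):

```rust
fn add_two(x: i32) -> i32 {
    x + 2 // trailing expression; no `return` keyword needed
}

fn main() {
    println!("{}", add_two(3)); // prints 5
}
```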
An if conditional expression executes code based on whether the given value is true. else can be used for when the value evaluates to false, and else if can be used for combining multiple conditions.[56]
if and else blocks can evaluate to a value, which can then be assigned to a variable:[56]
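For instance:

```rust
fn main() {
    let x = 10;
    // Both branches must evaluate to the same type.
    let size = if x > 5 { "big" } else { "small" };
    println!("{}", size); // prints "big"
}
```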
while can be used to repeat a block of code while a condition is met.[57]
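For example:

```rust
fn main() {
    let mut i = 0;
    while i < 3 {
        println!("i = {}", i);
        i += 1; // the condition is re-checked before each iteration
    }
}
```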
For loops in Rust loop over the elements of a collection.[58] for expressions work over any iterator type.
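A sketch of a for loop over the inclusive range discussed in the following paragraph:

```rust
fn main() {
    // Iterates over the inclusive range 4, 5, ..., 10.
    for i in 4..=10 {
        println!("{}", i);
    }
}
```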
In the above code, 4..=10 is a value of type RangeInclusive, which implements the Iterator trait. The code within the curly braces is applied to each element returned by the iterator.
Iterators can be combined with functions over iterators such as map, filter, and sum. For example, the following adds up all numbers between 1 and 100 that are multiples of 3:
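One way to write this computation:

```rust
fn main() {
    // filter keeps multiples of 3; sum folds the remaining elements.
    let total: u32 = (1..=100).filter(|n| n % 3 == 0).sum();
    println!("{}", total); // 3 + 6 + ... + 99 = 1683
}
```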
More generally, the loop keyword allows repeating a portion of code until a break occurs. break may optionally exit the loop with a value. In the case of nested loops, labels denoted by 'label_name can be used to break an outer loop rather than the innermost loop.[59]
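A sketch of a labeled break that also yields a value:

```rust
fn main() {
    let mut count = 0;
    let result = 'outer: loop {
        loop {
            count += 1;
            if count == 5 {
                // breaks the *outer* loop, yielding a value
                break 'outer count * 2;
            }
        }
    };
    println!("{}", result); // prints 10
}
```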
The match and if let expressions can be used for pattern matching. For example, match can be used to double an optional integer value if present, and return zero otherwise:[60]
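A sketch of the match version (the function name double is an arbitrary choice):

```rust
fn double(x: Option<i32>) -> i32 {
    match x {
        Some(n) => n * 2, // the pattern binds the inner value to n
        None => 0,
    }
}

fn main() {
    println!("{}", double(Some(4))); // prints 8
    println!("{}", double(None));    // prints 0
}
```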
Equivalently, this can be written with if let and else:
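The equivalent if let form:

```rust
fn double(x: Option<i32>) -> i32 {
    if let Some(n) = x {
        n * 2
    } else {
        0
    }
}

fn main() {
    println!("{}", double(Some(4))); // prints 8
}
```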
Rust is strongly typed and statically typed, meaning that the types of all variables must be known at compilation time. Assigning a value of a particular type to a differently typed variable causes a compilation error. Type inference is used to determine the type of variables if unspecified.[61]
The default integer type is i32, and the default floating-point type is f64. If the type of a literal number is not explicitly provided, it is either inferred from the context or the default type is used.[62]
Integer types in Rust are named based on their signedness and the number of bits the type takes. For example, i32 is a signed integer that takes 32 bits of storage, whereas u8 is unsigned and only takes 8 bits of storage. isize and usize take storage depending on the architecture of the computer that runs the code; for example, on computers with 32-bit architectures, both types take up 32 bits of space.
By default, integer literals are in base 10, but different radices are supported with prefixes, for example, 0b11 for binary numbers, 0o567 for octals, and 0xDB for hexadecimals. Integer literals default to i32 as their type; suffixes such as 4u32 can be used to set the type of a literal explicitly.[63] Byte literals such as b'X' are available to represent the ASCII value (as a u8) of a specific character.[64]
The Boolean type is referred to as bool, which can take a value of either true or false. A char takes up 32 bits of space and represents a Unicode scalar value: a Unicode code point that is not a surrogate.[65] IEEE 754 floating-point numbers are supported, with f32 for single-precision floats and f64 for double-precision floats.[66]
User-defined types are created with the struct or enum keywords. The struct keyword denotes a record type that groups multiple related values.[67] enums can take on different variants at runtime, with capabilities similar to algebraic data types found in functional programming languages.[68] Both records and enum variants can contain fields with different types.[69] Alternative names, or aliases, for the same type can be defined with the type keyword.[70]
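A sketch showing all three forms (the names Point, Shape, and Distance are hypothetical):

```rust
// A record type grouping related values.
struct Point {
    x: f64,
    y: f64,
}

// An enum whose variants carry different fields.
enum Shape {
    Circle { center: Point, radius: f64 },
    Line { from: Point, to: Point },
}

// An alias: Distance is just another name for f64.
type Distance = f64;

fn main() {
    let shape = Shape::Circle { center: Point { x: 0.0, y: 0.0 }, radius: 1.0 };
    let d: Distance = 2.5;
    if let Shape::Circle { radius, .. } = shape {
        println!("circle with radius {} and d = {}", radius, d);
    }
}
```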
The impl keyword can define methods for a user-defined type. Data and functions are defined separately. Implementations fulfill a role similar to that of classes within other languages.[71]
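A sketch of an impl block (the Rectangle type is a hypothetical example):

```rust
struct Rectangle {
    width: u32,
    height: u32,
}

impl Rectangle {
    // An associated function, called as Rectangle::new(...).
    fn new(width: u32, height: u32) -> Self {
        Rectangle { width, height }
    }

    // A method; &self borrows the instance immutably.
    fn area(&self) -> u32 {
        self.width * self.height
    }
}

fn main() {
    let r = Rectangle::new(3, 4);
    println!("{}", r.area()); // prints 12
}
```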
Option values are handled using syntactic sugar, such as the if let construction, to access the inner value (in this case, a string):[86]
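A sketch of this pattern (the variable name and string are arbitrary):

```rust
fn main() {
    let name: Option<String> = Some(String::from("Rustacean"));
    // if let unwraps the inner string only when a value is present.
    if let Some(n) = name {
        println!("Hello, {}!", n);
    } else {
        println!("No name given");
    }
}
```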
Rust does not use null pointers to indicate a lack of data, as doing so can lead to null dereferencing. Accordingly, the basic & and &mut references are guaranteed not to be null. Rust instead uses Option for this purpose: Some(T) indicates that a value is present, and None is analogous to the null pointer.[87] Option implements a "null pointer optimization", avoiding any spatial overhead for types that cannot have a null value (references or the NonZero types, for example).[88]
Unlike references, the raw pointer types *const and *mut may be null; however, it is impossible to dereference them unless the code is explicitly declared unsafe through the use of an unsafe block. Unlike dereferencing, the creation of raw pointers is allowed inside of safe Rust code.[89]
Rust provides no implicit type conversion (coercion) between primitive types, but explicit type conversion (casting) can be performed using the as keyword.[90]
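For example:

```rust
fn main() {
    let x: i32 = 300;
    let narrowed = x as u8;  // truncates: 300 wraps to 44
    let widened = x as f64;  // exact conversion to floating point
    println!("{} {}", narrowed, widened);
}
```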
Rust's ownership system consists of rules that ensure memory safety without using a garbage collector. At compile time, each value must be attached to a variable called the owner of that value, and every value must have exactly one owner.[91] Values are moved between different owners through assignment or by passing a value as a function parameter. Values can also be borrowed, meaning they are temporarily passed to a different function before being returned to the owner.[92] With these rules, Rust can prevent the creation and use of dangling pointers:[92][93]
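A sketch of ownership and borrowing; the rejected function in the comment illustrates the kind of dangling pointer the compiler refuses:

```rust
fn main() {
    let s = String::from("hello"); // s owns the string
    let borrowed = &s;             // the value is borrowed, not moved
    println!("{}", borrowed);      // the borrow ends after its last use
    drop(s);                       // the owner deallocates the value

    // A function like the following would NOT compile: it tries to
    // return a reference to a value owned by the function itself,
    // which would become a dangling pointer when the value is dropped.
    //
    // fn dangle() -> &String {
    //     let local = String::from("oops");
    //     &local // error: returns a reference to data owned by the function
    // }
}
```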
Because of these ownership rules, Rust types are known as linear or affine types, meaning each value can be used exactly once. This enforces a form of software fault isolation, as the owner of a value is solely responsible for its correctness and deallocation.[94]
When a value goes out of scope, it is dropped by running its destructor. The destructor may be programmatically defined by implementing the Drop trait. This helps manage resources such as file handles, network sockets, and locks, since when objects are dropped, the resources associated with them are closed or released automatically.[95]
Object lifetime refers to the period of time during which a reference is valid; that is, the time between the object's creation and destruction.[96] These lifetimes are implicitly associated with all Rust reference types. While often inferred, they can also be indicated explicitly with named lifetime parameters (often denoted 'a, 'b, and so on).[97]
Lifetimes in Rust can be thought of as lexically scoped, meaning that the duration of an object lifetime is inferred from the set of locations in the source code (i.e., function, line, and column numbers) for which a variable is valid.[98] For example, a reference to a local variable has a lifetime corresponding to the block it is defined in:[98]
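A sketch of the valid case, with the lifetimes marked in comments:

```rust
fn main() {
    let x = 5;            // 'a: x is valid from here to the end of main
    let r = &x;           // 'b: r borrows x; 'b is contained within 'a
    println!("r: {}", r); // the reference is used while both are alive
}
```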
The borrow checker in the Rust compiler then enforces that references are only used in the locations of the source code where the associated lifetime is valid.[99][100] In the example above, storing a reference to variable x in r is valid, as variable x has a longer lifetime ('a) than variable r ('b). However, when x has a shorter lifetime, the borrow checker would reject the program:
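A reconstruction of the rejected program; the final use of the reference is commented out here so the listing compiles, but restoring it makes the borrow checker report that x does not live long enough:

```rust
fn main() {
    let r;                    // 'a: r lives until the end of main
    {
        let x = 5;            // 'b: x lives only inside this block
        r = &x;
        println!("r: {}", r); // fine: the borrow is used while x is alive
    }                         // x is dropped here
    // println!("r: {}", r); // error[E0597]: `x` does not live long enough
}
```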
Since the lifetime of the referenced variable ('b) is shorter than the lifetime of the variable holding the reference ('a), the borrow checker raises an error, preventing x from being used from outside its scope.[101]
Lifetimes can be indicated using explicit lifetime parameters on function arguments. For example, the following code specifies that the reference returned by the function has the same lifetime as original (and not necessarily the same lifetime as prefix):[102]
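A sketch of such a function; the parameter names original and prefix come from the text, while the function name remove_prefix and its behavior are assumptions:

```rust
// Returns the rest of `original` after `prefix` (or all of `original`
// if it does not start with it). The lifetime parameter 'a ties the
// returned reference to `original` only, not to `prefix`.
fn remove_prefix<'a>(original: &'a str, prefix: &str) -> &'a str {
    original.strip_prefix(prefix).unwrap_or(original)
}

fn main() {
    println!("{}", remove_prefix("rustacean", "rust")); // prints "acean"
}
```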
When user-defined types hold references to data, they also need to use lifetime parameters. The example below parses some configuration options from a string and creates a struct containing the options. The function parse_config also showcases lifetime elision, which reduces the need for explicitly defining lifetime parameters.[103]
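A sketch of such a parser; the function name parse_config comes from the text, while the Config struct, its fields, and the input format are assumptions:

```rust
// The struct borrows slices of the input, so it needs a lifetime parameter.
struct Config<'a> {
    hostname: &'a str,
    port: &'a str,
}

// Lifetime elision: with a single reference argument, the compiler
// infers that the returned Config borrows from `config`.
fn parse_config(config: &str) -> Config {
    let mut parts = config.split(':');
    Config {
        hostname: parts.next().unwrap_or(""),
        port: parts.next().unwrap_or(""),
    }
}

fn main() {
    let c = parse_config("localhost:8080");
    println!("{} {}", c.hostname, c.port);
}
```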
In the compiler, ownership and lifetimes work together to prevent memory safety issues such as dangling pointers.[104][105]
Rust's more advanced features include the use of generic functions. A generic function is given generic parameters, which allow the same function to be applied to different variable types. This capability reduces duplicate code[106] and is known as parametric polymorphism.
The following program calculates the sum of two things, for which addition is implemented using a generic function:
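A sketch of such a generic sum (the trait bound on Add is what makes + available for T):

```rust
use std::ops::Add;

// T can be any type that supports + with itself, producing a T.
fn sum<T: Add<Output = T>>(a: T, b: T) -> T {
    a + b
}

fn main() {
    println!("{}", sum(1, 2));     // instantiated for i32
    println!("{}", sum(1.5, 2.5)); // instantiated for f64
}
```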
At compile time, polymorphic functions like sum are instantiated with the specific types the code requires; in this case, sum of integers and sum of floats.
Generics can be used in functions to allow implementing a behavior for different types without repeating the same code. Generic functions can be written in relation to other generics, without knowing the actual type.[107]
Rust's type system supports a mechanism called traits, inspired by type classes in the Haskell language,[6] to define shared behavior between different types. For example, the Add trait can be implemented for floats and integers, which can be added; and the Display or Debug traits can be implemented for any type that can be converted to a string. Traits can be used to provide a set of common behavior for different types without knowing the actual type. This facility is known as ad hoc polymorphism.
Generic functions can constrain the generic type to implement a particular trait or traits; for example, an add_one function might require the type to implement Add. This means that a generic function can be type-checked as soon as it is defined. The implementation of generics is similar to the typical implementation of C++ templates: a separate copy of the code is generated for each instantiation. This is called monomorphization and contrasts with the type erasure scheme typically used in Java and Haskell. Type erasure is also available via the keyword dyn (short for dynamic).[108] Because monomorphization duplicates the code for each type used, it can result in more optimized code for specific use cases, but compile time and the size of the output binary are also increased.[109]
In addition to defining methods for a user-defined type, the impl keyword can be used to implement a trait for a type.[71] Traits can provide additional derived methods when implemented.[110] For example, the trait Iterator requires that the next method be defined for the type. Once the next method is defined, the trait can provide common functional helper methods over the iterator, such as map or filter.[111]
Rust traits are implemented using static dispatch, meaning that the types of all values are known at compile time; however, Rust also uses a feature known as trait objects to accomplish dynamic dispatch, a type of polymorphism where the implementation of a polymorphic operation is chosen at runtime. This allows for behavior similar to duck typing, where all data types that implement a given trait can be treated as functionally equivalent.[112] Trait objects are declared using the syntax dyn Tr, where Tr is a trait. Trait objects are dynamically sized; therefore, they must be put behind a pointer, such as Box.[113] The following example creates a list of objects where each object can be printed out using the Display trait:
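A sketch of such a heterogeneous list of trait objects:

```rust
use std::fmt::Display;

fn main() {
    // Each Box<dyn Display> erases the concrete type; the right
    // Display implementation is selected at runtime.
    let items: Vec<Box<dyn Display>> = vec![
        Box::new(42),
        Box::new("hello"),
        Box::new(2.5),
    ];
    for item in &items {
        println!("{}", item);
    }
}
```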
If an element in the list does not implement theDisplaytrait, it will cause a compile-time error.[114]
Rust is designed to be memory safe. It does not permit null pointers, dangling pointers, or data races.[115][116][117][118] Data values can be initialized only through a fixed set of forms, all of which require their inputs to be already initialized.[119]
Unsafe code can subvert some of these restrictions, using the unsafe keyword.[89] Unsafe code may also be used for low-level functionality, such as volatile memory access, architecture-specific intrinsics, type punning, and inline assembly.[120]
Rust does not use garbage collection. Memory and other resources are instead managed through the "resource acquisition is initialization" convention,[121] with optional reference counting. Rust provides deterministic management of resources, with very low overhead.[122] Values are allocated on the stack by default, and all dynamic allocations must be explicit.[123]
The built-in reference types using the & symbol do not involve run-time reference counting. The safety and validity of the underlying pointers is verified at compile time, preventing dangling pointers and other forms of undefined behavior.[124] Rust's type system separates shared, immutable references of the form &T from unique, mutable references of the form &mut T. A mutable reference can be coerced to an immutable reference, but not vice versa.[125]
Macros allow generation and transformation of Rust code to reduce repetition. Macros come in two forms: declarative macros, defined through macro_rules!, and procedural macros, which are defined in separate crates.[126][127]
A declarative macro (also called a "macro by example") is a macro, defined using the macro_rules! keyword, that uses pattern matching to determine its expansion.[128][129] Below is an example that sums over all its arguments:
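One way to write such a macro:

```rust
// A recursive declarative macro: each expansion peels off one argument.
macro_rules! sum {
    () => { 0 };
    ($head:expr $(, $tail:expr)*) => {
        $head + sum!($($tail),*)
    };
}

fn main() {
    println!("{}", sum!(1, 2, 3)); // expands to 1 + 2 + 3 + 0
}
```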
Procedural macros are Rust functions that run and modify the compiler's input token stream, before any other components are compiled. They are generally more flexible than declarative macros, but are more difficult to maintain due to their complexity.[130][131]
Procedural macros come in three flavors: function-like macros (invoked as custom!(...)), derive macros (#[derive(CustomDerive)]), and attribute macros (#[custom_attribute]).
Rust has a foreign function interface (FFI) that can be used both to call code written in languages such as C from Rust and to call Rust code from those languages. As of 2024, an external library called CXX exists for calling to or from C++.[132] Rust and C differ in how they lay out structs in memory, so Rust structs may be given a #[repr(C)] attribute, forcing the same layout as the equivalent C struct.[133]
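A sketch of a C-compatible struct; the CPoint name, the C declaration, and the commented extern function are hypothetical:

```rust
// #[repr(C)] forces C-compatible field ordering and padding, so this
// struct can be passed across an FFI boundary to C code declaring
// `struct Point { int32_t x; int32_t y; };` (hypothetical C type).
#[repr(C)]
struct CPoint {
    x: i32,
    y: i32,
}

// A matching C function could be declared like this (the symbol name
// is illustrative; linking requires a real C library providing it):
// extern "C" { fn point_magnitude(p: CPoint) -> f64; }

fn main() {
    let p = CPoint { x: 3, y: 4 };
    println!("{} {}", p.x, p.y);
}
```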
The Rust ecosystem includes its compiler, its standard library, and additional components for software development. Component installation is typically managed by rustup, a Rust toolchain installer developed by the Rust project.[134]
The Rust compiler, rustc, translates Rust code into low-level LLVM IR. LLVM is then invoked as a subcomponent to apply optimizations and translate the resulting IR into object code. A linker is then used to combine the objects into a single executable image or binary file.[135]
Other than LLVM, the compiler also supports alternative backends such as GCC and Cranelift for code generation.[136] The intention of these alternative backends is to increase the platform coverage of Rust or to improve compilation times.[137][138]
The Rust standard library defines and implements many widely used custom data types, including core data structures such as Vec, Option, and HashMap, as well as smart pointer types. Rust also provides a way to exclude most of the standard library using the attribute #![no_std]; this enables applications, such as those on embedded devices, that want to remove dependency code or provide their own core data structures. Internally, the standard library is divided into three parts, core, alloc, and std, where std and alloc are excluded by #![no_std].[139]
Cargo is Rust's build system and package manager. It downloads, compiles, distributes, and uploads packages—called crates—that are maintained in an official registry. It also acts as a front end for Clippy and other Rust components.[140]
By default, Cargo sources its dependencies from the user-contributed registry crates.io, but Git repositories, crates in the local filesystem, and other external sources can also be specified as dependencies.[141]
Rustfmt is a code formatter for Rust. It formats whitespace and indentation to produce code in accordance with a common style, unless otherwise specified. It can be invoked as a standalone program or from a Rust project through Cargo.[142]
Clippy is Rust's built-in linting tool to improve the correctness, performance, and readability of Rust code. As of 2024, it has more than 700 rules.[143][144]
Following Rust 1.0, new features are developed in nightly versions, which are released daily. During each six-week release cycle, changes to nightly versions are released to beta, while changes from the previous beta version are released to a new stable version.[145]
Every two or three years, a new "edition" is produced. Editions are released to allow making limited breaking changes, such as promoting await to a keyword to support async/await features. Crates targeting different editions can interoperate with each other, so a crate can upgrade to a new edition even if its callers or its dependencies still target older editions. Migration to a new edition can be assisted with automated tooling.[146]
rust-analyzer is a collection of utilities that provides integrated development environments (IDEs) and text editors with information about a Rust project through the Language Server Protocol. This enables features such as autocompletion and the display of compilation errors while editing.[147]
Since it performs no garbage collection, Rust is often faster than other memory-safe languages.[148][94][149] Most of Rust's memory safety guarantees impose no runtime overhead,[150] with the exception of array indexing, which is checked at runtime by default.[151] The performance impact of array-indexing bounds checks varies, but can be significant in some cases.[151]
Rust provides two "modes": safe and unsafe. Safe mode is the "normal" one, in which most Rust is written. In unsafe mode, the developer is responsible for the code's memory safety; developers use it for cases where the compiler is too restrictive.[152]
Many of Rust's features are so-called zero-cost abstractions, meaning they are optimized away at compile time and incur no runtime penalty.[153] The ownership and borrowing system permits zero-copy implementations for some performance-sensitive tasks, such as parsing.[154] Static dispatch is used by default, except for methods called on dynamic trait objects.[155] The compiler also uses inline expansion to eliminate function calls and statically dispatched method invocations.[156]
Since Rust uses LLVM, all performance improvements in LLVM apply to Rust as well.[157] Unlike C and C++, Rust allows reordering of struct and enum elements[158] to reduce the sizes of structures in memory, for better memory alignment, and to improve cache access efficiency.[133]
Rust is used in software across different domains. Components from the Servo browser engine (funded by Mozilla and Samsung) were incorporated into the Gecko browser engine underlying Firefox.[159] In January 2023, Google (Alphabet) announced support for using third-party Rust libraries in Chromium.[160][161]
Rust is used in several backend software projects of large web services. OpenDNS, a DNS resolution service owned by Cisco, uses Rust internally.[162][163] Amazon Web Services uses Rust in "performance-sensitive components" of several of its services. In 2019, AWS open-sourced Firecracker, a virtualization solution primarily written in Rust.[164] Microsoft Azure IoT Edge, a platform used to run Azure services on IoT devices, has components implemented in Rust.[165] Microsoft also uses Rust to run containerized modules with WebAssembly and Kubernetes.[166] Cloudflare, a company providing content delivery network services, used Rust to build a new web proxy named Pingora for increased performance and efficiency.[167] The npm package manager used Rust for its production authentication service in 2019.[168][169][170]
In operating systems, the Rust for Linux project, launched in 2020, merged initial support into the Linux kernel version 6.1 in late 2022.[171][172][173] The project is active, with a team of 6–7 developers, and has added more Rust code with kernel releases from 2022 to 2024,[174] aiming to demonstrate the minimum viability of the project and resolve key compatibility blockers.[171][175] The first drivers written in Rust were merged into the kernel for version 6.8.[171] The Android developers used Rust in 2021 to rewrite existing components.[176][177] Microsoft has rewritten parts of Windows in Rust.[178] The r9 project aims to re-implement Plan 9 from Bell Labs in Rust.[179] Rust has been used in the development of new operating systems such as Redox, a "Unix-like" operating system and microkernel,[180] Theseus, an experimental operating system with modular state management,[181][182] and most of Fuchsia.[183] Rust is also used for command-line tools and operating system components, including stratisd, a file system manager,[184][185] and COSMIC, a desktop environment by System76.[186]
In web development, Deno, a secure runtime for JavaScript and TypeScript, is built on top of V8 using Rust and Tokio.[187] Other notable adoptions in this space include Ruffle, an open-source SWF emulator,[188] and Polkadot, an open-source blockchain and cryptocurrency platform.[189]
Discord, an instant messaging software company, rewrote parts of its system in Rust for increased performance in 2020. In the same year, Dropbox announced that its file synchronization engine had been rewritten in Rust. Facebook (Meta) used Rust to redesign the system that manages source code for internal projects.[16]
In the 2024 Stack Overflow Developer Survey, 12.6% of respondents had recently done extensive development in Rust.[190] The survey named Rust the "most admired programming language" every year from 2016 to 2024 (inclusive), based on the number of existing developers interested in continuing to work in the same language.[191][note 7] In 2024, Rust was the sixth "most wanted technology", with 28.7% of developers not currently working in Rust expressing an interest in doing so.[190]
Rust has been studied in academic research, both for properties of the language itself and for the utility the language provides for writing software used in research. Its features around safety[192][152] and performance[193] have been examined.
In a journal article published in Proceedings of the International Astronomical Union, astrophysicists Blanco-Cuaresma and Bolmont re-implemented programs responsible for simulating multi-planet systems in Rust, and found it to be a competitive programming language for its "speed and accuracy".[14] Likewise, an article published in Nature shared several stories of bioinformaticians using Rust for its performance and safety.[140] However, both articles cited Rust's unique concepts, including its ownership system, as difficult to learn and one of the main drawbacks to adopting Rust.
According to one MIT Technology Review article, the Rust community was seen as "unusually friendly" to newcomers[16] and particularly attracted people from the queer community, partly due to its code of conduct, which outlined a set of expectations for Rust community members to follow.[16] Inclusiveness of the community has been cited as an important factor for some Rust developers.[140] Demographic data on the community has been collected and published by the Rust official blog.[196]
The Rust Foundation is a non-profit membership organization incorporated in the United States, with the primary purposes of backing the technical project as a legal entity and helping to manage the trademark and infrastructure assets.[197][47]
It was established on February 8, 2021, with five founding corporate members (Amazon Web Services, Huawei, Google, Microsoft, and Mozilla).[198]The foundation's board is chaired by Shane Miller.[199]Starting in late 2021, its Executive Director and CEO is Rebecca Rumbul.[200]Prior to this, Ashley Williams was interim executive director.[47]
The Rust project is composed of teams that are responsible for different subareas of the development. The compiler team develops, manages, and optimizes compiler internals, and the language team designs new language features and helps implement them. The Rust project website lists six top-level teams as of July 2024.[201] Representatives among teams form the Leadership Council, which oversees the Rust project as a whole.[202]
https://en.wikipedia.org/wiki/Rust_(programming_language)