In the 2020s, GPUs have been increasingly used for calculations involving embarrassingly parallel problems, such as training neural networks on the enormous datasets needed for large language models. Specialized processing cores on some modern workstation GPUs are dedicated to deep learning, since they offer significant FLOPS increases via 4×4 matrix multiply–accumulate operations, yielding hardware performance of up to 128 TFLOPS in some applications. These tensor cores are expected to appear in consumer cards as well.
Many companies have produced GPUs under a number of brand names. In 2009, Intel, Nvidia, and AMD/ATI were the market share leaders, with 49.4%, 27.8%, and 20.6% market share respectively. In addition, Matrox produces GPUs. Modern smartphones use mostly Adreno GPUs from Qualcomm, PowerVR GPUs from Imagination Technologies, and Mali GPUs from ARM.
Modern GPUs have traditionally used most of their transistors to do calculations related to 3D computer graphics. In addition to the 3D hardware, today's GPUs include basic 2D acceleration and framebuffer capabilities. Newer cards, such as the AMD/ATI HD5000–HD7000 series, lack dedicated 2D acceleration; it is emulated by the 3D hardware. GPUs were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons. Later, units were added to accelerate geometric calculations such as the rotation and translation of vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders, which can manipulate vertices and textures with many of the same operations supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces.
Several factors of GPU construction affect the performance of the card for real-time rendering, such as the size of the connector pathways in the semiconductor device fabrication, the clock signal frequency, and the number and size of various on-chip memory caches. Performance is also affected by the number of streaming multiprocessors (SMs) for Nvidia GPUs, compute units (CUs) for AMD GPUs, or Xe cores for Intel discrete GPUs: the on-silicon processor units within the GPU chip that perform the core calculations, typically working in parallel with the other SMs/CUs on the GPU. GPU performance is typically measured in floating-point operations per second (FLOPS); GPUs in the 2010s and 2020s typically deliver performance measured in teraflops (TFLOPS). This is an estimated performance measure, as other factors can affect the actual display rate.
Most GPUs made since 1995 support the YUV color space and hardware overlays, important for digital video playback, and many GPUs made since 2000 also support MPEG primitives such as motion compensation and iDCT. This hardware-accelerated video decoding, in which portions of the video decoding process and video post-processing are offloaded to the GPU hardware, is commonly referred to as "GPU accelerated video decoding", "GPU assisted video decoding", "GPU hardware accelerated video decoding", or "GPU hardware assisted video decoding".
Recent graphics cards decode high-definition video on the card, offloading the central processing unit. The most common APIs for GPU-accelerated video decoding are DxVA for Microsoft Windows operating systems and VDPAU, VAAPI, XvMC, and XvBA for Linux-based and UNIX-like operating systems. All except XvMC are capable of decoding videos encoded with the MPEG-1, MPEG-2, MPEG-4 ASP, MPEG-4 AVC, VC-1, WMV3/WMV9, Xvid/OpenDivX, and DivX 5 codecs, while XvMC can only decode MPEG-1 and MPEG-2.
There are several dedicated hardware video decoding and encoding solutions.
Video decoding processes that can be accelerated by modern GPU hardware are:
These operations also have applications in video editing, encoding, and transcoding.
An earlier GPU may support one or more 2D graphics APIs for 2D acceleration, such as GDI and DirectDraw.
A GPU can support one or more 3D graphics APIs, such as DirectX, Metal, OpenGL, OpenGL ES, and Vulkan.
In the 1970s, the term "GPU" originally stood for graphics processor unit and described a programmable processing unit working independently from the CPU that was responsible for graphics manipulation and output. In 1994, Sony used the term in reference to the PlayStation console's Toshiba-designed Sony GPU. The term was popularized by Nvidia in 1999, which marketed the GeForce 256 as "the world's first GPU". It was presented as a "single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines". Rival ATI Technologies coined the term "visual processing unit" or VPU with the release of the Radeon 9700 in 2002. Released in 2023, the AMD Alveo MA35D features dual VPUs, each built on a 5 nm process.
In personal computers, there are two main forms of GPUs. Each has many synonyms:
Most GPUs are designed for a specific use, such as real-time 3D graphics or other mass calculations:
Gaming
GeForce GTX, RTX
Nvidia Titan
Radeon HD, R5, R7, R9, RX, Vega and Navi series
Radeon VII
Intel Arc
Cloud Gaming
Nvidia GRID
Radeon Sky
Workstation
Nvidia Quadro
Nvidia RTX
AMD FirePro
AMD Radeon Pro
Intel Arc Pro
Cloud Workstation
Nvidia Tesla
AMD FireStream
Artificial Intelligence training and Cloud
Nvidia Tesla
AMD Radeon Instinct
Automated/Driverless car
Nvidia Drive PX
Dedicated graphics processing units are not necessarily removable, nor do they necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that graphics cards have RAM dedicated to the card's use, not to the fact that most dedicated GPUs are removable. This RAM is usually specially selected for the expected serial workload of the graphics card. Sometimes systems with dedicated discrete GPUs were called "DIS" systems, as opposed to "UMA" systems. Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered PCIe or AGP in terms of their logical host interface, even if they are not physically interchangeable with their counterparts.
Graphics cards with dedicated GPUs typically interface with the motherboard by means of an expansion slot such as PCI Express or Accelerated Graphics Port. They can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. A few graphics cards still use Peripheral Component Interconnect slots, but their bandwidth is so limited that they are generally used only when a PCIe or AGP slot is not available.
Technologies such as SLI and NVLink by Nvidia and CrossFire by AMD allow multiple GPUs to draw images simultaneously for a single screen, increasing the processing power available for graphics. These technologies, however, are increasingly uncommon; most games do not fully use multiple GPUs, as most users cannot afford them. Multiple GPUs are still used on supercomputers, on workstations to accelerate video and 3D rendering, for VFX, GPGPU workloads, and simulations, and in AI to expedite training, as is the case with Nvidia's lineup of DGX workstations and servers, Tesla GPUs, and Intel's Ponte Vecchio GPUs.
Integrated graphics processing units (IGPs), also called integrated graphics, shared graphics solutions, integrated graphics processors, or unified memory architecture (UMA), use a portion of a computer's system RAM rather than dedicated graphics memory. IGPs can be integrated onto a motherboard as part of its northbridge chipset, or on the same die with the CPU. On certain motherboards, AMD's IGPs can use dedicated sideport memory: a separate fixed block of high-performance memory that is dedicated for use by the GPU. As of early 2007, computers with integrated graphics accounted for about 90% of all PC shipments. They are less costly to implement than dedicated graphics processing, but tend to be less capable. Historically, integrated processing was considered unfit for 3D games or graphically intensive programs but could run less intensive programs such as Adobe Flash. Examples of such IGPs would be offerings from SiS and VIA circa 2004. However, modern integrated graphics processors such as AMD's Accelerated Processing Unit and Intel Graphics Technology can handle 2D graphics or low-stress 3D graphics.
Since GPU computations are memory-intensive, integrated processing may compete with the CPU for relatively slow system RAM, as it has minimal or no dedicated video memory. IGPs use system memory with bandwidth up to a current maximum of 128 GB/s, whereas a discrete graphics card may have a bandwidth of more than 1000 GB/s between its VRAM and GPU core. This memory bus bandwidth can limit the performance of the GPU, though multi-channel memory can mitigate this deficiency. Older integrated graphics chipsets lacked hardware transform and lighting, but newer ones include it.
On systems with "Unified Memory Architecture" (UMA), including modern AMD processors with integrated graphics, modern Intel processors with integrated graphics, Apple processors, the PS5, and the Xbox Series consoles, the CPU cores and the GPU block share the same pool of RAM and the same memory address space. This allows the system to dynamically allocate memory between the CPU cores and the GPU block based on memory needs, and, thanks to zero-copy transfers, removes the need either to copy data over a bus between physically separate RAM pools or to copy between separate address spaces on a single physical pool of RAM, allowing more efficient transfer of data.
Hybrid GPUs compete with integrated graphics in the low-end desktop and notebook markets. The most common implementations of this are ATI's HyperMemory and Nvidia's TurboCache.
Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. They share memory with the system and have a small dedicated memory cache, to make up for the high latency of the system RAM. Technologies within PCI Express make this possible. While these solutions are sometimes advertised as having as much as 768 MB of RAM, this refers to how much can be shared with the system memory.
It is common to use a general-purpose graphics processing unit (GPGPU) as a modified form of stream processor, running compute kernels. This turns the massive computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete GPU designers, AMD and Nvidia, are pursuing this approach with an array of applications. Both Nvidia and AMD teamed with Stanford University to create a GPU-based client for the Folding@home distributed computing project for protein folding calculations. In certain circumstances, the GPU calculates forty times faster than the CPUs traditionally used by such applications.
GPGPUs can be used for many types of embarrassingly parallel tasks including ray tracing. They are generally suited to high-throughput computations that exhibit data-parallelism to exploit the wide vector width SIMD architecture of the GPU.
GPU-based high performance computers play a significant role in large-scale modelling. Three of the ten most powerful supercomputers in the world take advantage of GPU acceleration.
GPUs support API extensions to the C programming language such as OpenCL and OpenMP. Furthermore, each GPU vendor introduced its own API which only works with their cards: AMD APP SDK from AMD, and CUDA from Nvidia. These allow functions called compute kernels to run on the GPU's stream processors. This makes it possible for C programs to take advantage of a GPU's ability to operate on large buffers in parallel, while still using the CPU when appropriate. CUDA was the first API to allow CPU-based applications to directly access the resources of a GPU for more general purpose computing without the limitations of using a graphics API.
Since 2005 there has been interest in using the performance offered by GPUs for evolutionary computation in general, and for accelerating the fitness evaluation in genetic programming in particular. Most approaches compile linear or tree programs on the host PC and transfer the executable to the GPU to be run. Typically a performance advantage is only obtained by running the single active program simultaneously on many example problems in parallel, using the GPU's SIMD architecture. However, substantial acceleration can also be obtained by not compiling the programs, and instead transferring them to the GPU, to be interpreted there. Acceleration can then be obtained by either interpreting multiple programs simultaneously, simultaneously running multiple example problems, or combinations of both. A modern GPU can simultaneously interpret hundreds of thousands of very small programs.
An external GPU is a graphics processor located outside of the housing of the computer, similar to a large external hard drive. External graphics processors are sometimes used with laptop computers. Laptops might have a substantial amount of RAM and a sufficiently powerful central processing unit, but often lack a powerful graphics processor, and instead have a less powerful but more energy-efficient on-board graphics chip. On-board graphics chips are often not powerful enough for playing video games, or for other graphically intensive tasks, such as editing video or 3D animation/rendering.
Therefore, it is desirable to attach a GPU to some external bus of a notebook. PCI Express is the only bus used for this purpose. The port may be, for example, an ExpressCard or mPCIe port, a Thunderbolt 1, 2, or 3 port, or an OCuLink port. Those ports are only available on certain notebook systems. eGPU enclosures include their own power supply, because powerful GPUs can consume hundreds of watts.
Official vendor support for external GPUs has gained traction. A milestone was Apple's decision to support external GPUs in macOS High Sierra 10.13.4. Several major hardware vendors released Thunderbolt 3 eGPU enclosures. This support fuels eGPU implementations by enthusiasts.
With modern GPUs, energy usage is an important constraint on the maximum computational capabilities that can be achieved. GPU designs are usually highly scalable, allowing the manufacturer to put multiple chips on the same video card, or to use multiple video cards that work in parallel. Peak performance of any system is essentially limited by the amount of power it can draw and the amount of heat it can dissipate. Consequently, performance per watt of a GPU design translates directly into peak performance of a system that uses that design.
In 2013, 438.3 million GPUs were shipped globally and the forecast for 2014 was 414.2 million. However, by the third quarter of 2022, shipments of integrated GPUs totaled around 75.5 million units, down 19% year-over-year.
While a superscalar CPU is typically also pipelined, superscalar and pipelining execution are considered different performance enhancement techniques. The former executes multiple instructions in parallel by using multiple execution units, whereas the latter executes multiple instructions in the same execution unit in parallel by dividing the execution unit into different phases.
The superscalar technique is traditionally associated with several identifying characteristics:
Seymour Cray's CDC 6600 from 1964 is often mentioned as the first superscalar design. The 1967 IBM System/360 Model 91 was another superscalar mainframe. The Intel i960CA, the AMD 29000-series 29050, and the Motorola MC88110 were the first commercial single-chip superscalar microprocessors. RISC microprocessors like these were the first to have superscalar execution, because RISC architectures free transistors and die area which can be used to include multiple execution units.
Except for CPUs used in low-power applications, embedded systems, and battery-powered devices, essentially all general-purpose CPUs developed since about 1998 are superscalar.
The P5 Pentium was the first superscalar x86 processor. The Nx586, P6 Pentium Pro, and AMD K5 were among the first designs that decode x86 instructions asynchronously into dynamic microcode-like micro-op sequences prior to actual execution on a superscalar microarchitecture. This opened the way for dynamic scheduling of buffered partial instructions and enabled more parallelism to be extracted compared to the more rigid methods used in the simpler P5 Pentium; it also simplified speculative execution and allowed higher clock frequencies compared to designs such as the advanced Cyrix 6x86.
The simplest processors are scalar processors. Each instruction executed by a scalar processor typically manipulates one or two data items at a time. By contrast, each instruction executed by a vector processor operates simultaneously on many data items. An analogy is the difference between scalar and vector arithmetic. A superscalar processor is a mixture of the two. Each instruction processes one data item, but there are multiple execution units within each CPU thus multiple instructions can be processing separate data items concurrently.
Superscalar CPU design emphasizes improving the instruction dispatcher accuracy, and allowing it to keep the multiple execution units in use at all times. This has become increasingly important as the number of units has increased. While early superscalar CPUs would have two ALUs and a single FPU, a later design such as the PowerPC 970 includes four ALUs, two FPUs, and two SIMD units. If the dispatcher is ineffective at keeping all of these units fed with instructions, the performance of the system will be no better than that of a simpler, cheaper design.
A superscalar processor usually sustains an execution rate in excess of one instruction per machine cycle. But merely processing multiple instructions concurrently does not make an architecture superscalar, since pipelined, multiprocessor or multi-core architectures also achieve that, but with different methods.
In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching each to one of the several execution units contained inside a single CPU. Therefore, a superscalar processor can be envisioned having multiple parallel pipelines, each of which is processing instructions simultaneously from a single instruction thread.
Available performance improvement from superscalar techniques is limited by three key areas:
Existing binary executable programs have varying degrees of intrinsic parallelism. In some cases instructions are not dependent on each other and can be executed simultaneously. In other cases they are inter-dependent: one instruction impacts either resources or results of the other. The instructions a = b + c; d = e + f can be run in parallel because none of the results depend on other calculations. However, the instructions a = b + c; b = e + f might not be runnable in parallel, depending on the order in which the instructions complete while they move through the units.
Although the instruction stream may contain no inter-instruction dependencies, a superscalar CPU must nonetheless check for that possibility, since there is no assurance otherwise and failure to detect a dependency would produce incorrect results.
No matter how advanced the semiconductor process or how fast the switching speed, this places a practical limit on how many instructions can be simultaneously dispatched. While process advances will allow ever greater numbers of execution units, the burden of checking instruction dependencies grows rapidly, as does the complexity of register renaming circuitry to mitigate some dependencies. Collectively, the power consumption, complexity, and gate delay costs limit the achievable superscalar speedup.
However, even given infinitely fast dependency checking logic on an otherwise conventional superscalar CPU, if the instruction stream itself has many dependencies, this would also limit the possible speedup. Thus the degree of intrinsic parallelism in the code stream forms a second limitation.
Collectively, these limits drive investigation into alternative architectural changes such as very long instruction word (VLIW), explicitly parallel instruction computing (EPIC), simultaneous multithreading (SMT), and multi-core computing.
With VLIW, the burdensome task of dependency checking by hardware logic at run time is removed and delegated to the compiler. Explicitly parallel instruction computing is like VLIW with extra cache prefetching instructions.
Simultaneous multithreading is a technique for improving the overall efficiency of superscalar processors. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures.
Superscalar processors differ from multi-core processors in that the several execution units are not entire processors. A single processor is composed of finer-grained execution units such as the ALU, integer multiplier, integer shifter, FPU, etc. There may be multiple versions of each execution unit to enable execution of many instructions in parallel. This differs from a multi-core processor that concurrently processes instructions from multiple threads, one thread per processing unit. It also differs from a pipelined processor, where the multiple instructions can concurrently be in various stages of execution, assembly-line fashion.
The various alternative techniques are not mutually exclusive—they can be combined in a single processor. Thus a multicore CPU is possible where each core is an independent processor containing multiple parallel pipelines, each pipeline being superscalar. Some processors also include vector capability.
In imperative programming languages, the term "conditional statement" is usually used, whereas in functional programming, the terms "conditional expression" or "conditional construct" are preferred, because these terms all have distinct meanings.
The if–then construct is common across many programming languages. Although the syntax varies from language to language, the basic structure looks like this:
For example:
In the example code above, the part represented by the condition constitutes a conditional expression, having intrinsic value but no intrinsic meaning. In contrast, the combination of this expression, the If and Then surrounding it, and the consequent that follows afterward constitute a conditional statement, having intrinsic meaning but no intrinsic value.
When an interpreter finds an If, it expects a Boolean condition – for example, x > 0, which means "the variable x contains a number that is greater than zero" – and evaluates that condition. If the condition is true, the statements following the then are executed. Otherwise, the execution continues in the following branch – either in the else block, or, if there is no else branch, after the end If.
After either branch has been executed, control returns to the point after the end If.
In early programming languages, especially some dialects of BASIC on 1980s home computers, an if–then statement could only contain GOTO statements. This led to a hard-to-read style of programming known as spaghetti programming, with programs in this style called spaghetti code. As a result, structured programming, which allows arbitrary statements to be put in statement blocks inside an if statement, gained in popularity, until it became the norm even in most BASIC programming circles. Such mechanisms and principles were based on the older but more advanced ALGOL family of languages, and ALGOL-like languages such as Pascal and Modula-2 influenced modern BASIC variants for many years. While it is possible, using only GOTO statements in if–then statements, to write programs that are not spaghetti code and are just as well structured and readable as programs written in a structured programming language, structured programming makes this easier and enforces it. Structured if–then–else statements like the example above are one of the key elements of structured programming, and they are present in most popular high-level programming languages such as C, Java, JavaScript, and Visual Basic.
The else keyword is made to target a specific if–then statement preceding it, but for nested if–then statements, classic programming languages such as ALGOL 60 struggled to define which specific statement to target. Without clear boundaries for which statement is which, an else keyword could target any preceding if–then statement in the nest, as parsed.
can be parsed as
depending on whether the else is associated with the first if or second if. This is known as the dangling else problem, and is resolved in various ways, depending on the language .
By using else if, it is possible to combine several conditions. Only the statements following the first condition that is found to be true will be executed. All other statements will be skipped.
For example, for a shop offering as much as a 30% discount for an item:
In the example above, if the discount is 10%, then the first if statement will be evaluated as true and "you have to pay $30" will be printed out. All other statements below that first if statement will be skipped.
An else-if construct, such as elsif in the Ada language, is simply syntactic sugar for else followed by if. In Ada, the difference is that only one end if is needed when using elsif instead of else followed by if. PHP uses the elseif keyword for both its curly-bracket and colon syntaxes. Perl provides the keyword elsif to avoid the large number of braces that would be required by multiple if and else statements. Python uses the special keyword elif because structure is denoted by indentation rather than braces, so a repeated use of else and if would require increased indentation after every condition. Some implementations of BASIC, such as Visual Basic, use ElseIf too. Similarly, the earlier UNIX shells use elif, while giving the choice of delimiting with spaces, line breaks, or both.
However, in many languages more directly descended from Algol, such as Simula, Pascal, BCPL and C, this special syntax for the else if construct is not present, nor is it present in the many syntactical derivatives of C, such as Java, ECMAScript, and so on. This works because in these languages, any single statement can follow a conditional without being enclosed in a block.
This design choice has a slight "cost". Each else if branch effectively adds an extra nesting level, which complicates the job for the compiler, because the compiler must analyse and implement arbitrarily long else if chains recursively.
If all terms in the sequence of conditionals are testing the value of a single expression, an alternative is the switch statement, also called a case-statement or select-statement. Conversely, in languages that do not have a switch statement, the same effect can be produced by a sequence of else if statements.
Many languages support if expressions, which are similar to if statements but return a value as a result. Thus, they are true expressions, not statements.
ALGOL 60 and some other members of the ALGOL family allow if–then–else as an expression:
In dialects of Lisp – Scheme, Racket and Common Lisp – the first of which was inspired to a great extent by ALGOL:
In Haskell 98, there is only an if expression, no if statement, and the else part is compulsory, as every expression must have some value. Logic that would be expressed with conditionals in other languages is usually expressed with pattern matching in recursive functions.
Because Haskell is lazy, it is possible to write control structures, such as if, as ordinary expressions; the lazy evaluation means that an if function can evaluate only the condition and the proper branch. It can be written like this:
C and C-like languages have a special ternary operator for conditional expressions with a function that may be described by a template like this:
This means that it can be inlined into expressions, unlike if-statements, in C-like languages:
which can be compared to the Algol-family if–then–else expressions.
To accomplish the same using an if-statement would take more than one line of code and require mentioning "my_variable" twice: